AI Weekly: DARPA tries to better align AI with human intentions


This week, DARPA, the U.S. Department of Defense’s R&D agency for emerging technologies, launched a new AI program that aims to “align” AI systems with human decision-makers in areas where there is no agreed-upon right answer. Elsewhere, LinkedIn co-founder Reid Hoffman and DeepMind co-founder Mustafa Suleyman announced a new AI startup, Inflection AI, which aims to develop software that lets people talk to computers in everyday language.

In a press release describing the new three-and-a-half-year program, DARPA says its goal is to “evaluate and build trusted algorithmic decision-makers for Defense Department mission-critical operations.” Called “In the Moment” or ITM, it focuses on the process of alignment – building AI systems that achieve what is expected of them.

“ITM differs from typical AI development approaches that require human agreement on the right outcomes,” ITM program manager Matt Turek said in a statement. “The lack of a correct answer in difficult scenarios prevents us from using conventional AI evaluation techniques, which implicitly require human agreement to create ground-truth data.”

For example, self-driving cars can be developed against a ground truth for right and wrong decisions based on immutable, relatively consistent traffic rules. The designers of these cars can hard-code “risk values” into them that prevent them from, for example, running a red light. But Turek says these one-size-fits-all risk values don’t work from the Defense Department’s perspective. Combat situations evolve quickly, he emphasizes, and a commander’s intent can change from scenario to scenario.
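The idea of a hard-coded, one-size-fits-all risk value can be sketched in a few lines. This is purely illustrative (the names, numbers, and threshold are hypothetical, not from DARPA or any automaker): a fixed risk score is assigned to an action, and any action above a fixed threshold is always blocked, regardless of context.

```python
# Illustrative sketch only: a hard-coded, context-free risk rule of the
# kind Turek argues breaks down in fast-changing combat settings.
RED_LIGHT_RISK = 1.0   # fixed risk value assigned to running a red light
RISK_THRESHOLD = 0.5   # actions above this risk are always blocked

def action_allowed(action_risk: float) -> bool:
    """Return True only if the action's hard-coded risk is acceptable."""
    return action_risk <= RISK_THRESHOLD

assert not action_allowed(RED_LIGHT_RISK)  # running a red light: always blocked
assert action_allowed(0.1)                 # a low-risk maneuver proceeds
```

The rigidity is the point: because the threshold never moves, the rule cannot adapt when a commander’s intent changes from scenario to scenario.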

“The [Defense Department] needs rigorous, quantifiable, and scalable approaches to evaluating and building algorithmic systems for difficult decision-making where objective ground truth is not available,” Turek continued, adding that uncertainty, time pressure, and conflicting values present major challenges in decision-making.

DARPA is just the latest organization exploring techniques that can help better align AI with a person’s intent. In January, OpenAI, the company behind the text-generating model GPT-3, disclosed an alignment technique that it claims reduces the amount of toxic language GPT-3 generates. Generating toxic text is a known problem in AI, often caused by toxic data sets. Because text-generating systems are trained on data containing problematic content, some of the content slips through.

“Although [AI systems are] pretty smart today, they don’t always do what we want them to do. The goal of alignment is to produce AI systems that do [achieve] what we want them to do,” OpenAI co-founder and chief scientist Ilya Sutskever told VentureBeat in a phone interview earlier this year. “[T]hat becomes more important as AI systems become more powerful.”

ITM will seek to establish a framework to evaluate algorithmic decision-making in “very difficult domains,” including combat, through the use of “realistic, challenging” scenarios. In these scenarios, “trusted people” will be asked to make decisions, and the results will then be compared with decisions made by an algorithm subjected to the same scenarios.

“We’re going to collect the decisions, the responses from each of those decision makers, and present them in a blinded manner to multiple triage professionals,” Turek said. “Those triage professionals won’t know if the response is coming from an aligned algorithm, a baseline algorithm, or from a human. And the question we could ask those triage professionals is which decision maker they would delegate to, which gives us a measure of their willingness to trust those particular decision makers.”
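The blinded evaluation Turek describes can be sketched roughly as follows. Everything here is a hypothetical illustration of the idea, not DARPA’s actual protocol: decisions from an aligned algorithm, a baseline algorithm, and a human are shown to a reviewer without their sources, and willingness to delegate is tallied per source.

```python
import random

# Hypothetical decisions from three sources; the reviewer never sees "source".
responses = [
    {"source": "aligned_algorithm", "decision": "evacuate patient A first"},
    {"source": "baseline_algorithm", "decision": "treat patient B on site"},
    {"source": "human", "decision": "evacuate patient A first"},
]

def blinded_review(responses, would_delegate):
    """Present each decision blind to its origin; tally delegation willingness."""
    tally = {r["source"]: 0 for r in responses}
    shuffled = responses[:]
    random.shuffle(shuffled)  # blind the reviewer to ordering and source
    for r in shuffled:
        if would_delegate(r["decision"]):  # reviewer judges the decision alone
            tally[r["source"]] += 1
    return tally

# A hypothetical triage professional who trusts evacuation-first decisions:
tally = blinded_review(responses, lambda d: d.startswith("evacuate"))
```

Aggregating such tallies across many reviewers and scenarios would yield the kind of trust measure the program describes, without reviewers knowing which decisions came from an algorithm.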

Talking to computers

Speaking of alignment: with Inflection AI, LinkedIn co-founder Hoffman and DeepMind co-founder Suleyman plan to use AI to help people talk to computers. In an interview with CNBC, Suleyman said he wants to build products that eliminate the need for people to write in shorthand or simplify their ideas in order to interact with machines.

“[Programming languages, mice, and other interfaces] are ways we simplify our ideas and reduce their complexity, and in some ways their creativity and their uniqueness, to make a machine do something,” Suleyman told the publication. “It feels like we’re on the verge of being able to generate language at near-human-level performance. It opens up a whole new set of things we can do in the product space.”

Inflection AI’s plans remain vague, but the concept of translating human intentions into a language computers can understand dates back decades. Even today’s best chatbots and voice assistants haven’t lived up to the promise — think Viv Labs, which promised a “conversational interface for everything” but instead ended up folded into Samsung’s maligned Bixby assistant. Still, Suleyman and Hoffman are betting that their expertise — and coming advances in conversational AI — will enable an intuitive human-computer language interface within the next five years.

“Even at the bigger tech companies, there’s a relatively small number of people who actually build these [AI] models. One of the benefits of doing this in a startup is that we can go much faster and be more dynamic,” Suleyman told CNBC. “My experience of building many, many teams over the last 15 years is that there’s a golden moment when you really have a really tight, small, focused team. I’m going to try to keep that as long as possible.”

Considering that countless visionaries in this field have tried and failed, that would indeed be an impressive feat.

For AI coverage, send news tips to Kyle Wiggers — and subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thank you for reading,

Kyle Wiggers

Senior AI Staff Writer

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
