The future of AI is full of promise and potential pitfalls



Because it is such a young science, machine learning (ML) is constantly being redefined.

Many view autonomous, self-supervised AI systems as the next major disrupter in the discipline, or potentially its next big pitfall.

These so-called “foundation models” include DALL-E 2, BERT, RoBERTa, Codex, T5, GPT-3, CLIP, and others. They are already being used in areas such as speech recognition, coding and computer vision, and they are emerging in other areas as well. They evolve in capacity, scope and performance, use billions of parameters and are able to go beyond expected tasks. As such, they inspire awe, anger and everything in between.

“It’s quite likely that the progress they’re making will continue for a long time to come,” said Ilya Sutskever, co-founder and chief scientist at OpenAI, whose work on foundation models has attracted widespread attention. “Their impact will be very significant – every aspect of society, every activity.”

Rob Reich, a professor of political science at Stanford, agreed. “AI is transforming every aspect of life — personal life, professional life, political life,” he said. “What can we do? What should we do to advance the organizing power of humanity in addition to our extraordinary technical progress?”

Sutskever, Reich and several others discussed the development, benefits, pitfalls and implications, both positive and negative, of foundation models at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) spring conference. The institute was founded in 2019 to advance AI research, education, policy and practice to “improve the human condition,” and its annual spring conference focuses on key advancements in AI.

Far from optimal, but making progress

Foundation models are built on deep neural networks and self-supervised learning, drawing on raw data that is unlabeled or only partially labeled. Algorithms then use small amounts of labeled data to find correlations, create and apply labels, and train the system on those labels. These models are described as adaptable and task-agnostic.
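To make that labeling loop concrete, here is a minimal, hypothetical sketch of semi-supervised pseudo-labeling in Python. It uses scikit-learn purely for illustration; the function name, confidence threshold and tiny classifier are assumptions, not how any specific foundation model is actually trained.

```python
# Illustrative pseudo-labeling sketch: a small labeled set bootstraps labels
# for a larger unlabeled set, and the model is retrained on the combined data.
# This is a toy stand-in, not the training recipe of any real foundation model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_train(x_labeled, y_labeled, x_unlabeled, confidence=0.9):
    # 1. Train an initial model on the small labeled set.
    model = LogisticRegression(max_iter=1000)
    model.fit(x_labeled, y_labeled)

    # 2. Predict labels for the unlabeled data and keep only
    #    high-confidence predictions as pseudo-labels.
    probs = model.predict_proba(x_unlabeled)
    keep = probs.max(axis=1) >= confidence
    pseudo_x = x_unlabeled[keep]
    pseudo_y = model.classes_[probs[keep].argmax(axis=1)]

    # 3. Retrain on the labeled and pseudo-labeled data combined.
    x_all = np.vstack([x_labeled, pseudo_x])
    y_all = np.concatenate([y_labeled, pseudo_y])
    model.fit(x_all, y_all)
    return model
```

Real foundation models replace the tiny classifier in this sketch with networks of billions of parameters trained on self-supervised objectives over web-scale data, but the label-then-retrain idea is the same.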

The term “foundation models” was coined by the newly formed Center for Research on Foundation Models (CRFM), an interdisciplinary group of researchers, computer scientists, sociologists, philosophers, educators and students founded at Stanford University in August 2021. The name is intentionally double-edged: it signals that such models are unfinished, yet serve as the common foundation from which many task-specific models are built via customization. It also aims to underline the gravity of such models, a “recipe for disaster” if poorly constructed and a “base for future applications” if properly executed, according to a CRFM report.

“Foundation models are super impressive, they’ve been used in a lot of different environments, but they’re far from optimal,” said Percy Liang, director of CRFM and associate professor of computer science at Stanford.

He described them as useful for general capabilities, able to open up opportunities in a wide variety of disciplines such as law, medicine and other sciences. For example, they could perform many tasks in medical imaging, where datasets run to petabytes.

Sutskever, whose company OpenAI developed the GPT-3 language model and DALL-E 2, which generates images from text descriptions, pointed out that much progress has been made with text-generating models. “But the world isn’t just text,” he said.

Solving the inherent problems of foundation models requires real-world use, he added. “These models come from the lab,” Sutskever said. “One way to think about the progress of these models is as gradual progress. They are not perfect, and this is not the end of the exploration.”

Questions about ethics and action

The CRFM report pointedly notes that foundation models pose “clear and significant societal risks,” both in their implementation and in their very premise, while the resources needed to train them limit accessibility, leaving most of the community excluded.

The center also emphasizes that foundation models must be grounded, emphasize the role of people, and support diverse research. Their future development requires open discussion and must put in place protocols for data management, respect for privacy, standard evaluation paradigms, and mechanisms for intervention and redress.

“Overall, we believe that concerted action during this formative period will determine how foundation models are developed, who controls this development, and how foundation models will affect the wider ecosystem and society,” Liang wrote in a CRFM blog post.

Defining the boundary between safe and unsafe foundation models requires a system to track when and for what these models are used, Sutskever agreed. That includes methods for reporting abuse. But such infrastructure is lacking at the moment, he said, and the focus is still largely on training these models.

With DALL-E 2, OpenAI planned ahead of time for the many ways things could go wrong, such as bias and abuse, he said. The team can also modify training data with filters, or adjust the system’s capabilities after training, Sutskever said.

But overall, “neural networks will continue to surprise us and make incredible progress,” he said. “It’s quite likely that the progress they’ve made will continue for a long time to come.”

Reich, however, is more cautious about the implications of foundation models. “AI is a developing, immature field of scientific research,” said the HAI associate director. He pointed out that computer science as a formal discipline has only existed for a few decades, and AI for only a fraction of that time.

“I’m suspicious of the idea of democratizing AI,” Reich said. “We don’t want to democratize access to some of the most powerful technologies and put them in the hands of anyone who could use them for hostile purposes.”

While there are opportunities, there are also many risks, he said, questioning what counts as responsible development and what leading AI scientists are doing to accelerate the development of professional standards. He added that social and safety questions require the input of multiple stakeholders and must go beyond the scope of a technical expert or company.

“AI scientists lack a dense institutional footprint of professional standards and ethics,” he said. “They are, to put it provocatively, like late-stage teenagers who have just recognized their powers in the world, but whose frontal lobes are not yet developed enough to give them social responsibility. We need a rapid acceleration of professional standards and ethics to manage our collective work as AI scientists.”


This post, “The future of AI is full of promise and potential pitfalls,” was originally published at https://venturebeat.com/2022/04/20/ais-future-is-packed-with-promise-and-potential-pitfalls/.
