As the adoption of AI continues to grow exponentially, so does the discussion – and the concern – about responsible AI.
While technology leaders and field researchers recognize the importance of developing AI that is ethical, safe and inclusive, they still grapple with issues surrounding regulatory frameworks and concepts of “ethics washing” or “ethics extracting” that reduce accountability.
Perhaps most importantly, the concept is not yet clearly defined. While many sets of proposed guidelines and tools exist – from the US National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework to the European Commission’s High-Level Expert Group on AI, for example – they are inconsistent, often vague and overly complex.
As noted by Liz O’Sullivan, CEO of AI auditing company Parity, “We will be the ones teaching our concepts of morality. We can’t just trust that this will come out of nowhere, because it just won’t.”
O’Sullivan was one of several panelists who spoke this week on responsible AI at the Stanford University Human-Centered Artificial Intelligence (HAI) 2022 Spring Conference. HAI was founded in 2019 to advance AI research, education, policy and practice to “improve the human condition,” and this year’s conference focused on key advances in AI.
Topics included responsible AI, foundation models and the physical/simulated world, with panels moderated by Fei-Fei Li and Christopher Manning. Li is Stanford’s inaugural Sequoia Professor of Computer Science and co-director of HAI. Manning is the inaugural Thomas M. Siebel Professor in Machine Learning and a professor of linguistics and computer science at Stanford, as well as an associate director of HAI.
Specifically on responsible AI, panelists discussed progress and challenges related to algorithmic recourse, building a responsible data economy, evolving conceptions of privacy and regulatory frameworks, and addressing overarching issues of bias.
AI accountability needs predictive modeling and corrective action
Predictive models are increasingly used in high-stakes decision-making, such as loan approvals.
But just like humans, models can be biased, said Himabindu Lakkaraju, an assistant professor at Harvard Business School with an affiliated appointment in the Department of Computer Science at Harvard University.
As a means of recourse, there is growing interest in post-hoc techniques that offer recommendations to individuals who have been denied loans. However, these techniques generate recourses assuming that the underlying predictive model does not change. In practice, models are often updated regularly for a variety of reasons, such as dataset shifts, rendering previously prescribed recourses ineffective, she said.
To address this, she and fellow researchers examined cases where a recourse is invalid or unhelpful, meaning that acting on it does not result in a positive outcome for the affected party.
They proposed a framework, Robust Algorithmic Recourse (ROAR), that uses adversarial machine learning (ML) to generate recourses that remain valid even as models change. They describe it as the first known solution to the problem. Their detailed theoretical analysis also underlined the importance of constructing recourses that are robust to model shifts; otherwise, affected individuals may incur additional costs, she explained.
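The intuition behind recourses that survive model updates can be sketched in a few lines. The snippet below is an illustrative simplification, not the authors’ ROAR implementation: it assumes a simple linear loan-approval model and searches for a counterfactual applicant profile that stays approved under every bounded perturbation of the model’s parameters (the function names are invented for this example).

```python
import numpy as np

def worst_case_score(x, w, b, eps):
    """Approval score under the worst admissible model shift.
    Minimizing (w + dw) @ x + (b + db) over ||dw||_inf <= eps,
    |db| <= eps gives w @ x + b - eps * (||x||_1 + 1)."""
    return w @ x + b - eps * (np.abs(x).sum() + 1.0)

def robust_recourse(x, w, b, eps=0.1, lr=0.05, steps=500):
    """Gradient ascent on the worst-case score: nudge the applicant's
    features until the loan is approved even if the model shifts."""
    x = x.astype(float).copy()
    for _ in range(steps):
        if worst_case_score(x, w, b, eps) > 0:
            break
        # Subgradient of the worst-case score with respect to x
        grad = w - eps * np.sign(x)
        x += lr * grad
    return x

# Hypothetical model and a currently denied applicant
w, b = np.array([1.0, 2.0]), -4.0
x_denied = np.array([0.5, 0.5])
x_new = robust_recourse(x_denied, w, b)
```

The design choice is the key point: an ordinary recourse only needs `w @ x + b > 0` and can be invalidated by a small model update, whereas optimizing the worst-case score bakes the anticipated shift into the recommendation.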
As part of their process, the researchers surveyed customers who had applied for a bank loan in the past year. The vast majority of participants said algorithmic recourse would be extremely helpful to them. However, 83% of respondents said they would never do business with a bank again if the bank offered them a recourse that turned out to be invalid.
That’s why, Lakkaraju said, “If we’re going to give someone a recourse, we’d better make sure it’s actually valid and that we’re going to keep that promise.”
Building a responsible data economy
Another panelist, Dawn Song, addressed the overarching concerns of the data economy and building responsible AI and machine learning (ML).
Deep learning has made huge strides, said the professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley, but beyond those capabilities, she stressed, it is essential to ensure that AI develops responsibly.
Data is the main driver of AI and ML, but much of this exponentially growing data is sensitive, and dealing with sensitive data has presented numerous challenges.
“Individuals have lost control over how their data is used,” Song said. User data is sold or exposed in large-scale breaches without users’ knowledge or consent. At the same time, companies leave valuable data sitting unused in silos because of privacy concerns.
“There are many challenges in developing a responsible data economy,” she added. “There is a natural tension between utility and privacy.”
To establish and enforce data rights and develop a framework for a responsible data economy, we must not copy concepts and frameworks used in the analog world, Song said. Traditional methods rely on randomizing and anonymizing data, which is insufficient to protect data privacy.
New technical solutions can protect data while it is in use, she explained. Some examples are secure computing technologies, cryptographic techniques and differentially private language model training.
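Differential privacy, one of the building blocks Song points to, can be illustrated with the classic Laplace mechanism. The sketch below is a minimal example, not drawn from Song’s work: it releases an aggregate statistic with noise calibrated so that no single individual’s record measurably changes the output (the function name and data are invented for illustration).

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy statistic satisfying epsilon-differential privacy.
    sensitivity: the most the statistic can change if one record is
    added or removed; smaller epsilon means stronger privacy."""
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Example: privately release a count over sensitive records.
ages = [34, 29, 41, 52, 38]
count_over_40 = sum(a > 40 for a in ages)  # true answer: 2
# Adding/removing one person changes the count by at most 1
private_count = laplace_mechanism(count_over_40, sensitivity=1, epsilon=0.5)
```

This captures the utility–privacy tension Song describes: lowering epsilon adds more noise, protecting individuals at the cost of a less accurate released statistic.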
Song’s work in this area has included developing program rewriting techniques and decision records that help ensure compliance with privacy regulations, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
“As we move into the digital age, these problems will only get worse,” Song said, to the extent that they will hinder societal progress and undermine human value and fundamental rights. “There is therefore an urgent need to develop a framework for a responsible data economy.”
De-biasing: a concept still under development
It is true that large companies are taking steps in that direction, O’Sullivan emphasized. In general, they are taking a more proactive approach to ethical issues and dilemmas, and to the questions involved in making AI accountable and fair.
However, the most common misconception among large companies is that having procedures in place is enough to reduce bias, according to O’Sullivan, a self-described serial entrepreneur and expert in fair algorithms, surveillance and AI.
In reality, many companies attempt to ethics-wash with “[a] simple solution that may not go that far,” O’Sullivan said. Scrubbing training data for toxicity, for instance, is often criticized as negatively impacting freedom of expression.
She also posed the question: how can we adequately manage risk in models of such enormous complexity?
With computer vision models and large language models, the “idea of de-biasing something is really an infinite task,” she said, also pointing out the difficulties in defining bias in language, which is inherently biased.
“I don’t think we agree on this,” she said.
Still, she ended on a positive note, noting that the field of responsible AI is popular and growing every day and organizations and researchers are making strides when it comes to definitions, tools, and frameworks.
“In many cases, the right people are at the helm,” O’Sullivan said. “It will be very exciting to see how things evolve in the coming years.”
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.
This post, “What the recent Stanford AI conference reveals about the state of AI accountability,” was originally published at https://venturebeat.com/2022/04/14/what-stanfords-recent-ai-conference-reveals-about-the-state-of-ai-accountability/