Trustworthy AI: how do you ensure trust and ethics in AI?


A pragmatic and direct approach to ethics and trust in artificial intelligence (AI) – who doesn’t want that? This is how Beena Ammanath describes her new book, Trustworthy AI.

Ammanath is the Executive Director of the Global Deloitte AI Institute. She has held positions at GE, HPE and Bank of America, including vice president of data science and innovation, CTO of artificial intelligence, and head of data and analytics.

AI we can trust

Often, discussions of ethics and trust in AI have a very narrow focus, limited to fairness and bias, which can be frustrating as an industry professional, Ammanath explains. While fairness and bias are relevant aspects, she says they are not the only ones, or even the most important. There is a lot of nuance to the topic, and that is part of what Ammanath wants to tackle.

So what should we talk about when we talk about AI ethics? That can be a daunting question to ponder. For organizations interested not in philosophical debates but in practical approaches, terms like “AI ethics” or “responsible AI” can feel complicated and abstract.

The term “trustworthy AI” has been used in places ranging from the EU Commission to IBM, and from the ACM to Deloitte. In her book, Ammanath lists the multiple dimensions that she sees as together defining trustworthy AI: AI is trustworthy because it meets these criteria.

Fair and impartial, robust and reliable, transparent, accountable, safe and secure, responsible, and respectful of privacy. While we cannot possibly do justice to all of these dimensions here, as the book devotes an entire chapter to each of them, we can try to convey the thinking behind this definition, as well as the general approach.

In her role at Deloitte, Ammanath works with organizations that want to apply AI in practice. Because AI is growing so fast, there is also a lot of noise around it. However, business leaders have specific issues that they want to apply AI to, such as optimizing a supply chain.

There are many options to consider: maybe there’s an existing product that can be bought off the shelf, or a startup working on the problem that could potentially be acquired, or a collaboration with academia. Ammanath’s book tries to bring all of that together.

Ammanath is quick to point out that while there is still a lot of research into AI and the technology is not yet fully mature, AI is being used in the real world because it brings a lot of value. That means there are often unforeseen side effects.

While Ammanath strived to identify and elaborate the dimensions of trustworthy AI, that doesn’t necessarily mean her definition is complete. There may be additional dimensions that are relevant to specific use cases. And in general, how those dimensions are prioritized is also very use-case specific.

Fairness and bias are a good example. While these are probably the first things that come to mind when people talk about AI ethics, they aren’t always the most pertinent, Ammanath points out.

“When you build an AI solution that diagnoses patients, fairness and bias are super important. But if you’re building an algorithm that predicts jet engine failure, fairness and bias aren’t that important. Trustworthy AI is really a framework to get you started thinking about the dimensions of trust within your organization, and to start those business discussions about what the ways are this can go wrong and how we can mitigate it,” Ammanath said.

The importance of asking the right AI questions

Ammanath says what’s happening in most companies these days is that ethics tends to be put in a separate bucket. What she wants to do is help move the topic from the philosophical arena to the real world and equip everyone from CIOs to data scientists to see their role in the big picture and ask the right questions.

The structure of the book reflects that goal. A fictitious company serves as the backdrop for the analysis of each of the dimensions that constitute trustworthy AI. At the beginning of each chapter, a scenario is presented in which business leaders apply AI solutions to solve problems and create value for the business.

Those leaders may set out with the best of intentions, but as the scenarios evolve, they discover that many things can go wrong when applying AI in the real world. The scenarios are inspired by Ammanath’s own experiences and are used to work through the dimensions of trustworthy AI in a problem-analysis-solution manner.

One of the main points of the book, however, is precisely the fact that there are no ready-made solutions. Instead, Ammanath advocates learning to ask the right questions — and that includes everyone in the organization:

“There is a belief that ethics is only for data scientists, or only for the IT team, and that is not true. It is relevant to the CHRO who may purchase a recruiting tool. Or the CFO whose team uses AI in account management or document management. It is relevant for every C-suite executive. It’s not just limited to the IT team, or just the CEO,” said Ammanath. “Everyone in the organization needs to know what trustworthy AI means and how it applies to their organization. Even if you’re a marketing intern who is part of a vendor discussion, you need to know what questions to ask in addition to the functionality, such as: what kind of datasets did you train on?”

In most AI projects, Ammanath added, the focus is on value creation and ROI — cost savings, new products and so on. She suggests that people spend time thinking about the ways this could go wrong.

How this translates in terms of staff – again, it depends. For organizations building AI products themselves, it would probably make sense to appoint a lead ethicist role. For others who simply use AI products, access to the right expertise may be enough. However, it is important to remember that AI ethics is something that permeates organizations – it is not a burden that can be shoved onto a single role.

Ammanath views trustworthy AI through the lens of socio-technical systems and proposes specific approaches that organizations can use to embrace it. Those approaches are based on identifying the relevant dimensions and cultivating trust through people, processes and technologies.

Because this builds on existing practices, organizations don’t have to start from scratch. Ammanath advocates adding to existing training materials and processes. Simple measures can help, such as a hotline for access to experts, or adding AI risk factors to project management and software sourcing:

“Those kinds of fundamental changes in the process are super important to make it relevant. Having these different buckets is a great way to operationalize trust, but you’re never going to get it quite right,” Ammanath said. “Because it’s all going to be a learning process, and there are so many different angles, you need so many different perspectives. But I think you will at least get started.”

VentureBeat’s mission is to be a digital city square for tech decision makers to learn about transformative business technology and transactions.
