Investigating the ethics of AI development



This article is contributed by Brian Gilmore, director of IoT Product Management at InfluxData.

Artificial intelligence (AI) is a concept that is talked about freely today. In general, the public tends to think of campy popular sci-fi depictions of AI. But even among those working in technology, there isn’t always an agreed-upon definition of AI. We tend to use AI as a category title, an umbrella term that encompasses several technologies of enormous complexity. This underlying, often abstracted complexity is why it is important to consider how ethics should play a role in the development and implementation of AI.

It can be easy to judge technology in simple binary terms; for example, “Does it do what I want or not?” Ultimately, that comes down to a yes or no answer. In an emerging field like AI, technologists investigate these “can it” questions first. But as autonomous and AI-driven technology moves into the mainstream, other questions arise. As a society, we cannot rely solely on technologists to make sound decisions every time. We need a paradigm shift that brings in ethicists to focus on the important “should it” questions.

AI technologies will undoubtedly impact and influence our lives, beliefs and culture. The role of AI ethicists should be to ensure that these technologies exert that impact in a fair and benevolent manner. To succeed in this, AI technologies must be considered beyond the limited scope of their primary, planned use cases. It also requires awareness of the potential side effects of these technologies. The ethical development and delivery of AI technologies must balance risks and rewards for everyone, not just the intended beneficiaries.

Nurture versus nature

Artificial intelligence differs slightly from standard technology in its development and implementation. There’s almost a parental aspect to building AI; you can think of it as raising a child. On the one hand, there’s a ‘nurture’ component in the way we design and program algorithms, the samples we choose to train the AI, and the tests we run to validate the output in a controlled environment. As humans guide AI through these developmental steps, the resulting systems are naturally subject to the same human biases, intentional or not, of the people who build them. If this is not taken into account, and measures are not taken to mitigate the effects of as many introduced biases as possible, there is a significant risk of unintended consequences.
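To make the ‘nurture’ point concrete, here is one minimal sketch of what a pre-training sampling check might look like. It compares positive-label rates across demographic groups in a training set, a rough proxy for one kind of introduced bias. The field names, toy data, and 0.2 tolerance are illustrative assumptions, not a prescription from this article.

```python
from collections import defaultdict

def positive_rates_by_group(records, group_key, label_key):
    """Share of positive labels per group in a training sample."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in records:
        group = row[group_key]
        totals[group] += 1
        positives[group] += int(row[label_key])
    return {g: positives[g] / totals[g] for g in totals}

# Toy training sample; in practice this would be the real labeled data.
sample = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
]
rates = positive_rates_by_group(sample, "group", "label")
gap = max(rates.values()) - min(rates.values())
print(rates)
if gap > 0.2:  # the 0.2 tolerance is an arbitrary, illustrative threshold
    print(f"Possible sampling bias: positive-rate gap of {gap:.2f} across groups")
```

A check like this catches only one narrow class of bias, of course; the broader point is that such measures have to be deliberate steps in the development process rather than afterthoughts.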

We should also not ignore the ‘nature’ component of AI. The underlying technologies used to build AI, including programming languages, algorithms, architectures, deployment models, and physical and digital inputs, can yield unexpected results when combined and applied to real-world situations. As a result, even with good design and ethical intentions, AI technology can generate unethical results or output. This challenge requires transparency. For example, a good first step could be comprehensive documentation of code and developer intent, along with detailed monitoring and alert systems that drive compliance with those intents.
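As one illustration of that monitoring-and-alerting idea, the sketch below pairs a model with a machine-readable statement of developer intent and logs a warning whenever output strays outside the documented range. The MODEL_INTENT structure, the range check, and the stand-in model are all hypothetical; this is a minimal sketch of the pattern, not a reference implementation.

```python
import logging
from statistics import mean

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("intent-monitor")

# Hypothetical machine-readable record of developer intent, kept with the code.
MODEL_INTENT = {
    "purpose": "estimate delivery time in minutes",
    "expected_output_range": (5.0, 120.0),
}

def monitored_predict(model_fn, features):
    """Run a prediction, log it against the declared intent,
    and alert when output falls outside the documented range."""
    prediction = model_fn(features)
    logger.info("prediction=%.1f purpose=%s", prediction, MODEL_INTENT["purpose"])
    low, high = MODEL_INTENT["expected_output_range"]
    if not low <= prediction <= high:
        logger.warning("prediction %.1f outside documented range (%.1f, %.1f)",
                       prediction, low, high)
    return prediction

# Stand-in model: average the inputs and scale. A real deployment would
# wrap the trained model; this input deliberately triggers the alert.
monitored_predict(lambda f: mean(f) * 10, [1.0, 2.0, 45.0])
```

The value of this pattern is less in the range check itself than in forcing developers to write their intent down in a form the runtime can enforce.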

Applying transparency to the nurture and nature aspects of building AI is no easy task. To do this, developers and data scientists must strike a balance between individual rights and wishes and the ‘greater good’. Of course, this dilemma extends beyond AI, or even technology in the broadest sense. These are issues that societies struggle with in many ways, so it’s no surprise that AI reflects them. Our ability to develop, implement and apply AI ethically is therefore part of a much deeper conundrum. Beyond asking whether the technology or its implementation is inherently ‘ethical’, we should also ask whether the decisions AI makes and the actions it takes are themselves ‘ethical’. And who should make these decisions?

Leading the way for ethical AI and related technology

There is an inherent challenge in judging right and wrong with AI. If we rely on AI to make decisions where someone ultimately wins or loses, how can we ensure that AI always makes the ethically “right” decision? Is it even possible to completely encode human empathy, emotion and intent in AI? We must now decide how we will react if we disagree with technology in the gray areas of ‘right’ and ‘wrong’. Ultimately, we need to consider the population that the AI really serves: the creators, the operators, or the greater good. Honest answers to these questions are not easy, but they are critical to the long-term health and success of AI technology.

Here are a few predictions for where things might go. First, we’re likely to see significant formalization in the field of “digital ethics,” which logically should include AI ethics. If history holds, we will see the development of multiple standards and regulatory bodies and a parade of key technology executives signing pledges to declare accountability and make progress.

We will see the rise of Chief Ethics and Principles Officers. These executives will be responsible for the ethical and equitable creation, implementation and adoption of all technology within an organization. Legal and compliance leaders will likely fill these roles at first; however, this may prove ineffective. Hopefully, we’ll see leaders emerge in the executive suite from disciplines not typically associated with technology, such as philosophy, theology, and psychology. Bringing these new stakeholders to the table will transform organizations in ways that go well beyond AI governance.

The bottom line

We need to kick-start the discourse around ethical AI and technology now, while we’re still a long way from that science fiction vision of AI. In reality, most organizations are still focused on transforming their operations into the digital realm and becoming data-driven. A company cannot simply ‘upgrade’ to AI. Getting the right people in place is essential. Hire a Chief Data Officer and staff a diverse team of statisticians and analysts. Balance the team with academics and domain experts. Give those experts the tools and infrastructure they need to be effective. Build pipelines for data collection, processing, and storage using technologies such as time series, graph, document, and relational databases, taking advantage of both the open source tools and the commercial platforms that your data scientists, analysts, and developers know and love. Connect the team with business stakeholders facing challenging issues and with the data that fits those issues, and watch the magic happen!
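Given the author’s InfluxData context, here is a minimal sketch of the collection-and-storage step of such a pipeline using the official InfluxDB 2.x Python client, writing one sensor reading as a time series point. The measurement name, tags, field, and connection details are invented placeholders for illustration.

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Connection details are placeholders; substitute your own URL, token,
# org, and bucket for a real deployment.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# One hypothetical sensor reading, written as a time series point.
point = (
    Point("machine_metrics")
    .tag("device_id", "press-01")
    .field("temperature_c", 72.4)
)
write_api.write(bucket="factory-telemetry", record=point)
client.close()
```

In a production pipeline this write would sit behind the processing stage, with the other database types (graph, document, relational) serving the workloads they fit best.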

Brian Gilmore is director of IoT product management at InfluxData, the makers of InfluxDB.


This post, “Investigating the ethics of AI development,” was originally published at https://venturebeat.com/2022/03/19/examining-the-ethics-of-ai-development/.
