AI algorithms can disrupt our thinking


Last year, the US National Security Commission on Artificial Intelligence concluded in a report to Congress that AI is ‘world-changing’. AI is also mind-changing, as the AI-powered machine increasingly becomes the mind. This is an emerging reality of the 2020s. As a society, we are learning to rely on AI for so many things that we could become less curious and more trusting of the information AI-powered machines deliver. In other words, we may already be outsourcing our thinking to machines and losing part of our agency as a result.

The trend toward greater AI adoption shows no signs of slowing. According to the Stanford Institute for Human-Centered Artificial Intelligence, private investment in AI reached an all-time high of $93.5 billion in 2021, double the amount from the previous year. And the number of AI-related patent applications filed in 2021 was 30 times greater than in 2015. The AI gold rush is clearly running at full speed. Fortunately, much of what is being accomplished with AI will be beneficial, as evidenced by examples of AI helping to solve scientific problems ranging from protein folding to exploring Mars and even communicating with animals.

Most AI applications are based on machine learning and deep learning neural networks that require large datasets. For consumer applications, this data is collected from personal choices, preferences and selections on everything from clothing and books to ideology. From this data, the applications find patterns, leading to informed predictions about what we would likely need or want, or would find most interesting and engaging. The machines thus provide us with many useful tools, such as recommendation engines and 24/7 chatbot support. Many of these apps seem helpful or, at worst, benign.
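To make that pattern-finding concrete, here is a minimal, hypothetical sketch of how a content-based recommendation engine can work: a user’s past choices are averaged into a preference vector, and unseen items are ranked by cosine similarity to it. The item names, feature values and the recommend helper are all invented for illustration; real systems are far larger and more sophisticated.

import numpy as np

# Hypothetical catalog: each item is a feature vector
# (e.g., genre weights for books). Purely illustrative data.
ITEMS = {
    "thriller_novel": np.array([0.9, 0.1, 0.0]),
    "cookbook":       np.array([0.0, 0.2, 0.9]),
    "mystery_novel":  np.array([0.8, 0.2, 0.1]),
    "travel_guide":   np.array([0.1, 0.9, 0.3]),
}

def cosine(a, b):
    # Cosine similarity: how closely two feature vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(history, k=2):
    # Average the user's past choices into a preference vector,
    # then rank unseen items by similarity to that profile.
    profile = np.mean([ITEMS[name] for name in history], axis=0)
    unseen = [n for n in ITEMS if n not in history]
    return sorted(unseen, key=lambda n: cosine(ITEMS[n], profile), reverse=True)[:k]

print(recommend(["thriller_novel"]))  # -> ['mystery_novel', ...]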

One example that many of us can relate to is the AI-powered apps that give us driving directions. They are undoubtedly helpful in keeping people from getting lost. I have always been good with directions and reading physical maps; after driving to a location once, I have no problem getting there again without assistance. Yet now I have the app on for almost every trip, even to destinations I have driven to many times. Perhaps I am not as sure of my sense of direction as I thought; maybe I just want the company of the soothing voice that tells me where to go; or maybe I am becoming dependent on the apps to give me directions. I do worry that if I didn’t have the app, I might not be able to find my way.

Perhaps we should pay more attention to this not-so-subtle shift in our reliance on AI-powered apps. We already know that they invade our privacy. If they also diminish our human agency, the consequences could be serious. If we trust an app to find the fastest route between two places, we will likely trust other apps as well, and in the not-too-distant future we may go through life increasingly on autopilot, just as our cars do. And if we also unconsciously absorb whatever is presented to us in news feeds, social media, searches and recommendations without questioning it, do we lose the ability to form our own opinions and interests?

The dangers of digital groupthink

How else could one explain the entirely unfounded QAnon theory, which holds that elite Satan-worshipping pedophiles in the US government, business and media are trying to harvest the blood of children? The conspiracy theory began with a series of posts on the 4chan message board that then spread rapidly across other social platforms via recommendation engines. We now know – ironically, through machine learning – that the first posts were probably created by a South African software developer with little knowledge of the US. Yet the number of people who believe the theory continues to grow, and it now rivals some mainstream religions in popularity.

According to a story published in the Wall Street Journal, intellect weakens as the brain grows dependent on phone technology. The same is probably true of any information technology where content flows our way without our having to work to learn or discover it ourselves. If so, AI, by increasingly presenting content tailored to our specific interests and reflective of our biases, could create a self-reinforcing syndrome that simplifies our choices, satisfies immediate needs, weakens our intellect and locks us into an existing way of thinking.

NBC News correspondent Jacob Ward argues in his new book The Loop that AI apps have ushered in a new paradigm, one that repeats the same choreography: “The data is sampled, the results are analyzed, a shrunken list of choices is presented, and we choose again, continuing the cycle.” He adds that by “using AI to make choices for us, we will eventually reprogram our brains and our society… we are ready to accept what AI tells us.”

The Cybernetics of Conformity

An important part of Ward’s argument is that our choices are narrowed because the AI presents us with options similar to those we have preferred in the past, or will probably prefer based on our past. Our future thus becomes more narrowly defined. Essentially, we could be stuck in time, a form of mental homeostasis, by the very apps theoretically designed to help us make better decisions. This reinforced worldview is reminiscent of Don Juan explaining to Carlos Castaneda in A Separate Reality that “the world is such-and-such or so-and-so only because we tell ourselves that that is the way it is.”
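Ward’s loop is easy to simulate. In the toy model below, a recommender offers only the handful of items nearest the user’s current profile, the user picks from that shrunken list, and the pick nudges the profile further in the same direction. The catalog size, the shortlist size of 5 and the 0.8/0.2 update rate are arbitrary assumptions for illustration, but the outcome matches the argument: of hundreds of available items, only a few are ever surfaced, and the count plateaus quickly.

import numpy as np

rng = np.random.default_rng(0)
items = rng.random((200, 2))     # 200 items scattered in a 2-D "taste" space
profile = items.mean(axis=0)     # the user starts with an average profile

offered_ever = set()
for step in range(30):
    # Recommender: surface only the 5 items closest to the current profile.
    nearest = np.argsort(np.linalg.norm(items - profile, axis=1))[:5]
    offered_ever.update(int(i) for i in nearest)
    # User: pick one item from the shrunken list.
    pick = items[rng.choice(nearest)]
    # Feedback: nudge the profile toward the pick, closing the loop.
    profile = 0.8 * profile + 0.2 * pick

# Only a small fraction of the 200 items is ever shown to the user.
print(f"distinct items ever offered: {len(offered_ever)} of {len(items)}")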

Ward echoes this when he says: “The human brain is built to accept what it is told, especially if what it is told meets our expectations and saves us tedious mental work.” The positive feedback loop created by AI algorithms that mirror our desires and preferences back to us adds to the information bubbles we already experience, reinforces our existing views, contributes to polarization by exposing us to fewer differing points of view, leaves us less able to change, and turns us into people we never set out to be. This is essentially the cybernetics of conformity: the machine becomes the mind while following its own internal algorithmic programming. That, in turn, makes us, as individuals and as a society, simultaneously more predictable and more vulnerable to digital manipulation.

It’s not really AI doing this, of course. The technology is simply a tool that can be used to achieve a desired end, whether that is selling more shoes, promoting a political ideology, regulating the temperature in our homes or talking to whales. There is human intent involved in its application. To keep our agency, we must push for an AI Bill of Rights, as proposed by the US Office of Science and Technology Policy. More than that, we will soon need a regulatory framework that protects our personal data and our ability to think for ourselves. The EU and China have taken steps in this direction, and the current administration is moving similarly in the US. Clearly, now is the time for the US to get serious about this pursuit, before we become unthinking automatons.

Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.

This post, “AI algorithms can disrupt our thinking,” was originally published at https://venturebeat.com/2022/04/03/ai-algorithms-could-disrupt-our-ability-to-think/