Executives discuss key challenges in deploying AI – and how to solve them


Hastened by a widespread move to digitize operations, the enterprise is enthusiastically embracing AI. According to IDC’s 2022 AI InfrastructureView survey, 31% of companies say they now have AI in production, with the majority actively testing AI technologies. Increasingly, the adoption of AI is driving higher profitability, with 27% of companies responding to a December 2021 McKinsey survey claiming that at least 5% of their earnings before interest and tax (EBIT) are now attributable to AI.

But there are still many hurdles to deploying AI successfully. Of the companies participating in the AI InfrastructureView survey, only a third claim to have reached a “mature” adoption stage in which the entire organization benefits from an enterprise-wide AI strategy. In addition, while nearly two-thirds of companies in the McKinsey survey say they will continue to increase their investments in AI over the next three years, half admit they are experiencing higher-than-expected AI project costs.

De-siloing data science

Why is it so challenging to get AI projects into production? The reasons vary, according to Jeff Boudier, head of product and growth at AI language startup Hugging Face. But often, companies fail to build systems that allow their data science teams — the teams responsible for deploying AI technologies — to properly version and share AI models, code and datasets, he says. This creates more work for AI project managers, who must keep track of all the models and datasets teams create so that no one reinvents the wheel for every business request.

“Today, data science is largely done in ‘single player’ mode, with code living in notebooks on local machines,” Boudier told VentureBeat via email. “It’s how enterprise software was made 15 years ago, before modern version control systems and … collaboration workflows changed the game.”

The emerging discipline of MLOps, which stands for “machine learning operations” (a term coined by Gartner in 2017), aims to address the disparate and compartmentalized nature of AI development by identifying practices for collaboration among data scientists. By simplifying AI management processes, the goal of MLOps is to automate the deployment of AI models into an organization’s core software systems.

For example, startups like ZenML enable data scientists to express their workflows as pipelines that, with configuration changes, can accommodate various infrastructure and development tools. These can be built into a framework to resolve reproducibility and version control issues, reducing the need for coordination between DevOps teams and data scientists.
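The “workflow as pipeline” idea described above can be sketched in plain Python. This is a hypothetical illustration of the pattern, not ZenML’s actual API: each step is an ordinary function, the pipeline wires them together, and every run produces a manifest (configuration plus step list) so another team can reproduce the exact run.

```python
# Hypothetical sketch of a step-based ML pipeline. All names here are
# illustrative; they are not taken from ZenML or any other library.

def load_data(source: str) -> list[int]:
    """Step 1: pull raw records from a configured source."""
    # A real step would read from a warehouse, lake, or API.
    return [1, 2, 3] if source == "local" else []

def train(records: list[int]) -> float:
    """Step 2: 'train' a trivial model (here, just an average)."""
    return sum(records) / len(records)

def run_pipeline(config: dict) -> dict:
    """Execute the steps in order and return a run manifest.

    The manifest records the config and the step sequence, which is
    what makes a run reproducible and version-controllable.
    """
    records = load_data(config["source"])
    model = train(records)
    return {"config": config, "model": model, "steps": ["load_data", "train"]}

result = run_pipeline({"source": "local"})
print(result["model"])  # 2.0
```

Because the steps only see the `config` dict, swapping infrastructure (local files versus a cloud store, say) means changing configuration rather than step code, which is the reduced DevOps coordination the text describes.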

Growing model sizes and data requirements

But collaboration isn’t the only hurdle companies face in adopting AI. Other hurdles stem from machine learning models growing exponentially in size, Boudier said. Large models often do not fit on standard hardware and can be slow and expensive to run. Or they are tied to proprietary APIs and services and dubiously touted as universal problem solvers.

“[Proprietary models hamper] AI adoption because … teams can’t dig into the code and properly evaluate or improve the models, and [this] continues to create confusion about how to approach AI problems pragmatically,” Boudier said. “Applying large amounts of data requires a deep dive from the model graph down to the hardware, which requires skills that most companies don’t have.”

Sean Hughes, ecosystem director at ServiceNow, says companies often expect too much from AI models without doing the work necessary to customize them for their business. That can lead to other problems, including a lack of available data to refine the models for each context in which they will be used. In a 2019 Dun & Bradstreet survey, companies rated a lack of data as the biggest obstacle to further deploying AI in their organizations.

“The hype and sensation that arise when AI research scientists open-source work that yields new state-of-the-art benchmark results can be misinterpreted by the general public as meaning ‘problem solved.’ But the reality is that the state of the art for a specific AI solution may only achieve 78% accuracy in a well-defined and controlled configuration,” Hughes told VentureBeat via email. “[A major challenge is] the business user’s expectation that [an off-the-shelf] model will understand the nuances of the business environment well enough to be useful for decision-making… [Without the required data,] even with the potential for AI to suggest a directionally correct next best action, it can’t, because it doesn’t understand the context of user intent in that enterprise.”

On the same page

Feiyu Xu, SVP and Global Head of AI at SAP, agrees, adding that AI projects are most likely to succeed when there is alignment between business units and AI technology teams. This alignment can lead to “targeted” and “scalable” solutions for delivering AI services, she claims, and address ethical issues that may arise during conception, development or deployment.

“The best use cases of AI-powered applications ensure that the AI technologies are fully embedded and automated for end users. AI systems also work best when experts securely use real business data to train, test and deploy the AI services,” said Xu. “Companies need to define clear guidelines and guardrails to ensure ethical issues are carefully considered from the outset when developing new AI services. In addition, it is important to engage external, independent experts to regularly review cases and topics.”

In terms of data-related challenges in AI implementation, Xu points to the emergence of platform-as-a-service solutions designed to help developers and non-developers alike connect data sources across different backend systems. For example, Torch.AI connects apps, systems, services, and databases to enable reconciliation and processing of both unstructured and structured data for AI applications.

“AI plays a key role in enabling businesses and industries to become intelligent enterprises,” said Xu. “Most AI users have little experience with software development to design, change and improve their own workflows and business applications. This is where an intuitive, no-code development environment for features like intelligent process automation, workflow management, and robotic process automation can really help.”


This post was originally published at https://venturebeat.com/2022/03/12/executives-discuss-top-challenges-in-deploying-ai-and-how-to-solve-them/