Artificial intelligence (AI) is bringing many changes to the enterprise, none more important to success than those in infrastructure. The changing nature of workloads, not just how they are generated and processed but how they serve operational goals, requires changes in how raw data is handled, and this extends down to the physical layer of the data stack.
As VB noted earlier this year, AI is changing the way infrastructure is designed, all the way out to the edge. On a more fundamental level, base hardware is being optimized to support AI workloads, and not just at the processor level. But it will take a coordinated effort, and no small amount of vision, to configure hardware to handle AI properly, and there probably isn't one right way to do it anyway.
Fundamental change for AI infrastructure
In a recent IDC survey of more than 2,000 business leaders, one of the key findings was a growing realization that AI must run on purpose-built infrastructure if it is to add real value to the business model. The lack of proper infrastructure was even cited as one of the main causes of failed AI projects, which continue to hinder development in more than two-thirds of organizations. As with most technology initiatives, however, the main hurdles to a more AI-centric infrastructure are cost, the lack of clear strategies, and the sheer complexity of legacy data environments and infrastructure.
All hardware in the enterprise is interconnected, whether in the data center, the cloud, or at the edge, making it difficult to deploy new platforms and put them to work. But as tech author Tirthajyoti Sarkar points out, there are plenty of ways to extract real value from AI without waiting for the latest generation of optimized chip-level solutions to enter the channel.
For example, advanced GPUs may be the solution of choice for advanced deep learning and natural language processing models, but a number of AI applications, including some quite advanced ones such as game theory and large-scale reinforcement learning, are better suited to the CPU. And since much of the heavy lifting in AI development and deployment is typically done by front-end data conditioning tools, choices about cores, acceleration technologies, and cache can have greater ramifications than the choice of processor type.
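As a loose illustration of that point (a hypothetical sketch with pure-Python stand-ins for real tooling, not a benchmark of any actual pipeline), the CPU-bound data conditioning stage of a workload can rival or exceed the model step itself:

```python
import random
import time

# Hypothetical sketch: in many AI pipelines, front-end data conditioning
# (cleaning, normalizing) runs on the CPU and can dominate total runtime,
# which is why core counts and cache matter as much as the accelerator.
random.seed(0)
rows = [[random.random() for _ in range(20)] for _ in range(50_000)]

t0 = time.perf_counter()
# "Data conditioning": min-max normalize every column in pure Python
cols = list(zip(*rows))
mins = [min(c) for c in cols]
maxs = [max(c) for c in cols]
norm = [[(v - mn) / (mx - mn) for v, mn, mx in zip(r, mins, maxs)]
        for r in rows]
t_condition = time.perf_counter() - t0

t0 = time.perf_counter()
# "Model step": a trivial weighted sum standing in for inference
weights = [0.05] * 20
scores = [sum(w * v for w, v in zip(weights, r)) for r in norm]
t_model = time.perf_counter() - t0

print(f"conditioning: {t_condition:.3f}s, model step: {t_model:.3f}s")
```

The exact ratio depends on the machine, but the structure is typical: the preparation pass touches every value several times before the "model" ever sees it.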
Memory architectures could also play a critical role in future AI platforms, says Jeffrey Burt of The Next Platform. After all, even the fastest chip in the world is of little use if it cannot get at its data, and that requires high capacity and high bandwidth in the memory subsystem. To that end, research is focusing on AI-optimized memory solutions that can be shared by CPUs, GPUs, and even custom ASICs, alongside their own on-chip memory. An important aspect of this development revolves around the open Compute Express Link (CXL) standard, which enables coherence across multiple memory pools.
Application-specific AI
It also seems likely that infrastructure will be optimized not just around AI, but around all the different flavors of AI. For example, as NVIDIA points out, natural language processing (NLP) requires such tremendous computing power to process massive amounts of data that constructs like network fabrics, and the advanced software used to manage them, will be vital. Again, the idea isn't just to give AI more brute force, but to streamline workflows and coordinate the operations of large, highly scaled, and highly distributed data sources to ensure projects can be completed on time and on budget.
All this, of course, will not happen overnight. It has taken decades to bring the data infrastructure to the state it is today, and it will take years to adapt it to the needs of AI. But the incentive to achieve this is strong, and with most enterprise infrastructure being positioned by cloud providers as a core, revenue-generating resource, the need to be at the forefront of this transition is likely to drive all new deployments going forward.
Despite all the changes AI is about to bring to the enterprise, one thing remains the same: AI is all about data, and data lives in the infrastructure. The only way to ensure that AI’s promises can be turned into reality is to create the right physical underpinnings for the intelligence to work its magic.
This post, The success of AI is in the infrastructure, was originally published at https://venturebeat.com/2022/04/18/the-success-of-ai-lies-in-the-infrastructure/