Why most AI implementations fail, and what enterprises can do to beat the odds

Presented by BeyondMinds

In recent years, AI has gained strong market traction. Enterprises across all industries have been examining ways in which AI solutions can be deployed on top of their legacy IT infrastructure to address a host of pain points and business needs: boosting productivity, reducing production defects, streamlining cumbersome processes, identifying fraudulent transactions, optimizing pricing, reducing operational costs, and delivering hyper-customized service.

The reason that AI can power such a rich variety of use cases is that AI in itself is a very broad term. It covers diverse domains such as natural language processing (NLP), computer vision, speech recognition, and time series analysis. Each of these domains can serve as the base for developing AI solutions tailored to a specific use case of one company, utilizing its particular datasets, environment, and desired outputs.

Despite AI’s immense potential to transform any business, this potential often goes unrealized. The somber reality is that most AI projects fail. According to Gartner research, only 15% of AI solutions deployed by 2022 will be successful, let alone deliver positive ROI.

This disparity between the big promise of revolutionizing businesses and the high failure rate in reality should be of interest to any enterprise embarking on the digital transformation journey. These enterprises should ask themselves two key questions:

“Why do most AI projects fail?” and “Is there any methodology that can overcome this failure rate, paving the way to successful deployments of AI in production?”

The answer to the first question starts with data: specifically, the challenge of processing data in a real-life production environment, as opposed to a controlled lab environment. AI solutions are based on feeding algorithms with input data, which is processed into outputs in the form the business case requires, such as data classification, predictions, or anomaly detection.

To be able to produce accurate outputs, AI models are trained with the company’s historical data. A well-trained model should be able to deal with data that is similar to the samples it was trained with. This model may keep running smoothly in a controlled lab environment. However, as soon as it’s fed with data originating from outside the scope of the training data distribution, it will fail miserably. Unfortunately, this is what often happens in real-life production environments.

This is perhaps the core reason most AI projects fail in production: the data used to train the model in sterile lab environments is static and fully controlled, while data in real-life environments tends to be much messier.

Let’s take, for example, a company that deploys a text analysis model at its support center, with the aim of automatically analyzing emails from customers. The model is trained with a massive pool of text written in proper English and reaches high accuracy levels in extracting the key messages of each document. However, as soon as this model is deployed in the actual support center it runs into text riddled with typos, slang, grammar errors, and emojis. Facing this kind of “dirty data,” most AI models are not robust enough to produce meaningful outputs, let alone provide long-term value.
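To make this failure mode concrete, here is a minimal, hypothetical sketch in Python. It uses scikit-learn and an invented toy dataset (neither is mentioned in the article, and neither represents BeyondMinds’ stack) to show how a classifier trained only on clean English becomes unreliable once it meets typo- and emoji-riddled production text:

```python
# A minimal, hypothetical sketch: a classifier trained on clean "lab" text
# loses reliability on messy production text. The toy data and labels are
# invented for illustration; scikit-learn is an assumed library choice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# "Lab" training data: clean, well-formed English.
train_texts = [
    "I would like a refund for my last order",
    "Please cancel my subscription immediately",
    "Thank you, the issue has been resolved",
    "Great support, my problem is fixed",
]
train_labels = ["complaint", "complaint", "praise", "praise"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# "Production" data: typos, slang, and emojis fall outside the training
# distribution, so the learned vocabulary barely matches the incoming tokens.
clean_inputs = ["Please cancel my subscription", "Thank you, the problem is fixed"]
noisy_inputs = ["plz cancl my subscripshun!!! 😡", "tysm, prob fixd 🙌"]

print(model.predict(clean_inputs))  # tokens overlap the training data
print(model.predict(noisy_inputs))  # almost no overlap: the model is effectively guessing
```

The specific library is beside the point; what matters is the gap between the distribution the model was trained on and the data that actually arrives in production.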

Launching an AI solution in production is only half the battle. The second pitfall in AI implementation is keeping the solution on course once deployed. This requires continuous control of data and model versions, optimization of a human-machine feedback loop, ongoing monitoring of the model’s robustness and generalization, and constant noise detection and correlation checks. This ongoing maintenance can be an extremely challenging and expensive part of running AI solutions in production.
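As one concrete illustration of the monitoring piece, the sketch below checks whether a live batch of data has drifted away from the training distribution. The synthetic data, the Kolmogorov-Smirnov test, and the alert threshold are illustrative assumptions, not a prescribed method:

```python
# A minimal drift-monitoring sketch: compare one numeric feature's training
# distribution against a batch observed in production. The synthetic data,
# the KS test, and the threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Stand-ins for a feature as seen at training time and in a later live batch
# where conditions have shifted.
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_batch = rng.normal(loc=0.6, scale=1.3, size=1_000)

statistic, p_value = ks_2samp(training_feature, production_batch)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={statistic:.3f}): flag for review or retraining.")
else:
    print("No significant drift detected in this batch.")
```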

The combined challenge of launching an AI solution in a noisy, dynamic production environment and keeping it on the rails so it continues to deliver accurate predictions is what makes AI implementation profoundly complex. Whether they tackle this technological feat in-house or turn to an external provider, companies struggle with AI implementation.

So, what can companies do to overcome these challenges and beat the 85% failure rate? While the textbook on guaranteed AI implementations has yet to be written, the following four guidelines can help curb the risk factors that typically jeopardize deployments:

1. Customizing the AI solution for each environment

Every AI use case is unique. Even in the same vertical or operational area, each business uses specific data to achieve its own goals, according to a particular business logic. For an AI solution to provide the perfect fit for the data, environment, and business needs, all these specificities need to be translated and built into the solution. Off-the-shelf AI solutions, in contrast, are not customized to the specific needs and constraints of the business and will be less effective at creating accurate outputs and value.

2. Using a robust and scalable platform

The robustness of AI solutions can be measured by their ability to cope with extreme data scenarios: noisy, unlabeled, and constantly changing data. In evaluating AI solutions, enterprises should ensure that the anticipated outcomes will withstand their real-life production environments, rather than relying solely on performance tests under lab conditions.
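One simple way to approximate such a check, sketched below under assumed toy data and an assumed noise model, is to score the same model on a clean held-out set and on a deliberately corrupted copy, then compare the two results:

```python
# A minimal robustness-check sketch: score a model on clean test data and on a
# noise-corrupted copy, then compare. The toy data, noise model, and library
# choice are illustrative assumptions.
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def drop_characters(text: str, rate: float, rng: random.Random) -> str:
    # Crude stand-in for real-world noise such as typos and truncation.
    return "".join(ch for ch in text if rng.random() > rate)

train_texts = ["refund my purchase", "cancel the order now",
               "excellent help today", "the support was wonderful"]
train_labels = ["complaint", "complaint", "praise", "praise"]
test_texts = ["please refund this purchase", "the help today was excellent"]
test_labels = ["complaint", "praise"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(train_texts, train_labels)

rng = random.Random(0)
noisy_test = [drop_characters(t, rate=0.2, rng=rng) for t in test_texts]

clean_accuracy = model.score(test_texts, test_labels)
noisy_accuracy = model.score(noisy_test, test_labels)
# A large gap between the two numbers warns that lab results may not hold up
# in production.
print(f"clean: {clean_accuracy:.2f}  noisy: {noisy_accuracy:.2f}")
```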

3. Staying on course once in production

AI solutions must also be evaluated on their stability over time. Companies should familiarize themselves with the process of retraining AI models in live production environments, constantly fine-tuning them with feedback from human inspectors.
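The sketch below shows, with invented toy data and an assumed confidence threshold, what such a human-machine feedback loop can look like: low-confidence predictions are routed to a human inspector, and the corrected examples are folded back into the training pool before the model is refit. It is a simplification for illustration, not BeyondMinds’ method:

```python
# A minimal human-in-the-loop retraining sketch. The toy data, the 0.8
# confidence threshold, and the hard-coded "human" corrections are
# illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["refund please", "cancel my order", "great service", "very happy"]
train_labels = ["complaint", "complaint", "praise", "praise"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

incoming = ["plz giv refund asap", "luv the support team"]
confidences = model.predict_proba(incoming).max(axis=1)

# Route uncertain predictions to a human inspector; here the inspector's
# labels are hard-coded stand-ins.
for text, confidence in zip(incoming, confidences):
    if confidence < 0.8:
        human_label = "complaint" if "refund" in text else "praise"
        train_texts.append(text)
        train_labels.append(human_label)

# Periodically refit on the expanded, human-verified pool (with data and model
# versions tracked, as noted earlier).
model.fit(train_texts, train_labels)
```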

4. Adding new AI use cases over time

An AI solution is always a means to an end, not a goal in itself. As such, the contribution of AI solutions to the business should be evaluated from the broad perspective of the enterprise’s business needs, goals, and digital strategy. Point solutions may deliver on a specific use case but will create a painstaking patchwork once additional AI solutions are deployed to cover new use cases.

The potential that lies in AI may very well transform businesses and disrupt entire industries by the end of this decade. But as with any disruptive technology, the path to implementing AI is treacherous, with most projects falling by the wayside on the digital transformation journey. This is why it’s crucial for enterprises to learn from past failures, identify the pitfalls in deploying AI into production, and familiarize themselves with the technology to the point where they have a solid understanding of how specific AI solutions can deliver on their expectations.

Sharon Reisner is the CMO of BeyondMinds, an enterprise AI software provider delivering AI solutions and accelerating AI transformations.

