The costs and limitations of implementing AI
This is the third of a NewtonX three-part series on AI for senior executives. Part 1 examined the history of AI and what led to the breakthroughs of the past decade. Part 2 took a look at the two factors that enable machine learning and deep learning, and what each of these terms really means. In Part 3, we use the foundational knowledge established in Parts 1 and 2 to examine the pros and cons of enterprise implementation of AI.
At this point in the development of AI, not every business can profit from it. Not only are there high associated costs, but there are also limitations to the technology that may make it impractical for certain applications. Here, we will outline the top limitations and prohibitive cost factors associated with the implementation of AI.
There are three primary limitations for enterprise applications of AI, each of which affects associated costs. (Note that the following content is taken in part from an article written by NewtonX CEO Germain Chastel for Forbes).
1. Human labor for initial data labeling
Successful automation requires that humans train algorithms through a labor-intensive data labeling process. For instance, in the supervised learning process used to train self-driving cars, autonomous vehicle companies hire hundreds of people to manually annotate hours of video feeds from prototype vehicles. Manual data labeling is also used at Facebook, which employs over 7,000 “content reviewers” who flag content with the goal of giving an algorithm an ever-growing corpus of data to learn from.
Even after the algorithm is developed, the need for human labor doesn’t disappear. For instance, Facebook’s content flagging algorithm is already built, but the need for content reviewers will continue until the algorithm performs as well as or better than humans. There’s a continual push-pull between humans and algorithms that lasts for years before a process becomes truly, fully automated.
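To make the role of human labeling concrete, here is a minimal sketch of supervised learning in the spirit described above. The features, the “pedestrian”/“vehicle” labels, and the nearest-centroid classifier are all invented for illustration; real systems train neural networks on millions of annotated frames, but the principle is the same — the model can only generalize from labels humans supply.

```python
# Illustrative supervised learning: humans label examples, the model learns from them.
# The 2-D features and the "pedestrian"/"vehicle" labels are hypothetical stand-ins
# for the annotated video frames described above.

def train_centroids(labeled_examples):
    """Average the feature vectors for each human-assigned label."""
    sums, counts = {}, {}
    for features, label in labeled_examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Human annotators supply every label below; the algorithm only generalizes from them.
labeled = [
    ([0.9, 0.1], "pedestrian"),
    ([0.8, 0.2], "pedestrian"),
    ([0.1, 0.9], "vehicle"),
    ([0.2, 0.8], "vehicle"),
]
centroids = train_centroids(labeled)
print(predict(centroids, [0.85, 0.15]))  # -> pedestrian
```

The upfront cost lives in the `labeled` list: every entry had to be annotated by a person before the model could learn anything.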
2. The black box effect
The black box effect occurs when algorithms learn and behave in ways that humans cannot understand — a phenomenon that is common in multi-layer neural network models. AlphaGo is a good example of this type of learning: the algorithm was given a goal (to win at the ancient Chinese board game Go) and then figured out through self-play how to accomplish that goal. For many moves, though, expert Go players thought AlphaGo was making bad decisions because they could not understand the rationale behind each individual move. Yet in the end, through this series of apparently suboptimal moves, AlphaGo ended up defeating a world champion. In controlled environments, this obfuscation is unimportant — as long as AlphaGo wins, there’s nothing too terrible it can do.
In uncontrolled environments, though, this effect can be hugely detrimental. Self-driving cars, for instance, cannot operate in a black box. If a car crashes into a group of pedestrians, humans need to be able to understand, step by step, why the car performed that action. This means that training a self-driving car can only happen through supervised learning — which, again, is why there’s such a high upfront human labor cost.
3. Training data size limitations
For AI to be effective, it needs massive data sets to learn from. In some areas, that’s no problem — as we mentioned earlier, image recognition, for instance, is a relatively easy concept to teach an algorithm because of the billions of images available online. In other areas, though, a dearth of training data makes the teaching process much more drawn out, and hence much more expensive.
In certain industries — space travel, for instance — there are so few data points that an algorithm couldn’t learn from them in any meaningful way. For an algorithm to be more precise than a human, it typically needs millions of data points to learn from. Any task that is performed only intermittently will be difficult to teach an algorithm, unless the task is extremely simple, with very clear rules and goals.
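The link between training-set size and model precision can be shown with a toy simulation (not from the article; the decision rule and sample sizes are invented). The model must recover a simple threshold rule from labeled samples, and its estimate tightens as examples accumulate — with only a handful of points, it may not be able to learn the rule at all.

```python
import random

# Toy illustration: learn a decision threshold from n labeled points and measure
# how far the estimate is from the true rule. More data -> a more precise model.

TRUE_THRESHOLD = 0.5  # hypothetical ground-truth rule the model must recover

def estimate_threshold(n, rng):
    """Fit the midpoint between the highest 'low' sample and lowest 'high' sample."""
    xs = [rng.random() for _ in range(n)]
    lows = [x for x in xs if x < TRUE_THRESHOLD]
    highs = [x for x in xs if x >= TRUE_THRESHOLD]
    if not lows or not highs:  # too little data to even observe both classes
        return None
    return (max(lows) + min(highs)) / 2

rng = random.Random(42)
for n in (10, 100, 10000):
    est = estimate_threshold(n, rng)
    if est is None:
        print(n, "too few samples to learn the rule")
    else:
        print(n, round(abs(est - TRUE_THRESHOLD), 4))
```

With tens of thousands of samples, the estimate lands very close to the true threshold; with ten, the error is typically an order of magnitude larger — a miniature version of why data-scarce industries struggle to train useful models.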
By the numbers: What senior execs are investing in (and what they’re getting from their investments)
In 2017, NewtonX conducted a comprehensive survey of senior executives from companies that generated $1B or more in revenue. The survey targeted large, established companies with the intent of gleaning how AI is actually viewed in the enterprise world and what payoffs it is delivering.
The survey results revealed three key findings:
- Most senior level executives are investing in AI in some form
- The majority did not see a payoff in 2017 but expect to by 2019
- The most common forms of AI investment are in predictive analytics and chatbots
Upon receiving the initial survey data, NewtonX conducted a qualitative deep dive to understand precisely how companies are using predictive analytics and chatbots. The results of this second survey revealed that the majority of investment in chatbots is occurring in the customer service space, with notable exceptions in the insurance and banking industries. Predictive analytics is most widely applied in marketing and sales: multiple executives noted that their marketing and/or sales teams use it for scheduling and optimizing messages. Secondary use cases centered largely on predictive maintenance in the product supply chain.
Of the executives investing in AI, only 7% said that AI had provided cost savings in 2017. Qualitative interviews revealed that most companies that invested in AI spent a minimum of three months establishing the necessary infrastructure to support new AI-driven systems. This process involved heavy upfront costs, but executives say they expect it to result in cost savings by 2019.
Key Report Takeaways
- AI refers to the ability of a machine to perform cognitive functions previously performed exclusively by the human mind
- Deep learning refers to a multilayer neural network in which an algorithm is given a vast amount of training data until it can process previously unseen inputs accurately
- Machine learning refers to algorithms that learn from data in order to predict or prescribe outcomes; it is widely used across business applications
- The two developments that enabled modern AI were the explosion of big data and the falling cost of computing power and storage
- Most senior level executives report that they have implemented AI-powered tools
- Of the senior level executives who have implemented AI, the majority expect it to result in increased efficiency and cost savings by 2019