Is Industrial Artificial Intelligence destined for an “AI Winter”?


Few areas in computer science have, over the years, repeatedly created as much interest, promise, and disappointment as the field of artificial intelligence. The manufacturing industry, the latest target application area of "AI", is now pinning much of the hype on AI for predictive maintenance. Will AI deliver this time, or is disappointment inevitable?

In engineering, the development of AI was arguably driven by the need for automated analysis of image data from air reconnaissance (and later satellite) missions at the height of the Cold War in the 1960s. A novel class of algorithms emerged that iteratively adjusted the weights of layered networks of simple processing units so that input data converged towards previously undefined output clusters. For the first time, these algorithms, dubbed "neural networks", had the ability to develop a decision logic on their own, based purely on training input and outside the control of a (human) designer. The results were often spectacular, but occasionally spectacularly wrong: since the learnt concepts could not be inspected, they could not be validated either, leaving such systems "untraceable" – their failures could not be explained.

What does AI winter mean?

AI winter refers to periods in the history of artificial intelligence research when enthusiasm and funding for AI projects significantly waned due to unmet expectations or technological limitations.

In the early days, the computational complexity of these algorithms often exceeded the processing power of contemporary computer hardware, at least outside of classified government use. Applying AI to solve real problems proved difficult; virtually no progress was made for more than a decade – a decade later referred to as the first "AI Winter", presumably in analogy to the "Nuclear Winter" and in keeping with the themes of the time. Engineers were forced to wait for Moore's law (which stipulates that processing power doubles roughly every 18 months – an observation that held through much of the second half of the 20th century) to catch up with the imagination of 1960s mathematicians.

What is the AI winter of the 80s?

The AI winter of the 80s refers to a period during the 1980s when the funding and interest in artificial intelligence research significantly decreased due to unmet expectations and a perception of over-promising without substantial progress.

It finally did, and in the 1980s "expert systems" emerged that revived the concepts of AI and found some notable real-world applications, although the concept of fully autonomous "learning" was often replaced by explicit, human-guided "teaching". This alleviated some of the issues posed by algorithmic untraceability, but also took away much of the luster of "intelligence". Temperatures again fell to winter levels – the second AI Winter.

Which year did the second AI winter start in?

The second AI winter is generally considered to have started in the late 1980s and continued through the early 1990s. It followed the initial AI winter of the 1970s.

Fast-forward another 30 years, and processing power, storage capacity, and the amount of available data have advanced to a level that might pave the way for yet another attempt at applying AI to real-world problems, based on the hypothesis that more than enough data is available within any given domain to feed relatively simple clustering algorithms, running on cheap and plentiful processors, and create something of value.

Industry heavyweights are betting significant resources on the promise of AI and have, without a doubt, demonstrated significant achievements: machines are winning against human contestants in televised knowledge quizzes and the most complex strategy games, and robot vehicles navigate highways with impressive success. It is curious, then, that progress from these showcases to broader adoption appears spotty at best: applying quiz-show knowledge management to help doctors diagnose medical issues appears to have failed, and taking the robot vehicle from the highway to the city high street is fraught with autopilot upsets. The list of failed attempts at AI is longer, and growing faster, than the list of success stories. Is the next "AI Winter" inevitable?

The fear of another winter is so pervasive among the AI research community that many avoid the two-letter acronym altogether, instead using the less loaded term of “machine learning”, or the more general “data science”. Tackling the underlying issues would, of course, be preferable to avoiding the challenges at a purely linguistic level.

Managing Expectations

Confronted with an AI-based project approach, clients typically react in one of two ways. The first is fear (the "HAL 9000" response, in reference to the ill-mannered AI antagonist of Arthur C. Clarke's "2001: A Space Odyssey"): if not of a science-fiction-induced image of evil machines exterminating mankind, then at least of job losses and unemployment as automation replaces machine operators, service technicians, mechanics, and other shop-floor craftsmen. The second is delusion: the belief that there exists a general-purpose machine intelligence that will solve all problems quickly and cheaply – after all, it also won that TV quiz show, right?

Both responses, while equally wrong, are induced by the same misperception: that an Artificial Intelligence and a Human Intelligence share the same type of "Intelligence". Nothing could be further from the truth. Machines fail miserably at tasks that every 5-year-old child can easily master – consider, for example, the game "Jenga". Machine intelligence leaves us in awe because of the vast amounts of information it can retrieve, categorise, and serve, and this works when the problem is contained to a narrow, well-defined domain. It appears clever, but it is little more than information retrieval; there is never an "understanding" of the data, the problem, or the question asked. Moreover, there is no "creative act".

It has been proposed that it might be better to think of "AI" as "Augmented Intelligence": AI as a means to extend the reach, availability, or precision of an existing, human intelligence, much like glasses enhance aging human eyesight. AI assists human experts rather than replacing – or exterminating – them!

Controlling the Application Domain

The absence of any creative ability implies that AI systems have to learn exclusively by example, with mathematical interpolation being the only way to “fill in gaps” between examples. For this to work well, the application domain must be narrow, and the training data must be both plentiful and clean.
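
To make this concrete, here is a minimal sketch (hypothetical data, with a simple polynomial fit standing in for a learned model, and numpy assumed available) of how a model trained on examples fills gaps between them by interpolation, yet fails badly outside the domain it was trained on:

```python
# Minimal sketch: interpolation between training examples works,
# extrapolation beyond the training domain does not.
# Hypothetical data; a polynomial fit stands in for a learned model.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 6.0, 200)            # narrow, well-covered domain
y_train = np.sin(x_train) + rng.normal(0.0, 0.05, 200)

model = np.poly1d(np.polyfit(x_train, y_train, deg=7))

x_inside = np.linspace(0.5, 5.5, 50)            # gaps *between* examples
x_outside = np.linspace(8.0, 12.0, 50)          # *outside* the training domain

err_inside = np.abs(model(x_inside) - np.sin(x_inside)).mean()
err_outside = np.abs(model(x_outside) - np.sin(x_outside)).mean()
print(f"mean error inside the training domain:  {err_inside:.3f}")   # small
print(f"mean error outside the training domain: {err_outside:.3f}")  # enormous
```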

While the amount of data needed to understand the relationships between variables obviously depends on the complexity of those relationships, the cleanliness of the data is often harder to manage. Real-world data sets are full of noise – and most learning algorithms are extremely sensitive to false input in their training sets; many AI algorithms perform well in the lab, only to fail miserably in the real world when subjected to noisy input data.
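
As a hedged illustration (synthetic data; scikit-learn is assumed to be available), the sketch below trains the same simple model twice – once on clean labels, once with a quarter of the training labels corrupted – and compares accuracy on untouched test data:

```python
# Minimal sketch: label noise in the training set degrades test accuracy.
# Synthetic data; scikit-learn assumed available.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # clean, simple ground truth

X_train, y_train = X[:1000], y[:1000]
X_test, y_test = X[1000:], y[1000:]

flip = rng.random(1000) < 0.25                   # corrupt 25% of training labels
y_noisy = np.where(flip, 1 - y_train, y_train)

for labels, name in [(y_train, "clean"), (y_noisy, "noisy")]:
    model = DecisionTreeClassifier(random_state=0).fit(X_train, labels)
    accuracy = (model.predict(X_test) == y_test).mean()
    print(f"{name} training labels -> test accuracy {accuracy:.2f}")
```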

Aside from measurement noise, changing environmental or operating conditions ("operational noise") are another cause of concern and failure: algorithms are forced to adapt their baseline continuously, effectively re-entering the training phase whenever such an operational change occurs. In such cases, overfitting or co-linearity induced by too much data may eventually be as detrimental to algorithm performance as too little data.
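
The effect of such operational noise can be sketched in a few lines (hypothetical sensor values and thresholds): a fixed baseline learned before a setpoint change floods the operator with false alarms, while a continuously re-learned rolling baseline only reacts during the transition:

```python
# Minimal sketch: a setpoint change shifts the sensor baseline, so an
# anomaly detector must re-learn its baseline or raise false alarms.
# Hypothetical sensor values and thresholds.
import numpy as np

rng = np.random.default_rng(2)
normal = rng.normal(50.0, 1.0, 500)              # original operating regime
shifted = rng.normal(55.0, 1.0, 500)             # same machine, new setpoint
stream = np.concatenate([normal, shifted])

mu, sigma = normal.mean(), normal.std()          # baseline learned in training
alarms_static = np.abs(stream - mu) > 3 * sigma  # static baseline floods alarms

window = 100                                     # re-learn over a rolling window
alarms_rolling = 0
for i in range(window, len(stream)):
    w = stream[i - window:i]
    alarms_rolling += abs(stream[i] - w.mean()) > 3 * w.std()

print("static baseline alarms: ", alarms_static.sum())
print("rolling baseline alarms:", alarms_rolling)
```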

The best results are therefore achieved for systems that are narrowly defined, stable, and well understood, based on a clean data set derived from real-world operation. High accuracy is achievable for such systems, but be aware that when a system is composed of several such sub-systems, the individual uncertainties – small as they may be – compound quickly to levels that render the final result useless.
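
A quick back-of-the-envelope calculation shows how fast this compounding bites (assuming, purely for illustration, independent sub-systems that must all be right for the composite result to be right):

```python
# Minimal sketch: per-subsystem accuracy compounds multiplicatively
# when every sub-model must be correct (independence assumed).
for accuracy in (0.99, 0.95, 0.90):
    for n in (5, 10, 20):
        print(f"{n:2d} subsystems at {accuracy:.0%} each "
              f"-> {accuracy ** n:.1%} overall")
```

Even at 99% per sub-system, a composite of twenty such parts is wrong almost one time in five.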

Artists, not their tools, make the Art

Although the defining property of artificial intelligence systems is their ability to learn unknown concepts purely from training input, guidance by human experts greatly reduces development time, the amount of required data, and the danger of untraceable findings, while increasing accuracy. AI algorithms are a tool in the bag of data scientists and human experts, but the latter drive the project, not the tool. Like a chisel, AI algorithms will create art only in the hand of an artist.

AI projects, like any other software project, benefit greatly from an agile, iterative approach based on discussion of the algorithm's findings between the guiding data scientist and a domain expert – a physicist, a design engineer, or maybe the machine repairman. We wrote a previous article about the role of data science in industrial engineering.

It is the skill of those domain experts that the AI system is based on; taking their input throughout the development process is as obvious as it is essential.

Let the heatwave pass

Can another AI winter be avoided? The hype surrounding AI has pushed the industry into a heatwave, and dropping temperatures are not only normal but desirable and ultimately healthy. Reducing over-inflated expectations and focussing on winning the AI war one battle at a time will establish confidence. Simple machine parts – bearings, heating elements, and the like – hold the key to successful projects: predicting their failure is achievable yet yields disproportionate benefits for overall machine operation. Optimizing process variables to reduce energy consumption has a fast, measurable, positive impact on machine yield – and on the operator's sustainability record. Applications such as these are great success stories for a promising and valuable technology and bring great financial benefit to the users who adopt them. Ultimately, these real-world successes will ensure that temperatures drop only to seasonal norms.

FAQs about AI Winter

What is the AI winter season?

The AI winter season refers to the specific periods during which interest, funding, and progress in the field of artificial intelligence experienced a notable decline.

What is the opposite of AI winter?

The opposite of AI winter would be a period of rapid advancement, high enthusiasm, and substantial funding for artificial intelligence research and development.

What happened during the AI winter?

During the AI winter, there was a reduced focus on AI research and development. Many AI projects were discontinued, and funding for new projects became scarce. Researchers shifted their attention to other areas of computer science.

What caused the AI winter?

The AI winter was caused by a combination of factors, including unmet expectations, over-promising, technological limitations, and a lack of clear progress in achieving human-level artificial intelligence.

About The Author

Dr. Stefan G Hild is Head of Data Science at ei3 Corporation. Stefan has been involved with industrial applications of data science and AI for more than a ...