Artificial intelligence has a long and rich history spanning more than seven decades. Interestingly, AI predates even modern computers: the search for intelligent machines was one of the driving forces behind digital computing in the first place. Alan Turing, a pioneer of modern computing, was also an early pioneer of artificial intelligence, developing his ideas in the late 1940s and 1950s. Norbert Wiener, the founder of cybernetics, built some of the first autonomous robots in the 1940s, when transistors did not yet exist, let alone big data or the cloud. Claude Shannon built a mechanical mouse that could solve mazes without any deep learning neural networks. W. Grey Walter famously built autonomous electronic turtles in the late 1940s that could navigate the world around them and find their way back to their charging stations without a single line of Python. All of these achievements came before the term “AI” was even coined at the Dartmouth Conference in 1956, and most of them before digital computing was truly established.
Given all of that, and with today’s incredible computing power, near-unlimited internet data, and cloud computing, we surely should have fulfilled the dreams of those early AI researchers: the autonomous robots and intelligent machines orbiting distant planets envisioned in 2001: A Space Odyssey, Star Wars, Star Trek, and other science fiction of the 1960s and 1970s. Yet our chatbots today are not much smarter than those developed in the 1960s, and our image recognition systems, though serviceable, still can’t recognize the proverbial elephant in the room. Are we really achieving AI, or are we falling into the same traps over and over again? If artificial intelligence has been around for decades, why do we still face so many challenges with its adoption? Why do we keep repeating the mistakes of the past?
Falling into the first trap: AI’s first winter
To understand where we are today with AI, you have to understand how we got here. The first major wave of interest and investment in artificial intelligence ran from the early 1950s through the early 1970s. Much of the early AI research and development grew out of the burgeoning fields of computer science, neuropsychology, brain science, linguistics, and related disciplines. That research was built on major improvements in computing technology, which, along with funding from government, academic, and military sources, produced some of the earliest and most impressive advances in artificial intelligence. But while computing technology continued to mature and advance, AI innovation ground to a near standstill in the mid-1970s. AI’s financiers realized that the intelligent systems they had been expecting and promised had not materialized, and many concluded that AI was a goal that would never be achieved. This period of declining industry interest, funding, and research is known as the first AI winter, so called because of the chill researchers felt from investors, governments, universities, and potential clients.
Artificial intelligence showed so much promise, so what happened? Why don’t we live like the Jetsons? Where is our HAL 9000 or Star Trek computer? AI inspired grand visions of what could be, and people promised we were close to realizing them. But these exaggerated promises fell short. While there were some great ideas and tantalizing demos, fundamental obstacles remained: a lack of computing power, a lack of data, and gaps in research understanding.
Complexity proved to be a persistent problem. The first natural language processing (NLP) systems, and even an early chatbot called ELIZA, were created in the 1960s, during the Cold War. The idea of machines that could understand and translate text was very appealing, especially for intercepting cable communications from Russia. But when Russian text was fed into an NLP application, it came back with incorrect translations; people soon realized that machine translation was far more complicated than literal word-for-word substitution. The ideas weren’t bad, and they weren’t impossible, but building them turned out to be much harder than anyone initially thought, and time, money, and resources shifted to other efforts. Over-promising what AI could do, and then under-delivering on that promise, ushered in the first AI winter.
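The shallowness of that era’s language technology is easy to demonstrate. Below is a minimal ELIZA-style responder sketched in Python (an illustrative reconstruction, not Weizenbaum’s original script, which also included pronoun swapping): it matches keyword patterns with regular expressions and reflects fragments of the input back, with no model of meaning at all.

```python
import re

# Illustrative ELIZA-style rules: a keyword pattern paired with a
# response template that echoes the captured fragment back.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]

def eliza_respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # canned fallback when no keyword matches

print(eliza_respond("I need a break"))
# -> Why do you need a break?
print(eliza_respond("hello there"))
# -> Please go on.
```

Because the program never models meaning, anything outside its keyword list falls through to a canned reply; note it even parrots “my” back instead of saying “your” (the real ELIZA patched over this with pronoun substitution). This is the same brittleness that sank the early translation efforts.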
Trapped Again: The Second AI Winter
Interest in artificial intelligence research was renewed in the mid-1980s with the development of expert systems. Adopted by enterprises, expert systems harnessed the emerging power of affordable desktops and servers to do work previously reserved for expensive mainframe computers. They helped industries automate and streamline decision-making on Main Street and powered trading systems on Wall Street. Soon the idea of the smart computer was on the rise again: if it could be a trusted decision-maker in the enterprise, surely we could bring the intelligent machine back into our lives.
The promise of AI seemed increasingly positive, but in the late 1980s and early 1990s AI was still considered a “dirty word” by many in the industry because of its earlier failures. Even so, the growth of servers and desktop computing once again revived interest in artificial intelligence. In 1997, IBM’s Deep Blue defeated chess grandmaster Garry Kasparov, and to some it seemed that intelligent machines were finally making a comeback. Organizations in the 1990s looked for more ways to adopt artificial intelligence. But expert systems proved very brittle, and organizations were forced to be more realistic about their finances, especially after the dot-com crash at the turn of the millennium. People also realized that Deep Blue was only good at playing chess; its approach did not transfer to other problems. These and other factors led us into the second AI winter. Once again we had over-promised and under-delivered on what AI was capable of, and AI became a dirty word for another decade.
The winter thaws: but are storm clouds gathering?
In the late 2000s and early 2010s, interest in AI flared up again, driven by new research and lots and lots of data. In fact, this latest AI wave could just as well be called the big data/GPU computing wave. Without big data and GPUs, we could not have addressed some of AI’s earlier challenges, particularly with the deep-learning-based approaches that power such a large proportion of the latest wave of AI applications. We now have enormous amounts of data, and we know how to manage it effectively.
This current wave of AI is clearly data driven. In the 1980s, 1990s, and early 2000s, we figured out how to build huge, queryable databases of structured data. But the nature of data began to change, with unstructured data such as emails, images, and audio files quickly coming to make up the majority of the data we create. The main driver of the current wave of AI is our ability to handle massive amounts of this unstructured data.
Once we could do that, neural networks crossed a critical performance threshold, and suddenly everything seemed possible. A huge boom followed: AI could find patterns in this sea of unstructured data, predictive analytics drove recommendation systems, NLP applications like chatbots and virtual assistants took off, and product recommendations became frighteningly accurate. With so much progress so fast, people once again became enamored of the idea that AI can do anything. Another factor that has propelled us to where we are today is venture capital. AI is no longer funded only by governments and large corporations; venture funding has allowed AI-focused startups to thrive. The combination of a lot of money, a lot of promises, a lot of data, and a lot of hype has warmed the AI climate. But are we setting ourselves up for another round of over-promising on AI’s capabilities and under-delivering? The evidence points to yes.
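To make the “learning patterns from data” idea concrete, here is a toy sketch (all training data invented for illustration): a single perceptron, the 1950s-era ancestor of today’s deep networks, learns to separate short text snippets using bag-of-words features instead of hand-coded rules. Modern systems are vastly larger, but the principle of learning weights from examples is the same.

```python
# Invented toy data: (snippet, label) with 1 = positive, 0 = negative.
TRAIN = [
    ("great product loved it", 1),
    ("terrible waste of money", 0),
    ("loved the great service", 1),
    ("terrible product total waste", 0),
]

# Bag-of-words features: one count per vocabulary word.
vocab = sorted({w for text, _ in TRAIN for w in text.split()})

def featurize(text):
    words = text.split()
    return [words.count(w) for w in vocab]

weights = [0.0] * len(vocab)
bias = 0.0

def predict(x):
    score = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > 0 else 0

# Classic perceptron rule: nudge weights toward misclassified examples.
for _ in range(10):
    for text, label in TRAIN:
        x = featurize(text)
        error = label - predict(x)
        if error:
            weights = [w + error * xi for w, xi in zip(weights, x)]
            bias += error

print(predict(featurize("loved this great thing")))  # -> 1
print(predict(featurize("what a terrible waste")))   # -> 0
```

No rule about “loved” or “terrible” was ever written down; the weights were found by the update loop. Scaling this idea up, with deep architectures, GPUs, and oceans of unstructured data, is essentially what powered the current wave.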
The problem of artificial intelligence predictions
The problem of over-promising and under-delivering is not about any one specific promise. It’s a tacit promise: an expectation people carry about what AI can do and when. For example, most of us want fully autonomous Level 5 vehicles, and the promise of that application is enormous. Companies like Tesla, Waymo, and others sell their systems on those promises and on users’ dreams. But the truth is, we are still far from fully self-driving vehicles. Robots still fall down escalators. Chatbots are smarter, but still fairly dumb. We don’t have artificial general intelligence. Thinking big is not itself the problem; the problem is that modest innovations get hyped as game-changing disruptions, and before you know it we have made exaggerated promises again and under-delivered. And when that happens, you know what inevitably comes next: an AI winter.
Organizations today are trying hard to deliver on their promises, and one way to manage that is to scale the promises back. Don’t promise AI systems that can diagnose patients from medical image data when those systems are clearly not up to the task. Don’t promise fully autonomous vehicles when collision avoidance is the more achievable goal. Don’t promise amazingly smart chatbots that fail at basic interactions. The key to managing expectations is to set expectations that provide a good return on investment, and then meet them, without exaggerated promises of world-changing or conscious machines.
The reason more sober, agile, and iterative AI methodologies such as the Cognitive Project Management for AI (CPMAI) methodology are being adopted is that organizations have a lot on the line with their AI projects. Failure this time may not be an option for organizations that have invested so much time and effort. We are in real danger of repeating the same mistakes and getting the same results. For AI to become a reality, we need to be more realistic about what we can do incrementally to achieve those visions. Otherwise, we will return to the same old story of over-promising and under-delivering on AI.