Artificial Intelligence: The Evolution of the Thinking Machine

How often do you stumble on a new word while reading? If you are like me, you won’t look it up in a dictionary. You’ll carry on reading as your mind intuitively figures out the meaning. We have to hand it to our brains. Over the course of millennia, the human brain has become adept at many things, including language and object recognition. So even if we don’t know exactly what we are looking at, our brains fill in the gaps by providing a rough interpretation.

What if we could create intelligent machines that could do the same—learn by associating things, just like the human brain? It is a question that has puzzled and driven curious minds for generations. This post takes a look at the evolution of Artificial Intelligence (AI), the making of the thinking machine.

The Beginning of Artificial Intelligence

It all began with simple counting devices such as the abacus, followed by mechanical calculators and counting machines, built out of curiosity as much as necessity. People were ecstatic about the capabilities of these machines and welcomed them with open arms. In executing tasks such as multiplying or subtracting numbers with a hundred digits, these machines surpassed human ability. However, they failed miserably at several others, such as figuring out what something is from its image, a task that comes naturally to the human brain.

The idea that machines could think was popularized by Alan Turing, the famous British codebreaker and mathematician. He proposed the Turing Test, which gauges a machine’s ability to exhibit human-like intelligence. Thus, in the mid-1950s, the term Artificial Intelligence entered the public imagination. Around this period came the novel idea of modeling computational units on the neurons of the human brain. The McCulloch-Pitts model of the neuron and, later, the Perceptron emerged. The Perceptron could learn simple logical operations such as AND/OR through trial and error.
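To make the trial-and-error idea concrete, here is a minimal sketch of a perceptron learning the logical AND function. The learning rate and epoch count are arbitrary choices for illustration:

```python
# Minimal perceptron learning the logical AND function.
# Learning is trial and error: when a prediction is wrong,
# nudge the weights toward the correct output.

def train_perceptron(samples, lr=0.1, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - pred
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
               for (x1, x2), _ in and_samples]
print(predictions)  # the AND truth table: [0, 0, 0, 1]
```

AND is linearly separable, so a single perceptron suffices; as Minsky and Papert later showed, functions like XOR are beyond it.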

Next in line was Marvin Minsky, the co-founder of the AI laboratory at the Massachusetts Institute of Technology. In the late 60s, Minsky and Papert wrote a book called Perceptrons, which pointed out key limitations of these nascent neural networks, notably that a single-layer perceptron cannot learn functions such as XOR. The book was later blamed for directing research away from neural networks for many years.

Dark times followed. There were multiple “AI winters,” when research in Artificial Intelligence came to a standstill. However, not all was lost, as visionaries such as Geoffrey Hinton carried the torch. He helped develop and popularize key algorithms such as backpropagation, which formed the basis for the deep neural networks of the future.
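As a rough illustration of what backpropagation does, the sketch below applies the chain rule to a toy network with just two weights (all values are made up) and checks the resulting gradient against a numerical estimate:

```python
import math

# Toy backpropagation: a 2-layer network with one weight per layer
# and sigmoid activations. We compute the gradient of the squared
# error via the chain rule and verify it with a finite difference.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w1, w2, x, target):
    h = sigmoid(w1 * x)               # hidden activation
    y = sigmoid(w2 * h)               # output activation
    loss = 0.5 * (y - target) ** 2
    return h, y, loss

x, target = 1.0, 0.0
w1, w2 = 0.5, -0.3
h, y, loss = forward(w1, w2, x, target)

# Backward pass (chain rule):
dy = (y - target) * y * (1 - y)       # error at the output unit
grad_w2 = dy * h                      # dL/dw2
dh = dy * w2 * h * (1 - h)            # error propagated to hidden unit
grad_w1 = dh * x                      # dL/dw1

# Numerical check of grad_w1 by central difference:
eps = 1e-6
_, _, loss_plus = forward(w1 + eps, w2, x, target)
_, _, loss_minus = forward(w1 - eps, w2, x, target)
numeric = (loss_plus - loss_minus) / (2 * eps)
print(abs(grad_w1 - numeric) < 1e-8)  # True
```

Propagating the error backward layer by layer like this is what lets gradient descent train networks with many layers.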

Artificial Intelligence Now

Come the 21st century, the ice thawed. In hindsight, the shift away from neural networks looks like a mistake. Deep learning nets, a kind of neural network, have proven incredibly useful for all sorts of tasks.

Most of the data generated today is unstructured and chaotic. Traditional machine learning systems based on classical models can’t do much with this kind of data without tedious preprocessing. Deep Learning provides a way to leverage this high-volume, unstructured data. With this technique, we enable machines to learn on their own by working through the data.

With the recent growth in computational capabilities, the Deep Learning approach has moved on from being a theoretical possibility to a practical reality. Publicly available curated datasets and Graphics Processing Unit (GPU) resources for neural network computations are accelerating research in Artificial Intelligence. Also, machine learning frameworks like TensorFlow and Torch have made implementing and training neural networks much easier. With these factors in its favor, AI is set to radically change the course of human history.

Various promising developments in AI are already there for us to see:

  • Intelligent Chatbots and Voice Assistants: These can respond to your queries while keeping in mind your personal preferences and the context of the conversation.
  • Language Translation: Translation is done by converting sentences into an intermediate representation and then into the target language. This enables more natural translation.
  • Recommendation Systems: AI is used to identify what works for you based on your previous preferences. This helps in surfacing content that you are most likely to be interested in.
  • AI Game Bots: Not since Deep Blue beat Garry Kasparov has there been so much excitement about game bots. AI game bots prove that machines are now capable of surpassing human experts.
  • Text Summarization and Analysis: Sentiment analysis, text classification, and summarization can be built on the Word2vec model and Long Short-Term Memory (LSTM) networks. Word2vec encodes the concepts represented by words as vectors in a high-dimensional space, while LSTM networks can glean long-term dependencies between concepts in a text.
  • Power Consumption Optimization: The cooling systems of racks in data centers can be optimized based on historical cooling data, leading to more efficient operations.
  • Image Captioning Systems: We now have systems that can caption images and, conversely, systems that can generate images from captions.
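To give a flavor of the Word2vec idea mentioned above, the toy sketch below compares hand-made word vectors by cosine similarity. The 4-dimensional vectors are invented for illustration; real embeddings are learned from large corpora and have hundreds of dimensions:

```python
import math

# Toy word embeddings: related words end up close together in the
# vector space, which we can measure with cosine similarity.
# These vectors are made up purely for illustration.

embeddings = {
    "king":  [0.8, 0.6, 0.1, 0.0],
    "queen": [0.7, 0.7, 0.1, 0.1],
    "apple": [0.0, 0.1, 0.9, 0.8],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

sim_royalty = cosine(embeddings["king"], embeddings["queen"])
sim_fruit = cosine(embeddings["king"], embeddings["apple"])
print(sim_royalty > sim_fruit)  # True: "king" is closer to "queen" than to "apple"
```

Because concepts live in a continuous space like this, downstream tasks such as sentiment analysis and summarization can reason about word meaning rather than raw spelling.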

The Future of Artificial Intelligence

The human brain is incredibly good at generalizing: what we learn in one domain can be applied in another. Such is not the case with today’s deep neural networks; they can only be used in the domain in which they were trained. A branch of machine intelligence called Reinforcement Learning strives to address this shortcoming: the machine repeats a task again and again, receiving feedback from the environment, and gradually learns to perform it. However, we are still a long way from artificial general intelligence.
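As a concrete illustration of learning through repeated trials and environment feedback, here is a minimal tabular Q-learning sketch. The corridor environment, rewards, and hyperparameters are all invented for illustration:

```python
import random

# Minimal tabular Q-learning: an agent on a 1-D corridor of 5 cells
# learns, by repeating the task with reward feedback, to walk right
# toward the goal in the last cell.

N_STATES = 5          # cells 0..4; reward of 1 for reaching cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(200):                      # repeat the task many times
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit, occasionally explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update from the environment's feedback.
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (reward + GAMMA * best_next - q[(s, a)])
        s = s_next

# The greedy policy after training: move right (+1) in every cell.
policy = [max(ACTIONS, key=lambda act: q[(s, act)])
          for s in range(N_STATES - 1)]
print(policy)
```

The agent is never told the rule "go right"; it discovers it purely from repeated trials and the reward signal, which is the essence of reinforcement learning.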

A glimpse of this future is visible in Universe, a software platform developed by OpenAI. Universe provides a general platform for creating intelligent neural network-controlled agents (bots) that can operate computer games, websites, and other applications. In the future, by combining such agents, we could carry out complex tasks. For example, an agent could take your request for a flight booking, check out possibilities on various websites, and intelligently decide on the best travel plan based on your inputs and travel history. Thus, in the not-so-distant future, we could have assistants that perform tasks on the web on our behalf, and even beyond.

Here are some of the AI-enabled developments we can look forward to in the near future:

  • Fully Automated Cars: Within a decade, we could have cars that drive better than any human, reducing the risk of accidents.
  • Real-Time Translation Systems: These will translate without losing the essence of the communication, unlike the current crop of real-time translation systems.
  • Deep Neural Net Architectures: Architectures such as Generative Adversarial Networks, LSTMs, and Convolutional Neural Networks will work together to provide a form of generalized intelligence, which could, for instance, generate art that is aesthetically pleasing to humans.
  • Internet of Things (IoT): By leveraging the huge amount of data captured by IoT devices, neural networks can be trained to mine granular insights and provide accurate forecasts.
  • Hardware for Machine Learning: Dedicated hardware that trains neural networks faster than general-purpose processors will emerge.
  • Convergence of Symbolic AI and Neural Nets: With an interface between the two, we could get neural nets to memorize certain things and refer to them later, allowing traditional algorithms to work in conjunction with neural networks.
  • Medical Diagnosis: Neural networks trained on large collections of medical cases could make accurate diagnosis based on patient history commonplace.

Learn about the machine learning services and business solutions provided by the Artificial Intelligence team at QBurst.