
01 Apr 2022
Artificial Intelligence (AI) is a multidisciplinary field of science that aims to create intelligent machines, that is, machines that emulate and then exceed the full range of human cognition.
We have been using AI-based technology for a long time. While some inventions became popular and widely used, many failed to gain market acceptance due to challenges such as scaling, safety, accuracy and ease of maintenance. As a result, a cyclic pattern of highs and lows in AI research investment, commonly referred to as AI summers and AI winters, has recurred since the field’s inception in 1956. Through it all, however, AI kept improving incrementally, steadily pushing forward the frontier of machine intelligence.

The Beginning
The term ‘artificial intelligence’ first appeared in an overly optimistic project proposal by John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon. In August 1955, they wrote:
‘We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed based on a conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.’
At a time when computers could do little more than execute basic instructions and had very limited storage, this was indeed an audacious attempt, and it unsurprisingly fell short of its goals. Yet the summer project at Dartmouth College in 1956 marked the birth of AI as a new field of study.
In the early stages, computers performed mathematical computations using algorithms and solved simple equations to find the unknown from the known. They were also used to retrieve items from large collections of data, where the search was typically based on a definite key and the data was organised in a relatively structured form.
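For illustration only, the small Python sketch below retrieves a record from a structured collection using a definite key; the records and key names are hypothetical:

```python
# Hypothetical structured collection, indexed by a definite key.
employee_records = {
    "E-1001": {"name": "Ada", "department": "Research"},
    "E-1002": {"name": "Grace", "department": "Engineering"},
}

def lookup(records, key):
    """Return the record stored under `key`, or None if it is absent."""
    return records.get(key)

# Exact-key retrieval: no learning or inference is involved.
print(lookup(employee_records, "E-1001"))
```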
Classical AI
Classical AI was dominated by knowledge-based reasoning. Knowledge of different domains was represented in some standard form, and inference algorithms were then used to iteratively invoke that knowledge and arrive at a solution or decision. Expert systems of the 1970s and 1980s are the best examples of this kind of AI. However, they were largely confined to narrow domains such as chemistry and medicine, in which human experts designed and curated the knowledge bases. In a way, expert systems became synonymous with AI, as they were designed to replicate a human expert’s decision-making ability.
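To make this concrete, here is a minimal forward-chaining sketch in Python; the facts and rules are invented for illustration and are far simpler than those of any real expert system:

```python
# Minimal forward-chaining inference over hand-crafted rules (illustrative only).
facts = {"fever", "cough"}

# Each rule: if all premises hold, the conclusion is added to the fact base.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # invoke the rule, derive new knowledge
            changed = True

print(facts)  # {'fever', 'cough', 'flu_suspected', 'recommend_rest'}
```

The pattern is the same as in a full-scale expert system: human-authored rules are applied repeatedly until no new conclusions can be derived.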
Expert systems relied primarily on a hand-crafted knowledge base and a set of rules created by humans. Their success can be attributed to computational power, speed and memory rather than to cognitive intelligence. This also explains the early advancements in robotics, from self-driving cars to self-landing rockets. However, a system that functions only on a curated knowledge base or human input cannot scale. Hence, expert systems became constrained, inflexible and expensive to maintain. Besides, many real-world challenges are too complex or subtle to be solved by simplistic logical reasoning that follows a set of rules written by human experts.
Today, knowledge-based reasoning goes by the nickname of Classical AI or Good Old-Fashioned AI (GOFAI) and is sometimes used as a supplementary technique in deep learning (DL)-based AI ecosystems.
Deep Learning Revolution
Although the field of AI has been actively pursued as an academic discipline for nearly seven decades, only recently have several forces come together to make it practical and pervasive. Prominent forces driving the rapid advances in AI technology are:
- Internet and Internet-of-Things: Enormous amounts of data available in digital form
- Computing performance: Faster computers, more storage and cheaper devices and sensors
- Deep learning (DL): Conceptual advances in machine learning (ML) techniques and neural networks
- Commercial interest: Rapidly increasing investment in industrial and academic research in AI
The underlying idea is to learn from experiences and observations. The strategy is to use statistical techniques to construct a predictive model from experiential data; the model is then used to predict responses on unseen data. Indeed, it is ML that has allowed AI to scale beyond anyone’s expectations and pervade our daily lives in recent times. More specifically, neural networks as predictive models have been found to work exceptionally well in domains such as image recognition, speech recognition, language translation and game playing. They form the basis of a class of methods called deep learning.
Today, applications of AI are largely based on supervised learning, wherein large amounts of labelled data are used to train models such as neural networks.
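As a minimal supervised-learning sketch, the Python snippet below fits a small neural network to labelled (if synthetic) data and then predicts on data it has not seen. It assumes scikit-learn is installed; the dataset and hyperparameters are chosen purely for illustration:

```python
# Supervised learning in miniature: fit a small neural network on labelled data,
# then evaluate it on unseen data (illustrative sketch; assumes scikit-learn).
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)   # synthetic labelled data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)                 # learn a predictive model from experience
print("accuracy on unseen data:", model.score(X_test, y_test))
```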
Way Ahead
The next generation of AI is expected to deal with more practical situations in which there is no pre-existing data to learn from. Instead, intelligent agents must teach themselves through trial and error, making decisions with an eye on long-term payoffs. Thus, next-generation AI, which is still not fully realized in practice, would have more autonomy and sophistication in decision-making.
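This trial-and-error setting is commonly formalised as reinforcement learning. The toy tabular Q-learning sketch below, written in Python, learns to walk a five-state corridor towards a goal; the environment, reward and parameters are invented for illustration and do not describe any particular system:

```python
import random

# Toy corridor: states 0..4; the only reward is at the right end (state 4).
N_STATES = 5
ACTIONS = [-1, +1]                        # step left or right
alpha, gamma, epsilon = 0.1, 0.9, 0.1     # learning rate, discount factor, exploration rate
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise exploit the current value estimates.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0   # long-term payoff at the goal
        # Q-learning update: improve the estimate from the observed transition.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy action from the start should point towards the goal (+1).
print(max(ACTIONS, key=lambda act: Q[(0, act)]))
```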
The scope of AI is not complete without robotics and autonomous systems, characterised by the physical embodiment of intelligence in the real world. One may view embodiment as an independent facet of AI. Nevertheless, it is the emphasis on embodiment that closes the loop with the real world through sensors and actuators, putting AI in control of physical systems.
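Schematically, that closed loop is a repeated sense-decide-act cycle. The Python sketch below is purely illustrative: the sensor, policy and actuator functions are hypothetical placeholders rather than real hardware interfaces:

```python
import time

def read_sensor():
    """Hypothetical stand-in for reading a real sensor (e.g. a range finder)."""
    return 0.42

def decide(observation):
    """Hypothetical policy mapping an observation to an action."""
    return "forward" if observation < 0.5 else "stop"

def actuate(action):
    """Hypothetical stand-in for commanding a real actuator."""
    print("actuating:", action)

# The embodied closed loop: sense, decide, act, repeat.
for _ in range(3):
    actuate(decide(read_sensor()))
    time.sleep(0.1)
```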
Read the full report, ‘Understanding the Dynamics of Artificial Intelligence in Intellectual Property’.