


International Journal of Artificial Intelligence and Machine Learning
Volume 10 • Issue 2 • July-December 2020

Robotics and Artificial Intelligence


Estifanos Tilahun Mihret, Mettu University, Ethiopia

ABSTRACT

Artificial intelligence and robotics are recent technologies that bring both capability and risk to our world. They are developing their capacity dramatically and shifting away from their original purposes toward other dimensions. Looking back at the histories of AI and robotics, human beings can examine and understand their original objectives and intentions: to make life easier and to assist humans in different circumstances and situations. However, at present and in the near future, as the attitudes of robotics and AI inventors and experts change, and given the inherent capacity of AI for environmental acquisition and adaptation, these systems may become predators and put living creatures at risk. They may also inherit the full nature of living creatures. Thus, they may finally create their own new universe, or the destiny of our universe will be put in danger.

Keywords
AI, Destiny of Universe, Intelligence, Robotics

1. INTRODUCTION

Artificial intelligence describes the work processes of machines that would require intelligence if performed by humans (Wisskirchen et al., 2017). The term ‘artificial intelligence’ thus means ‘investigating intelligent problem-solving behavior and creating intelligent computer systems’.
There are two kinds of artificial intelligence:

• Weak Artificial Intelligence: The computer is merely an instrument for investigating cognitive processes; the computer simulates intelligence.
• Strong Artificial Intelligence: The processes in the computer are intellectual, self-learning processes. Computers can ‘understand’ by means of the right software/programming and are able to optimize their own behavior on the basis of their former behavior and their experience. This includes automatic networking with other machines, which leads to a dramatic scaling effect.

According to the Robot Institute of America (1979), a robot is: “A reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through various programmed motions for the performance of a variety of tasks” (Bansal et al., 2017). A more inspiring definition can be found in Webster, according to which a robot is: “An automatic device that performs functions normally ascribed to humans or a machine in the form of a human.” A robot can also be defined as a programmable, self-controlled device consisting of electronic, electrical, or mechanical units. More generally, it is a machine that functions in place of a living agent. Robots are especially desirable for certain work functions because, unlike humans, they never get tired; they can endure physical conditions that are uncomfortable or even dangerous; they can operate in airless conditions; they do not get bored by repetition; and they cannot be distracted from the task at hand.


Robotics can be defined as the field of knowledge and techniques used to create robots. It is a branch of engineering that involves the conception, design, manufacture, and operation of robots, and it overlaps with electronics, computer science, artificial intelligence, nanotechnology, and bio-engineering. Designed to carry out various tasks in place of humans – for example, on a factory assembly line, or on a mission to Mars or another dangerous place – robots are more than simple computers: they must be able to sense and react to changes in their environment.
Robotic intelligence can be used efficiently in a wide range of industrial applications, achieved through the automation of robotic tasks and expertise in handling demanding requirements in various arenas, leading to cost-effective and secure operational processes through:

• Reliable advancement of equipment functioning and its control, in order to trigger varying applications of automation, to strengthen the reuse of equipment and thereby to increase its competence on demand.
• Lean and controlled manufacturing layouts to curtail transportation and to efficiently combine physical and computerized work-cells.
• IT-enabled manufacturing apparatus for simultaneous artifact and fabrication work in development and design, and in the programming and servicing of the tools.
• Robotic testing of electronic machinery (computer vision, electronic test equipment) to achieve 100% quality.
• Advanced industrial processes such as gluing, coating, joining and wiring, which are key tools for robot traversal and control and at the same time suitable for mass production with robot guidance and control. Here, laser-based processes will play an increasing role in joining, coating, cutting, and finishing.

The paper is organized into eleven sections, followed by conclusion, recommendations, acknowledgement and references. Section 2 describes the history of robotics and AI in detail. Section 3 gives a detailed explanation of the current state of robotics and AI. Sections 4 and 5 discuss the seasons of robotics and AI, and AI technologies and disciplines, respectively. Section 6 gives a detailed explanation of the limitations of AI and robotics. Sections 7 and 8 discuss weak and strong AI and robotics, and the impact of government on AI and robotics, respectively. Finally, Sections 9, 10 and 11 deal with major technological firms in AI and robotics, programming languages for AI and robotics, and the risks and fears of AI and robotics, respectively.

2. HISTORY OF ROBOTICS AND AI

The birth of the computer took place when the first calculating machines were developed, from the mechanical calculator of Babbage to the electro-mechanical calculator of Torres-Quevedo (Perez et al., 2017). The dawn of automata theory can be traced back to World War II and what were known as the “codebreakers”. The number of operations required to decode the German trigrams of the Enigma machine, without knowing the rotors’ positions, proved too challenging to be solved manually. The inclusion of automata theory in computing conceived the first logical machines able to account for operations such as generating, codifying, storing and using information. Indeed, these four tasks are the basic operations of information processing performed by humans. The pioneering work of Ramón y Cajal marked the birth of neuroscience, although many neurological structures and stimulus responses were already known and studied before him. For the first time in history, the concept of the “neuron” was proposed. McCulloch and Pitts further developed a connection between automata theory and neuroscience, proposing the first artificial neuron which, years later, gave rise to the first computational intelligence algorithm, namely “the perceptron”. This idea generated great
interest among prominent scientists of the time, such as Von Neumann, who was the pioneer of
modern computers and set the foundation for the connectionism movement.
The Dartmouth Conference of 1956 was organized by Marvin Minsky, John McCarthy and two
senior scientists, Claude Shannon and Nathan Rochester of IBM. At this conference, the expression
“Artificial Intelligence” was first coined as the title of the field (Perez et al., 2017). The Dartmouth
conference triggered a new era of discovery and unrestrained conquest of new knowledge. The
computer programs developed at the time are considered by most as simply “extraordinary”:
computers solved algebraic problems, proved theorems in geometry and learned to speak English. At
that time, many did not believe that such “intelligent” behavior was possible in machines. Researchers
displayed a great deal of optimism both in private and in scientific publications. They predicted that
a completely intelligent machine would be built within the next 20 years. Government agencies, such as
the US Defense Advanced Research Projects Agency (DARPA), were investing heavily in this new area. It
is worth mentioning that some of the aforementioned scientists, as well as major laboratories of the
time, such as Los Alamos (New Mexico, USA), had strong connections with the army, and this link
had a prominent influence on AI innovation, as did the work at Bletchley Park (Milton Keynes, UK)
over the course of WWII, and political conflicts like the Cold War.
In 1971, DARPA funded a consortium of leading laboratories in the field of speech recognition.
The project had the ambitious goal of creating a fully functional speech recognition system with a
large vocabulary. In the middle of the 1970s, the field of AI endured fierce criticism and budgetary
restrictions, as AI research development did not match the overwhelming expectations of researchers.
When promised results did not materialize, investment in AI eroded. Following disappointing results,
DARPA withdrew funding in speech recognition and this, coupled with other events such as the
failure of machine translation, the abandonment of connectionism and the Lighthill report, marked
the first winter of AI (Lighthill, 1973). During this period, connectionism stagnated for the next 10
years following a devastating critique by Marvin Minsky on perceptrons (Minsky & Papert, 1969).
From 1980 until 1987, AI programs called “expert systems” were adopted by companies, and
knowledge acquisition became the central focus of AI research. At the same time, the Japanese
government launched a massive funding program on AI, with its fifth-generation computers initiative.
Connectionism was also revived by the work of John Hopfield (1982) and David Rumelhart et al.
(1985). AI researchers who had experienced the first backlash in 1974 were sceptical about the
reignited enthusiasm for expert systems and, sadly, their fears were well founded. The first sign of a
changing tide was the collapse of the AI computer hardware market in 1987. Apple and IBM
desktops had gradually improved in speed and power, and by 1987 they were more powerful than
the best LISP machines on the market. Overnight however, the industry collapsed and billions of
dollars were lost. The difficulty of updating and reprogramming the expert systems, in addition to the
high maintenance costs, led to the second AI winter. Investment in AI dropped and DARPA stopped
its strategic computing initiative, claiming AI was no longer the “latest mode.” Japan also stopped
funding its fifth-generation computer program as the proposed goals were not achieved. In the 1990s,
the new concept of the “intelligent agent” emerged (Wooldridge & Jennings, 2009). An agent is a system
that perceives its environment and undertakes actions that maximize its chances of being successful.
The concept of agents conveys, for the first time, the idea of intelligent units working collaboratively
with a common objective. This new paradigm was intended to mimic how humans work collectively
in groups, organizations and/or societies. Intelligent agents proved to be a more polyvalent concept of
intelligence. In the late 1990s, fields such as statistical learning from several perspectives including
probabilistic, frequentist and possibilistic (fuzzy logic) approaches, were linked to AI to deal with
the uncertainty of decisions. This brought a new wave of successful applications for AI, beyond
what expert systems had achieved during the 1980s. These new ways of reasoning were more suited
to cope with the uncertainty of intelligent agent states and perceptions and had their major impact in
the field of control. During this time, high-speed trains controlled by fuzzy logic were developed
(Zadeh, 2015), as were many other industrial applications (e.g. factory valves, gas and petrol tank
surveillance, automatic gear transmission systems and reactor control in power plants) as well as
household appliances with advanced levels of intelligence (e.g. air-conditioners, heating systems,
cookers and vacuum cleaners). These were different from the expert systems of the 1980s: the modelling
of the inference system for the task, achieved through learning, gave rise to the field of Machine
Learning. Nevertheless, although machine reasoning exhibited good performance, there was still an
engineering requirement to distill the input space into a new representation, so that intelligence could reason
more effectively. Since 2000, a third renaissance of the connectionism paradigm arrived with the
dawn of Big Data, propelled by the rapid adoption of the Internet and mobile communication. Neural
networks were once more considered, particularly in the role they played in enhancing perceptual
intelligence and eliminating the necessity of feature engineering. Great advances were also made in
computer vision, improving visual perception, increasing the capabilities of intelligent agents and
robots in performing more complex tasks, combined with visual pattern recognition. All these paved
the way to new AI challenges such as, speech recognition, natural language processing, and self-
driving cars. A timeline of key highlights in the history of AI is shown in Figure 1.

Figure 1. A timeline highlighting some of the most relevant events of AI since 1950. The blue boxes represent events that have
had a positive impact on the development of AI. In contrast, those with a negative impact are shown in red and reflect the low
points in the evolution of the field, i.e. the so-called “winters” of AI (Perez et al., 2017).

3. CURRENT STATE OF THE ART OF ROBOTICS AND AI

Building on the advances made in mechatronics, electrical engineering and computing, robotics is
developing increasingly sophisticated sensorimotor functions that give machines the ability to adapt
to their ever-changing environment. Until recently, the system of industrial production was organized
around the machine: it was calibrated according to its environment and tolerated only minimal variations.
Today, a robot can be integrated more easily into an existing environment. The autonomy of a robot in
an environment can be subdivided into perceiving, planning and execution (manipulating, navigating,
collaborating). The main idea of converging AI and robotics is to optimize this level of autonomy
through learning. The level of intelligence can be measured as the capacity to predict the future,
either in planning a task or in interacting (by manipulating or navigating) with the world.
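To make the perceive-plan-execute cycle concrete, the following minimal sketch (our own illustration, not code from the article) implements the loop for a toy robot on a one-dimensional track; the state, goal and helper names are all hypothetical:

```python
# Minimal sense-plan-act loop for an autonomous agent (illustrative sketch).
# The robot lives on a 1-D track and must reach a goal cell: "perceiving"
# reads its position, "planning" picks a direction, "execution" moves one step.

def perceive(state):
    """Sense the environment: here, just the current cell index."""
    return state["position"]

def plan(position, goal):
    """Plan: choose the action that reduces the distance to the goal."""
    if position < goal:
        return +1   # move right
    if position > goal:
        return -1   # move left
    return 0        # already at the goal

def execute(state, action):
    """Execute the chosen action by updating the world state."""
    state["position"] += action

def run(goal=5):
    state = {"position": 0}
    while True:
        position = perceive(state)
        action = plan(position, goal)
        if action == 0:
            break
        execute(state, action)
        print(f"moved to cell {state['position']}")

if __name__ == "__main__":
    run()
```

Real systems replace each stage with far richer components (sensor fusion, motion planners, low-level controllers), but the control flow is the same.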
Robots with intelligence have been attempted many times. Although creating a system exhibiting
human-like intelligence remains elusive, robots that can perform specialized autonomous tasks, such
as driving a vehicle (Rogers, 2015), flying in natural and man-made environments (Floreano & Wood,
2015), swimming (Chen et al., 2014), carrying boxes and material in different terrains (Ohmura &
Kuniyoshi, 2007), picking up objects (Kappasson et al., 2015) and putting them down (Arisumi et al., 2010),
do exist today.
Another important application of AI in robotics is for the task of perception. Robots can sense the
environment by means of integrated sensors or computer vision. In the last decade, computer systems
have improved the quality of both sensing and vision. Perception is not only important for planning but
also for creating an artificial sense of self-awareness in the robot. This permits supporting the robot’s
interactions with other entities in the same environment. This discipline is known as social robotics.
It covers two broad domains: human-robot interaction (HRI) and cognitive robotics. The vision
of HRI is to improve the robotic perception of humans, such as in understanding activities (Asada,
2015), emotions (Zhang et al., 2013), non-verbal communications (Mavridis, 2015) and in being able
to navigate an environment along with humans (Kruse et al., 2013). The field of cognitive robotics
focuses on providing robots with the autonomous capacity of learning and acquiring knowledge from
sophisticated levels of perception based on imitation and experience. It aims at mimicking the human
cognitive system, which regulates the process of acquiring knowledge and understanding, through
experience and sensorisation (Mochizuki et al., 2013). In cognitive robotics, there are also models
that incorporate motivation and curiosity to improve the quality and speed of knowledge acquisition
through learning (Oudeyer, 2014; Chan et al., 2015).
AI has continued beating all records and overcoming many challenges that were unthinkable less
than a decade ago. The combination of these advances will continue to reshape our understanding
about robotic intelligence in many new domains. Figure 2 provides a timeline of the milestones in
robotics and AI.

Figure 2. A timeline of robotics and AI (Perez et al., 2017)
Moreover, contemporary AI and robotics have been dramatically expanding their applications
in different disciplines. For instance:
In March 2019 (Meet the World’s First Female AI News Anchor, 2019), the Chinese government-controlled
Xinhua News Agency announced that it had launched its latest AI news presenter,
a female-gendered system named Xin Xiaomeng, produced in collaboration with the Chinese
search engine company Sogou. Previously, in November 2018, the state news agency had introduced
Qiu Hao, a male-gendered AI presenter modelled on an actual Xinhua news anchor, during China’s
World Internet Conference. Xinhua and Sogou have also announced that they have built an improved
male-gendered AI system named Xin Xiaohao, which is able to gesture, stand, and move more
naturally than Xin Xiaomeng or Qiu Hao.
Space expeditions and discoveries (AI Applications, 2019) always require analyzing vast amounts
of data, and artificial intelligence and machine learning are well suited to handling and processing data
on this scale. After rigorous research, astronomers used artificial intelligence to sift through years of data
obtained by the Kepler telescope in order to identify a distant eight-planet solar system. Artificial
intelligence is also being used for NASA’s next rover mission to Mars, the Mars 2020 rover. AEGIS,
an AI-based targeting system, is already operating on the red planet, where it is responsible for the
autonomous targeting of cameras in order to perform investigations on Mars.
For the longest time, self-driving cars (AI Applications, 2019) have been a buzzword in the AI
industry, and the development of autonomous vehicles will revolutionize the transport system.
Companies like Waymo conducted several test drives in Phoenix before deploying their first AI-based
public ride-hailing service. The AI system collects data from the vehicle’s radar, cameras, GPS, and
cloud services to produce control signals that operate the vehicle. Advanced deep learning algorithms
can accurately predict what objects in the vehicle’s vicinity are likely to do, which makes Waymo cars
more effective and safer. Another famous example of an autonomous vehicle is Tesla’s self-driving
car, where artificial intelligence applies computer vision, image detection and deep learning to let the
car automatically detect objects and drive around without human intervention.
These days, virtual assistants (AI Applications, 2019) have become a very common technology;
almost every household has a virtual assistant that controls the appliances at home. A few examples
include Siri and Cortana, which are gaining popularity because of the user experience they provide.
Amazon’s Echo is an example of how Artificial Intelligence can be used to translate human language
into desirable actions. This device uses speech recognition and NLP to perform a wide range of
tasks on your command. It can do more than just play your favorite songs. It can be used to control
the devices at your house, book cabs, make phone calls, order your favorite food, check the weather
conditions and so on.
Another example is Google’s newly released virtual assistant, Google Duplex, which
has astonished millions of people. Not only can it respond to calls and book appointments for you,
it also adds a human touch. The device uses natural language processing and machine learning
algorithms to process human language and perform tasks such as managing your schedule, controlling your
smart home and making reservations.
Ever since social media has become our identity, we’ve been generating an immeasurable
amount of data through chats, tweets, posts and so on. And wherever there is an abundance of data,
AI and machine learning are always involved. On social media platforms like Facebook, AI is used
for face verification wherein machine learning and deep learning concepts are used to detect facial
features and tag your friends. Deep Learning is used to extract every minute detail from an image by
using a bunch of deep neural networks. On the other hand, Machine learning algorithms are used to
design your feed based on your interests. Another such example is Twitter’s AI, which is used
to identify hate speech and terrorist language in tweets. It makes use of machine learning, deep
learning, and natural language processing to filter out offensive content. The company discovered
and banned 300,000 terrorist-linked accounts, 95% of which were found by non-human, artificially
intelligent machines.


4. THE SEASONS OF ROBOTICS AND AI

The evolution of AI to date has endured several cycles of optimism (springs) and pessimism or
negativism (winters):

• Birth of AI (1952-1956): Before the term AI was coined, there were already advances in
cybernetics and neural networks, which started to attract the attention of both the scientific
communities and the public. The Dartmouth Conference (1956) was the result of this increasing
interest and gave rise to the following golden years of AI with high levels of optimism in the field.
• First Spring (1956-1974): Computers of the time could solve algebra and geometric problems,
as well as speak English. Advances were qualified as “impressive” and there was a general
atmosphere of optimism in the field. Researchers in the area estimated that a fully intelligent
machine would be built in the following 20 years.
• First Winter (1974-1980): The winter started when the public and media questioned the
promises of AI. Researchers had been caught in a spiral of exaggerated claims and forecasts, but the
limitations of the technology at the time were insurmountable. An abrupt ending of funding by
major agencies such as DARPA, the National Research Council and the British Government
led to the first winter of AI.
• Second Spring (1980-1987): Expert systems were developed to solve problems of a specific
domain by using logical rules derived from experts. There was also a revival of connectionism
and neural networks for character or speech recognition. This period is known as the second
spring of AI.
• Second Winter (1987-1993): Specialized machines for running expert systems were displaced
by new desktop computers. Consequently, some companies that produced expert systems went
bankrupt. This led to a new wave of pessimism, ending the funding programs initiated
during the previous spring.
• In the background (1997-2000): From 1997 to 2000, the field of AI was progressing behind the
scenes, as no further multi-million programs were announced. Despite the lack of major funding
the area continued to progress, as increased computer power and resources were developed. New
applications in specific areas were developed and the concept of “machine learning” started to
become the cornerstone of AI.
• Third Spring (2000-Present): Since 2000, with the success of the Internet and the web, the Big
Data revolution started to take off along with newly emerged areas such as Deep Learning. This
new period is known as the third spring of AI and, for the time being, it looks like it is here to stay.
Some have even started to predict the imminent arrival of the singularity: an intelligence explosion
resulting in a powerful super-intelligence that will eventually surpass human intelligence. Is
this possible?

5. AI TECHNOLOGIES AND DISCIPLINES

AI is a diverse field of research and the following sub-fields are essential to its development. These
include neural networks, fuzzy logic, evolutionary computation, and probabilistic methods.
Neural networks build on the area of connectionism with the main purpose of mimicking the
way the nervous system processes information. Artificial neural networks (ANN) and variants have
allowed significant progress of AI to perform tasks relative to “perception”. When combined with
the current multicore parallel computing hardware platforms, many neural layers can be stacked to
provide a higher level of perceptual abstraction in learning its own set of features, thus removing the
need for handcrafted features; a process known as deep learning (LeCun et al., 2015). Limitations
of using deep layered ANNs include 1) the low interpretability of the resultant learned model, and 2) the large
volumes of training data and considerable computational power that are often required for the effective
application of these neural models.
Deep learning is part of machine learning and is usually linked to deep neural networks that
consist of a multi-level learning of detail or representations of data. Through these different layers,
information passes from low-level parameters to higher-level parameters. These different levels
correspond to different levels of data abstraction, leading to learning and recognition. A number
of deep learning architectures, such as deep neural networks, deep convolutional neural networks
and deep belief networks, have been applied to fields such as computer vision, automatic speech
recognition, and audio and music signal recognition and these have been shown to produce cutting-
edge results in various tasks.
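As a toy illustration of stacked layers learning their own features (a sketch under our own assumptions: XOR data, eight hidden units, plain gradient descent; not code from the article), the following NumPy script trains a two-layer network on the XOR problem, which no single-layer perceptron can solve:

```python
# A minimal two-layer neural network trained on XOR with plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: two stacked layers of abstraction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())   # approaches [0, 1, 1, 0]
```

The hidden layer discovers the intermediate features itself; no hand-crafted inputs are supplied, which is the essence of the deep learning idea described above.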
Fuzzy logic focuses on the manipulation of information that is often imprecise. Most
computational intelligence principles account for the fact that, whilst observations are always exact,
our knowledge of the context can often be incomplete or inaccurate, as it is in many real-world
situations. Fuzzy logic provides a framework in which to operate with data assuming a level of
imprecision over a set of observations, as well as structural elements to enhance the interpretability
of a learned model (Zadeh, 1996). It does provide a framework for formalizing AI methods, as well
as an accessible translation of AI models into electronic circuits. Nevertheless, fuzzy logic does not
provide learning abilities per se, so it is often combined with other approaches such as neural networks,
evolutionary computing or statistical learning.
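A minimal sketch of these ideas (our own toy example; the temperature thresholds and rule set are assumptions) is a fuzzy controller that blends two imprecise categories, "cold" and "hot", into a heater setting:

```python
# A toy fuzzy-logic controller (illustrative sketch). Temperature readings
# are mapped to fuzzy sets "cold" and "hot"; rules blend them into a
# heater power by a weighted average (a simple defuzzification).

def mu_cold(t):
    """Membership of temperature t in 'cold': 1 at <= 10 C, 0 at >= 20 C."""
    return max(0.0, min(1.0, (20.0 - t) / 10.0))

def mu_hot(t):
    """Membership of temperature t in 'hot': 0 at <= 20 C, 1 at >= 30 C."""
    return max(0.0, min(1.0, (t - 20.0) / 10.0))

def heater_power(t):
    """Two rules: IF cold THEN power high (100); IF hot THEN power low (0)."""
    w_cold, w_hot = mu_cold(t), mu_hot(t)
    if w_cold + w_hot == 0:        # exactly 20 C: neither rule fires
        return 50.0
    return (w_cold * 100.0 + w_hot * 0.0) / (w_cold + w_hot)

for t in (5, 15, 20, 25, 35):
    print(t, "C ->", round(heater_power(t), 1), "% power")
```

Between the crisp extremes, the output varies smoothly, which is exactly the graded handling of imprecise knowledge that fuzzy logic formalizes.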
Evolutionary computing relies on the principle of natural selection, or natural patterns of
collective behavior (Fogel, 2006). The two most relevant subfields include genetic algorithms and
swarm intelligence. Its main impact on AI is on multi-objective optimization, in which it can produce
very robust results. The limitations of these models are similar to those of neural networks, concerning
interpretability and computing power.
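The following bare-bones genetic algorithm (an illustrative sketch, not the article's code) shows the selection-crossover-mutation loop on the classic OneMax problem of maximizing the number of ones in a bit-string:

```python
# A bare-bones genetic algorithm evolving bit-strings to maximize the
# number of ones ("OneMax"): natural-selection-style search in miniature.
import random

random.seed(1)
LENGTH, POP, GENS = 20, 30, 40

def fitness(bits):
    return sum(bits)

def crossover(a, b):
    cut = random.randrange(1, LENGTH)   # one-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for gen in range(GENS):
    def select():
        # Tournament selection: the fitter of two random individuals wins.
        return max(random.sample(pop, 2), key=fitness)
    pop = [mutate(crossover(select(), select())) for _ in range(POP)]

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "of", LENGTH)
```

The same loop scales to multi-objective problems by changing only the fitness function, which is where the paragraph above notes evolutionary computing has its main impact.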
Statistical Learning is aimed at AI employing a more classically statistical perspective, e.g.,
Bayesian modeling, adding the notion of prior knowledge to AI. These methods benefit from a wide
set of well-proven techniques and operations inherited from the field of classical statistics, as well as
a framework to create formal methods for AI. The main drawback is that, probabilistic approaches
express their inference as a correspondence to a population (Breiman, 2001), and the probability concept
may not always be applicable, for instance, when vagueness or subjectivity need to be measured and
addressed (Senn, 2007).
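As a worked example of adding prior knowledge (our own illustration; the coin-flip data are invented), the following script updates a uniform prior over a coin's bias with Bayes' rule on a discrete grid:

```python
# Bayesian updating on a grid: infer a coin's bias from observed flips.

# Prior: every bias value in [0, 1] equally likely (uniform prior).
grid = [i / 100 for i in range(101)]
posterior = [1.0 / len(grid)] * len(grid)

flips = [1, 1, 0, 1, 1, 1, 0, 1]   # 1 = heads, 0 = tails

for flip in flips:
    # Likelihood of this flip under each candidate bias p.
    likelihood = [p if flip == 1 else (1 - p) for p in grid]
    unnorm = [l * w for l, w in zip(likelihood, posterior)]
    total = sum(unnorm)
    posterior = [u / total for u in unnorm]   # renormalize (Bayes' rule)

mean = sum(p * w for p, w in zip(grid, posterior))
print(f"posterior mean bias: {mean:.3f}")   # pulled toward 6/8 = 0.75
```

The prior encodes what is believed before any data arrive; each observation reweights it, which is the sense in which statistical learning handles the uncertainty of decisions.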
Ensemble learning and meta-algorithms form an area of AI that aims to create models combining
several weak base learners in order to increase accuracy while reducing bias and variance. For
instance, ensembles can show higher flexibility than single-model approaches, allowing some complex
patterns to be modeled. Some well-known meta-algorithms for building ensembles
are bagging and boosting. Ensembles can take advantage of significant computational resources to
train many base classifiers, thereby augmenting the resolution of the pattern search,
although this does not always assure the attainment of a higher accuracy.
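A miniature bagging ensemble (a sketch under our own assumptions: invented 1-D data, decision stumps as the weak learners, majority vote) illustrates how bootstrap sampling and voting combine weak base learners:

```python
# Bagging in miniature: decision stumps trained on bootstrap samples,
# combined by majority vote to reduce the variance of any single stump.
import random

random.seed(0)
xs = [random.random() for _ in range(200)]

def noisy_label(x):
    """True label is 1 when x > 0.5, flipped 10% of the time (label noise)."""
    true = 1 if x > 0.5 else 0
    return true if random.random() > 0.1 else 1 - true

data = [(x, noisy_label(x)) for x in xs]

def train_stump(sample):
    """Pick the threshold that best separates the given sample."""
    best_t, best_acc = 0.0, -1.0
    for t in [i / 20 for i in range(21)]:
        acc = sum((x > t) == (y == 1) for x, y in sample) / len(sample)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

stumps = []
for _ in range(25):
    boot = [random.choice(data) for _ in range(len(data))]   # bootstrap sample
    stumps.append(train_stump(boot))

def ensemble_predict(x):
    votes = sum(x > t for t in stumps)          # majority vote of all stumps
    return 1 if votes > len(stumps) / 2 else 0

acc = sum(ensemble_predict(x) == y for x, y in data) / len(data)
print(f"ensemble training accuracy: {acc:.2f}")
```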
Logic-based artificial intelligence is an area of AI commonly used for tasks of knowledge
representation and inference. It can represent predicate descriptions, facts and the semantics of a domain
by means of formal logic, in structures known as logic programs. By means of inductive logic
programming, hypotheses can be derived from the known background knowledge.
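The following toy forward-chaining engine (our own sketch; the facts and rules are invented) illustrates how conclusions are derived from a logic program until a fixed point is reached:

```python
# Forward chaining over a tiny logic program: facts and Horn-style rules
# are plain Python data; inference keeps adding conclusions until nothing
# new can be derived.

facts = {"robot(r1)", "battery_low(r1)"}
rules = [
    # (premises, conclusion): if all premises hold, assert the conclusion.
    ({"robot(r1)", "battery_low(r1)"}, "needs_charge(r1)"),
    ({"needs_charge(r1)"}, "goto_dock(r1)"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['battery_low(r1)', 'goto_dock(r1)', 'needs_charge(r1)', 'robot(r1)']
```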

6. LIMITATION OF AI AND ROBOTICS

Current AI and robotics technologies are limited to very specific applications (Perez et al., 2017).
One limitation of AI, for example, is the lack of “Common Sense”; the ability to judge information
beyond its acquired knowledge. A recent example is that of the AI chatbot Tay, developed by Microsoft
and designed for making conversations on social networks. It had to be disconnected shortly after
its launch because it was not able to distinguish between positive and negative human interaction.


AI is also limited in terms of emotional intelligence. AI can only detect basic human emotional
states such as anger, joy, sadness, fear, pain, stress and neutrality. Emotional intelligence is one of
the next frontiers of higher levels of personalisation. True and complete AI does not yet exist. At
this level, AI would mimic human cognition to a point that enables it to dream, think,
feel emotions and have its own goals. Although there is no evidence yet that this kind of true AI could exist
before 2050, the computer science principles driving AI forward are rapidly advancing,
and it is important to assess their impact, not only from a technological standpoint, but also from a
social, ethical, and legal perspective.

7. WEAK AND STRONG AI & ROBOTICS

When defining the capacity of AI, it is frequently categorized in terms of weak or strong AI (Perez
et al., 2017). Weak AI (narrow AI) is intended to reproduce an observed behavior as accurately
as possible. It can carry out the task for which it has been precision-trained. Such AI systems can
become extremely efficient in their own field but lack generalization ability. Most existing intelligent
systems that use machine learning, pattern recognition, data mining or natural language processing
are examples of weak AI. Intelligent systems, powered with weak AI include recommender systems,
spam filters, self-driving cars, and industrial robots.
Strong AI is usually described as an intelligent system endowed with real consciousness that
is able to think and reason in the same way as a human being. A strong AI can not only assimilate
information like a weak AI, but also modify its own functioning, i.e. it is able to autonomously reprogram
itself to perform general intelligent tasks. These processes are regulated by human-like cognitive
abilities including consciousness, sentience, sapience and self-awareness. Efforts intending to generate
a strong AI have focused on whole-brain simulations; however, this approach has received criticism, as
intelligence cannot be simply explained as a biological process emanating from a single organ but is
a complex coalescence of effects and interactions between the intelligent being and its environment,
encompassing a series of diverse, interlinked biological processes.

8. THE IMPACT OF GOVERNMENT ON AI & ROBOTICS

Government organizations and the public sector are investing millions to boost artificial intelligence
research. For example, the National Research Foundation of Singapore is investing $150 million
into a new national programme in AI. In the UK alone, £270 million is being invested from 2017 to
2018 to boost science, research and innovation, via the Government’s new industrial strategy and a
further funding of £4.7 billion is planned by 2021 (Yang, 2017). This timely investment is intended to put the UK
in the technological lead among the best in the world and ensure that UK technological innovations
can compete. Recent AI developments have triggered major investment across all sectors, including
financial services, banking, marketing and advertising, hospitals and government administration.
In fact, software and information technology services held more than a 30% share of all AI
investments worldwide as of 2016, with Internet and telecommunication companies following at
9% and 4%, respectively (Inc, 2016).
It is also important to note that the funding in AI safety, ethics and strategy/policy has almost
doubled in the last three years (Farquhar, 2017). Apart from non-profit organizations, such as the
Future of Life Institute (FLI) and the Machine Intelligence Research Institute (MIRI), other centers,
such as the Centre for Human-Compatible AI and Centre for the Future of Intelligence, have emerged
and they, along with key technological firms, invested a total of $6.6 million in 2016.


9. MAJOR TECHNOLOGICAL FIRMS AI & ROBOTICS

Major technological firms are investing into applications for speech recognition, natural language
processing and computer vision. A significant leap in the performance of machine learning algorithms
resulting from deep learning, exploited the improved hardware and sensor technology to train artificial
networks with large amounts of information derived from ‘big data’ (Andreu-Perez et al., 2015; Ravi
et al., 2017). Current state-of-the-art AI allows for the automation of various processes and new
applications are emerging with the potential to change the entire workings of the business world
(Figures 3, 4, and 5). As a result, there is huge potential for economic growth, which is demonstrated
by the fact that between 2014 and 2015 alone, Google, Microsoft, Apple, Amazon, IBM, Yahoo,
Facebook, and Twitter made at least 26 acquisitions of start-ups and companies developing AI
technology, totaling over $5 billion.

Figure 3. A conservative estimate of venture capital investment in AI technology worldwide, according to data presented in (Chen et al., 2016)

Figure 4. Total estimated equity investments in AI start-ups, by start-up location, 2011-17 and first semester 2018 (OECD, 2018)

Figure 5. Number of private equity investments in AI start-ups, by start-up location, 2011-17 and first semester 2018 (OECD, 2018)
In 2014, Google acquired DeepMind, a London-based start-up company specializing in deep
learning, for more than $500M, setting a record for corporate investment in AI research of academic
standard. In fact, DeepMind has produced over 140 journal and conference papers and has had four
articles published in Nature since 2012. One of the achievements of DeepMind was in developing
AI technology able to create general-purpose software agents that adjust their actions based only
on a cumulative reward. This reinforcement learning approach exceeds human-level performance in
many respects and was demonstrated with the defeat of the world Go champion, marking a
historical landmark in AI progress. IBM has developed a supercomputer platform, Watson, which has
the capability to perform text mining and extract complex analytics from large volumes of unstructured
data. To demonstrate its abilities, IBM Watson, in 2011, beat two top players on ‘Jeopardy!’, a popular
quiz show that requires participants to guess questions from specific answers. Although information
retrieval is trivial for computer systems, comprehension of natural language is still a challenge. This
achievement has had a significant impact on the performance of web searches and the overall ability
of AI systems to interact with humans. In 2015, IBM bought AlchemyAPI to incorporate its text and
image analysis capabilities in the cognitive computing platform of the IBM Watson. The system has
already been used to process legal documents and provide support to legal duties. Experts believe
that these capabilities can transform current health care systems and medical research. Research in
top AI firms is centered on the development of systems that are able to reliably interact with people.
Interaction takes more natural forms through real-time speech recognition and translation capabilities.
Robo-advisor applications are at the top of the AI market, with a globally estimated value of 255 billion US
dollars by 2020 (Inc, 2016). There are already several virtual assistants offered by major companies.
For example, Apple offers Siri, Amazon offers Alexa, Microsoft offers Cortana, and Google has the
Google Assistant.
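To illustrate the "actions adjusted only by a cumulative reward" idea mentioned above for DeepMind's agents, here is a minimal sketch of tabular Q-learning on an invented five-cell corridor (our own illustration, not DeepMind's code):

```python
# Tabular Q-learning: the agent starts at cell 0 and is rewarded only on
# reaching cell 4; it learns action values purely from that reward signal.
import random

random.seed(0)
N_STATES = 5                      # cells 0..4; goal is cell 4
ACTIONS = (-1, +1)                # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def greedy(s):
    # Break ties randomly so unexplored actions still get tried.
    return max(ACTIONS, key=lambda a: (Q[(s, a)], random.random()))

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy exploration.
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)    # walls at both ends
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # the only reward
        # Move the estimate toward reward + discounted best future value.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

print([greedy(s) for s in range(N_STATES - 1)])  # learned policy: all +1
```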
In 2016, Apple Inc. purchased Emotient Inc., a start-up using artificial-intelligence technology
to read people’s emotions by analyzing facial expressions. DeepMind created WaveNet, which is a
generative model that mimics human voices. According to the company’s website, this sounds more
natural than the best existing Text-to-Speech systems. Facebook is also considering machine-human
interaction capabilities as a prerequisite to generalized AI. Recently, OpenAI, a non-profit organization,
was founded as part of a strategic plan to mitigate the risks of monopolizing strong AI. OpenAI
has re-designed evolutionary algorithms that can work together with deep neural networks to offer
state-of-the-art performance. It is considered to rival DeepMind since it offers open-source
machine learning libraries similar to TensorFlow, a deep learning library distributed by Google.
Nevertheless, the big difference between the technology developed at OpenAI and the other private
tech companies, is that the created Intellectual Property is accessible by everyone. Although several
companies and organizations, including DeepMind and OpenAI, envision the solution to the creation
of intelligence and the so-called Strong AI, developing machines with self-sustained long-term goals
is well beyond current technology. Furthermore, there is vigorous debate on whether or not we are
going through an AI bubble, which encompasses the paradox that productivity growth in the USA
has declined during the last decade despite an explosion of technological progress and innovation. It
is difficult to tell whether this reflects a statistical shortcoming or whether current innovations
are not transformative enough. This decline can also be attributed to the lack of consistent policy
frameworks and security standards that could enable the application of AI in projects of significant
impact (Table 1).

Table 1. Major companies in AI (Perez et al., 2017)

10. PROGRAMMING LANGUAGES FOR AI & ROBOTICS

Programming languages have played a major role in the evolution of AI since the late 1950s, when several
teams carried out important research projects in AI, e.g. automatic theorem-proving programs and
game programs (chess, checkers) (McCarthy, 1959). During this period researchers found that one
of the special requirements for AI is the ability to easily manipulate symbols and lists of symbols
rather than processing numbers or strings of characters. Since the languages of the time did not offer
such facilities, a researcher from MIT, John McCarthy, developed, during 1956-58, the definition
of an ad-hoc language for list processing, called LISP (LISt Processing language). Since then,
several hundred derivative languages, so-called “Lisp dialects”, have emerged (Scheme, Common
Lisp, Clojure). Indeed, writing a LISP interpreter is not a hard task for a Lisp programmer (it involves
only a few thousand instructions) compared to the development of a compiler for a classical language
(which requires several tens of thousands of instructions). Because of its expressiveness and flexibility,
LISP was very successful in the artificial intelligence community until the 1990s.
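The "symbols and lists of symbols" idea that motivated LISP can be sketched in a few lines of Python (our own toy, with an invented expression format of nested lists in prefix notation):

```python
# A miniature evaluator for symbolic expressions held as nested lists,
# e.g. ["+", 1, ["*", 2, 3]] -- the list-processing style LISP pioneered.
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def evaluate(expr):
    """Recursively evaluate a symbolic expression (a number or a list)."""
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr          # first symbol names the operation
    return OPS[op](*(evaluate(a) for a in args))

print(evaluate(["+", 1, ["*", 2, 3]]))   # prints 7
```

Programs and data share one representation (lists), which is precisely what made LISP so convenient for manipulating the symbolic structures early AI research required.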
Another important event at the beginning of AI was the creation of a language with the main
purpose of expressing logic rules and axioms. Around 1972, a new language named Prolog
(PROgramming in LOGic) was created by Alain Colmerauer and Philippe Roussel. Their goal was a
programming language in which the expected logical rules of a solution could be defined, with the compiler
automatically transforming them into a sequence of instructions.
Prolog is used in AI and in natural language processing. Its rules of syntax and its semantics are
simple and considered accessible to non-programmers. One of the objectives was to provide a tool for
linguistics that was compatible with computer science. Since 2008, the Python community has been
trying to catch up with specific languages for scientific computing, such as Matlab and R. Due to its
versatility, Python is now used frequently for research in AI. However, although Python has some
of the advantages of functional programming, run-time speeds are still far behind other functional
languages, such as Lisp or Haskell, and even more so behind C/C++. In addition, it lacks efficiency
when managing large amounts of memory and highly-concurrent systems. In the 1990s, compiled
languages such as C/C++ and Fortran gained popularity and eclipsed the use of LISP and Prolog.
Greater emphasis was placed on creating functions and libraries for scientific computation on these
platforms, which were used for intensive data analysis tasks and artificial intelligence with early robots.
In the middle of the 1990s, the company Sun Microsystems started a project to create a language
that solved the security, distributed programming and multi-threading flaws of C++. In addition, they
wanted a platform that could be ported to any type of device or platform. In 1995, they presented
Java, which took the concept of object orientation much further than C++. Equally, one of the most
important additions to Java was the Java VM (JVM), which enabled the capability of running the same
code on any device regardless of its internal technology and without the need of pre-compiling for
every platform. This added new advantages to the field of AI that were introduced in devices such
as cloud servers and embedded computers. Another important feature of Java was that it also offered
one of the first frameworks with specific tools for the internet, bringing the possibility of running
applications in the form of Java applets and scripts (i.e. self-executing programs) without the
need for installation. This had an enormous impact on the field of AI and set the foundation for the
fields of web 2.0/3.0 and the internet of things (IoT). From 2010, and mostly driven by the necessity
of translating AI into commercial products (that could be used by thousands and millions of users
in real time), IT corporations looked for alternatives by creating hybrid languages that combined
the best from all paradigms without compromising speed, capacity and concurrency. In recent years,
new languages such as Scala and Go, as well as Erlang or Clojure, have been used for applications
with very high concurrency and parallelization, mostly on the server side. Well-known examples are
Facebook with Erlang or Google with Go. New languages for scientific computation have also emerged,
such as Julia and Lua. However, the development of AI using purely procedural languages was costly,
time-consuming and error prone. Consequently, this turned attention to other multi-paradigm
languages that could combine features from functional and procedural object-oriented languages.
Python, although first published in 1991, started to gain popularity as an alternative to C/C++ with
Python 2.2 by 2001. The Python concept was to have a language that could be as powerful as C/C++
but also expressive and pragmatic for executing “scripts” like shell script. It was in 2008, with the
publication of Python 3.0, which solved several initial flaws, that the language started to be considered
a serious contender to C++, Java and other scripting languages such as Perl. Although functional
programming has been popular in academia, its use in industrial settings has been marginal and mainly
during the times when “expert systems” were at their peak, predominantly during the 1980s. After the
fall of expert systems, functional programming has, for many years, been considered a failing relic
from that period. However, as multiprocessors and parallel computing are becoming more available,
functional programming is proving to be a choice of many programmers to maximize functionality
from their multicore processors. These highly expensive computations are usually needed for heavy
mathematical operations or pattern matching, which constitute a fundamental part of running an AI
system. In the future, we will see new languages that bring simplifications of existing functional
languages such as Haskell and Erlang and make this programming paradigm more accessible. In
addition, the advent of the internet of things (IoT) has drawn attention to the programming of
embedded systems. Thus, efficiency, safety and performance are again matters for discussion. New
languages that can replace C/C++ while incorporating ideas from functional programming (e.g. Elixir) will
become increasingly popular. Also, new languages that incorporate simplifications as well as a set
of functions from modern imperative programming, while maintaining performance like C/C++
(e.g. Rust), will be another future development (Table 2).

Table 2. Lists of AI and robotics programming languages (Perez et al., 2017)

11. RISKS AND FEARS OF AI AND ROBOTICS

Given the exponential rise of interest in AI, experts have called for major studies on the impact of
AI on our society, not only in technological but also in legal, ethical and socio-economic areas. This
response also includes the speculation that autonomous super artificial intelligence may one day
supersede the cognitive capabilities of humans. This future scenario is usually known in AI forums
as the “AI singularity” (Spinrad, 2017). This is commonly defined as the ability of machines to
build better machines by themselves. This futuristic scenario has been questioned and is received
with scepticism by many experts. Today’s AI researchers are more focused on developing systems
that are very good at tasks in a narrow range of applications. This focus is at odds with the idea of
pursuing a super generic AI system that could mimic all the different cognitive abilities related to
human intelligence, such as self-awareness and emotional knowledge. In addition to this debate about
AI development and the status of our hegemony as the most intelligent species on the planet, further
societal concerns have been raised. For example, the AI100 (One Hundred Year Study on Artificial
Intelligence), a committee led by Stanford University, defined 18 topics of importance for AI (Horvitz,
2014). Although these are neither exhaustive nor definitive, they set forth the range of topics that need
to be studied for the potential impact of AI and stress that there are a number of concerns to be
addressed. Many similar assessments have been performed and they each outline similar concerns
related to the wider adoption of AI technology.

11.1 The 18 Topics Covered by the AI100 (Horvitz, 2014)


11.1.1. Technical Trends and Surprises
This topic aims at forecasting the advances and competencies of AI technologies in the near future.
Observatories of the trend and impact of AI should be created, helping to plan the introduction of
AI in specific sectors and preparing the necessary regulation to smooth its adoption.

11.1.2. Key Opportunities for AI


How advances in AI can help to transform the quality of societal services such as health, education,
management and government, covering not just the economic benefits but also the social advantages
and impact.

11.1.3. Delays With Translating AI Advances into Real-World Values


The pace of translating AI into real world applications is currently driven by potential economic
prospects (Lohr, 2012). It is necessary to take measures to foster a rapid translation of those potential
applications of AI that can improve or solve a critical need of our society, such as those that can save
lives or greatly improve the organization of social services, even though their economic exploitation
is not yet assured.

11.1.4. Privacy and Machine Intelligence


Personal data and privacy is a major issue to consider and it is important to envisage and prepare
the regulatory, legal and policy frameworks related to the sharing of personal data in developing AI
systems.

11.1.5. Democracy and Freedom


In addition to privacy, ethical questions with respect to the stealth use of AI for unscrupulous
applications must be considered. The use of AI should not come at the expense of limiting or influencing
democracy and the freedom of people.

11.1.6. Law
This considers the implications of relevant laws and regulations. First, to identify which aspects of
AI require legal assessment and what actions should be undertaken to ensure law enforcement for
AI services. It should also provide frameworks and guidelines about how to adhere to the approved
laws and policies.


11.1.7. Ethics
When AI is deployed in real-world applications, there are ethical concerns regarding its
interaction with the world. What uses of AI should be considered unethical? How should this be
disclosed?

11.1.8. Economics
The economic implications of AI on jobs should be monitored and forecast so that policies can be
implemented to direct our future generation into jobs that will not soon be overtaken by machines. The
use of sophisticated AI in the financial markets could potentially cause volatility, and it is necessary
to assess the influence AI systems may have on financial markets.

11.1.9. AI and Warfare


AI has been employed for military applications for more than a decade. Robot snipers and turrets have
been developed for military purposes (Alston, 2011). Intelligent weapons have increasing levels of
autonomy, and there is a need to develop new conventions and international agreements that define
a set of secure boundaries on the use of AI in weaponry and warfare.

11.1.10. Loss of Control of AI Systems


The potential of AI being independent from human control is a major concern. Studies should be
promoted to address this concern both from the technological standpoint and the relevant framework
for governing the responsible development of AI.

11.1.11. Criminal Uses of AI


Implementations of AI in malware are becoming more sophisticated, and thus the chances of stealing
personal information from infected devices are getting higher. Malware can be more difficult to
detect, as evasion techniques used by computer viruses and worms may leverage highly sophisticated AI
techniques (Young & Yung, 1997; Kirat et al., 2014). Another example is the use of drones and their
potential to fall into the hands of terrorists, the consequences of which would be devastating.

11.1.12. Collaboration With Machines


Humans and robots need to work together, and it is pertinent to envisage in which scenarios
collaboration is critical and how to perform this collaboration safely. Accidents involving robots working
side by side with people have happened before (Bryant & Waters, 2015), and the development of robotic
and autonomous systems should focus not only on enhanced task precision but also on being able to
understand the environment and human intention.

11.1.13. AI and Human Cognition


AI has the potential for enhancing human cognitive abilities. Some relevant research disciplines with
this objective are sensor informatics and human-computer interfaces. Apart from applications to
rehabilitation and assisted living, they are also used in surgery (Andreu-Perez, 2016) and air traffic
control (Harrison et al., 2014). Cortical implants are increasingly used for controlling prostheses; our
memory and reasoning increasingly rely on machines, and the associated health, safety and
ethical impacts must be addressed.


11.1.14. Safety and Autonomy


For the safe operation of intelligent, autonomous systems, formal verification tools should be
developed to assess their safety operation. Validation can be focused on the reasoning process and
verifying whether the knowledge base of an intelligent system is correct (Gonzalez & Barr, 2000)
and also making sure that the formulation of the intelligent behavior will be within safety boundaries
(Ratschan & She, 2007).

11.1.15. Psychology of People and Smart Machines


More research should be undertaken to obtain detailed knowledge about the opinions and concerns
people have regarding the wider usage of smart machines in societies. Additionally, in the design of intelligent
systems, understanding people’s preferences is important for improving their acceptability (Broadbent
et al., 2009; Smarr et al., 2012).

11.1.16. Communication, Understanding and Outreach


Communication and educational strategies must be developed so that our society can embrace AI
technologies. These strategies must be formulated in ways that are understandable and accessible to
non-experts and the general public.

11.1.17. Neuroscience and AI


Neuroscience and AI can develop together. Neuroscience plays an important role in guiding AI research,
and with new advances in high-performance computing there are also new opportunities to
study the brain through computational models and simulations in order to investigate new hypotheses
(O'Reilly & Munakata, 2002).
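
As a small example of studying the brain through simulation, the following Python sketch integrates a leaky integrate-and-fire neuron, one of the simplest computational neuron models; all parameter values are illustrative and not drawn from the cited work.

# Minimal sketch: a leaky integrate-and-fire neuron driven by a constant
# input current. Parameters are illustrative only.

def simulate_lif(i_input=1.6, v_rest=0.0, v_thresh=1.0, tau=10.0,
                 dt=0.1, steps=300):
    """Return the spike times (ms) of a leaky integrate-and-fire neuron."""
    v, spikes = v_rest, []
    for step in range(steps):
        # Membrane potential leaks toward rest while integrating the input.
        v += (dt / tau) * (-(v - v_rest) + i_input)
        if v >= v_thresh:          # threshold crossing emits a spike
            spikes.append(step * dt)
            v = v_rest             # reset after the spike
    return spikes

print(simulate_lif()[:5])  # first few spike times in ms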

11.1.18. AI and Philosophy of Mind


When AI can experience a level of consciousness and self-awareness, there will be a need to understand
the inner world of machine psychology and the subjectivity of machine consciousness.
Moreover, given the above premises and others, as I have stated in the abstract, robotics and AI
may, now and perhaps within the coming few decades, become a predator and a risk to the world's
creatures; they may inherit the full nature of living creatures and might even converge with other
natural creatures. Thus, they may finally create their own new universe, or the destiny of our universe
will be in danger.

12. CONCLUSION

There are many lessons that can be learnt from the past successes and failures of AI. To sustain the
progress of AI, a rational and harmonious interaction is required between application-specific projects
and visionary research ideas. Along with the unprecedented enthusiasm for AI, there are also fears about
the impact of the technology on our society. A clear strategy is required to address the associated
ethical and legal challenges, to ensure that society as a whole benefits from the evolution of
AI and that its potential adverse effects are mitigated from early on. Such fears should not hinder the
progress of AI but should motivate the development of a systematic framework on which future AI will
flourish. Most critical of all, it is important to distinguish science fiction from practical reality. With
sustained funding and responsible investment, AI is set to transform the future of our society - our
life, our living environment and our economy.


13. RECOMMENDATIONS

The following recommendations are relevant to the world’s research community, industry, government
agencies and policy makers:
Robotics and AI are playing an increasingly important role in the world's economy and its
future growth. We need to be open to and fully prepared for the changes they bring to our society,
including their impact on the workforce structure and the shift in the skills base. Stronger national-level
engagement is essential to ensure the general public has a clear and factual view of the current and
future development of robotics and AI.
A strong research and development base for robotics and AI is fundamental to countries,
particularly in areas where they already have a critical mass and an international lead. Sustained
investment in robotics and AI would ensure the future growth of a country's research base, and
funding needs to support key clusters/centers of excellence that are internationally leading and
weighted towards projects with greater socio-economic benefit.
It is important to address the legal, regulatory and ethical issues surrounding the practical deployment
and responsible innovation of robotics and AI; greater effort needs to be invested in assessing the
economic impact and understanding how to maximize the benefits of these technologies while
mitigating their adverse effects.
Governments need to tangibly support the workforce in adjusting its skills, and support businesses
in creating opportunities based on new technologies. Training in digital skills and re-educating the
existing workforce are essential to maintaining the competitiveness of countries.
Sustained investment in robotics and AI is critical to ensure the future growth of a country's
research base and its international lead. It is also critical to invest in and develop the younger
generation to be robotics- and AI-savvy, with a strong STEM foundation, by making effective use of
new technical skills.

ACKNOWLEDGMENT

This work would not have been possible without the endless support of the son of St. Mary, Almighty
God; thus, I always praise his name and the name of his mother. In addition, this paper is dedicated to
my beloved country, Ethiopia.


REFERENCES

Alston, P. (2011). Lethal Robotic Technologies: The Implications for Human Rights and International
Humanitarian Law. Journal of Law, Information and Science, 21, 35–40.
Andreu-Perez, J., Leff, D. R., Shetty, K., Darzi, A., & Yang, G. Z. (2016, June). Disparity in Frontal Lobe
Connectivity on a Complex Bimanual Motor Task Aids in Classification of Operator Skill Level. Brain
Connectivity, 6(5), 375–388. doi:10.1089/brain.2015.0350 PMID:26899241
Andreu-Perez, J., Poon, C. C., Merrifield, R. D., Wong, S. T., & Yang, G. Z. (2015, July). Big Data for Health.
IEEE Journal of Biomedical and Health Informatics, 19(4), 1193–1208. doi:10.1109/JBHI.2015.2450362
PMID:26173222
Arisumi, H., Miossec, S., Chardonnet, J. R., & Yokoi, K. (2010, Nov). Dynamic Lifting by Whole Body Motion
of Humanoid Robots. In Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems.
Academic Press.
Asada, M. (2015, November). Towards Artificial Empathy. International Journal of Social Robotics, 7(1),
19–33. doi:10.1007/s12369-014-0253-z
Bansal, K. L., Sood, S., & Sharma, S. K. Robotics History, Present Status and Future Trends. Retrieved from
http://www.scholar.google.co.in
Breiman, L. (2001, August). Statistical Modeling: The Two Cultures. Statistical Science, 16(3), 199–231.
doi:10.1214/ss/1009213726
Broadbent, E., Stafford, R., & MacDonald, B. (2009, October). Acceptance of Healthcare Robots for the Older
Population: Review and Future Directions. International Journal of Social Robotics, 1(4), 319–330. doi:10.1007/
s12369-009-0030-6
Bryant, C., & Waters, R. (2015). Worker at Volkswagen Plant Killed in Robot Accident. The Financial Times.
Retrieved from http://www.ft.com
Chan, M. T., Gorbet, R., Beesley, P., & Kulic, D. (2015, Sep). Based Learning Algorithm for Distributed
Interactive Sculptural Systems. In Proceedings of IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS). Academic Press. doi:10.1109/IROS.2015.7353856
Chen, N., Christensen, L., Gallagher, K., Mate, R., & Rafert, G. (2016, February). Global Economic Impacts
Associated with Artificial Intelligence. The Analysis Group. Retrieved from http://www.analysisgroup.com
Chen, Z., Jia, X., Riedel, A., & Zhang, M. (2014, May). A Bio-Inspired Swimming Robot. In Proceedings of
IEEE International Conference on Robotics and Automation (ICRA). IEEE Press.
Farquhar, S. (2017, Feb). Changes in Funding in the AI Safety Field. AI Impacts. Retrieved from
http://www.aiimpacts.org
Floreano, D., & Wood, R. J. (2015, May). Science, Technology and the Future of Small Autonomous Drones.
Nature, 521(7553), 460–466. doi:10.1038/nature14542 PMID:26017445
Fogel, D.B. (2006, May). Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. John
Wiley & Sons.
Gonzalez, A. J., & Barr, V. (2000, September). Validation and Verification of Intelligent Systems-What are They
and How are They Different? Journal of Experimental & Theoretical Artificial Intelligence, 12(4), 407–420.
doi:10.1080/095281300454793
Harrison, J., Izzetoglu, K., & Ahlstrom, U. (2014, July). Cognitive Workload and Learning Assessment During
the Implementation of a Next-Generation Air Traffic Control Technology using Functional Near-Infrared
Spectroscopy. IEEE Transactions on Human-Machine Systems, 44(4), 429–440. doi:10.1109/THMS.2014.2319822
Hopfield, J. J. (1982). Neural Networks and Physical Systems with Emergent Collective Computational Abilities.
Proceedings of the National Academy of Sciences of the United States of America, 79(8), 2554–2558.
Horvitz, E. (2014). One Hundred Year Study on Artificial Intelligence: Reflections and Framing (AI100).
Stanford: Stanford University Press.


Inc, S. (2016, January). Artificial Intelligence (AI). Inc. Retrieved from http://www.inc.com
Kappassov, Z., Corrales, J. A., & Perdereau, V. (2015, December). Tactile Sensing in Dexterous Robot Hands
- Review. Robotics and Autonomous Systems, 74, 195–220. doi:10.1016/j.robot.2015.07.015
Kirat, D., Vigna, G., & Kruegel, C. (2014, Aug). Barecloud: Bare-Metal Analysis-Based Evasive Malware
Detection. In Proceedings of 23rd USENIX Security Symposium (USENIX Security 14). USENIX.
Kruse, T., Pandey, A. K., Alami, R., & Kirsch, A. (2013, December). Human-Aware Robot Navigation: A Survey.
Robotics and Autonomous Systems, 61(12), 1726–1743. doi:10.1016/j.robot.2013.05.007
LeCun, Y., Bengio, Y., & Hinton, G. (2015, May). Deep Learning. Nature, 521(7553), 436–444. doi:10.1038/
nature14539 PMID:26017442
Lighthill, J. (1973). Artificial Intelligence: A General Survey. Paper presented at Artificial Intelligence: A
Paper Symposium. Science Research Council.
Lohr, S. (2012, January). The Age of Big Data. The New York Times. Retrieved from http://www.nytimes.com
Mavridis, N. (2015, January). A Review of Verbal and Non-Verbal Human–Robot Interactive Communication.
Robotics and Autonomous Systems, 63, 22–35. doi:10.1016/j.robot.2014.09.031
McCarthy, J. (1959, March). Programs With Common Sense. Stanford University. Retrieved from
http://www.jmc.stanford.edu
Meet The World’s First Female AI News Anchor. (2019, March). Interesting Engineering. Retrieved from
https://www.interestingengineering.com/meet-the-worlds-first-female-ai-news-anchor
Minsky, M., & Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry. Cambridge:
MIT Press.
Mochizuki, K., Nishide, S., Okuno, H. G., & Ogata, T. (2013, June). Developmental Human-Robot Imitation
Learning of Drawing with a Neuro Dynamical System. In Proceedings of IEEE International Conference on
Systems, Man, and Cybernetics (SMC). WASEDA University. doi:10.1109/SMC.2013.399
O’Reilly, R. C., & Munakata, Y. (2002, September). Computational Explorations in Cognitive Neuroscience:
Understanding the Mind by Simulating the Brain. Journal of Mathematical Psychology, 46(5), 636–653.
doi:10.1006/jmps.2001.1408
OECD. (2018, July). Private Equity Investment in Artificial Intelligence. Retrieved from http://www.oecd.org
Ohmura, Y., & Kuniyoshi, Y. (2007, May). Humanoid Robot Which Can Lift a 30kg Box by Whole Body Contact
and Tactile Feedback. In Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems.
Academic Press. doi:10.1109/IROS.2007.4399592
Oudeyer, P. Y. (2014, March). Socially Guided Intrinsic Motivation for Robot Learning of Motor Skills.
Autonomous Robots, 36(3), 273–294. doi:10.1007/s10514-013-9339-y
Perez, J. A., Deligianni, F., Ravi, D., & Yang, G. Z. (2017). Artificial Intelligence and Robotics. UK: UK-RAS
Network Robotics & Autonomous Systems. doi:10.31256/WP2017.1
Ratschan, S., & She, Z. (2007, January). Safety Verification of Hybrid Systems by Constraint Propagation
Based Abstraction Refinement. ACM Transactions on Embedded Computing Systems, 6(1), 573–589.
doi:10.1145/1210268.1210276
Ravi, D., Wong, C., Deligianni, F., Berthelot, M., Andreu-Perez, J., Lo, B., & Yang, G.-Z. (2017, January). Deep
Learning for Health Informatics. IEEE Journal of Biomedical and Health Informatics, 21(1), 4–21. doi:10.1109/
JBHI.2016.2636665 PMID:28055930
Rogers, C. (2015, Jan). Google Sees Self-Driving Cars on Road within Five Years. Wall Street Journal. Retrieved
from http://www.wsj.com
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1985, Sep). Learning Internal Representations by Error
Propagation. Defense Technical Information Center. Retrieved from http://www.apps.dtic.mil


Senn, S. (2007, August). Trying to be Precise About Vagueness. Statistics in Medicine, 26(7), 1417–1430.
doi:10.1002/sim.2639 PMID:16906552
Smarr, C. A., Prakash, A., Beer, J. M., Mitzner, T. L., Kemp, C. C., & Rogers, W. A. (2012, Sep). Older Adults’
Preferences for and Acceptance of Robot Assistance for Everyday Living Tasks. In Proceedings of the Human
Factors and Ergonomics Society Annual Meeting. Boston, MA: HFES.
Spinrad, N. (2017). Mr. Singularity. Nature, 543(7646), 582–582. doi:10.1038/543582a
Top 10 Real World Artificial Intelligence Applications. (2019). Edureka. Retrieved from
https://www.edureka.co/blog/artificial-intelligence-applications
Wisskirchen, G., Biacabe, B. T., Bormann, U., Muntz, A., Niehaus, G., Soler, G. J., & Brauchitsch, B. V. (2017,
April). Artificial Intelligence and Robotics and Their Impact on the Workplace. IBA Global Employment Institute.
Wooldridge, M., & Jennings, N. R. (2009, July). Intelligent Agents: Theory and Practice. The Knowledge
Engineering Review, 10(2), 115–152. doi:10.1017/S0269888900008122
Yang, G. Z. (2017, June). Robotics and AI Driving the UK’s Industrial Strategy. INGENIA, 71, 12–13.
Young, A., & Yung, M. (1997, May). Deniable Password Snatching: On the Possibility of Evasive Electronic
Espionage. In Proceedings of the IEEE Symposium on Security and Privacy. IEEE. doi:10.1109/SECPRI.1997.601339
Zadeh, L. A. (1996, May). Fuzzy Logic=Computing with Words. IEEE Transactions on Fuzzy Systems, 4(2),
103–111. doi:10.1109/91.493904
Zadeh, L. A. (2015, December). Fuzzy Logic - A Personal Perspective. Fuzzy Sets and Systems, 281, 4–20.
doi:10.1016/j.fss.2015.05.009
Zhang, L., Jiang, M., Farid, D., & Hossain, M. A. (2013, October). Intelligent Facial Emotion Recognition and
Semantic-Based Topic Detection for a Humanoid Robot. Expert Systems with Applications, 40(13), 5160–5168.
doi:10.1016/j.eswa.2013.03.016

Estifanos Tilahun Mihret received his BSc in Computer Science in 2012 and MSc in Computer Networking in 2017
from Hawassa and Jimma Universities, respectively. He has 8 years of experience in lecturing and research in
computing. Currently, he is serving as a lecturer at Mettu University, Ethiopia. His research interests include
AI and robotics, unmanned aerial vehicles (UAVs), vehicular ad hoc networks (VANETs), WSNs, IoT, security
systems, cellular networks, and future technologies.
