Problems of Implementing Artificial Intelligence in Nigeria
2. Data labeling
A few years back, most of our data was structured or textual. Nowadays, with the Internet of Things (IoT), a large share of the data consists of images and videos. That in itself is not a problem, but many systems that use machine learning or deep learning are trained in a supervised way, so they require the data to be labeled. The vast amounts of data we produce every day don't help either; we've reached a point where there aren't enough people to label all the data being created. There are databases that offer labeled data, including ImageNet, a database of over 14 million images, all manually annotated by ImageNet's contributors. Even though more appropriate data is sometimes available elsewhere, many computer vision specialists use ImageNet anyway, simply because its image data is already labeled.
There are a few data labeling approaches you can adopt: label the data internally within your company, outsource the work, use synthetic labeling, or use data programming. Each approach has its pros and cons:

- In-house labeling: high quality and full control, but slow and expensive.
- Outsourcing: scales well, but label quality varies and needs oversight.
- Synthetic labeling: fast and cheap to generate, but may not reflect real-world data.
- Data programming: labels large datasets automatically with heuristic rules (sketched below), but the resulting labels are noisy.
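To make the last approach concrete, here is a minimal sketch of data programming in Python, assuming a toy spam-filtering task; the labeling rules, the example emails, and the label values are all hypothetical:

```python
# Minimal data-programming sketch: several noisy "labeling functions"
# vote on each example, and the most common vote becomes the label.
# The task (spam detection), rules, and emails are hypothetical.

SPAM, HAM, ABSTAIN = 1, 0, -1

def lf_contains_free(text):
    return SPAM if "free" in text.lower() else ABSTAIN

def lf_mentions_meeting(text):
    return HAM if "meeting" in text.lower() else ABSTAIN

def lf_many_exclamations(text):
    return SPAM if text.count("!") >= 3 else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_free, lf_mentions_meeting, lf_many_exclamations]

def weak_label(text):
    """Return the most common non-abstain vote, or ABSTAIN if none."""
    votes = [vote for lf in LABELING_FUNCTIONS
             if (vote := lf(text)) != ABSTAIN]
    return max(set(votes), key=votes.count) if votes else ABSTAIN

emails = ["FREE offer!!! Click now!!!", "Agenda for tomorrow's meeting"]
print([weak_label(e) for e in emails])  # -> [1, 0]
```

Noisy labels produced this way are usually cleaned up or weighted afterwards, but even this crude majority vote can label far more data than human annotators could.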
3. Explainability
With many "black box" models, you end up with a conclusion, e.g. a prediction, but no explanation for it. If the conclusion provided by the system overlaps with what you already know and believe is right, you're not going to question it. But what happens if you disagree? You want to know HOW the decision was made. In many cases, the decision itself is not enough. Doctors cannot rely solely on a suggestion provided by a system when their patients' health is at stake.
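One simple, model-agnostic way to get at least a partial explanation is permutation importance: shuffle one input feature at a time and watch how much the model's accuracy drops. Below is a minimal sketch on made-up data; the dataset, the random-forest model, and the feature count are assumptions for illustration:

```python
# One way to peek inside a black-box model: permutation importance.
# Shuffle one feature at a time and measure how much accuracy drops;
# features whose shuffling hurts the most mattered most to the model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic data: only feature 0 actually drives the label.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

for i in range(X.shape[1]):
    X_perm = X.copy()
    rng.shuffle(X_perm[:, i])          # break the feature-label link
    drop = baseline - model.score(X_perm, y)
    print(f"feature {i}: accuracy drop {drop:.3f}")
```

On this toy data, shuffling feature 0 destroys the model's accuracy while the other two barely matter, which tells you what the model is actually relying on.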
4. Case-specific learning
Our intelligence allows us to apply experience from one field to a different one. That's called transfer of learning: humans can carry what they learn in one context over to another, similar context. Artificial intelligence still has difficulties carrying its experience from one set of circumstances to another. On one hand, that's no surprise: we know that AI is specialized, built to carry out one strictly specified task. It's designed to answer one question only, so why would we expect it to answer a different question as well? On the other hand, the "experience" AI acquires on one task can be valuable for another, related task. Is it possible to reuse that experience instead of developing a new model from scratch? Transfer learning is an approach that makes this possible: an AI model is trained to carry out a certain task, and that learning is then applied to a similar (but distinct) activity. In other words, a model developed for task A is later used as the starting point for a model for task B.
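In code, a common version of this recipe looks like the PyTorch sketch below: reuse a network pretrained on ImageNet as task A, freeze it, and attach a new output head for task B. The ten-class head is a placeholder assumption, not something from the article:

```python
# A common transfer-learning recipe: reuse a network pretrained on
# ImageNet (task A) as the starting point for a new classifier (task B).
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so their "experience" is kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer; only this new head is trained on task B.
# 10 output classes is a placeholder for whatever task B requires.
model.fc = nn.Linear(model.fc.in_features, 10)
```

Only the small new head needs training data from task B, which is exactly why transfer learning is attractive when labeled data is scarce.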
5. Bias
Bias is something many people worry about: stories of AI
systems being “prejudiced” against women or people of
color make the headlines every once in a while. But how
does that happen? Surely, AI cannot have bad intentions.
Or can it…?
No, it cannot. Such an assumption would also mean that AI is conscious and can make its own choices, when in reality AI makes decisions based only on the available data. It doesn't have opinions of its own, but it learns from the opinions of others. And that's where bias comes in.
Bias can occur as a result of a number of factors, starting with the way the data is collected. If the data comes from a survey published in a magazine, we have to be aware that the answers (the data) come only from that magazine's readers, a limited social group. In such a case, we can't say that the dataset is representative of the entire population.
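Here's a tiny simulation of that effect, assuming a readership that skews young; all the numbers are invented for illustration:

```python
# Tiny illustration of sampling bias: estimating average age from a
# magazine survey whose readers skew young. All numbers are made up.
import numpy as np

rng = np.random.default_rng(42)

population_ages = rng.normal(40, 12, size=100_000)   # whole population
readers = population_ages[population_ages < 35]      # magazine readers only

print(f"population mean age: {population_ages.mean():.1f}")  # ~40
print(f"survey mean age:     {readers.mean():.1f}")          # far lower
```

A model trained only on the survey respondents would inherit that skew without anyone intending it to.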
The way data is sampled is another source of bias: when a group of people uses a system, they may have favorite features and simply not use (or rarely use) other features. In that case, the AI cannot learn as much about the functions that are used less frequently.
But there is another thing we have to consider in terms of bias: data comes from people. People lie. People spread stereotypes. This is what happened with Amazon's recruitment tool, whose AI recruiter turned out to be gender-biased. Since men dominated the workforce in its technical departments, the system learned that male applicants were preferable and penalized resumes that included the word "women's". It also downgraded graduates of all-women's colleges. You can read more about this case in my article about AI fails.