
Teaching AI is like educating a baby

Artificial Intelligence seems complicated, something only for the uber-intelligent.


In reality, AI isn’t very intelligent at all. It learns everything from you, the user.

And it learns in a very similar way to a baby.

We're going to attempt to explain the process of Machine Learning and how AI then brings this Machine Learning together with Deep Learning to appear intelligent.

Knowing something means nothing until you apply that knowledge.

Take a picture of a dog. You show this to a baby and tell it, “this is a dog”. Similarly you take a picture of a cat and say “this is a cat”.

Now, both are furry animals and therefore fall into the furry animal category, but you need to tell the difference between a cat and a dog.

A baby may assume that a small furry animal is a cat when, actually, it’s a small dog.

How do we overcome this?

You have to show the baby dogs of all different sizes, shapes, colours and characteristics so it can learn all the options that would allow it to put a dog in the dog category and not the cat category, because size and furriness aren’t enough sorting criteria.

Cool, the baby now knows the difference between a dog, cat and other unrecognizable furry animals (we’ll mass sort them into “unnamed animals” for now).

You teach AI the same way you teach a baby: by showing or exposing the platform to a set of data, you educate it on what to look for and how to sort it.
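To make that concrete, here is a rough sketch of “learning from labelled examples” in Python. The features and numbers are made up purely for illustration; this is not how any particular platform (ours included) is built under the hood.

```python
# A minimal "teaching by example" sketch using scikit-learn.
# The features are hypothetical: [weight in kg, ear length in cm, barks?].
from sklearn.tree import DecisionTreeClassifier

examples = [
    [30.0, 10.0, 1],  # labrador
    [4.0, 6.0, 0],    # house cat
    [6.0, 4.0, 1],    # small dog - the tricky case
    [5.0, 7.0, 0],    # large cat
]
labels = ["dog", "cat", "dog", "cat"]

model = DecisionTreeClassifier().fit(examples, labels)

# A small, furry animal: size alone would suggest "cat",
# but the extra clue (it barks) lets the model answer "dog".
print(model.predict([[5.5, 5.0, 1]]))  # -> ['dog']
```

The more varied the examples you show it, the better the model gets at telling the categories apart - exactly like the baby.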

Let’s take it a step further.

Say an AI platform knows how to group and categorise skills - languages, for example.

Just as the baby identifies a dog, we want the AI tool to identify French. Every time someone types or says “French”, we want to code this as a skill against their candidate file.

Seems pretty simple, right? There are only so many ways to spell French, and only a certain number of variations, but what the AI tool doesn’t yet understand is the intent behind the phrase.

AI can pick up the terminology “French”, but what it doesn’t understand is the manner in which it is said.

“I speak French”

“I do not speak French”

All the system initially recognises is “French”, classing both answers as French speakers.
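In code, that naive first pass might look something like this deliberately crude keyword check (a sketch, not a real implementation):

```python
# Naive keyword matching: it only checks whether "french" appears.
def speaks_french(answer: str) -> bool:
    return "french" in answer.lower()

print(speaks_french("I speak French"))         # True
print(speaks_french("I do not speak French"))  # True - wrong!
```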

Just as the baby classed a small furry animal as a cat, the AI tool needs to learn that although both answers fall into the French language category, the intent behind each answer is very different.

And that’s why intent is the trickier part.

What do we mean by teaching intent?

When we say intent, we mean the intent with which a question is asked or an answer is given.

Candidate 1: “I speak French”

Candidate 2: “I do not speak French”

The first has a positive intent while the second has a negative intent, and we need an AI platform to understand this before it categorises the response or answer.

By identifying intent words, AI can start to understand the intent with which something is spoken.
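One very simplified way to picture this is to look for negation words alongside the keyword. Real platforms train intent classifiers on many example phrases rather than using a hard-coded word list, so treat this as a toy sketch:

```python
# A toy intent check: the keyword plus a few hypothetical negation words.
NEGATION_WORDS = {"not", "no", "never"}

def french_intent(answer: str) -> str:
    words = set(answer.lower().split())
    if "french" not in words:
        return "not mentioned"
    return "negative" if words & NEGATION_WORDS else "positive"

print(french_intent("I speak French"))         # positive
print(french_intent("I do not speak French"))  # negative
```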

Leading us nicely into the section about understanding.

We have four distinct areas that make up understanding:

Utterance - keyword or phrase

Intent - positive or negative

Entity - intent + utterance

Context - conversation + entity

For AI to truly understand, it must be able to put together and process the utterance, intent, entity and context to allow it to come up with an appropriate action.
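As a rough picture of how those four layers could hang together as data (the names here are illustrative only, not anyone’s actual internal model):

```python
# Utterance + intent make an entity; the conversation plus its
# entities make the context the platform reasons over.
from dataclasses import dataclass

@dataclass
class Entity:
    utterance: str   # e.g. "French"
    intent: str      # "positive" or "negative"

@dataclass
class Context:
    question: str          # what the candidate was asked
    entities: list[Entity]

ctx = Context(
    question="Which languages do you speak?",
    entities=[Entity(utterance="French", intent="positive")],
)
print(ctx)
```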

So if we take two examples:

Candidate 1: “I speak French but I do not want to work in a school”

Candidate 2: “I speak French and I do want to work in a school”

Taking into account that AI doesn’t acknowledge filler words, the only difference between these two phrases is the utterance “not”.

If we have developed a platform that can identify and understand the utterance and intent, we call this an entity.

We need to ensure it also understands the right context.

We need the platform to understand and process that yes, both candidates speak French however only one candidate wishes to work in a school.

If AI can’t match the correct entity with the correct context, it can run into trouble, sorting Candidate 1 into the “Non-French Speaking” category because a negative intent is present.

When, in fact, the negative intent is associated with the industry, not the language skill.

That’s why context is so important.
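A very rough sketch of how context can rescue that example: split the answer into clauses and work out the intent per clause, so the “not” only applies to the part about the school. Real systems use proper parsers rather than splitting on “but” and “and”, so again this is only an illustration:

```python
NEGATION_WORDS = {"not", "no", "never"}

def clause_intents(answer: str) -> dict:
    """Toy clause-level intent: one polarity per topic mentioned."""
    intents = {}
    for clause in answer.lower().replace(" and ", " but ").split(" but "):
        words = set(clause.split())
        polarity = "negative" if words & NEGATION_WORDS else "positive"
        if "french" in words:
            intents["French"] = polarity
        if "school" in words:
            intents["school"] = polarity
    return intents

print(clause_intents("I speak French but I do not want to work in a school"))
# -> {'French': 'positive', 'school': 'negative'}
print(clause_intents("I speak French and I do want to work in a school"))
# -> {'French': 'positive', 'school': 'positive'}
```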

Once we have the four categories of understanding sorted, we just have to worry about action.

A predetermined action can be executed at this stage by a chatbot, as a chatbot can only understand utterance, intent, entity and context.
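A predetermined, workflow-style action is essentially a fixed lookup: given what was understood, pick the next step from a list written in advance. The step names below are made up for illustration:

```python
# A fixed, workflow-style mapping from what was understood to the next step.
NEXT_STEP = {
    ("French", "positive"): "ask_about_school_preference",
    ("French", "negative"): "ask_about_other_languages",
}

def choose_action(skill: str, intent: str) -> str:
    return NEXT_STEP.get((skill, intent), "hand_over_to_recruiter")

print(choose_action("French", "positive"))  # ask_about_school_preference
```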

However, voice technology can take it a step further.

As conversational voice technology is not workflow-based, it triggers a different kind of action. The action triggered at this stage is not predetermined, i.e. not created by a workflow.

What if the candidate hesitates? What if they speak fast? Do they speak excitedly or calmly, do they stutter, are they clearly spoken, do they have long pauses before they start speaking? All these things tell us something, and they are unique to voice.

Voice technology has the ability to understand and take action on sentiment, identification and validation, but that’s for another day…



Disclaimer: for the purpose of this article, we have simplified areas. There will be a follow-up piece which digs a little deeper into Deep Learning for anyone interested in a more techy approach!