Reasoning is Essential to Humans and Should be Equally Important to AI

Mohammed Terry-Jack

September 22, 2020


An overview of existing reasoning approaches.

As far as Artificial Intelligence has come, and despite all the funding and research being thrown at it, it still struggles with some of the most basic tasks because it lacks reasoning skills. This can make products and services equipped with AI seem rather unintelligent, despite the leaps achieved over the past few years. In this piece, we will touch on how AI can reason and argue that reasoning is a necessary ability for any advanced AI of the future, in particular Conversational AI, to become truly intelligent.

1. Knowledge Reasoning vs Knowledge Retrieval

Imagine we want to store the information given in the following sentence:

“Charles has a 5 year-old daughter called Mary and a 15 year-old son called Tom”

inside a knowledge base so that we can query that information later. We could memorise it exactly as it is (i.e. store it as a raw string), but this would make things like compressing the knowledge and querying it efficiently quite difficult. It is often preferable to store the information in a structured form (like a table or a graph).


Structuring raw data.

Knowledge Retrieval

Different structures have advantages and limitations relative to one another. Let’s say we choose to structure the sentence’s information as a table. It may look something like this:


An example tabular knowledge-base.
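For the sentence above, a minimal table might contain one row per child mentioned, something like the sketch below (the exact columns chosen here are just one possibility):

<pre>name  | age | gender | parent
------+-----+--------+--------
Mary  | 5   | female | Charles
Tom   | 15  | male   | Charles
</pre>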


This type of structure works well for efficiently querying the knowledge it contains (knowledge retrieval).

For example:

>>Query: “Which people are mentioned in the text?”



>>Query: “How old is Tom?”



>>Query: “Is Tom a girl?”



>>Query: “Is Mary one of Charles’s children?”
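(Reading these off a table like the one sketched above, the expected answers would be: Charles, Mary and Tom; 15; no, Tom is listed as male; and yes, Charles is listed as Mary’s parent.)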


Knowledge Reasoning

However, we cannot query the knowledge base for information like:

>>Query: “Who is younger, Tom or Mary?”

>>Query: “Is Charles younger than Mary?”

>>Query: “Who are Tom’s parents?”

>>Query: “Who are Mary’s siblings?”

>>Query: “How old is Charles?”

Yet we can quite easily answer the above questions ourselves, given the same sentence. We do this by reasoning. In fact, we reason so often that we are not always aware we are doing it. If we were to analyse our thoughts, the reasoning we use to answer the above questions might sound something like this:

<pre> Q.1) who is younger, Tom or Mary? </pre>

Mary is younger because:

  • Mary is 5 (“5 year-old …called Mary”)
  • Tom is 15 (“15 year-old …called Tom”)
  • 5 is less than 15

<pre> Q.2) is Charles younger than Mary? </pre>

No. Charles cannot be younger than Mary because:

  • Mary is the child of Charles (“Charles has a…daughter called Mary”)
  • Parents are born before their children

<pre>Q.3) who are Tom’s parents? </pre>

Charles is a parent of Tom because:

  • Tom is the child of Charles (“Charles has a …son called Tom”)
  • If you have a child A, then you are a parent of A

<pre>Q.4) who are Mary’s siblings?</pre>

Tom is Mary’s sibling because:

  • Charles is Mary’s father (“Charles has a… daughter called Mary”)
  • Charles is Tom’s father (“Charles has a… son called Tom”)
  • Someone with the same father is your sibling

<pre>Q.5) how old is Charles? </pre>

Charles is definitely older than 15 because:

  • Tom is the son of Charles (“Charles has a…son called Tom”)
  • Tom is 15 (“…15 year-old …called Tom”)
  • a parent is older than (born before) their child

Types of Reasoning: Deductive vs Abductive Logic

As you can see from the examples above, the information we used to reach our answers was only partly extracted from the text itself. We also used general principles to expand that knowledge and arrive at answers that were not explicitly stated in the text. This type of reasoning is known as deductive logic, and the deduced information is necessarily true as long as the underlying premises (i.e. the information from the sentence) are true.

Abductive logic is another type of reasoning, one we use even more often. The last question, “How old is Charles?”, is a prime example of where we might use such reasoning to go beyond the strictly deduced answer (older than 15). The reasoning may look something like this:

  • Charles was likely past the age of puberty when he became a parent (because this is usually the case — a reasonable assumption);
  • the exact age of puberty varies from person to person, but is usually around the teenage years (another fair assumption);
  • Thus we can safely assume that Charles was older than 12 when he became a father;
  • and conclude, with some level of confidence, that Charles is most probably older than 27 years (at least 12 years for puberty + 15 years to be older than his eldest son).

Such reasoning is useful for real-world problems. However, even if the underlying premises are true, the conclusions may not be, because of the assumptions used to construct them (e.g. Charles may have been unusually young when he reached puberty).


Relational Knowledge Bases (e.g. Knowledge Webs, etc.)


One way to perform reasoning over our knowledge base is to use relational structures such that all knowledge is stored as a relation to another piece of knowledge (forming a big knowledge web or relational graph). The sentence stored in our relational knowledge base may look something like this:

Horn clauses are used to structure the sentence’s info into an example relational KB (syntax = SWI Prolog).
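For illustration, a minimal set of such facts might look like the sketch below; is_child_of/2 matches the predicate used in the sibling rule further down, while the other predicate names (person/1, age/2, gender/2) are simply one possible choice:

<pre>% facts extracted from the sentence (predicate names other than is_child_of/2 are illustrative)
person(charles).
person(mary).
person(tom).

is_child_of(mary, charles).   % "Charles has a ... daughter called Mary"
is_child_of(tom, charles).    % "... and a ... son called Tom"

age(mary, 5).                 % "5 year-old ... called Mary"
age(tom, 15).                 % "15 year-old ... called Tom"

gender(mary, female).
gender(tom, male).
</pre>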


The relational KB displayed as a graph.

Knowledge Retrieval

Now we can perform standard knowledge retrieval queries as we did with the tabular knowledge base, e.g.

>>Query: “Which people are mentioned in the text?”


>>Query: “How old is Tom?”

>>Query: “Is Tom a girl?”

>>Query: “Is Mary one of Charles’s children?”
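Using the illustrative facts sketched earlier, these retrieval queries might be written (and answered) roughly as follows:

<pre>?- person(PERSON).             % Which people are mentioned?        -> charles ; mary ; tom
?- age(tom, AGE).              % How old is Tom?                    -> AGE = 15
?- gender(tom, female).        % Is Tom a girl?                     -> false
?- is_child_of(mary, charles). % Is Mary one of Charles's children? -> true
</pre>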


Knowledge Reasoning

We can also encode reasoning rules over our relational knowledge base. For example, the rule we used earlier to deduce that someone is your sibling was “Someone with the same father is your sibling”, which can be formally encoded like so:

<pre> is_sibling_of(PERSON_A, PERSON_B) :-
  is_child_of(PERSON_A, PERSON_C),
  is_child_of(PERSON_B, PERSON_C),
  PERSON_A \= PERSON_B.
</pre>

This essentially means a new relation type (called “is_sibling_of/2”) will be automatically created in our knowledge base between any two objects (here referred to by the placeholder variables PERSON_A and PERSON_B) if the first (PERSON_A) is a child of a third person (PERSON_C) and the second (PERSON_B) is also a child of that same third person. What’s more, the two objects cannot be the same (PERSON_A \= PERSON_B).

New relation (is_sibling_of/2) is automatically deduced when the conditions of the reasoning rule are met.


If we encode the rest of the reasoning rules used when solving the answers above, we may end up with something like this:

Example reasoning rules stored in our knowledge base (syntax = SWI Prolog).
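The exact encoding will vary, but a minimal sketch covering the reasoning used above might look like this (the rule names and formulations are just one possibility):

<pre>% "If you have a child A then you are a parent of A"
is_parent_of(PARENT, CHILD) :-
  is_child_of(CHILD, PARENT).

% compare known ages ("5 is less than 15")
is_younger_than(PERSON_A, PERSON_B) :-
  age(PERSON_A, AGE_A),
  age(PERSON_B, AGE_B),
  AGE_A < AGE_B.

% "Parents are born before their children"
is_younger_than(CHILD, PARENT) :-
  is_child_of(CHILD, PARENT).

is_older_than(PERSON_A, PERSON_B) :-
  is_younger_than(PERSON_B, PERSON_A).
</pre>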


Now we can query our knowledge base for things that we previously could not in our tabular knowledge base, like:

>>Query: “Who is younger, Tom or Mary?”


The first result returned is the answer to the question: Mary (younger), Tom (older). The other answers are bonus comparisons.

>>Query: “Is Charles younger than Mary?”

>>Query: “Who are Tom’s parents?”


>>Query: “Who are Mary’s siblings?”

And so on…
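With rules like those sketched above loaded alongside the facts, the queries could be posed roughly like this:

<pre>?- is_younger_than(PERSON_A, PERSON_B). % e.g. PERSON_A = mary, PERSON_B = tom ; ...
?- is_younger_than(charles, mary).      % false
?- is_parent_of(PARENT, tom).           % PARENT = charles
?- is_sibling_of(SIBLING, mary).        % SIBLING = tom
</pre>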

2. Reasoning to Answer Complex Queries

Now that we can reason over our knowledge base, handling complex queries becomes possible. To illustrate this, let’s imagine we have a relational knowledge base of restaurants to query.


A small restaurant KB.
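As a rough sketch, such a KB might contain facts like these (only Sakagura is taken from this example; the other entries are made up for contrast):

<pre>restaurant(sakagura).
sells(sakagura, sushi).
near(sakagura, 'Charing Cross').

restaurant(pizza_place_a).          % made-up entry
sells(pizza_place_a, pizza).
near(pizza_place_a, 'Charing Cross').

restaurant(sushi_place_b).          % made-up entry
sells(sushi_place_b, sushi).
near(sushi_place_b, 'Camden').
</pre>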


Now suppose someone expressed a fairly complex query, like:

“Hey, find me a restaurant that sells good Sushi near Charing Cross please.”

This query can be expressed as multiple relations coupled together via a shared variable to enforce a constrained search. For example:

<pre>restaurant(PLACE),
sells(PLACE, sushi),
near(PLACE, 'Charing Cross').
</pre>

This is a single query composed of three relations, but the shared PLACE variable ties the relations together such that the returned value for PLACE needs to satisfy every relation it appears in, i.e. the value for PLACE needs to be a restaurant, sell sushi and be near Charing Cross.

The query displayed as a relational graph.


The only value in our small database which satisfies all three conditions specified in the query is a restaurant called “Sakagura”.
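Against a KB like the sketch above, running the query would bind the variable accordingly:

<pre>?- restaurant(PLACE), sells(PLACE, sushi), near(PLACE, 'Charing Cross').
PLACE = sakagura.
</pre>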

Reasoning to handle Queries with Negations

Reasoning can even be useful for queries which contain negations, like:


“Show me a restaurant near Charing Cross that does not sell Sushi”

<pre>restaurant(PLACE),
near(PLACE, 'Charing Cross'),
sells(PLACE, FOOD_TYPE),
FOOD_TYPE \= sushi.
</pre>

Here we specify that the value for the variable FOOD_TYPE must not be sushi (FOOD_TYPE \= sushi).
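Note that, as written, this finds a place near Charing Cross that sells at least one non-sushi food type; in principle such a place could also sell sushi. If we wanted a place that does not sell sushi at all, SWI Prolog’s negation-as-failure (\+) could be used instead:

<pre>restaurant(PLACE),
near(PLACE, 'Charing Cross'),
\+ sells(PLACE, sushi).
</pre>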


Query displayed as a relational graph and the returned values which satisfy the conditions of the query.

Reasoning to handle Co-reference Resolution

Here is another powerful example of where reasoning becomes crucial: it can help resolve ambiguous co-references, such as:

“I have a key and a lock, so I turned it and it opened!”

The question here is, what does each “it” refer to? The lock? The key?

We know that the first “it” refers to the key and the second “it” to the lock, but how? Perhaps we reasoned that a key is usually the thing being turned, not the lock. Likewise, only locks can be opened, not keys.

Our example KB contains information on which objects can be opened, etc.


>>Query: “it opened?”

Co-reference resolution: “it” refers to the lock.
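A minimal sketch of how this might be encoded (the predicate names here are assumptions) could look like this:

<pre>% what each mentioned object affords
can_be_turned(key).
can_be_opened(lock).

mentioned(key).
mentioned(lock).

% "... and it opened" -> which mentioned object can be opened?
referent_of_it(opened, OBJECT) :-
  mentioned(OBJECT),
  can_be_opened(OBJECT).

% ?- referent_of_it(opened, OBJECT).   ->   OBJECT = lock
</pre>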

3. Conclusion

We have seen some of the great things that reasoning over a knowledge base enables: deducing new knowledge from known knowledge, answering complex questions, creating new facts in the knowledge base and resolving ambiguities. Reasoning clearly goes beyond the data seen in training examples.

So what does the future hold for us?

Hint: Neural-Symbolic Machine Learning

Large language models such as BERT, T5 and GPT-3 absorb knowledge from huge amounts of text (e.g. all of Wikipedia, large swathes of the web). This knowledge is stored in a distributed way in the underlying neural network, alongside information on how language works (word order, grammar, vocabulary). This is a very powerful paradigm that extends the usual approach to data storage, as the knowledge is injected into the neural network without human supervision (although, at the same time, the knowledge is stored implicitly and is therefore more obscure).

Gary Marcus and Nikolai Rozanov have in the past criticised pure neural network approaches that simply learn from incoming data and create a distributed version of the seen knowledge. Even though this is a powerful form of knowledge storage, there are limits to what knowledge storage and retrieval alone can achieve. That limit is reasoning.

Reasoning truly is a fascinating, defining feature of human intelligence, but how can it be utilised to make AI more intelligent? How do we reason over the distributed knowledge? How do we extract the relational queries automatically from natural language?

These are all questions that we at Wluper are addressing.

References

To run Python in the browser, try Colab.

To run Prolog in the browser, try SWISH (https://swish.swi-prolog.org/).

If you liked this article and want to support Wluper, please share it and follow us on Twitter and Linkedin.

If you want to work on Conversational AI, check our careers page.
