Teaching AI to think

Artificial intelligence is the new “in thing”. Although the technology is still in its infancy, companies in every sector are experimenting with different ways to incorporate AI into their operations, and researchers are constantly searching for ways to improve machine learning algorithms.

Unfortunately, many of today’s algorithmic systems have demonstrated alarming levels of bias in their operation, doing things like predicting criminality along racial lines and determining credit limits based on gender. For this reason, a great deal of exploration has been taking place to find ways to make AI responses more “fair”.

A new study led by researchers from the University of Massachusetts Amherst looks to offer an answer, describing a framework to prevent what the team calls “undesirable behaviour” in intelligent machines. This undesirable behaviour is not limited to biases caused by skewed statistics or a failure to account for extenuating criteria; the framework can be applied to any circumstance where an AI system has to make decisions.

The research team used diabetes treatment to illustrate this. They applied the concept of undesirable behaviour to dangerously low blood sugar, or hypoglycaemia, training a machine to improve an insulin pump’s dosing while excluding any changes that would increase the frequency of hypoglycaemia. Until now, most machine learning algorithms have provided no way to impose this kind of constraint on an AI system’s behaviour, because such constraints were not considered in their early designs.

At the heart of the new system are what the team calls “Seldonian” algorithms, named after Hari Seldon, the central character of Isaac Asimov's famous Foundation series of sci-fi novels. The framework doesn't imbue AIs with any inherent understanding of morality or fairness, but rather makes it easier for people to specify and regulate undesirable behaviour when they are designing their core algorithms.

The team saw good results in tests where an automated insulin pump identified a tailored, safe way to predict doses for a person based on their blood glucose readings. In another experiment, they developed an algorithm to predict students’ Grade Point Averages while avoiding the gender bias found in commonly used regression algorithms.
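The published framework is broader than any single example, but its core recipe can be sketched in a few lines. Below is a minimal, hypothetical Python sketch of a Seldonian-style training loop: the data is split in two, a candidate solution is trained on one half, and the candidate is only released if a high-confidence bound computed on the held-out half shows the specified undesirable behaviour stays within its limit. The helper names (`fit_candidate`, `harm_per_sample`) and the Student-t bound are assumptions for illustration, not the authors’ actual code.

```python
import numpy as np
from scipy import stats

def seldonian_fit(data, fit_candidate, harm_per_sample, delta=0.05, harm_limit=0.0):
    """Toy Seldonian-style loop: return a model only if a (1 - delta)
    confidence bound says its 'undesirable behaviour' stays under harm_limit.

    fit_candidate(train_split)           -> a candidate model (any object)
    harm_per_sample(model, safety_split) -> per-sample harm estimates, e.g. the
        predicted change in hypoglycaemia frequency for each patient record.
    """
    rng = np.random.default_rng(0)
    shuffled = rng.permutation(data)
    half = len(shuffled) // 2
    train_split, safety_split = shuffled[:half], shuffled[half:]

    # 1) Candidate selection: optimise the usual objective on one split.
    candidate = fit_candidate(train_split)

    # 2) Safety test on held-out data: one-sided upper confidence bound
    #    on the mean harm (a Student-t bound as a stand-in).
    harms = np.asarray(harm_per_sample(candidate, safety_split), dtype=float)
    n = len(harms)
    upper = harms.mean() + harms.std(ddof=1) / np.sqrt(n) * stats.t.ppf(1 - delta, n - 1)

    # 3) Release the candidate only if the constraint holds with high
    #    confidence; otherwise refuse ("no solution found").
    return candidate if upper <= harm_limit else None
```

The point of the split is that the safety test runs on data the candidate never saw, so the guarantee is not an artefact of overfitting; refusing to return a solution is itself a valid, and sometimes the only safe, outcome.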

Another research team has taken a different approach to “humanising” AI responses. Neuroscientists Kingson Man and Antonio Damasio recently published a paper where they propose a strategy for imbuing machines (such as robots or humanlike androids) with the “artificial equivalent of feeling”.

Typical intelligent machines are designed to perform a specific task, like diagnosing diseases or driving a car, but AI struggles to cope with situations different from those it has been programmed to handle. Man and Damasio believe that “feelings” could be the answer.

At its core, their proposal calls for machines designed to observe the biological principle of homeostasis: living things must regulate themselves to remain within a narrow range of suitable conditions. An intelligent machine’s awareness of analogous features of its own internal state would amount to the robotic version of feelings.

“Feelings motivate living things to seek optimum states for survival, helping to ensure that behaviours maintain the necessary homeostatic balance. An intelligent machine with a sense of its own vulnerability should similarly act in a way that would minimise threats to its existence,” they write.

“Rather than having to hard-code a robot for every eventuality or equip it with a limited set of behavioural policies, a robot concerned with its own survival might creatively solve the challenges that it encounters. Basic goals and values would be organically discovered, rather than being extrinsically designed.”
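To make the homeostasis idea concrete, here is a toy sketch (my own construction, not a model from Man and Damasio’s paper) of an agent whose “feeling” is simply how far an internal variable has drifted outside a comfortable range, and which chooses whichever action it predicts will leave it least distressed.

```python
# Toy illustration: "feeling" as distance from a homeostatic set range,
# with action selection that minimises predicted distress.

SAFE_LOW, SAFE_HIGH = 0.4, 0.8   # comfortable range for the internal state

def feeling(state):
    """Distress signal: zero inside the safe range, growing as the state drifts out."""
    if state < SAFE_LOW:
        return SAFE_LOW - state
    if state > SAFE_HIGH:
        return state - SAFE_HIGH
    return 0.0

def step(state, action):
    """Hypothetical dynamics: 'work' drains the internal state, 'recharge' restores it."""
    return state - 0.1 if action == "work" else min(1.0, state + 0.2)

def choose_action(state):
    """Pick the action whose predicted next state feels least distressing."""
    return min(["work", "recharge"], key=lambda a: feeling(step(state, a)))

state = 0.5
for t in range(10):
    action = choose_action(state)
    state = step(state, action)
    print(t, action, round(state, 2), round(feeling(state), 2))
```

Nothing here is hard-coded for a particular scenario: the agent works until its internal state approaches the edge of the safe range, then recharges, simply because that is what keeps its “feeling” of distress at zero.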

They also believe that devising self-protection capabilities might lead to enhanced thinking skills. In other words, protecting its own existence might be just the motivation a robot needs to eventually emulate human general intelligence.

Here, again, we have echoes of Asimov’s science fiction, with Man and Damasio’s explanation being reminiscent of his famous laws of robotics: robots must not harm humans, robots must obey humans, and robots must protect their own existence. With advances in AI proceeding apace, perhaps it’s time Asimov’s laws were incorporated into all future research in the space.