
The field of artificial intelligence (AI) was born at a meeting of computer scientists at Dartmouth College in 1956. Researchers and their students had produced programmes that seemed to bridge the gap between man and machine, interpreting logical expressions in English or beating opponents in games of checkers. By 1965, one of the meeting’s attendees, Herbert Simon, claimed: “Machines will be capable, within twenty years, of doing any work a man can do,” setting a tone of aspiration for the years ahead. More than 50 years have passed since, and dreams of an automated world, with sentient computers and driverless cars, are still mere fantasy. What does the future hold for artificial intelligence?


AI is concerned with ‘intelligent agents’: machines that can sense their environment and respond to it in a way that maximises their chance of achieving a given goal. A particular algorithm is chosen and provided with sample datasets. Training then ensues, with the AI fitting the model’s parameters to those datasets through sophisticated trial and error. The outputs of the trained model then form the basis of future decisions.
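To make that train-then-decide loop concrete, the toy Python sketch below fits a two-parameter model to a handful of invented data points by repeated trial and error (gradient descent), then uses the fitted model for a new prediction. The data and settings are made up purely for illustration.

```python
# A toy illustration of the train-then-decide loop described above:
# a model with two parameters (w, b) is fitted to sample data by
# repeated trial and error, then used to make a prediction on a new
# input. The numbers here are invented for illustration only.

samples = [(1.0, 3.1), (2.0, 5.0), (3.0, 6.9), (4.0, 9.2)]  # (input, target) pairs
w, b = 0.0, 0.0          # model parameters, starting as rough guesses
learning_rate = 0.01

for step in range(5000):
    # Measure how wrong the current parameters are, then nudge them
    # in the direction that reduces that error.
    grad_w = grad_b = 0.0
    for x, y in samples:
        error = (w * x + b) - y
        grad_w += 2 * error * x / len(samples)
        grad_b += 2 * error / len(samples)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

# The trained model now forms the basis of future decisions.
print(f"learned model: y = {w:.2f}x + {b:.2f}")
print(f"prediction for x = 5: {w * 5 + b:.2f}")
```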

Whilst an automated future is still a long way off, scientists have looked to nature for inspiration on how to improve this so-called machine learning. Deep learning, for example, works in a similar way to the information-categorisation systems of our nervous system: it makes machines capable of detecting features within an input dataset and of using those features to structure subsequent analysis.
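As a rough illustration of that layered idea, the sketch below passes an input through a small hand-built network: a hidden layer whose units act as simple feature detectors, and an output layer that combines those features into a decision. The weights are hand-picked toy values, not the result of deep learning proper.

```python
# A minimal sketch of the layered idea behind deep learning: the input is
# passed through a hidden layer whose units respond to simple features,
# and the output layer combines those features into a score.

import numpy as np

def relu(x):
    return np.maximum(0, x)

# Hidden layer: each row of W1 acts as one feature detector.
W1 = np.array([[ 1.0, -1.0],    # responds when input[0] > input[1]
               [-1.0,  1.0]])   # responds when input[1] > input[0]
b1 = np.array([0.0, 0.0])

# Output layer: combines the detected features into a single score.
W2 = np.array([[1.0, -1.0]])
b2 = np.array([0.0])

def network(x):
    features = relu(W1 @ x + b1)   # stage 1: detect features in the input
    return W2 @ features + b2      # stage 2: use them in the final analysis

print(network(np.array([2.0, 1.0])))   # positive score
print(network(np.array([1.0, 2.0])))   # negative score
```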

AI typically works as a black box: a system whose inputs and outputs we can see, but whose internal workings we cannot. A key issue is therefore one of trust. Humans are generally capable of rationalising their decisions; these algorithms offer no such explanations. One doesn’t think twice about sparring against a chess-playing bot. But would you trust a machine to keep you safe on the road? Would you trust its judgment on the best treatment for your cancer?

An in-depth review by STAT in September showed that Watson, IBM’s celebrated AI, has struggled to live up to its initial hype as a game changer in health care. Despite heavy marketing, only a “few dozen” cancer centres have adopted the system. A major concern, particularly among hospitals outside the United States, is that Watson’s advice is biased towards particular patient demographics and treatment preferences. This stems from the curated data Watson is given by doctors, who have been described as “unapologetic” about inserting their own biases in the hope that their expertise will help the AI make better recommendations.

While Watson’s trainers may be too hands-on with their data, others are not paying enough attention to theirs. Research published in April showed that AIs are very good at picking up and amplifying societal prejudices from the data we feed them. The researchers’ AI was more likely to associate European-American names with pleasant words like “gift” and “happy”, and African-American names with unpleasant ones. Google Translate shows similar effects, translating the Turkish gender-neutral pronoun “o” as “he” when paired with “doktor” (doctor) and as “she” when paired with “hemşire” (nurse). This undermines the proposition, made by many, that AIs are more impartial judges than humans, even as they are deployed in areas particularly susceptible to subconscious prejudice, such as job recruitment and criminal justice.
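The association effect can be sketched in a few lines, loosely in the spirit of the word-embedding association tests used in that research. The three-number ‘embeddings’ below are invented stand-ins; real studies measure vectors learned from large text corpora.

```python
# A toy sketch of how bias in training text becomes bias in a model:
# words that co-occur in similar contexts end up with similar vectors,
# so we can measure how strongly a name leans towards "pleasant" words.
# All vectors here are invented for illustration.

import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

embeddings = {
    "gift":    np.array([0.9, 0.1, 0.2]),
    "happy":   np.array([0.8, 0.2, 0.1]),
    "failure": np.array([0.1, 0.9, 0.3]),
    "name_a":  np.array([0.7, 0.2, 0.3]),   # stands in for one group of names
    "name_b":  np.array([0.2, 0.8, 0.2]),   # stands in for another group
}

def association(name, pleasant=("gift", "happy"), unpleasant=("failure",)):
    # Average similarity to pleasant words minus average similarity to
    # unpleasant words: a higher score means a more "pleasant" association.
    pos = np.mean([cosine(embeddings[name], embeddings[w]) for w in pleasant])
    neg = np.mean([cosine(embeddings[name], embeddings[w]) for w in unpleasant])
    return pos - neg

print(association("name_a"))   # higher: more "pleasant" association
print(association("name_b"))   # lower: the prejudice absorbed from the data
```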

The final stumbling block holding AI back is that frontline innovation in the field seems to have stagnated. Modern advances in AI were enabled not by the emergence of new models, but by vast increases in processing power. Deep learning is simply a more complex, and hence more computationally expensive, variant of the decades-old technique of neural networks.

Deep learning has served us well, but it can only take us so far. For AI to play a more prominent role in our society, we need a new kind of algorithm – ideally a more transparent one that can communicate its thoughts to us. That would not only allow the public to trust it more, but also help its human counterparts to advance their own fields of study. The hype over recent successes like AlphaGo, the first AI to beat a professional Go player, might have us believe that we are on the cusp of an AI revolution, but in reality we are still a long, long way from it.