
Can we rely on AI?

As artificial intelligence (AI) systems grow increasingly complex, they are being used to make forecasts – or rather, to generate predictive model outputs – in more and more areas of our lives. At the same time, concerns about reliability are on the rise, amid widening margins of error in elaborate AI predictions. How can we address these concerns?

Management science offers a set of tools that can make AI systems more trustworthy, according to Thomas G Dietterich, professor emeritus and director of intelligent systems research at Oregon State University.

During a webinar on the AI for Good platform hosted by the International Telecommunication Union (ITU), Dietterich told the audience that the discipline that brings human decision-makers to the top of their game can also be applied to machines.

Why is this important? Because human intuition still beats AI hands-down when it comes to making judgement calls in a crisis. People – especially those working in their areas of expertise and experience – are simply more reliable.

Studies by University of California, Berkeley, scholars Todd LaPorte, Gene Rochlin and Karlene Roberts found that certain groups of professionals, such as air traffic controllers or nuclear power plant operators, are highly reliable even in high-risk situations. These professionals develop a capability to detect, contain and recover from errors, and practise improvisational problem-solving, said Dietterich.

This is because of their “preoccupation with failure”. They are constantly watching for anomalies and near-misses – and treating those as symptoms of a potential failure mode in the system. Anomalies and near-misses, rather than being brushed aside, are then studied for possible explanations, normally by a diverse team with wide-ranging specialisations. Human professionals bring far higher levels of “situational awareness” and know when to defer to each other’s expertise.

These principles are useful when thinking about how to build an entirely autonomous and reliable AI system, or how to design ways for human organisations and AI systems to work together. AI systems can acquire high situational awareness, thanks to their ability to integrate data from multiple sources and continually reassess risks.

However, current AI systems, while adept at situational awareness, are less effective at anomaly detection and unable to explain anomalies and improvise solutions.

More research is needed before an AI system can reliably identify and explain near-misses. We have systems that can diagnose known failures, but how do we diagnose unknown failures? What would it mean for an AI system to engage in improvisational problem-solving that somehow can extend the space of possibilities beyond the initial problem that the system was programmed to solve?

Shared mental model

Where AI systems and humans collaborate, a shared mental model is needed. AI should not bombard its human counterparts with irrelevant information, and must also understand and be able to predict the behaviour of human teams. 

One way to train machines to explain anomalies, or to deal with spontaneity, could be exposure to the performing arts. Researchers and musicians at Monash University in Melbourne and Goldsmiths, University of London set out to explore whether AI could perform as an improvising musician in a virtual jam session.

Free-flowing, spontaneous improvisations are often considered the truest expression of creative artistic collaboration among musicians. “Jamming” not only requires musical ability, but also trust, intuition and empathy towards one’s bandmates.

In the study, the first system, called “Parrot”, simply repeats whatever is played. The second system autonomously plays notes regardless of the human musician’s contribution. The third is also fully autonomous, but counts the number of notes played by the human musician to set the energy of the music. The fourth and most sophisticated system builds a mathematical model of the human artist’s music: it listens carefully to what the musician plays and builds a statistical model of the notes and their patterns, and even stores chord sequences.
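The fourth system's statistical modelling can be illustrated with a minimal sketch (the function names and the toy melody below are illustrative assumptions, not the study's actual implementation): a first-order Markov model that counts which note tends to follow which, then predicts the most likely continuation.

```python
from collections import Counter, defaultdict

def build_note_model(note_sequence):
    """Count note-to-note transitions in a melody (first-order Markov model)."""
    transitions = defaultdict(Counter)
    for current, nxt in zip(note_sequence, note_sequence[1:]):
        transitions[current][nxt] += 1
    return transitions

def most_likely_next(transitions, note):
    """Predict the most frequently observed note following `note`."""
    if note not in transitions:
        return None  # unseen context: the model has nothing to improvise from
    return transitions[note].most_common(1)[0][0]

# A toy melody: the model learns that "C" is most often followed by "E".
melody = ["C", "E", "G", "C", "E", "G", "C", "D"]
model = build_note_model(melody)
print(most_likely_next(model, "C"))  # "E" (observed twice, vs "D" once)
```

A real accompaniment system would of course model far richer context (timing, dynamics, chords), but the core idea – learn a statistical model of the partner's playing and sample plausible continuations – is the same.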

Adding to this human/AI jamming session approach, Dietterich sees two further promising approaches to improving – and mathematically “guaranteeing” – trustworthiness.

One is a competence model that can compute quantile regressions to predict AI behaviour, using the “conformal prediction” method to make additional corrections. Yet this approach requires lots of data and remains prone to misinterpretation.
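A minimal sketch of the conformal prediction idea, using split conformal intervals around a simple least-squares model (the synthetic data and the linear base model are assumptions for illustration; any base predictor can be plugged in):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + noise. Fit the base model on one half,
# calibrate the interval width on the other half.
x = rng.uniform(0, 10, 400)
y = 2 * x + rng.normal(0, 1, 400)
x_fit, y_fit = x[:200], y[:200]
x_cal, y_cal = x[200:], y[200:]
slope, intercept = np.polyfit(x_fit, y_fit, 1)

# Nonconformity scores: absolute residuals on the held-out calibration set.
scores = np.abs(y_cal - (slope * x_cal + intercept))

# For ~90% coverage, take the ceil((n+1)*(1-alpha))/n quantile of the scores.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

# Prediction interval for a new input: point prediction +/- q.
x_new = 5.0
pred = slope * x_new + intercept
print(f"{pred - q:.2f} .. {pred + q:.2f}")
```

The guarantee – that roughly 90% of new true values land inside the interval – holds under mild assumptions, but, as noted above, it needs a sizeable calibration set, and the intervals are easy to over-interpret.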

The other way is to make autonomous systems deal with their “unknown unknowns” via open category detection. For instance, a self-driving car trained on European roads might have problems with kangaroos in Australia. An anomaly detector using unlabelled data could help the AI system respond to surprises more effectively.
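One simple way such an anomaly detector can be sketched (the 2-D features, cluster labels and threshold rule below are illustrative assumptions, not a production design): score each new input by its distance to the nearest training example, and flag it as a potential unknown category when that distance exceeds anything seen within the training data itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# "European road" training data: feature vectors clustered around two
# known object types (hypothetical 2-D features for illustration).
known = np.vstack([
    rng.normal([0, 0], 0.5, (100, 2)),   # e.g. pedestrians
    rng.normal([5, 5], 0.5, (100, 2)),   # e.g. deer
])

def nn_distance(point, data):
    """Anomaly score: distance to the nearest training example."""
    return np.min(np.linalg.norm(data - point, axis=1))

# Threshold: the largest nearest-neighbour distance observed within the
# training set itself (leave-one-out), so familiar inputs score below it.
loo = [np.partition(np.linalg.norm(known - p, axis=1), 1)[1] for p in known]
threshold = max(loo)

familiar = np.array([0.1, 0.2])    # resembles the training data
kangaroo = np.array([12.0, -3.0])  # unlike anything seen in training

print(nn_distance(familiar, known) <= threshold)  # known category
print(nn_distance(kangaroo, known) > threshold)   # flagged as novel
```

Flagging the input is only the first step; a reliable system would then fall back to cautious behaviour – slow down, hand control back to the human – rather than force the surprise into a known category.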

As AI is deployed in more and more areas of our lives, what is becoming clear is that, far from a nightmare scenario of the machines taking over, the only way AI can be made more reliable, and more effective, is for there to be a tighter-than-ever symbiosis between human systems and AI systems. Only then can we truly rely on AI.

Fred Werner is head of strategic engagement at ITU Telecommunication Standardization Bureau
