AI is increasingly ubiquitous, as are the reservations: data quality and integrity, trust, ethics, and the diminution of the role of human intuition. See Edward Tenner's The Efficiency Paradox for some cautionary notes.
Towards a definition: software that organizes sensed data to make logical inferences leading to action, then monitors the results to modify its internal logic, improving subsequent inferences and actions. In other words, it “learns” to make better predictions (a minimal sketch of this loop follows the list below). The predictive imperative of AI is highlighted in Prediction Machines, a recent book that recasts “the rise of AI as a drop in the cost of prediction,” presaging an ever larger role in our lives.
- Prediction is at the heart of making decisions under uncertainty. Our businesses and personal lives are riddled with such decisions.
- Prediction tools increase productivity: operating machines, handling documents, communicating with customers.
- Uncertainty constrains strategy. Better prediction creates opportunities for new business structures and competitive strategies.
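Here is a minimal sketch of that loop in Python: sense, infer, act, observe the outcome, and update the internal logic. Everything here (the simulated environment, the hidden rule, the learning rate) is invented for illustration; the learner is simply a logistic model updated by repeated gradient steps, one of many ways to realize the loop.

```python
import math
import random

def sense():
    # Sensing: stand-in for real sensor or business data.
    return [1.0, random.random(), random.random()]  # leading 1.0 is a bias term

def predict(weights, features):
    # Inference: a linear score squashed to a probability (logistic model).
    score = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-score))

def update(weights, features, prediction, outcome, lr=0.1):
    # Learning: nudge each weight to reduce the prediction error
    # (one stochastic-gradient step on log loss).
    error = outcome - prediction
    return [w + lr * error * x for w, x in zip(weights, features)]

def true_outcome(features):
    # Hidden rule the learner must discover (purely illustrative).
    return 1 if features[1] + features[2] > 1.0 else 0

weights = [0.0, 0.0, 0.0]
for _ in range(5000):
    features = sense()                               # organize sensed data
    p = predict(weights, features)                   # logical inference
    action = "intervene" if p > 0.5 else "wait"      # inference leads to action
    outcome = true_outcome(features)                 # monitor results
    weights = update(weights, features, p, outcome)  # modify internal logic

print("learned weights:", weights)
```

Run repeatedly, the loop's predictions improve as the weights converge toward the hidden rule, which is the whole of the definition above in miniature.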
An EXAMPLE: “IBM has developed an automated speech analysis programme to find early indicators of mental illness. By combining text to speech software, advanced analytics, machine learning, natural language processing technologies and computational biology, they have created an application for mobile devices which can analyse the way we talk. This impressive technology is able to pick out patterns in a patient’s speech or written words to assess their meaning, syntax and intonation – all of which can provide insights into a person’s mental health. Once collected, data from the programme can be combined with that from wearables and imaging devices such as MRIs, to build up a picture of the individual. Artificially intelligent technology then analyses this data to aid medical professionals in their diagnosis, treatment and monitoring. This technique is able to pick up conditions including depression, degenerative neurological diseases such as Parkinson’s and developmental disorders like ADHD.” As reported in Nature, “In a 2015 study with Columbia University, IBM’s software was able to predict with 100 percent accuracy which members of a group of at risk adolescents would develop their first episode of psychosis within two years.”
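As a rough illustration of the core idea in that pipeline (transcribed speech in, numeric features out, a classifier on top), here is a toy sketch. The transcripts, labels, and choice of scikit-learn are assumptions for illustration only; IBM's actual system combines far richer signals, including syntax, intonation, and data from wearables and imaging.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical transcripts with labels (1 = later developed psychosis).
transcripts = [
    "I went to the store and then I came home",
    "the voices the walls they keep moving and",
    "we had dinner with friends on Sunday evening",
    "nothing connects anymore words slip sideways",
]
labels = [0, 1, 0, 1]

# Turn word patterns into numeric features, then fit a classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(transcripts, labels)

# Score a new speech sample; a real system would surface this probability
# to a clinician as one input among many, not make the diagnosis itself.
print(model.predict_proba(["and then the walls slip sideways"])[0][1])
```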
Medical diagnosis is, in theory, a very promising application, helping physicians make certain they have considered all the possibilities. Unfortunately, the data is a mess. To be useful, data must be gathered in a consistent fashion over a meaningful period of time, and only recently has the collection of medical data become methodical enough to eventually yield its potential.
From an interview with Mary Catherine Bateson, in which she observed that AI “lacks humility, lacks imagination, and lacks humor”:
“Until fairly recently, computers could not be said to learn. To create a machine that learns to think more efficiently was a big challenge. In the same sense, one of the things that I wonder about is how we’ll be able to teach a machine to know what it doesn’t know but that it might need to know in order to address a particular issue productively and insightfully. This is a huge problem for human beings. It takes a while for us to learn to solve problems. And then it takes even longer for us to realize that we don’t know all that we would need to know to solve a particular problem, which obviously involves a lot of complexity.”
An IKEA solution, as reported in The Economist: “COMPUTERS have already proved better than people at playing chess and diagnosing diseases. But now a group of artificial-intelligence researchers in Singapore have managed to teach industrial robots to assemble an IKEA chair—for the first time uniting the worlds of Allen keys and Alan Turing. Now that machines have mastered one of the most baffling ways of spending a Saturday afternoon, can it be long before AIs rise up and enslave human beings in the silicon mines?”