Why the “I” in A.I. needs to go
Apple co-founder Steve Wozniak quipped:
“I agree with the ‘A’ in AI” but not the “I”.
We couldn’t agree more! All A.I.s are “artificial”. None are “intelligent” in the same sense as humans (or even bees). This is the biggest confusion with A.I. It matters and here’s why!
Illusory goal posts (for now)
To define A.I. one needs to define “intelligence”. The jury’s out, and has been for millennia, on whether intelligence is a thing or an illusion and, if the former, what its prerequisites might be.
Today, neither neuroscience, neuropsychology nor philosophy agrees on what constitutes “intelligence”. As a result, most A.I. systems work toward a narrow definition of “intelligence”.
Intelligence so defined is the ability to learn, recognise patterns, display simulated emotional behaviours and solve analytical problems. However, this is just one definition of intelligence in a soupy mix of contested, vaguely formed ideas about the nature of cognition and the structures and circumstances for its existence.
“Intelligence” isn’t being solved anytime soon. Therefore when we talk about A.I. half of what we talk about is illusory (at least for now). So what can we talk about?
Types of A.I.
Unable to pin down the requisites for, and characteristics of, “intelligence”, we can instead grade an A.I. against a human’s ability to perform different functions. A common classification using this approach is as follows:
- Artificial Narrow Intelligence (ANI): matches or exceeds human performance at one specific task, and that task only.
- Artificial General Intelligence (AGI): matches human intelligence across a wide range of tasks.
- Artificial Super Intelligence (ASI): exceeds the smartest humans in any given field.
Can you guess which is today’s A.I.? Nope, it’s not the sexy AGI or ASI. There is no Data from Star Trek and no Terminator (except in films and TV).
Instead, we have self-driving cars, game-playing systems and tools for automating the basic legal review of contracts. The latter two examples might meet or, in the case of Google DeepMind’s Go-playing program AlphaGo, significantly exceed human proficiency at those specific tasks.
For a deeper dive on these distinctions, please see this article.
But boil it down and these A.I.s rely on machine learning — not real intelligence — and consequently do one thing well and one thing only.
It’s no surprise Matthew Velloso, technical advisor to Microsoft’s CEO, famously tweeted words to the effect that if it’s written in Python, it’s probably machine learning, and if it’s written in PowerPoint, it’s probably A.I.
Machine Learning: Maths not Minds
Machine learning is software that makes statistical, and often probabilistic, decisions: given example inputs, it generates a mathematical hypothesis that maps each input to the desired output. We say the machine “learns” if its ability to generate the correct outputs improves over time, and it improves through data-driven tweaking of that hypothesis.
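To make that concrete, here is a minimal sketch in Python (purely illustrative; the data, numbers and code are ours, not any product’s). The “hypothesis” is just a pair of numbers, and “learning” is nothing more than repeated, data-driven tweaking of those numbers until the input-to-output mapping improves. Maths, not minds.

```python
# Minimal illustration: the "hypothesis" is two numbers, w and b, and
# "learning" is data-driven tweaking of them so predictions better match
# the desired outputs. No understanding involved, just arithmetic.

data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # desired mapping: y = 2x + 1

w, b = 0.0, 0.0          # the hypothesis starts out wrong
learning_rate = 0.05

for epoch in range(500):                 # "improves over time"
    for x, y_desired in data:
        y_predicted = w * x + b          # apply the current hypothesis
        error = y_predicted - y_desired
        w -= learning_rate * error * x   # tweak the hypothesis using the data
        b -= learning_rate * error

print(f"learned hypothesis: y = {w:.2f}x + {b:.2f}")  # approximately y = 2.00x + 1.00
```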
When pitching to clients, presenting, or working on or with A.I. products, each of us at lawtomated prefers the term “machine learning”.
The “I” in A.I. invites too much imagination! Machine learning doesn’t.
… however, even “machine learning” remains ambiguous enough to import meaning wider than what it intends to describe. This is because “learning” is tightly coupled with how we describe the way humans and animals acquire new behaviours and knowledge.
As a result, most audiences consciously or unconsciously assume some parallel between “learning” in the natural world and “learning” in the world of machines. Unfortunately, no such parallel exists with regard to the means even if the ends might appear, or be, equivalent.
Why are we stuck with ANI for now?
Today’s world is absent of an AGI or ASI equal to or exceeding human intelligence at more than one task, let alone being smarter than the smartest human in any given field. AGI and ASI exist in films, TV and sci-fi literature only.
Spoiler alert: today’s ANI is very old technology. The techniques for implementing ANI today are mainly: (1) rule-based machine learning, (2) neural networks, and (3) pattern recognition.
All were invented decades ago.
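For a flavour of how an explicit, hand-written rule differs from the learned hypothesis sketched earlier, here is a deliberately simplified, hypothetical sketch (not any vendor’s code): the “intelligence” is entirely authored by a human as rules, and nothing is learned from data.

```python
# Hypothetical, simplified sketch of a hand-written rule: a human encodes
# the hypothesis directly as keyword checks, rather than learning it from data.

def looks_like_governing_law_clause(sentence: str) -> bool:
    keywords = ("governed by", "governing law", "construed in accordance with")
    return any(k in sentence.lower() for k in keywords)

print(looks_like_governing_law_clause(
    "This Agreement shall be governed by the laws of England and Wales."))  # True
print(looks_like_governing_law_clause(
    "Each party shall keep the Confidential Information secret."))          # False
```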
So why now? What’s changed? Quite simply a technology trinity over the past 20 years:
- Data Explosion: thanks to the internet’s expansion into every part of our lives, from the phone in your pocket to smart TV, data availability has increased as much as 1,000-fold.
- Algorithms: key algorithms have improved 10-fold to 100-fold. The data explosion has driven research into refining algorithms and accelerated their optimisation. For instance, deep learning techniques improve with the quantity of data to which they have access. More data = better results.
- Hardware: the number of transistors packed onto computer chips continues to double roughly every 18–24 months in line with Moore’s Law (now 53 years old!). Chips continue to miniaturise and become cheaper. Together these developments enable easy experimentation with machine learning techniques that were previously so prohibitive in cost (time, expense and computational power), and so data-poor, that they remained academic and untested in the real world.
Today’s A.I. boom is what happens when technology catches up with ideas. As with space exploration, the ideas came first and the technology to execute them much later.
Does it matter that today’s A.I. is ANI?
No. ANI is provably valuable. Although not truly “intelligent” in the same sense humans seem to be, ANI isn’t to be underestimated. But always remember: it is maths, not minds.
Explaining the above distinctions between ANI, AGI and ASI to clients is like spoiling a magic trick by revealing the secret.
That magic brain in a box to automate an entire firm’s billable hours doesn’t exist. Sorry folks! We’d love to be in the business of selling or using an AGI or ASI… if it would let us!
But even knowing the secret to a magic trick, we can still appreciate its cleverness at achieving something that looks like magic. The same applies to A.I. ANI is just as transformative and impressive, albeit not necessarily for the reasons assumed before the big reveal.
Take the machine learning contract review tools for legal: provided sufficient subject matter expertise is supplied (i.e. lawyers marking up data points in contracts), the ANI techniques involved can meet or exceed humans at the same task.
I.A. before A.I.
Even if it doesn’t meet or exceed humans 100% of the time, a tool that augments a human process can be incredibly valuable. This is the same reason self-driving technologies permeate the latest Teslas and other vehicles: machine learning doesn’t have to replace a human task entirely for it to have value.
This is why some commentators and vendors talk of today’s A.I. really being I.A., i.e. intelligence augmentation, which we deep dive in this article.
Let’s take a legaltech example. Assume a contract review tool is 60–90% accurate for any data point across a particular document type. The question is not:
X% accuracy = good enough?
Rather, the better question is:
X% accuracy + time for manual validation of the machine’s results < time for 100% human review?
In other words, even if we cannot automate a specific task entirely, is augmenting that task better than business as usual? Does it reduce time, improve overall accuracy and / or reduce costs and thereby improve efficiency? Most important: does it deliver greater client value?
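As a back-of-the-envelope sketch (the function and every figure below are hypothetical assumptions of ours, not benchmarks), that comparison might look like this:

```python
# Hypothetical check: is machine-assisted review faster than a 100% human
# review? Every number here is illustrative only.

def augmentation_worth_it(machine_accuracy: float,
                          minutes_full_human_review: float,
                          minutes_to_validate_machine_output: float,
                          minutes_to_fix_each_missed_point: float,
                          data_points: int) -> bool:
    missed_points = data_points * (1 - machine_accuracy)
    augmented_minutes = (minutes_to_validate_machine_output
                         + missed_points * minutes_to_fix_each_missed_point)
    return augmented_minutes < minutes_full_human_review

# e.g. 10 data points, 80% machine accuracy, 60 minutes fully manual review,
# 15 minutes to validate the machine's output, 5 minutes per missed data point
print(augmentation_worth_it(0.8, 60, 15, 5, 10))  # True -> augmentation wins
```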
If we discover and accept we cannot supplant human effort for a given task the next question is:
What can we supplement?
For example, assume a human manually reviews a non-disclosure agreement (AKA “NDA” or “Confidentiality Agreement”) for 10 data points.
At the risk of sounding like Brian Fantana from Anchorman, if an ANI can accurately identify 80% of those data points, then (assuming each data point takes roughly equal time to review) the human reviewer is returned 80% of their time to spend on the 20% of data points too tricky for the ANI (e.g. those requiring human interpretation of the legal effect).
Even if we accept that the human reviewer uses some of that returned time to double-check the ANI’s results, there is a significant time-saving between a 100% human review and a less-than-100% human review. In turn, that generates a cost-saving and efficiency gain, which can be passed on to the client.
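Putting hypothetical numbers on that NDA example (again, every figure below is an assumption for illustration, not a measurement):

```python
# Illustrative arithmetic only; every figure below is a made-up assumption.
total_points = 10
points_left_to_human = 2                            # the 20% too tricky for the ANI
ani_points = total_points - points_left_to_human    # the 8 points the ANI handles
minutes_per_point_manual = 6                        # fully manual pass: 10 x 6 = 60 minutes
minutes_to_recheck_each_ani_point = 1.5             # quick double-check of the ANI's output

fully_manual = total_points * minutes_per_point_manual             # 60 minutes
augmented = (points_left_to_human * minutes_per_point_manual
             + ani_points * minutes_to_recheck_each_ani_point)     # 12 + 12 = 24 minutes

print(f"saving: {1 - augmented / fully_manual:.0%}")               # saving: 60%
```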
For this reason, it is better to talk about and design for intelligence augmentation vs. A.I. Or more concisely, always put I.A before A.I. For more on this idea, see here.
Conclusion
Today’s A.I. is ANI. ANI would be better described as machine learning full stop. The “I” in A.I. is aspirational, but for now invites too much imagination, which often results in inflated expectations.
If those expectations are constantly deflated, that can only hurt the development of A.I. As such, we should ditch “A.I.” when talking about today’s technologies.
Instead, let’s use A.I. to theorise about the future, not to explain the present. And if you have to use an acronym, opt for I.A.!
Originally published at lawtomated.