AI for Legal: ANI, AGI and ASI

Lawtomated
10 min read · Mar 28, 2019


What is A.I.?

AI is a suitcase phrase. By this we mean three things:

  1. AI means nothing by itself.
  2. AI contains a bunch of ideas you have to unpack and understand individually before understanding the whole.
  3. Those ideas are often subjective, depending on who is speaking and who is hearing the term AI, in part because there is widespread disagreement (experts included) as to what is and is not part of the AI canon.

With this in mind, let’s begin unpacking some of the core ideas concerning AI. This article is the first of several examining key AI ideas, technologies and applications. Each article will gradually dive deeper.

In essence, we aim to create a top-down curriculum of articles to peel the onion of AI!

The AI Onion

The A.I. Triad

30,000 ft view

The term AI was first used to describe machines capable of performing tasks characteristic of human intelligence. In today’s parlance, and in practice, this means advancing the intelligence of computers.

Although experts disagree about a lot of AI topics, they broadly agree AI can be subdivided into three major types — the AI triad:

  1. Artificial super intelligence (“ASI”);
  2. Artificial general intelligence (“AGI”); and
  3. Artificial narrow intelligence (“ANI”).

Let’s examine each in turn. Can you guess which type of AI we have today?

Lvl 1. Artificial Narrow Intelligence

Least intelligent of the three is ANI. ANI achieves human-level performance (or better) in one task characteristic of human intelligence. However, it is completely lacking in all other areas of human intelligence.

Example — Google’s AlphaGo

AlphaGo is an ANI built by Google’s DeepMind division. AlphaGo can outplay human world champions at the ancient Chinese game of Go. Although superhuman at Go, AlphaGo can’t do much else. It can’t play chess, for example.

In this way, we can say AlphaGo’s “intelligence” does not generalise well to other tasks and domains.

However, the fact AlphaGo is ANI doesn’t diminish its achievement. AlphaGo is an incredible accomplishment for ANI and its particular set of techniques, including reinforcement learning.

For more detail on AlphaGo, please see:

  • here for the official website; or
  • here (PDF download) for Google DeepMind’s fascinating whitepaper, which explains how reinforcement learning was used to achieve AlphaGo’s superhuman performance at Go.
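AlphaGo’s real pipeline combines deep neural networks, self-play reinforcement learning and Monte Carlo tree search, which is far beyond a blog snippet. But to give a flavour of the core reinforcement learning idea (an agent improving at one narrow task through trial, error and reward), here is a minimal tabular Q-learning sketch in Python. The toy corridor environment, rewards and parameters are entirely our own invention for illustration; this is not DeepMind’s code.

```python
import random

# Toy environment (invented for illustration): a corridor of 6 cells.
# The agent starts at cell 0 and gets a reward of +1 only at goal cell 5.
N_STATES = 6
ACTIONS = [-1, +1]  # move left, move right
GOAL = N_STATES - 1

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0

        # Core Q-learning update: nudge the estimate towards the observed
        # reward plus the discounted value of the best next action.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

        state = next_state

# After training, the greedy policy heads straight for the goal.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)]
print(policy)  # expect +1 (move right) in every state that matters
```

The result is ANI in miniature: after training, the agent solves this corridor optimally, yet the learned Q-table is useless for chess, conversation or any other task.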

Lvl 2. Artificial General Intelligence

Bang in the middle is AGI. AGI has the general ability to match, and in some cases exceed, human-level performance in multiple (if not all) tasks considered markers of intelligence.

Example — Replicants, Blade Runner

AGI frequently features in science fiction. Given its theoretical human-level intelligence, it is often personified via the android or cyborg human trope.

Take, for instance, the human-like replicants in Blade Runner. In Blade Runner, based on Philip K. Dick’s excellent 1968 novel “Do Androids Dream of Electric Sheep?”, a replicant is a fictional bioengineered android.

In the Blade Runner films, the Nexus-series of replicants are virtually identical to adult humans (and require specialist testing to distinguish from a human) but have superior strength, speed, agility, resilience, and intelligence, to varying degrees depending on the model.

In other words, they are able to perform, and in some respects exceed, all tasks characteristic of human-level intelligence. This blurring between absolute equivalence with human intelligence and increasingly superhuman intelligence illustrates that things get murky at the boundary between AGI and ASI… at what point does an AGI cease to be AGI and instead qualify as an ASI?

Does ASI need to be 1%, 10%, 100% or greater than 100% superior to be considered ASI? Let’s find out.

Lvl 3. Artificial Super Intelligence

But back to the question: when does AGI become ASI?

Unfortunately, there is no consensus regarding how much smarter than humans an AGI needs to be before we term it ASI. Intelligence is a spectrum of continuous values, not discrete ones. As such, it’s hard to place a definitive threshold between AGI and ASI.

By contrast, as we’ve illustrated above, the distinction between ANI and AGI is easier to draw. For the purposes of AI discussions it is binary: can the AI do one thing only, or most things characteristic of human-level intelligence?

Passing this threshold moves the AI from ANI to AGI, after which the only remaining question becomes non-binary: how much better at everything is the AI vs. human performance in the same domains (assuming the AI can improve above and beyond human performance at all tasks)?

So, in summary, what we can say is that an ASI would have the general ability to match and exceed human-level performance in all tasks considered markers of intelligence, e.g. planning, understanding language, recognising objects and sounds, learning, self-awareness, abstract thought and problem-solving.

Crucially, an ASI would also exponentially increase its intelligence over time. Its superiority would likely be in both:

  • quantity of intelligence, i.e. raw processing power / speed. When combined with vast quantities of data (e.g. via an internet connection), this gives an ASI unfair advantages over human-level intelligence, which is a fixed quantity (i.e. we can’t upgrade our processing power or storage capacity, whereas a machine can… although future bioengineering advances might allow us to overcome these limitations); and
  • quality of intelligence, i.e. the ways in which an ASI might think. By analogy, humans think in qualitatively different ways to less intelligent beings such as a wasp or a chimp. Unlike wasps and chimps, humans have a complex system of language, planning and abstract thought capabilities — simply scaling up a wasp or chimp mind structure “as is” would not alter the quality of thought, only the speed. Similarly, you can only scale up a propeller plane engine so much in terms of performance. To take things to the next level you need a qualitatively different propulsion method, such as jet engines, which, like propellers before them, are now hitting their limits and necessitating qualitatively superior replacements such as ramjets and scramjets.

Example — Deep Thought, The Hitchhiker’s Guide to the Galaxy

Deep Thought, an ASI from The Hitchhiker’s Guide to the Galaxy, is asked the ultimate question: “What is the meaning of Life, the Universe and Everything?” Deep Thought is both quantitatively and qualitatively superior at thought vs. humans, hence being delegated this question.

After meditating on this problem for 7.5 million years, Deep Thought replies “42”.

The human operators — descendants of Deep Thought’s original designers — despair, having no idea what Deep Thought’s answer means. Deep Thought chides the humans for the fact they are incapable of understanding the question.

It’s a nice satire on the idea that an ASI might (and very probably would) think in a qualitatively different, and therefore potentially unintelligible (and possibly incompatible), manner to humans. In this scenario, we humans are to an ASI what wasps are to humans: qualitatively inferior minds incapable of understanding.

And this takes us to why scientists worry about ASI.

Just as humans have no regard for wasps or wasp genocide, so too an ASI might look down upon humans and rapidly advance in ways that become increasingly, and necessarily, incompatible with human existence. Our ability to evolve and consume huge amounts of resources vastly outstrips a wasp’s ability to protect its existence against human endeavour; so too would an ASI’s abilities outstrip ours.

But this is all theory for now.

Strong vs. Weak A.I.

How much do you even A.I.?

Another distinction is between “strong” and “weak” AI. No, this doesn’t refer to the physical strength of robots…

What it does mean is this:

  • “weak AI” is another way of describing ANI; and
  • “strong AI” is another way of describing AGI, and sometimes also ASI.

Naturally, AI vendors avoid the term “weak AI” like the plague. This is for obvious reasons — nobody buys “weak” products!

We like our AIs strong and jacked on steroids! Sadly there are no strong AIs in existence as yet… anyone claiming otherwise is a fraud or simply making something for Hollywood films, in which case that’s fine!

Strong AI, just kidding.

What AI do we have today?

By now it probably comes as no surprise that today’s AI is… ANI. Examples include:

  • Voice assistants such as Amazon Alexa, Google Assistant, and Microsoft Cortana
  • Google’s search suggestions, Amazon’s product recommendations or Netflix’s movie suggestions
  • Modern cars, which use ANI to determine when anti-lock brakes should kick in or how best to inject fuel into the engine
  • Email spam filters — yes, this is why you no longer get those ads for viagra and other magic pills… (see the sketch after this list)
  • Commercial aircraft, which use ANI to manage thousands of operations and adjustments per second when the plane is on autopilot
  • Google’s self-driving cars
  • Android phones use ANI to monitor battery and CPU power to allocate resources based on usage (the same is also true of energy grids that use similar technologies to load balance energy demands across the network)
Today’s AI = ANI
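To make the spam filter example concrete, here is a minimal sketch of one classic technique behind such filters: a multinomial naive Bayes classifier over word counts, using scikit-learn. The six-email training set is invented for illustration, and real-world filters (Gmail’s, for instance) are vastly more sophisticated; this is a sketch of the general approach, not any vendor’s actual method.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented toy training data: a handful of hand-labelled emails.
emails = [
    "cheap viagra buy now limited offer",
    "win a free prize click here now",
    "meeting rescheduled to thursday at 3pm",
    "please review the attached contract draft",
    "exclusive deal magic pills discount",
    "lunch tomorrow to discuss the merger?",
]
labels = ["spam", "spam", "ham", "ham", "spam", "ham"]

# Turn each email into a vector of word counts...
vectoriser = CountVectorizer()
X = vectoriser.fit_transform(emails)

# ...then learn per-word spam/ham likelihoods via Bayes' theorem.
classifier = MultinomialNB()
classifier.fit(X, labels)

# Classify unseen emails.
tests = ["free pills offer", "contract review meeting thursday"]
print(classifier.predict(vectoriser.transform(tests)))
# expected output: ['spam' 'ham']
```

Note how narrow this is: the trained model can only ever answer “spam or ham?”. It cannot recommend a film, steer a car or even filter spam in a language it wasn’t trained on, which is exactly the ANI limitation described above.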

As you can see, each does one thing well and one thing only. Sometimes these systems perform at human-level performance, but often they fall short when faced with something unexpected that a human would easily accommodate.

If you’ve ever used a voice assistant no doubt you’ve been amazed at its ability to recognise your voice and its attempts to respond in an intelligent manner.

But equally, you’ve no doubt laughed when it completely screwed up or did something bizarre that a human would never do.

The point is this: ANI, despite being the dimmest of the AI family, is nevertheless powerfully enhancing our productivity. That said, as you’ll notice above, humans remain in the loop, often with final decision-making authority.

When is tomorrow?

The birth of AGI and ASI

ANI sounds boring. When can I get some of that AGI?

TBC! In 2013 Vincent C. Müller, Professor of Philosophy at Anatolia College and president of the European Association for Cognitive Systems, and Nick Bostrom, Professor of Philosophy at Oxford University and author of the awesome “Superintelligence” (well worth a read!), surveyed hundreds of AI experts to answer this question.

Their aim was to identify by what year experts expected to see a 10% (optimistic), 50% (realistic) and 90% (pessimistic) probability of AGI being achieved.

The results were these:

  • Median optimistic year (10% likelihood): 2022
  • Median realistic year (50% likelihood): 2040
  • Median pessimistic year (90% likelihood): 2075

A separate study, conducted by James Barrat, an American documentary filmmaker and author of “Our Final Invention: Artificial Intelligence and the End of the Human Era”, at Ben Goertzel’s AGI Conference, asked: by what year will AGI be achieved?

Although a simpler question vs. the Müller / Bostrom study, the results are similar.

Barrat’s results suggest AGI is more likely than not a pre-2050 achievement (42% + 25% = 67% of respondents expected it by 2050):

  • By 2030: 42%
  • By 2050: 25%
  • By 2100: 20%
  • After 2100: 10%
  • Never: 2%

And what about the advent of ASI?

The Müller / Bostrom study also asked experts how likely it is that we will reach ASI within: (a) 2 years of reaching AGI (i.e. an almost immediate intelligence explosion); and (b) 30 years of reaching AGI. The results were these:

  • Median answer for (a) was 10%
  • Median answer for (b) was 75%

Taking the AGI results and the ASI results together, ASI might arrive in the 2060s or 2070s (i.e. a 2040s arrival date for AGI + circa 20–30 years).

Should we be worried about AGI and ASI?

Birthing an AGI — an intelligence equivalent to our own — would have a transformational impact on more or less all areas of life and belief. But supercharging that into an ASI represents a tripwire.

That tripwire would be a tipping point between the world we know today and a game-changing explosion of intelligence that determines a new future for all living and non-living things on earth. This in itself is a huge and fascinating topic, which hopefully we will return to in another article.

If you want to read up on these philosophical, political and economic impacts we highly recommend reading one or both of:

  • Nick Bostrom’s “Superintelligence: Paths, Dangers, Strategies”; and
  • Max Tegmark’s “Life 3.0: Being Human in the Age of Artificial Intelligence”.

Both books are excellent. Bostrom pretty much surveys the history of AI, its state today, where it might realistically get to, when and via what means, plus the likely impacts AGI and ASI might have. It’s a dense read, but well worth it.

Tegmark’s work touches upon similar ideas but is more focussed on the cosmological occurrence and nature of intelligence, and the cosmological impact of an AGI or ASI for life in the universe as we know it (Tegmark is a physicist, hence the cosmological angle, which is pretty mind-blowing). Tegmark’s enthusiasm for this subject is infectious.

But for now, although ASI might unimaginably transform our world and us in it, nobody is certain when it will arrive.

Conclusion

Returning to today: ANI is now; AGI and ASI are tomorrow. Regarding AGI and ASI, although experts disagree on the “when”, most agree there is no “if”: humanity will birth an AGI and, very shortly thereafter, an ASI. Either event may be decades or centuries away. Hopefully, we will live long enough to find out!

So when buying / building AI or reading about AI, ask yourself whether what is being described sounds like ANI or something more. If the former, it’s probably reliable; if the latter you’d best take it with a pinch of salt. Of course, the foregoing advice goes out the window once someone creates a peer-reviewed and verifiable AGI.

When that happens we’ll update this article! Until then, be AI aware: know your ANIs from your AGIs and ASIs!


Originally published at lawtomated.
