Why A.I. hype hurts everyone
A.I. is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone else thinks everyone else is doing it, so everyone claims they are doing it.
Some of us at lawtomated have sold, developed and consulted on various A.I. products and projects, and continue to do so. We are also regularly pitched, and regularly review and test, A.I. products for legal use cases.
Hype opens doors as easily as it slams them in your face. Far worse is hype’s ability to divert interest and investment away from genuine research and products, and instead steer it toward complete garbage. By garbage, we mean everything from debate and talking heads to products and research.
We are by no means deep technical A.I. experts, but the continued respect for, understanding of, and development of A.I. deserve honest and accurate reporting.
Let us tell you why.
The A.I. Hype Machine
Scary, amazing or bonkers
Today’s A.I. media reporting typically satisfies one or more of the following criteria:
- Scaremongers, typically about robots replacing jobs (e.g. “AI and robots could threaten your job within 5 years”) or killing everyone (e.g. “Elon Musk: Regulate A.I. Before Robots Start ‘Killing People’”).
- Aggrandises, usually by misrepresenting current A.I. or by insanely extrapolating today’s technology into the realms of science fiction (e.g. “An A.I. god will emerge by 2042 and write its own bible. Will you worship it?”).
- Poses some utterly insane claim, e.g. “Saudi Arabia’s newest citizen is a robot”.
In other words, today’s reporting saps (see what we did there?) energy away from constructive exploration and debate.
Reading today’s press, you’d be forgiven for believing A.I. will make almost everyone redundant by tomorrow or, worse, kill us all.
Sorry: not true.
As we will illustrate with a couple of examples below, expert statements (including those of Elon Musk above) are often taken out of context to the extent that all original meaning is lost or, at worst, completely flipped.
Hype Machine = A.I. Killer not Killer A.I.
Hopefully what follows is obvious. Just like bold claims for six-pack abs workouts, hype is great for generating interest (or at least a fear of missing out), whereas the reality disappoints. It’s the same with A.I.
Every A.I. story overselling a particular A.I. technology, whether in the media or in vendor marketing and sales, does more harm than good in the long term. It inflates expectations or misrepresents entirely what is possible today vs. what might be possible tomorrow.
Don’t believe us? Here are some examples that leave us frustrated:
1. Sophia the Robot
Yes, this is the same “robot” granted citizenship in Saudi Arabia and described in the above quote. Putting aside the absurdity of affording a mechanical female more rights than a real female (see here), Sophia is misleading.
As this article in The Verge rightly explains, Sophia is a “non-persona non-grata” (get it?) in the A.I. community.
As The Verge stresses, Sophia’s creators consistently exaggerate her abilities by pretending she is “basically alive” rather than a clever animatronic (i.e. a mechanical remote control puppet).
Adding weight to the debate, The Verge cites Facebook’s head of A.I. research, Yann LeCun, who has called Sophia “complete bullsh*t”, remarking that “This [Sophia] is to A.I. as prestidigitation is to real magic”.
In other words, Sophia is the A.I. equivalent of the sleight of hand and gimmicks magicians use to create the illusion of magical powers when they, in fact, possess none.
Unlike magic, where the audience wilfully suspends its disbelief, Sophia and similar examples foster an unwitting suspension of disbelief in the audience, i.e. “I can’t believe how advanced A.I. has suddenly become!”. The risk is that whereas the magician’s audience leaves knowingly misled, Sophia’s audience leaves unknowingly misled.
In the magic community, this is the difference between magicians who are honest in their dishonesty, and mediums who are dishonest in their dishonesty (to paraphrase world-famous mentalist, magician and sceptic Derren Brown).
We strongly recommend reading The Verge article. It’s an illuminating exposition of why this sort of hype hurts A.I.: quite simply, it grossly misleads the public about what A.I. can and cannot do today. So it’s no surprise LeCun references magic and prestidigitation (i.e. sleight of hand).
2. Creepy Facebook Chatbots
In August 2017 there was ridiculous coverage of an experiment at Facebook.
As this Wired article points out, reporters from other publications described how “Facebook A.I. researchers in a ‘panic’ were ‘forced’ to ‘kill’ their ‘creepy’ bots that had started speaking in their own language”. Pretty loaded language.
However, as the Wired article goes on to conclude, the reality was rather dull.
The experiment was to build chatbots capable of negotiating with people, the hypothesis being that negotiation and collaboration with humans are useful skills for chatbots to learn. Makes sense, right?
Unfortunately, when trained against each other, the bots started generating a gobbledegook, non-human syntax to communicate their demands. Can you make any sense of this exchange (both Bob and Alice are algorithmic bots)?
Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Thought not. Because the bots could no longer communicate in human language, the experiment was scrapped. Sadly, bots were harmed in that decision. Joke: they’re just lines of code, so don’t feel sorry for them! Back to the drawing board: the experiment was redesigned to ensure the bots stuck to English rather than gibberish.
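Why did the bots drift in the first place? Because their training objective rewarded successful deals and said nothing about sounding like English. Here’s a minimal toy sketch in Python (entirely hypothetical, ours, and nothing like Facebook’s actual code) showing how an objective with no fluency term is happily maximised by degenerate, repetitive messages:

```python
# A toy illustration (ours, hypothetical): brute-force a 5-token
# "message" against a reward that only counts tokens the counterpart
# "likes". Nothing in the objective rewards grammar or fluency,
# so repetition wins.
import itertools

VOCAB = ["i", "can", "balls", "to", "me", "everything", "else", "."]

def deal_score(message):
    # Hypothetical reward: the counterpart agrees more strongly the
    # more often its favourite tokens appear. Fluency contributes
    # nothing, so it gets optimised away.
    return message.count("me") + 0.5 * message.count("i")

best = max(
    (list(msg) for msg in itertools.product(VOCAB, repeat=5)),
    key=deal_score,
)
print(" ".join(best))  # -> "me me me me me": degenerate, but optimal
```

Nothing sinister and no emergent consciousness: just an objective function missing a fluency term. That, in essence, is the dull truth behind the “creepy” headlines.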
As the Wired article concludes, the truth is more important: “Instead of a scary story, Facebook’s experiment actually demonstrates the limitations of today’s A.I.”.
That’s our point:
Although less exciting, the truth is more informative about where we are today vs. where many would have you believe we are (including not only the media but also spokespeople for vendors and large corporations with vested interests in A.I.).
2019: An Update One Year On
Sadly hype continues, but the conversation is slowly improving
The below is a nice smorgasbord of articles and tweets regarding A.I., from 2018 until 2019:
And here are some regarding use of A.I. in law, which are probably worse than the above:
Since this article was first written in 2018, things have moved on somewhat, mostly in the general conversation around A.I. and slowly but surely in the legaltech A.I. conversation.
For instance, several notable headlines (referenced above in the first image) covered the fact that as many as 40% of A.I. start-ups in Europe don’t use A.I.
Important point: these are not just any start-ups, but ones claiming to be A.I. start-ups!
Likewise, it is good to see the realisation that many vendors in that category have humans behind the interface to present the illusion of A.I. when none, in fact, exists (apart from Actual Intelligence, haha).
Whilst that is shocking, it’s better to be shocked by the truth before buying something you were told is A.I. than to discover undisclosed humans in the loop after the purchase, meaning your usage likely breaches your clients’ data-sharing and confidentiality obligations. Think about that!
Terrible as that is, at least it is now being found out, communicated and hopefully understood.
Why should we care about misleading A.I. claims / reporting?
If false claims about anti-ageing creams are illegal, and fake news is the next regulatory target, why do we accept and encourage misleading, hype-laden reporting of A.I.? As we noted above, the consequences of misleading buyers can be drastic: a confidentiality breach caused by a product using undisclosed humans in the loop on the vendor side is one such example.
Yes, as we note above, fiction is often sexier than fact, but fact is ultimately more valuable to human endeavour (especially in the “post-truth” era we may now inhabit).
Spreading misinformation about A.I. among the general public damages trust. Spreading misinformation among investors and decision makers (whether in government or business) diverts interest and investment towards quacks, leaving genuine, serious A.I. research, projects and products massively underserved and misunderstood by a confused public. We risk a graveyard of good ideas whilst bad ones flourish.
As A.I. techniques are increasingly explored in decision-making systems, for instance in lending and HR, it becomes imperative that everyone involved understands how these tools and techniques actually work, grounded in reality rather than fantasy.
Today’s A.I. is maths, not minds. Understanding this is fundamental to safeguarding against risk and making informed buying and usage decisions regarding A.I.
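To make that concrete, here’s a minimal sketch (ours, with entirely made-up weights and features, not any vendor’s actual product) of what a typical “A.I.” lending decision boils down to: a weighted sum and a threshold.

```python
# A toy loan-approval "A.I." (ours, hypothetical): a logistic
# regression reduces the whole "decision" to a weighted sum, a
# squash to a probability, and a comparison. Maths, not minds.
import math

WEIGHTS = {"income": 0.8, "debt": -1.2, "years_employed": 0.3}  # made up
BIAS = -0.5  # made up

def approve_loan(applicant: dict) -> bool:
    # Weighted sum of the applicant's features...
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    # ...squashed to a probability between 0 and 1...
    probability = 1 / (1 + math.exp(-z))
    # ...and the entire "mind" is this comparison.
    return probability > 0.5

print(approve_loan({"income": 1.0, "debt": 0.2, "years_employed": 2.0}))
```

There is no mind in there to interrogate, only numbers, which is exactly why it matters how the weights were chosen and on what data.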
Without this, we risk implementing the wrong ideas for the wrong reasons and potentially lacking the rigour to explain them when such systems throw an exception. Crucially, hype also distracts us from asking key questions upfront:
- whether we have a defined need; and
- whether that need requires, let alone is solvable by, today’s A.I.
Let’s start being frank about A.I.: what it is, what it isn’t and where it’s headed, in real terms, not purely aspirational terms!
Originally published at lawtomated.