Top 10 No-Hype A.I. Articles

Lawtomated
14 min read · Aug 6, 2019


As Cassie Kozyrkov, Chief Decision Intelligence Engineer at Google, has written, “[e]very time some genius decides to apply AI where it doesn’t belong, the world collectively rolls its eyes and puts another ballot in the AI-Is-A-Fad box.”

Legal A.I. is no different.

We’ve gathered below our top 10 zero-hype A.I. / legal A.I. articles. We hope they help you understand:

  1. Why A.I. is an expectation management minefield — Article 1.
  2. How A.I. works today (i.e. machine learning, the mainstay of today’s “A.I.”) — Article 2.
  3. Where, how and why the hype spawned — Article 3.
  4. How and why hype is killing A.I. — Article 4.
  5. How, despite the hype, A.I. isn’t a silly fad to be written off — Article 5.
  6. How A.I. is powered by a hidden gig economy of human ghost workers — Articles 6 and 7.
  7. How current A.I. concepts and technologies do and don’t map to legal reasoning and drafting — Article 8.
  8. The state of legal A.I. and its limited, but useful, use cases to date — Article 9.
  9. That Robot Lawyers (!) are not, nor likely to be, a thing anytime soon — Article 10.

01. The Seven Deadly Sins of AI Predictions

Rodney Brooks | 6 October 2017 | MIT Technology Review | Find it here

🤖 Who is it by?

Rodney Brooks, Panasonic Professor of Robotics at MIT and robotics entrepreneur. Follow Rodney on Twitter.

🍕Key Takeaway

Almost all innovations in robotics and AI take far, far longer to be widely deployed than people inside and outside the field imagine.

Brooks posits 7 deadly sins of AI predictions to watch out for if presented with “A.I.”, whether it’s a product or a piece of commentary. Applying these, as done in Article 10 below, helps cut through the BS and separate fact from fiction.

It is paramount we take responsibility for our own critical thinking when it comes to A.I.:

“Mistaken predictions lead to fears of things that are not going to happen, whether it’s the wide-scale destruction of jobs, the Singularity, or the advent of AI that has values different from ours and might try to destroy us. We need to push back on these mistakes.”

👍 Why do we like it?

Brooks provides a concise and well-illustrated explanation of how and why we get A.I. so badly wrong, ranging from overestimating short-term potential and underestimating long-term potential to confusing performance with competence and conflating machines with magic.

02. Machine Learning — Is the emperor wearing clothes?

Cassie Kozyrkov | 14 September 2018 | Hacker Noon | Find it here

🤖 Who is it by?

Cassie Kozyrkov, Chief Decision Intelligence Engineer at Google. Cassie’s writing about A.I., data, statistics, decision intelligence, public speaking and various other subjects is sublime in its simplicity.

We highly recommend you read her content on Medium! Follow Cassie on Medium and / or on Twitter.

🍕Key Takeaway

Machine learning = thing labelling. It’s based on maths: it requires data to learn from plus an algorithm that optimises itself iteratively (again, via maths, not some magical power or consciousness). Humans fiddle with these configurations until the system gets good enough at labelling things to be usable in place of equivalent human labour.

“Machine learning uses patterns in data to label things. Sounds magical? The core concepts are actually embarrassingly simple. I say “embarrassingly” because if someone made you think it’s mystical, they should be embarrassed… Don’t hate it for being simple. Levers are simple too, but they can move the world.”
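To ground the “thing labelling” idea, here is a minimal sketch of the recipe Cassie describes: data to learn from, plus an algorithm that optimises itself against human-applied labels. The spam example and the use of scikit-learn are our assumptions for illustration; her article is deliberately no-code.

```python
# A minimal "thing labelling" sketch (our illustration, not Cassie's code).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Things paired with human-applied labels, i.e. the data to learn from.
emails = [
    "win a free prize now",
    "claim your cash reward today",
    "meeting moved to 3pm",
    "minutes from today's client call",
]
labels = ["spam", "spam", "not spam", "not spam"]

# Maths, not magic: the algorithm iteratively adjusts weights to fit the labels.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Once it labels new things well enough, it can stand in for human effort.
print(model.predict(["claim your free prize"]))  # likely ['spam']
```

The boring parts Cassie mentions (wrangling the data, installing the packages, tuning the hyperparameters) live around code like this, not in it.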

👍 Why do we like it? 👍

It’s a fantastic entry-level machine learning de-mystifier, illustrated by simple no-code worked examples. We also love her emphasis on a lot of it being boring, e.g. the time spent data wrangling, installing packages of code and fiddling with hyperparameter tuning.

We highly recommend Cassie’s related article, Getting Started with AI? Start here!, as an immediate follow-up for anyone seriously interested in designing or running an A.I. project, whether that is at a law firm, another organisation or just for fun!

03. The BS Industrial Complex of Phony AI

Mike Mallazzo | 12 June 2019 | Medium | Find it here (via web.archive.org)

🤖 Who is it by?

Mike Mallazzo, a former employee of Dynamic Yield, an A.I. analytics business sold to McDonald’s for $300m. For reasons made clear in Mike’s article, WIRED editor-in-chief Nicholas Thompson predicted that the sale of Dynamic Yield to McDonald’s would either go down as “peak A.I. hype” or “the day big data saved the Big Mac”.

🍕Key Takeaway

A.I. hype began sell-side. Marketers differentiated their products, labelling them “powered by A.I.”. This became a demand-side problem: RFPs, RFIs, conferences and pundits insisting vendors “leverage A.I.” or risk irrelevance.

With “the incentives of all players more or less perfectly aligned,” conditions became “perfect for a flywheel of bullshit to spin faster than it can hit the fan,” leading TechCrunch to conclude (in 2017) that A.I. has become “a meaningless term, tech’s equivalent of ‘all-natural’”.

TLDR: hyping A.I. has enriched investors, fooled the media and confused the hell out of everyone else.

“The core feature of a B.S.-industrial complex is that every member of the ecosystem knows about the charade, but is incentivized to keep shoveling.”

👍 Why do we like it?

It’s a refreshingly honest insider’s account of, and polemic on, the causes and dangers of overhyping A.I. Perhaps too frank: it is no longer available via medium.com and can now only be retrieved via the Wayback Machine!

04. Hype is killing AI — Here’s how we can stop it

Ben Dickson | 29 July 2019 | The Next Web | Find it here

🤖 Who is it by?

Ben Dickson, founder of TechTalks, a website aimed at educating the masses regarding all things tech. Follow Ben on Twitter.

🍕Key Takeaway

Inspired by Mike Mallazzo’s article above (Article 3), Ben holds that “[t]he AI industry is currently rushing toward the peak of its latest hype cycle, creating a growing incentive for tech startups to put an AI label on their technologies for no other reason than to jump on the AI bandwagon. The practice has created confusion and frustration around what AI is and what it can do, and has given rise to a growing group of disenchanted experts, scientists and academicians.”

To underscore his point, Ben highlights that London-based venture capital firm MMC found that, out of 2,830 startups classified as AI companies, only 1,580 accurately fit the description.

It’s a wonder the advertising standards agencies in the UK and elsewhere aren’t more concerned about this level of misleading / fraudulent positioning of “A.I. products”.

It’s no wonder Ben concludes “[t]he mystification of AI fully justifies the backlash by the science community” because “[m]isusing the terminology for the sake of creating excitement and drawing attention and investment to products and services is deplorable, and it certainly hurts the industry.”

“Part of the problem is with the term “artificial intelligence” itself, which is vague in nature, and its definition changes as time passes and technology improves. So it’s easy for marketers to get away with renaming old technology as AI.”

👍 Why do we like it?

Like Mallazzo, Ben highlights the key drivers of A.I. hype: overzealous marketing, complicit investors, bedazzled pundits and linguistic tautologies.

Unlike Mallazzo, Ben proposes a solution: we must all try harder to make A.I. understandable to everyone.

05. Is A.I. a fad?

Cassie Kozyrkov | 7 February 2019 | Hacker Noon | Find it here

🤖 Who is it by?

Cassie Kozyrkov, Chief Decision Intelligence Engineer at Google, i.e. the same author as Article 2 above.

🍕Key Takeaway

A.I. is increasingly viewed as faddish, in part as a backlash to the hype. Much like hype, indiscriminately labelling A.I. a “fad” is lazy thinking. This lazy thinking fails to recognise common sense A.I. principles, including:

  1. If you can solve a problem without A.I., then don’t use A.I. No really, put the A.I. down and go back to regex (see the sketch after this list).
  2. A.I. is garbage in, garbage out — don’t expect magic where there is none.
  3. “Simple solutions don’t work for tasks that need complicated solutions. So AI comes to the rescue with — surprise! — complicated solutions.” Although we might not always understand 100% of how an A.I. system works, we “don’t need to understand how it works to check that it does work.” The same principle is practised in medicine. As Cassie points out, “Do you know how that headache pill works? Neither does science. The reason we trust it is that we carefully check that it does work.”
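To make principle 1 concrete, here’s a minimal sketch (our hypothetical example, not Cassie’s) of a task that needs no A.I. at all: pulling dates out of contract text with Python’s built-in re module.

```python
import re

# Hypothetical task: extract dd/mm/yyyy dates from contract text.
# No training data, no model, no GPU. Just a pattern.
text = "This Agreement is dated 01/08/2019 and expires on 31/07/2022."

dates = re.findall(r"\b\d{2}/\d{2}/\d{4}\b", text)
print(dates)  # ['01/08/2019', '31/07/2022']
```

If the inputs are this regular, reaching for machine learning only adds cost and opacity.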

The overhyping of A.I. doesn’t justify snarky “I told you so” putdowns or blanket dismissals that all A.I. is BS. Nor should it encourage dogmatic lionisation of legacy tech over A.I.

As always, things aren’t black and white. A.I. is another tool in our toolkit. We must remember Maslow’s law of the instrument: to a hammer, everything looks like a nail. Cognizant of that, we can appreciate it is horses for courses: sometimes A.I. is a fit, sometimes it is not, and even where it is, the problem may be better solved via simpler means. But none of that means lazily writing off A.I. altogether!

“The problems of the future will only be getting harder. After you automate the simple tasks, you’ll want to move on to bigger challenges. Once you reach past the low-hanging fruit, you’ll run into a task you can’t solve using your old tricks and the brute force of raw imagination. You’ll realize you can only communicate what you want with examples, not instructions…”

👍 Why do we like it? 👍

Cassie articulates three common gripes using simple examples everyone can grasp. Not only that, she provides rebuttals and ways to avoid falling into these thought traps. The article encourages critical thinking regarding the hype, but also regarding the countervailing accusation that all A.I. is just a fad.

06. The Automation Charade

Astra Taylor | 1 August 2018 | LogicMag | Find it here

🤖 Who is it by?

Astra Taylor, a documentary maker, writer and activist who has written about “fauxtomation”, i.e. faking automation via cheap human labour. Follow Astra on Twitter.

🍕Key Takeaway

Our fascination with “automation” often leads to fauxtomation: a clever way to dress up low-cost human labour as robotic process automation and / or A.I. Although it’s omnipresent, fauxtomation “can sometimes be hard to discern, since by definition it aims to disguise the real character of the work in question.”

Astra highlights several powerful past and present examples of fauxtomation.

Most recently, Astra highlights The Moderators, a moving 2017 documentary short directed by Adrian Chen and Ciarán Cassidy that provides a window into the lives of the hundreds of thousands of workers “who screen and censor digital content, ceaselessly staring at beheadings, scenes of rape and animal torture, and other scarring images in order to filter what appears in our social media feeds.” Wired called the film’s exposé “a kind of real-life, South Asian Clockwork Orange for the social media age.”

You can watch the film (22 min) below:

In an office in India, a cadre of Internet moderators ensures that social media sites are not taken over by bots, scammers, and pornographers. The Moderators shows the humans behind content moderation, taking viewers into the training process that workers go through in order to become social media’s monitors.

As a result of fauxtomation Astra concludes:

“we fail to see — and to value — the labor of our fellow human beings. We mistake fauxtomation for the real thing, reinforcing the illusion that machines are smarter than they really are.”

👍 Why do we like it? 👍

It’s a persuasive socio-political essay on the history and ideology of automation, or more properly fauxtomation. Importantly, Astra reminds us that automation often comes at a hidden human cost.

Astra’s arguments provide a poignant rebuttal to the fever dream of inevitable and ubiquitous automation.

07. The AI gig economy is coming for you

Karen Hao | 31 May 2019 | MIT Technology Review | Find it here

🤖 Who is it by?

Karen Hao, A.I. reporter for MIT Technology Review and former data scientist. Follow Karen on Twitter.

🍕Key Takeaway

Like Astra’s essay on fauxtomation, Karen highlights how the A.I. industry runs on the invisible labour of humans working in isolated and often terrible conditions — and how the model is spreading to more and more businesses.

Crucially, “Human workers don’t just label the data that makes AI work.” In some cases, “human workers are the artificial intelligence.” As with The Moderators, Karen explains that “[b]ehind Facebook’s content-moderating AI are thousands of content moderators; behind Amazon Alexa is a global team of transcribers; and behind Google Duplex are sometimes very human callers mimicking the AI that mimics humans.”

Interestingly, Karen points out that there is a growing trend for such workers to have college degrees. Ghostworking in the A.I. gig economy isn’t the preserve of the poor — it’s coming for you:

“Behind the “magic” of its [Google Assistant] ability to interpret 26 languages is a huge team of linguists, working as subcontractors, who must tediously label the training data for it to work. They earn low wages and are routinely forced to work unpaid overtime. Their concerns over working conditions have been repeatedly dismissed.”

👍 Why do we like it?

It lifts the lid on the millions of ghost workers plugging away, labelling datasets for A.I. systems to process and / or being the A.I. when it fails. In fact, Silicon Valley’s newest unicorn, scale.com, is devoted to exactly this.

Legaltech companies are similarly reliant on a gig economy of ghost workers, i.e. full-time or contract-hire law students, lawyers and paralegals. These workers toil away labelling datasets to feed such vendors’ A.I. contract review software. In some cases, these workers are deliberately hired offshore, in India and similar locations, to keep costs low.

08. 25 facts about AI & Law you always wanted to know (but were afraid to ask).

Micha Grupp | 7 April 2019 | Medium | Find it here

🤖 Who is it by?

Micha Grupp, CEO and founder of Bryter, a German no-code workflow automation platform aimed at law firms and in-house legal teams. Micha also teaches a course on Legal Informatics and Innovation at Frankfurt University! Follow Micha on Twitter.

🍕Key Takeaway

The hype regarding legal A.I. is unhelpful and undeserved.

Current “[i]ntelligent systems based on statistic applications play mostly on the syntactic level. Pattern recognition with neural networks essentially require numbers (or at least formalized, numeric information), and most other approaches do, too. While most legal reasoning requires at least some aspect of semantic interpretation, modern machine learning concepts cannot help here. As soon as an application requires semantic reasoning, we are stuck. Lawyers are trained to gather information from data, to read a text, interpret it, reason on it, and produce a result. The process is a rapid and iterative comparison of several semantic layers that machines cannot mimic or at least not through self-learning reproduction.”

This, plus a shortage of quality data and the lack of an easy means of formalising legal data labelling, leaves legal A.I. starved. As a result, despite lots of legal A.I. applications, the industry is today better served by simpler process re-engineering and automation than by widespread use of machine learning.
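To make the quote’s syntactic point concrete, here’s a minimal sketch (our illustration, not Micha’s) of what a statistical model actually “sees”: text reduced to numbers, with the meaning stripped out. A single “not” reverses the legal effect of a clause yet barely changes its numeric representation.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Two clauses a lawyer reads very differently...
clauses = [
    "The Supplier shall indemnify the Customer.",
    "The Supplier shall not indemnify the Customer.",
]

vectorizer = CountVectorizer()
matrix = vectorizer.fit_transform(clauses)

# ...but to the model they are near-identical rows of word counts.
print(vectorizer.get_feature_names_out())
# ['customer' 'indemnify' 'not' 'shall' 'supplier' 'the']
print(matrix.toarray())
# [[1 1 0 1 1 2]
#  [1 1 1 1 1 2]]
```

Pattern recognition over numbers like these can go a long way, but it is not semantic reasoning.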

“In law, AI is still all the talk. Most of it is slightly or utterly incorrect. Discoveries in recent years have little impact on the automation of legal work and the legal industry. Legal reasoning is different from other fields — technology should reflect this.”

👍 Why do we like it?

It’s an intellectual and thorough essay on the limits of legal A.I. In particular, sections 9–20 concisely describe the blockers to legal A.I. application and adoption, namely: data quantity, data quality and the lack of a formalised, high-fidelity means to label the nuances of legal language and reasoning in a format humans can agree upon and machines can read.

Without these, legal A.I. is, for now, a thirsty engine without the right fuel.

09. AI: Where the Rubber Hits the Road

Ryan McClead | 1 August 2019 | The Legal Executive Institute | Find it here

🤖 Who is it by?

Ryan McClead, Principal and CEO of Sente Advisors, a legal technology consultancy specialising in cross-platform solutions and support. Follow Ryan on Twitter.

🍕Key Takeaway

“Despite its [A.I.’s] ominous name and its depiction in movies as being omniscient, omnipotent, and typically evil, AI is none of those things. It is simply applied technology. And it’s not fundamentally different from other technologies we use every day. It’s not actually smart. It doesn’t think on its own or make unilateral decisions. It does not form opinions, generate insights, or “care” about anything — at least no more than Microsoft Word does.”

As such, legal A.I. (and A.I. in general) does one or more of three things:

  1. pattern recognition, e.g. Kira, Eigen, eBrevia, Luminance and RAVN for text classification and extraction;
  2. prediction, e.g. court analytics such as WestLaw Edge’s Litigation Analytics or Lex Machina; and
  3. performance of routine tasks and basic logic-based reasoning, e.g. robotic process automation tools such as Bryter, Autto, Neota Logic, UiPath etc. (see the rule-based sketch after this list).
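As a minimal sketch of the third category (a hypothetical illustration of the style of logic such no-code workflow tools encode, not any named vendor’s implementation), logic-based reasoning is just explicit rules: no learning, no statistics.

```python
# Hypothetical contract-routing rules: explicit logic, no learning involved.
def review_route(contract_value: int, is_standard_template: bool) -> str:
    """Route a contract for review using plain business rules."""
    if is_standard_template and contract_value < 10_000:
        return "auto-approve"
    if contract_value < 100_000:
        return "paralegal review"
    return "senior lawyer review"

print(review_route(5_000, True))     # auto-approve
print(review_route(250_000, False))  # senior lawyer review
```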

All of which are useful, but far from the job-stealing Robot Lawyers!

“AI optimists claim that it is massively disruptive and it’s widely available now, while AI cynics state that it’s hype and it’s a long way off. And less scrupulous vendors speak as if it’s magic that can do just about anything you imagine. The reality, as usual, lies somewhere in the middle.”

👍 Why do we like it?

It’s a concise summary and categorisation of today’s legal A.I., where it fits, how and why. Not a robot lawyer in sight!

10. Artificial Intelligence and Law — Robots replacing Lawyers?

Brian Inkster | 1 October 2018 | The Time Blawg | Find it here

🤖 Who is it by?

Brian Inkster, partner at Inksters Solicitors and legaltech blogger at The Time Blawg. Follow Brian on Twitter.

🍕Key Takeaway

We end where we began. Brian skillfully applies Rodney Brooks’ 7 Deadly Sins of A.I. Predictions (Article 1 above) to legal A.I. and proves it a useful lens through which to assess the field.

In doing so Brian identifies multiple sins present in the reporting, marketing and publicising of A.I. products and projects within legal.

“There is much hype out there about robots taking over the work of lawyers. Apply the seven deadly sins that Rodney Brooks has so well enunciated to any articles you read on Legal AI to debunk the hype and see the wood from the trees. AI is a tool that lawyers can and will usefully use and which they should only use when it is relevant, necessary and cost effective to do so. There is also much technology currently in their arsenal that they could be using to greater advantage before they consider even looking at AI.”

👍 Why do we like it?

Brian’s humorous assessment of legal A.I. predictions past and present underscores why we all need to read bold proclamations announcing the imminent arrival of Robot Lawyers with a huge pinch of salt.

By mapping Brooks’ 7 Deadly Sins of A.I. Predictions (Article 1 above) to legal A.I., Brian provides a framework to keep us safe from A.I. hype merchants wherever they lurk.

In this vein, see also our popular guide explaining what you should ask and why of legal A.I. vendors to sort the experts from the cowboys.

🎁 BONUS 🎁 Why A.I. hype hurts everyone

Lawtomated team | 5 May 2018 | Lawtomated | Find it here

🤖 Who is it by?

OK, this one is a cheeky plug: it’s by us at Lawtomated! 🐒 Follow us on Twitter. Like many of the authors above, we’ve been writing about A.I. hype for some time… mostly because we’ve built A.I., sold A.I., bought A.I. and implemented A.I.

In other words, we’ve seen from all sides how hard it is to manage expectations regarding A.I.!

🍕Key Takeaway

A.I. gets massively overhyped in one of three ways: (1) aggrandisement, (2) scaremongering or (3) plainly insane claims.

We highlight several examples, including misreporting of killer chatbots and “sentient” humanoid robots. Because of this, we conclude that everyone needs to be tech-curious and do more to cut through the BS!

“Spreading misinformation about A.I. among the general public damages trust. Spreading misinformation among investors and decision makers (whether in government or business) diverts interest and investment towards quacks. In its wake, we leave behind the genuine, serious A.I. research, projects and products massively underserved and misunderstood by a confused public. We risk a graveyard of good ideas whilst bad ones flourish.”

👍 Why do we like it?

It includes several real-world examples of massively overhyped A.I. reporting, and exposes the mundane realities behind each that went largely unreported outside the more serious tech press. A good example of why you should read widely when it comes to A.I.!

Originally published at lawtomated.
