I.A. vs. A.I. — what’s the difference and why I.A. comes before A.I. (Part 1)
In the legaltech community, a new hashtag is gaining ground: #IAbeforeAI. All well and good, but what does it mean and why should you care? Two things! It can mean intelligence augmentation and information architecture.
Information Architecture + Intelligence Augmentation
Hand in glove
After the initial article was published on 10 April 2019, we had some further conversations with Alex Smith, the originator of the #IAbeforeAI hashtag, and discussed its original (and continued) use to refer to information architecture (see commentary to this article below).
In a subsequent part 2 we will take a deeper dive into information architecture and its relevance to law and to legal A.I.
A brief teaser re information architecture
For now, in a nutshell, information architecture is:
The design of structures, schemes and systems to label, connect, navigate and search data.
In many ways this is about turning data (facts and figures) into information (data organised and presented to make it useful).
Like intelligence augmentation, information architecture:
- Is critical for A.I., given A.I. depends on good information architecture, whether in terms of the data the A.I. application interacts with or the way it organises and navigates its own data or that of third-party systems.
- Is undervalued / absent as a consideration in newer legaltech products, commentary and use cases (hence the #IAbeforeAI hashtag to course-correct this). Given (1), it’s no surprise the absence of good information architecture severely impedes the value of current legal A.I. use cases and applications.
- Is nothing new.
- Goes hand in hand, i.e. if you’re augmenting lawyer intelligence with information (which is what a lot of legaltech is really about given law is primarily about finding, combining, collaborating and applying intellectual capital) you need good information architecture. You can’t have one without the other… although that’s unfortunately sometimes the status quo starting point.
- Is a more boring (hence the related #bringbackboring tag), but more useful, solution for many of law’s biggest problems, many / most of which don’t need A.I. but better user-centric information architecture to enable value-add intelligence augmentation for lawyer and client.
Re (3), we’ll just indulge in a geeky legaltech history lesson…
For instance, law libraries have existed for hundreds of years to tackle information management, evolving systems to organise and navigate data for the benefit of users (information architecture like index systems and so on).
Likewise, legal information databases (e.g. LexisNexis) and software to create and manage these for ad hoc or transaction-specific needs (e.g. eDiscovery) have existed for decades. Don’t believe us? Well…
The above is the Lexis UBIQ™ terminal. Released in 1979, two years before IBM® introduced its PC (1981) (!), the UBIQ™ terminal took online legal research to another level.
The UBIQ offered the first-ever desktop access to Lexis’ full-text database. As a result, lawyers could retire cumbersome terminals operating on 300-baud modems (baud is a unit of transmission speed). The advent of 1200-baud modems around the same time was another leap for law firms, quadrupling the speed at which they could search Lexis’ full-text database.
The UBIQ offered other conveniences besides desktop legal information access, including:
- The use of a centralized printer!
- The world’s first auto-dial feature.
- Function keys. Each key was labeled with exactly the function it performed, allowing lawyers — especially those who couldn’t type (more common then, given typing pools handled typing for lawyers) — to issue commands to the service with one keystroke.
The UBIQ reinforced LexisNexis as one of the earliest entrants into the online information world and a pioneer of electronic legal information. It’s no surprise they remain a market leader today, 40 years later.
What’s more: no A.I., just good I.A. in the sense of information architecture to organise, navigate and find information and thereby augment lawyer research and intelligence.
Nothing is new in this world!
But back to intelligence augmentation
The use we will explore in the remainder of this article is intelligence augmentation.
With that in mind, this article considers:
- What is I.A. in general and in legaltech?
- How does it compare to A.I.?
- Is I.A. better than A.I. or vice versa?
- Why should I care?
We’ll answer these and similar questions with regard to information architecture in due course. One thing at a time in this busy space!
What is I.A.?
cyborgs
I.A. in the context of the remainder of this article stands for Intelligence Augmentation, aka:
- Intelligence Amplification
- Cognitive Augmentation
- Machine Augmented Intelligence
- Enhanced Intelligence
The term was first coined in the 1950s by W. Ross Ashby in An Introduction to Cybernetics. It was also proposed as an explicit alternative to A.I., which is particularly relevant to the increasing debate about I.A. vs. A.I. in legaltech!
OK, enough history. What is I.A.?
The core idea is simple: expand human intelligence via technology. Like A.I. it’s a frequent feature in films and literature, for instance:
- Ghost in the Shell: set in 2029, where almost everyone in Japan is connected to the cloud via cybernetic implants, body parts or entire bodies (up to but excluding the original human brain). These upgrades provide enhanced cognition, completely new sensory abilities (e.g. infrared / telescopic vision), telepathy and instant learning of / access to information. Note: if you want to watch it, watch the 1995 original, not the 2017 remake! The cover image for this post is borrowed from the 1995 original. It’s also a source of inspiration for The Matrix: the Wachowski siblings played Ghost in the Shell to producers when pitching them The Matrix!
- Flowers for Algernon: different from the above in that the augmenting technology is a surgical technique rather than software or hardware. In this award-winning classic novel, Charlie Gordon, a 34-year-old bakery worker with a very low IQ of 68 (70 or below is often considered a mental disability), has his IQ gradually enhanced via a breakthrough surgical technique. It’s a melancholy but poignant tale. Well worth a read.
- Limitless: continuing the medical theme, if you’ve seen the film, you’ll know Bradley Cooper’s character starts taking a new nootropic called NZT-48 which greatly enhances his cognitive abilities, and in turn his fortunes.
- The Matrix: set in a future where an enslaved humanity is hooked up to a cloud-based simulation of Earth (i.e. “The Matrix”). However, it is hackable: hackers repurpose its enormous computing power and direct uplink to the human brain to “download” new skills like kung fu and fight back against their enslavers. In many ways, it’s symbolic of the I.A. vs. A.I. debate: the I.A.-enabled humans go on to fight for freedom against their A.I. overlords.
Enough sci-fi: what about the here and now?
Hint: you use I.A. every day
I.A. is today, whereas A.I. is tomorrow. What we mean by this is that most systems classed as A.I. today are more properly I.A.: most people vastly overestimate today’s A.I., to the point where popular perception simply does not reflect the reality of what we term “A.I.” today.
Don’t believe us? Check this out:
How many devices and services do you use each day that fall into the left column vs. the right column? For instance, in a typical day we use:
- Google Maps to find a place to meet, directions to / from work, get congestion updates.
- Google Home Hub / Amazon Alexa to retrieve information like the weather, answer questions, check a schedule or even pay for a takeaway.
- Gmail’s Smart Compose to automatically generate appropriate sentences in response to an email.
Takeaway: I.A. = supplementing not supplanting your brain
Clever and useful as they are, the above examples all supplement rather than supplant your cognition. Your cognition is enhanced, mainly through delegation of repetitive cognitive tasks and / or extended through instant access to specific information. In any case, you’re always in control. The upside is increased human bandwidth to tackle other tasks.
While the underlying technologies powering A.I. and I.A. are often the same, the goals and applications are fundamentally different:
- A.I. aims to create systems that run without humans
- I.A. aims to create systems that make humans better
To be clear, I.A. is not a separate category of technology. Instead it is simply a different way of thinking about the purpose of current technologies, including those described as “A.I.”.
Arguably, many A.I.-branded technologies currently available to businesses can and should more accurately be described as I.A.
I.A. in Legaltech
better lawyers not bots
Adopting the above definitions, many legaltech tools labelled A.I. — perhaps most or all of them — can and should more precisely be defined as I.A.
In most / all cases these technologies make lawyers better rather than replace them. In legal this is usually achieved via:
- Automating repetitive tasks
- Scaling a lawyer’s capacity to complete increasing volumes of tasks
- Enabling something new
In no instance do such systems decide anything close to the final outcome necessary for a lawyer to produce their ultimate legal product or service. For this reason, reports of robot lawyers are greatly exaggerated!
Don’t believe us? Here are some practical examples of legal I.A. in action.
eDiscovery
How it works
A one-time process whereby an SME (subject-matter expert, usually a senior lawyer) familiar with the litigation matter and likely issues applies tags to 500+ items of litigation data, e.g. responsive or unresponsive.
In machine learning terms this is known as training.
The system then tries to suggest tags for a number of items it has not seen beforehand.
The SME reviews the system generated tags and corrects them where necessary.
The above train / test loop is repeated until the system no longer improves in its tagging ability.
At that point, the system is tasked with tagging the remainder of the litigation items that were previously unseen by the system. The lawyers are left with pre-tagged documents to correct or approve.
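The train / test loop above can be sketched in a few lines of Python. This is a toy illustration only: the documents, tags and naive word-scoring "model" are all invented for the example, and real eDiscovery platforms use far richer machine learning than this.

```python
# Toy sketch of a TAR 1.0 style train-review loop (invented data and a
# naive word-scoring model; not how any real eDiscovery product works).

def train(tagged_docs):
    """Learn which words signal a 'responsive' tag from SME-tagged examples."""
    weights = {}
    for text, tag in tagged_docs:
        for word in set(text.lower().split()):
            weights[word] = weights.get(word, 0) + (1 if tag == "responsive" else -1)
    return weights

def suggest_tag(weights, text):
    """Suggest a tag for a document the system has not seen before."""
    score = sum(weights.get(w, 0) for w in text.lower().split())
    return "responsive" if score > 0 else "unresponsive"

# 1. The SME tags a seed set (in practice 500+ items, not 4).
seed = [
    ("breach of contract damages", "responsive"),
    ("invoice dispute damages claim", "responsive"),
    ("office party menu", "unresponsive"),
    ("holiday rota menu", "unresponsive"),
]
model = train(seed)

# 2. The system suggests tags for unseen items; the SME confirms or corrects,
#    and any corrections feed back in as further training.
unseen = "contract breach and damages email"
suggested = suggest_tag(model, unseen)
sme_correction = "responsive"  # the SME's review of the suggestion
if suggested != sme_correction:
    model = train(seed + [(unseen, sme_correction)])

# 3. The loop repeats until tagging quality plateaus; the remaining corpus
#    is then auto-tagged for lawyers to approve or correct.
print(suggested)  # -> responsive
```

Note how the human stays in the loop at every step: the system only ever proposes tags, and the SME's decisions are what train it.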
Why is this I.A. not A.I.
Although “A.I.” techniques are used, i.e. supervised machine learning via the human training, when viewed as a whole the solution is really about I.A. in means and motive.
eDiscovery TAR 1.0 supercharges a lawyer’s ability to tag documents. It does this in two ways:
First by offering a simple interface to accelerate the ease with which that tagging happens, i.e. clicking vs. printing and physically tagging docs as in the old days.
Second, by automating the first pass tagging of documents the lawyer has not yet tagged during the training phase.
Together, these let the lawyer complete their review faster and more effectively than manual efforts alone. Their intelligence is amplified and extended, but decision making remains with the lawyer.
Contract Due Diligence
How it works
Broadly the same as the above: lawyer tags clauses (e.g. rent amount clause) and / or an intra-clause data point (e.g. numerical rent amount).
This data, known as a training set, is used to train the system.
As above, the lawyer runs the trained system over a further set of documents the system has not previously seen, reviews the results and corrects the system, thus providing further training. This process is repeated until the system reaches the desired accuracy level.
Note such systems are usually trained by the vendor and / or the end user. It all depends on the delivery model.
See our deeper explanation of these techniques here.
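To make the intra-clause data point idea concrete, here is a toy extraction sketch. The clause text and the pattern are invented for illustration; real systems learn clause boundaries and data points from training data rather than a fixed regular expression.

```python
import re

# Toy sketch of intra-clause data-point extraction (hypothetical pattern;
# real due diligence tools learn this from training data, not a regex).

def extract_rent_amount(clause_text):
    """Pull the numerical rent amount out of a rent clause, if present."""
    match = re.search(r"[£$€]\s?([\d,]+(?:\.\d{2})?)", clause_text)
    return match.group(1).replace(",", "") if match else None

clause = "The Tenant shall pay rent of £24,000 per annum in advance."
print(extract_rent_amount(clause))  # -> 24000
```

The system surfaces the data point; deciding whether £24,000 is a good rent, a risk, or a deal-breaker remains entirely with the lawyer.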
Why is this about I.A. not A.I.
As above: machine learning scales a mundane human process, identifying clauses and intra-clause data points, but it doesn’t and cannot interpret or analyse that data to inform decisions about legal / commercial risk, nor the consequential remedial actions or advice such risks necessitate.
Again, the system is augmenting a legal process from within, not swapping out lawyers for bots.
Playbook Review
How it works
The system is typically pre-trained using supervised learning to identify key clauses for particular document types, e.g. NDAs.
On top of this the vendor usually works with the user to configure a set of rules to mirror the user’s playbook for negotiating the document type, e.g. NDA duration of no greater than 2 years.
Documents are uploaded to the system, which identifies the presence or absence of key clauses; absence may itself be out of compliance with the user’s playbook (e.g. a missing governing law clause would be a red flag). The system then overlays the rules to determine whether each key clause complies with the playbook.
A human will need to review the results to approve / reject the system’s conclusions about whether and to what extent each clause complies with the playbook. Often the system allows for easy human editing of offending clauses flagged by the system.
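The rules overlay described above can be sketched as a simple check against a configured playbook. The playbook values and clause names below are invented for illustration; real products pair a trained clause classifier with user-configured rules.

```python
# Hypothetical playbook rules sketch (invented rules and clause names).

PLAYBOOK = {
    "duration_years_max": 2,  # e.g. NDA term of no greater than 2 years
    "required_clauses": {"governing_law", "confidentiality", "term"},
}

def review(extracted):
    """Flag missing clauses and rule breaches for human review."""
    flags = []
    missing = PLAYBOOK["required_clauses"] - set(extracted)
    for clause in sorted(missing):
        flags.append(f"missing clause: {clause}")
    term = extracted.get("term")
    if term is not None and term > PLAYBOOK["duration_years_max"]:
        flags.append(f"term of {term} years exceeds playbook maximum")
    return flags

# An uploaded NDA with a 3-year term and no governing law clause:
doc = {"confidentiality": True, "term": 3}
for flag in review(doc):
    print(flag)  # each flag goes to a lawyer to approve or reject
```

Note the output is a list of flags for a human, not a decision: the lawyer still approves, rejects or redrafts each offending clause.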
Why is this about I.A. not A.I.
Again, as above: machine learning scales a mundane human process, identifying clauses and performing some basic checks against fairly standard concerns.
As always though, the human remains firmly in the loop. Unfortunately, a lot of buyers hope these systems more or less eliminate human review of high volume low-risk documents like NDAs. These systems are getting there, especially where the market standard for a document is highly standardised, but there is a long way to go yet!
Regardless, its “first pass” review can, along with a proper workflow process (itself augmenting the legal tasks associated with review) free up lawyer cognition for higher value legal tasks.
Assembly, Workflow and RPA
How it works
Obviously, we’re lumping together what are often very different and separately sold tools. But for the sake of brevity: these tools automate the copying and pasting; the deletion of square brackets / blobs to include or remove optional wording or key variables in contracts; the execution of simple tasks (e.g. splitting out and printing signature pages to PDF); and the duplication of shared information across documents (e.g. company data, document descriptions and other recitals across shareholder and board resolutions).
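The shared-information point can be sketched with Python’s built-in string templating. The company data and resolution wording are invented for the example; real assembly tools work from much richer templates and questionnaires.

```python
from string import Template

# Sketch of document assembly: one set of shared company data (invented
# for this example) fills the variables in every related document.

company = {"company_name": "Acme Ltd", "signatory": "J. Smith"}

board_resolution = Template(
    "IT WAS RESOLVED that $company_name approve the transaction, "
    "signed by $signatory."
)
shareholder_resolution = Template(
    "The shareholders of $company_name hereby ratify the resolution "
    "executed by $signatory."
)

# One data entry propagates across every document that shares it,
# replacing manual copying and pasting between drafts.
for doc in (board_resolution, shareholder_resolution):
    print(doc.substitute(company))
```

Entering "Acme Ltd" once and having it flow into every resolution is exactly the repetitive "dumb" work these tools eliminate.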
Why is this about I.A. not A.I.
Easier to spot than the above — in part, because these tools don’t tend to be A.I. branded — these systems are all about eliminating repetitive manual labour: the clicks, copy and pastes etc.
Rarely do these systems use machine learning, but that’s because they don’t need it: lots of lawyer time is misspent doing repetitive “dumb” tasks such as these.
Software is better suited and more able to perform repetitive tasks faster and more accurately than equivalent manual efforts alone.
Research & Analytics
How it works
In some cases, “A.I.” systems are opening up new insights to lawyers, e.g. predictive analytics over court data to determine how a particular judge might decide, based on historical data from cases with similar factors to the case in hand.
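At its simplest, this kind of analytics is aggregation over historical outcomes. The judges, motion types and rulings below are entirely invented for illustration; real platforms work over large court datasets with many more factors.

```python
# Toy sketch of judge-level analytics (invented data): historical outcomes
# aggregated to estimate how a judge has ruled on similar motions.

history = [
    ("Judge A", "motion_to_dismiss", "granted"),
    ("Judge A", "motion_to_dismiss", "denied"),
    ("Judge A", "motion_to_dismiss", "granted"),
    ("Judge B", "motion_to_dismiss", "denied"),
]

def grant_rate(judge, motion):
    """Fraction of matching historical motions this judge granted."""
    rulings = [o for j, m, o in history if j == judge and m == motion]
    return sum(o == "granted" for o in rulings) / len(rulings)

print(f"{grant_rate('Judge A', 'motion_to_dismiss'):.0%}")  # -> 67%
```

The number informs strategy; it does not decide it. Whether to file the motion at all remains a lawyer’s judgment call.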
Why is this I.A. not A.I.
As cool as this is, it’s an extension of what lawyers already do. For centuries lawyers have based litigation strategy on the past behaviours of courts, reading up on judgments to identify, develop and argue their client’s cause.
What’s new is the level of insight, which is greatly expanded by the growing amount of court data collected and available via such platforms. In many ways this is akin to the printing press and court reporting that came before: the wider availability of larger datasets means more insight and potentially better decisions, but crucially does not make the decisions.
In all of the above, the system is supplementing, not supplanting, the lawyer. Yes, it displaces the lawyer from some of the work, but not all of it, least of all decision making. A lawyer is needed to interpret and tie together the results.
All that’s changed is the quantity, quality and possible combination of tools in the lawyer’s arsenal… much like the below:
So why talk in terms of I.A. and not A.I.?
solve problems
There are really two headline reasons to talk in terms of I.A. Let’s explore each below.
1. Reality vs. Fantasy
As we’ve argued above today’s A.I. is better understood as I.A. Understanding this enables a sensible and practical conversation about its use. In particular, talking in terms of I.A. with regard to legal A.I. has the following benefits:
- Shiny Object Syndrome (SOS): technology decision makers (whether lawyers and / or technologists) have vendors emailing / calling them every day touting the latest and greatest A.I. Sure, these things all sound wonderful in theory but are they going to move the needle? Instead, reality check the claims — by understanding how the tech works — and most importantly, what you need (on that, see (2) below).
- Fear of Missing Out (FOMO): a corollary of the above, there is something of an arms race between law firms. In part, this is because clients are keen to see firms do (or at least claim to do) more with tech to deliver better services. Given the noisy legaltech market, Big4 vs. BigLaw arguments, conflicting opinions and mixed marketing messages sprinkled with buzzwords no one wants to get left behind. A.I. hype feeds on this and is a major reason for explosive growth in legal A.I. companies. Again, critically assessing such tools in terms of I.A. helps determine to what extent the tool will move the needle or simply be something nice to mention in a slide deck, press release or RFP / RFI response (it can be all of these if bought or built well).
- Optimists: as we’ve covered elsewhere, the A.I. hype is counterproductive to use and progress. Despite this, law firms and clients are stuffed with optimists, eager to believe the bold claims of vendors. However, often the pressing problems to be solved are better thought of in terms of I.A. (including in the sense of better information architecture).
- Cynics: tied to the above, the A.I. hype has bred huge cynicism, both within vendors themselves (partly why there is a growing shift toward re-branding as I.A. vs. A.I., given I.A. reflects reality) and among potential buyers. In part, this is the peak of inflated expectations, popularised by Gartner’s famous hype cycle, which postulates that any new technology rises skywards on inflating expectations only to come crashing back to reality — the so-called trough of disillusionment. Likewise, this is part of Amara’s law: the human tendency to overestimate the short-term impact and underestimate the long-term impact of technology. Right now this has been driven by the number of legal A.I. vendors overpromising and underdelivering, whilst simultaneously under-educating users and buyers about how the technology works, the necessary but often absent prerequisites (including a decent information architecture plus quality and quantity of data) and how and why the system achieves its results. Tied to that is the tricky trade-off between usability and functionality: distilling machine learning concepts and workflow into something lawyers can use and understand without a Ph.D. is hard, but that doesn’t excuse how undervalued user-centric design continues to be in legal A.I. tools. Thinking about these challenges in terms of I.A. will help solve them.
- Change Management: optimist or cynic, users often fear A.I. will eliminate their job. Similarly, A.I. is rather intimidating for most people and many worry (privately or publicly) they won’t be able to learn this new technology and will be left behind. Reframing the conversation around I.A. helps stay ahead of that conversation to both build stakeholder buy-in and de-mystify A.I. into something easy to understand, upskill and apply productively where there is a need and a fit.
- Adoption: finally, tied to change management, that demystification and reframing of A.I. as I.A. can aid adoption. Understood properly these tools are largely about adapting or adopting processes within a larger process to solve a problem. As we will explain with (2) below, thinking in terms of I.A. from the start can help pinpoint whether there is even a need for such tools and, if so, where in your organisation. Getting a good product-market fit will make adoption exponentially easier than trying to find problems for products unsuited to solving them.
2. People, Process, Problem, Priority, Products (in that order)
Too often it’s easy to get wrapped up in the SOS and FOMO. In the end, this means focussing on solutions in search of a problem.
As we’ve said above, assessing A.I. products in terms of I.A. better reflects their reality. But before you begin looking at such products, it’s best to identify the people, process, problem to be solved.
By that, we mean identifying the needs of your business. These will be best articulated by your people and their clients. But that doesn’t simply mean asking them “what’s the problem?” Some may already know. Others will not. And those who think they know may assume wrongly.
To begin with, then, it’s worth working with teams to map out their processes as they stand today. It may sound surprising, but this is rarely done in law. For the most part, this is probably because it is not a billable activity, nor part of legal training. It’s odd because lawyers are analytical, procedural problem solvers when it comes to client work and legal issues, but don’t have the time to join the dots between those skills and their own practice areas or service delivery.
Thankfully that is changing and generating significant dividends. Such conversations can uncover treasure troves of information. Not least, you will gain a better understanding of what the team does, and so will they. Very likely the current process has redundancies, repetitions or bottlenecks. Sometimes these are the very things that drive people nuts, even if they couldn’t quite explain why beforehand.
Mapping the landscape helps optimise the route from A to B and refocus away from mindless doing to user experience and effectiveness, both for lawyer and client.
Having done that, it will become clearer whether:
- there is any need for a technology solution (it may be that Bob needs to pick up the phone rather than send endless emails); or
- if there is a need for a technology solution, the specific, problem-targeted requirements will be clearer.
Likewise it will give you something to measure against in order to assess ROI, i.e. old process vs. new process, rather than new process vs. what process? Together, this helps prioritise what to fix, both within a single team / process and across multiple teams / processes, whether interrelated or separate.
Taking this information together means you go to market (or to your dev team if you have one in-house) armed to find and assess appropriate solutions vs. the latest shiny toy everyone else is bragging about.
In doing so you will avoid buying solutions in search of a problem. It will also be a good test to see how well vendors have listened to, and thought through, your challenge rather than tried to brainwash you into a FOMO based buying decision.
Working this way also better reflects the reality of today’s technology, which — apart from the most basic legal processes — will always be about supplementing some or all of a process rather than supplanting the human pieces entirely.
Conclusion
I.A. before A.I.
Today’s legal A.I. (and A.I. in general) is more precisely described as I.A., that is: technology that supplements and supports, but certainly does not supplant, human decision making in its entirety.
Understood this way it can be a good litmus test for vendor claims, especially when combined with your own pre-work to identify the real people, processes, problems, and priorities most in need of a solution rather than doing the reverse!
Reframing A.I. in terms of I.A. also helps nix the prevailing issues with A.I. conversations, most of which stem from mismanaged expectations and change management.
For these reasons the most effective “A.I.” salespeople, whether vendors selling to legal teams, or innovation teams selling such tools internally, frame these tools as I.A. and not A.I. It works because it reflects reality, not fantasy.
So keep an eye and ear out for vendors (or innovators) who talk in terms of I.A. over A.I. even when they are dealing with A.I. branded products.
Lastly, remember that I.A. is all around you:
I feel it in my fingers
I feel it in my smartphone
The I.A. that’s all around me
And so the correct conversation grows
Tech Tech Tech — I.A. is All Around You
Originally published at lawtomated.