Background
Almost 15 years to the day before this article’s publication, I started my first legal job as an associate attorney. My very first assignment was to prepare a revocable trust package along with an ILIT (irrevocable life insurance trust). To do so, I was trained to do the following:
Grab a form book off a shelf;
Go to the copier, and copy each relevant form and transmittal letter I needed;
Log copy charges in a notebook by the copier (at 20 cents per page);
Use a red pen to fill in names and custom language, and to strike through irrelevant provisions of each form;
Provide the form, marked up with red pen, to a legal administrative assistant who would then type up the edits in Microsoft Word;
Get the form back and make any additional edits in red pen to again be typed up by the legal administrative assistant;
Provide the final form to the managing partner to review, edit, and send; and
Get markups back from the legal administrative assistant or managing partner to learn what I had missed or done wrong.
I say this not to be facetious - this was an intentional system, created to train associates. However, it was not an efficient use of time. And it reflected a generational divide: some attorneys lacked the ability to use Microsoft Word, a skill that many younger generations develop from an early age (and thus take for granted).
At some point, as I am prone to do, I became haughty and proposed that I just type the forms myself. But, in the process, I upset the legal administrative assistant, who felt left out of the assembly process. (Notably, only my time was being billed - we did not bill for a paralegal’s or legal administrative assistant’s time in the drafting process.) So, I went back to the old way - at least until circumstances changed and we had to adapt.
This serves as something of an allegory for our current AI environment, which reflects the same fears, dynamics, and insecurities among those who see themselves as cogs in the assembly lines of traditional legal practice.
A Whole New World
If you feel judged by the above story, that is not my intention - in fact, I identify with you more than you think.
Over the past year, I have been pressed into using AI. I initially felt like a fish out of water. It also just felt wrong - as if I were cheating. As an attorney, I was trained to believe that I, and only I, could read documents with a critical eye, and that doing things right required putting in the requisite elbow grease. This created a huge logjam, because I am a staff of one who often does not play well with others. So, unlike prior bosses of mine who could avoid learning Microsoft Word and other now-ubiquitous tools (like legal research services) because their associates had that ability, I was left with no choice but to learn.
But a recent discussion with a colleague led to an interesting conclusion. While the ethical risks of using AI have been repeated fairly consistently, there is one practical tension that has gone largely unexamined.
To set the stage, let’s say an AI-trained associate joins a new practice. They are expected to review a draft document with a critical eye, reading it front to back. Let’s assume this is a task that would take 2-3 hours.
But the associate instead plugs the document into an AI service and prompts it to review the document for typos, grammatical errors, and ambiguities - and to list the names of the parties identified in the document (to catch any names carried over from an incomplete search-and-replace job). Seconds later, the associate has a comprehensive list of changes. These changes are made and then turned around to a partner or senior associate.
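For illustration, such a prompt (a hypothetical example - the precise wording will vary by tool and document) might read: “Review the attached trust agreement for typos, grammatical errors, and ambiguous provisions. Separately, list every party name that appears in the document so I can confirm that no names carried over from a prior draft.”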
In the process, a 2-3 hour task has been turned into a 10-minute (or less) task. But the associate recognizes several conundrums:
If the task is turned around in 10 minutes, the thoroughness of their review will be questioned.
10 minutes barely makes a dent in their billable-hour requirement.
While the associate trusts the AI software, they remain anxious that some hidden error the AI missed is looming in the document, waiting to be spotted by a partner, a senior associate, or (perhaps worst of all) the client.
If they make a habit of being highly efficient, perhaps their reward for learning how to quickly consume broccoli will be more broccoli. In other words, the efficiency baseline they set will simply earn them more work.
On the other hand, perhaps the firm does not have enough work to keep them busy, so by finishing a projected 2-3 hour task in 10 minutes, they will be twiddling their thumbs waiting on more work while not hitting their billables.
To resolve these issues, the associate does the unthinkable. They wait a while to send back their changes, and then dishonestly record 2.5 hours (or perhaps 2.6 for good measure) in their billing software.
Trust Issues
Recently, the American Bar Association (ABA) issued Formal Opinion 512, which recognizes the ethical issues raised by AI tools and suggests some practices for addressing them. While this article is not designed to summarize those ethical issues, I think it helps to consider the broader picture of professional responsibility:
Should use of AI remain hidden?
Emotionally, I’m not sure that the legal world is willing to embrace and trust AI, because of those broader trust issues. When I first used it, my gut instinct was to keep it hidden. After all, it challenged what I perceived to be my skillset, and my identity, in law - the ability to summarize sets of documents and articulate the interactions between them. It could expose me as a phony. While the opinion notes that a client’s informed consent to the use of AI should be determined on a case-by-case basis, and may in some cases be unnecessary, my feeling is that this is the biggest (if unspoken) fear of attorneys.
As an aside, that same managing partner I mentioned above did value the time spent with the client. Clients often raved not about his work product, but about the sage counsel he provided face to face. In this light, AI could have helped him focus on his core competency. But I recall a story he told criticizing a former associate who was not a good fit because the associate expressed a desire to just sit back in their office and prepare forms all day - a service model that could indeed be at least partially threatened by AI.
Another core fear is perhaps that the client will question the attorney’s fees - even a flat fee - if AI allows the attorney to complete tasks in a fraction of the anticipated time. This, too, could drive an attorney to hide their use of AI, while leading to billing practices like the one I described above.
Fees are a rabbit hole I am going to hop over for now, but that same discussion with a colleague led to perhaps a different conclusion. Might clients be willing to pay a premium for work that is delivered quickly? After all, the traditional adage that you can pick only two of quick, good, and cheap implies that work can be quick, good, and expensive - a combination AI may finally make achievable.
For now, after years of trusting our own instincts, it is hard to turn over control and trust AI to review documents. After all, we are confronted with frequent stories about AI’s errors - including the hallucination of legal authority (which I have seen in real time from one of the preferred AI services) - and those stories give us pause. Yet humans are also prone to errors - some that overlap with AI’s, and some that are entirely our own.
Significant time in creating legal work product is spent checking for, and correcting, human errors. The folly in adopting AI lies in believing it can replace humans entirely - errors and all. AI does not eliminate error-checking; it simply demands a new skillset: checking for AI errors rather than human ones. And, as I will show you in this series, that skillset does not require as steep an uphill climb as you may think.
The Bandwagon, and Baby Steps
To preface, I know there is a ton of AI content out there. Some of it is practical. But a lot of it is not. For the attorney who does not know how to use Microsoft Word, where would you even start? Likewise, for the attorney who does not know how to use and implement generative AI, it can all seem overwhelming.
So, I am hopping on the bandwagon to produce AI content. But I will include visual and video guides to get you there. I will also create very finite use cases that you can implement to build trust and develop the parallel skillset of checking for AI error (as opposed to human error).
And, most importantly, I hope to address the human side of AI. It is easy to feel cast aside and irrelevant in this whole new world. Nobody is speaking to the anxiety attorneys face every day with respect to AI. There is an emotional element to this tool, and it manifests in different ways.
One of my favorite movies growing up was the 1991 classic What About Bob?, in which a patient, Bob (played by Bill Murray), follows his therapist, Leo (played by Richard Dreyfuss), on vacation to seek help with his phobias. Leo is trying to become famous for the approach set forth in his fictional book, Baby Steps, on overcoming anxiety and fear. Yet, in a twist of fate, the movie comically concludes with Bob and Leo switching places and stature in life, thanks to Bob’s adherence to the principles of Baby Steps and Leo’s inability to manage his ego around the project.
Attorneys need their own set of practical and emotional Baby Steps toward adopting AI - especially in estate planning - and my goal is to help you get there.
Next (Baby) Steps
Stay tuned, as I will be illustrating specific, finite use cases for AI in transactional law practices - especially estate planning - to help you see how to integrate this groundbreaking-yet-imperfect tool. I will also compare tools to help you land on practical solutions. And, of course, ethical issues will be addressed. Most of the AI content will be exclusive to paid subscribers.