State of Estates

AI in Estate Planning: Duties of Competence?

Getting to the root of the plant

Griffin Bridgers
Jan 21, 2025
Table of Contents

  1. Where We Left Off

  2. AI Exercise of the Day – Documenting Competency

Where We Left Off

In the three months that have passed since my last AI article, a lot seems to have happened on the AI front. But, it can be hard to separate noise from reality. We continue to see new models come out every day on various free and paid platforms. However, while I’m sure some will (perhaps rightfully) accuse me of user error, I have experienced an erosion in quality for many of the AI platforms I use. In some cases, the erosion is so significant it has forced me back into my old (and less glamorous, yet more familiar and comfortable) ways of doing things.

As with all prior AI articles, I will preface with the caveat that this is somewhat of an opinion piece. And, my opinion can appear soap-boxy and biased. My opinion is also perhaps colored by spending too much time on social media where there seems to be a lot of noise (usually from software vendors) promoting and even celebrating accelerated adoption of AI and tech. But, I am here today to challenge a sacred cow – that AI can save you time. Perhaps the optimal use of AI requires us to focus our efforts, and time, towards areas we often ignore.

Without getting into specific examples, the typical value proposition of AI is replacing tasks. Before AI, task replacement usually required “delegation” to another employee within an organization or to an outside vendor. This process is nothing new, and represents the “leveraging” upon which many professional services models are based.

But, this is where I must wield ABA Model Rule 1.1 like a cudgel. To recap, this rule provides:

A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.

Often, vendors of any software or technology cite Comment 8 to this Rule, which provides (with emphasis added):

To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.

For most of you, this is not a new insight, and rest assured that I am not continuing this thread into duties to supervise or charge an appropriate fee. But, this is because I am choosing to go against the grain – not just for AI, but for the idea of automation in general. My radical position against the entire backdrop of tech is that competence must precede adoption of technology.

In other words, you have no business outsourcing something unless or until you have the requisite competence to perform that task without the assistance of the tool or utility you are using – not just at the time of outsourcing, but also if and when the tool or utility fails (such that you are forced to again do things the old-fashioned way). This goes not just for AI, but for any software that automates certain tasks. It even holds true for tasks you might delegate to an associate or support staff. But, what does competence actually mean?

We briefly touched on this principle when discussing perhaps the one item that cannot be delegated – reading the law – not just because of concerns about competence and supervision, but also because refusing to read the law essentially means you cease to be an attorney in the first place. At the risk of sounding “woo-woo,” there is a human side to planning that cannot be abandoned. While this is often discussed from the perspective of a professional’s relationships (or lack thereof) with their clients, and how AI either enhances or threatens them, I would like to turn the mirror inward and admonish you not to lose the relationship with yourself.

What nobody seems to be discussing is the fact that AI (or at least promotional marketing relating thereto) challenges, at its core, what it means to be a professional of any stripe – whether you are an attorney, financial advisor, CPA, trust officer, or any other service provider who uses a combination of knowledge and experience to assist others. This brings us back to the question of competence, which perhaps can simply be boiled down to the elements of knowledge and experience. But, if we imagine competence as a stool, it cannot have just two legs. There usually has to be a third leg, which is enjoyment. In other words, if you do not enjoy the process of gaining competence (in the form of knowledge and experience), the likelihood of you continuing to pursue a profession is low.

Now, I am not saying that enjoyment has to be viewed through a black-and-white lens. No profession is perfect, and belief in such an outcome perhaps means that nothing will be enjoyed. There is a reason, however, that an attorney’s development of knowledge and experience is called the practice of law. Practice means we are constantly improving and evolving, that we have never truly reached a point of being good enough, and perhaps (most importantly) that we enjoy this process even if we do not enjoy some or many of the day-to-day tasks that encompass it. That is, to me, the spirit of ABA Model Rule 1.1 which, in the context of Comment 8, doesn’t just advocate for technological competence but also tells us to engage in continuing improvement as a subtext and purpose of our own day-to-day activities.

My broader point in all of this is that we often jettison tasks that we don’t “enjoy” to AI. But, the main reason we don’t “enjoy” a task is usually because we view it as not being the highest and best use of our knowledge and experience. In the best case, this is altruistic (i.e., we would like to spend more time with clients) and in the worst case, this is ego-driven (i.e., we are “too good” to perform a task). And, there is often an issue in the middle best framed as “clients won’t pay my rate of $x per hour for this task” (which, perhaps, raises a separate issue of how exactly you are communicating the value-add of your services as a whole as I will discuss in the exercise below).

What do we do, however, when AI fails or delivers a sub-par output? How much capacity do we have to pick back up and do things the old way? One may liken it to a situation where an employee departs, leaving others to pick up the slack. A new hire, or sharing the load, may be effective for more ministerial tasks. But, many of the tasks that are delegated to AI – such as breaking writer’s block, summarizing documents and cases, streamlining research, and perhaps even drafting documents – are higher-stakes or personal tasks that are not easily trained and substituted. Short-term savings in time can increase the risk of future wasted time if and when things “break.”

And on the human side, we often encounter creeping internal dissatisfaction. Circling back to enjoyment of our practice, “what” we enjoy shifts over time. Early in my career, I idealized being locked away in an office cranking out complex drafting projects. Now, I idealize a mix of cranking out educational content like this and experiencing opportunities to meet with my peers and audience (such as last week at Heckerling). But, when we speak of balance, we often enjoy what we enjoy because it offers a respite from what we do not enjoy. In other words, you can’t have good without bad. If we jettison all the bad, does that mean what was once “good” or “enjoyable” will creep into the territory of itself becoming something we want to jettison? The concept of “lifestyle creep” is usually examined with respect to increases in income, but I also believe it could introduce itself in this context through increases in time or efficiency.

I could pontificate on this much longer, but cutting to the chase, I think solutions like automation and AI adoption require (as a prerequisite) a certain amount of self-awareness that is taken for granted. We idealize the practice we “could” have, but ignore the fact that these tech tools may just fix symptoms. The creeping dissatisfaction underlying what we do or do not enjoy is something that should first be examined, and to me this is the true root of our duty of competence. If we cannot appreciate that what we do and do not enjoy are both necessary elements of the process of gaining competence, this in itself could stall that process and its goal.

And lest you think I am judging you, rest assured that I am struggling with these same issues in real-time. You are not alone.

AI Exercise of the Day – Documenting Competency

Many of our exercises have been short, low-stakes, 5-minute exercises. But the very processes we discussed above – outsourcing or automating unpleasant tasks – can themselves be leveraged with AI.

As estate planners, we often walk our clients through an exercise of “what” would happen if they were to walk out of our office and get hit by a bus. I would venture to guess, however, that few of us do the same for our own practices. What if you could create a process manual starting with your own day-to-day practice, and then shift this to other employees in your organization as well?
