Artificial Intelligence in Your Michigan Practice

Concerned about incorporating AI into your practice? Your Partnership is the place to turn when you want to see what other Michigan lawyers are thinking about this tricky issue.

Andrew B. Fromm of Brooks Wilkins Sharkey & Turco PLLC and Vanessa L. Miller of Foley & Lardner LLP recently sat down in our studio to share key considerations on the impact of AI in the legal field. In case you missed it, here's some can't-miss information from your "Business Law Update 2025" on-demand seminar that applies to AI across the board, no matter your practice area.

These excerpts have been edited for length and clarity.


Key Considerations on AI in the Legal Field


ICLE: Can you give us an update on AI in the legal field?

Andrew: Artificial intelligence is having a tremendous impact on the legal profession, and it's happening at a rapid pace, with more regulations and guidance issued each month. As recently as November 2024, the State Bar of Michigan issued AI recommendations for attorneys in the state. It recommends that Michigan attorneys consider AI in their daily practice, and it reminds them that the MRPC (Michigan Rules of Professional Conduct) still apply to lawyers' use of AI. The State Bar of Michigan cautioned attorneys on a number of rules, and I'm going to summarize a few of those today because I think they have an impact on a lot of lawyers in Michigan.

ICLE: What's the first rule to consider?

Andrew: First, under rule 1.1, the State Bar of Michigan reminds attorneys that they have an ethical obligation to understand technology, including artificial intelligence. This means that lawyers cannot just decide not to use it because they're concerned about many of the risks. They have to actually learn how to incorporate it into their practice.

This also requires that lawyers stay informed and up to date on rapidly evolving AI technology and the different tools that are available. Rule 1.1 can also be an issue for attorneys who incorporate AI without properly confirming that the case citations in briefs submitted to the court, for instance, are correct.

There was a recent case out of the Southern District of New York, Mata v. Avianca, Inc. The judge in that case sanctioned the attorney for using AI in a brief without checking the citations. The AI tool the lawyer used had created fictitious citations, and the court found that using that AI without confirming its output violated a number of rules, including the duty of candor toward the tribunal.

So this is a very important topic that lawyers need to take seriously: both incorporating AI and understanding the checks and balances they must build into their practice to ensure that the information is correct.

ICLE: What are three key takeaways from the rules?

Andrew: Rule 1.3—diligence. Lawyers are responsible for the documents that they draft, and they cannot solely rely on AI tools and programs when submitting documents to the court or to their client.

Rule 1.6—confidentiality. Lawyers may input only nonconfidential information into AI tools; if they do need to submit confidential information, they must inform their client and obtain consent before doing so.

Rule 1.4—the duty of communication. This requires lawyers to exercise independent professional judgment even when using AI tools. This means that lawyers cannot simply use an AI tool or program to provide legal advice or to give a client an answer to a legal issue.

ICLE: What are some trends we are seeing?

Vanessa: Practically speaking, we are seeing law firms and lawyers using AI for tasks such as document review, brief writing, and contract drafting. But there's the fear of hallucinations, the fabricated facts that AI generates.

So there's a tug of war between those ethical obligations and the technology. That's why only about 10 percent of law firms have explicit AI policies: they know it's here, and they know they need to implement policies, but everyone is being very cautious, which is typical of the legal industry.

From a trend perspective, we're seeing law schools focus more on AI in their curricula, including data science, ethics, and other technology-law considerations, to adapt to this growing practice and usage. Courtrooms are also looking at AI-generated evidence and what the evidentiary standards for admitting it will be. So AI is a valuable tool. It's being implemented, but it's going to need more regulation and more consideration of the risks for lawyers.


“Lawyers cannot just decide not to use it because they're concerned about many of the risks. They have to actually learn how to incorporate it into their practice.”