It’s Law – Episode 11: The Rise of AI in Legal Practice Feb 8, 2024




Hello, and welcome to FR Law Group’s podcast. Thanks for joining us today. My name is Rita Gara. I’m an Attorney with FR Law Group, and I’m joined by my fellow Attorney, Danielle Olson. I’m really excited about our topic today. It’s certainly a hot topic in the news, trending in areas like the law and lots of entertainment areas. We are going to talk about Artificial Intelligence and issues arising out of the use of AI. I’m super excited to talk about it. I think there’s a lot going on, and I feel like a lot of people have a lot more knowledge than I do about this. It’s something that is very relevant today. So why don’t we start from the basics? What are we talking about when we talk about artificial intelligence, or AI? Yes,



so Artificial Intelligence refers to the simulation of human intelligence in machines: machine learning, machines that are capable of thinking like human beings. There are different types of artificial intelligence, and I think the two major categories right now are Narrow Artificial Intelligence and General Artificial Intelligence. Narrow artificial intelligence is referred to as the weaker form of artificial intelligence. Essentially, what it means is that there are a lot more constraints on what the artificial intelligence can do. All of the artificial intelligence that we see today, as far as I know, is narrow artificial intelligence, meaning that there’s some type of constraint on what the artificial intelligence is allowed to do. General intelligence, in theory, allows the machine to think and have the same capabilities as a human, and to understand and apply knowledge in different contexts autonomously.



I think that’s the area that brings up your sci-fi movies like I, Robot that people are really interested in, in terms of the ability to learn and outpace the learning of humans in some aspects. That’s super interesting. And there are definitely



companies out there that are testing the boundaries of how far they want to go while allowing themselves to pull back on the AI. You know, that’s a topic that we’ll go into: how much are we able to control artificial intelligence, and to what extent are we going to control artificial intelligence?



Yeah, I do want to get into that. But maybe before we head in that direction, I think there’s a term that I know I’ve heard a bunch, and I’m sure most people have heard, and maybe we can talk about what it means: ChatGPT. What is ChatGPT, and what is it used for? Sure,



so ChatGPT is a form of generative AI, which is another distinction: general AI versus generative AI. Generative AI, as the name suggests, means that it generates responses. If you ask ChatGPT what it is, its definition is that generative AI refers to a subset of artificial intelligence technologies that can generate new content, whether it be text, images, music, or other forms of media, that is similar to the content it has been trained on but not identical. So it’s still within that narrow AI category because there are still constraints on what it can create.



And if I understand correctly, and correct me if I don’t, there are different versions of ChatGPT, right? That’s



correct. And part of that, I think, has to do with how much data these programs can hold. Right now, there are at least two different versions: ChatGPT 3.5 and ChatGPT 4.0. ChatGPT 3.5 is free for users, so you can go look up OpenAI’s ChatGPT site right now, plug in your email address, and start using ChatGPT. It’s a basic service. You know, it’ll generate responses to questions that you ask, almost like a search engine such as Google or Bing, and it can draft things for you and come up with new ideas, but it is limited in its capabilities. ChatGPT 4.0 is the newer generation of ChatGPT. It is a subscription service; I think right now it’s about $20 per user. And it allows you to do much more than the ChatGPT 3.5 version. You can actually ask it to create images, so it will draw something for you, and sometimes they can be pretty complex computer-generated images. It also is able to handle much more complex ideas and queries, and it will respond with much more complex and longer responses.



So you’ve raised a ton of things that I think are super interesting and that I’ve heard about in terms of issues: the ability to draw, the user issues. I want to talk about them in general, because I think it’s super interesting, but also about how, as lawyers, this comes into play in what we do and the issues we’ve seen in many headlines. So maybe we can talk about the rise of AI, the use of it in legal practice, and the areas that it impacts. We’ve seen it with respect to drafting, and we’ll get into detail, but I think it touches on a lot of areas related to the practice of law that an attorney needs to be aware of, or would want to be aware of, both useful things and negative things. Okay. So for instance, let’s take billing. How does AI potentially affect billing in a legal practice? Sure,



In the future, it might be something that law firms use in their own billing systems. I haven’t heard of that happening yet. But what I have heard of happening is that companies that hire the law firms will use AI in receiving the bills from the attorneys, and then will use AI to go through the hours billed to make sure that, you know, they weren’t overbilled, or if they have questions about, you know, why was this billed, what was worked on here, why was it this many



hours? Duplicate billing?



Absolutely, yes, it will review all of that for the company. And the idea is to mitigate costs. And



I would imagine. You know, we’ve taken on some construction cases where the billing gets into the six figures and involves numerous time entries spanning multiple years. I can see where that would be such a useful tool, in that, instead of having somebody go through hundreds of pages with a magnifying glass, reading every entry, that can probably be done really quickly.



Absolutely. And so billing is one area where we’ve seen AI used in the law. Another area would be drafting contracts, even revising contracts. You have to be careful about that, though, and I’ll get into this more, because of the information you put into ChatGPT. It is like giving information to a third party. So, you know, if you have questions about a clause within a contract, or want ideas on how to write one, or other language you could use, that might be a possibility, but you want to be really careful about putting in party information or specific company information. You always want to remember that you’re giving that information to a third party, and that information could potentially be packaged and sold to another party. You’re



going to have issues, I’m just thinking in terms of the legal issues that would derive from that. You’re going to have potential issues with privilege if you’re sharing with a program, and you’re going to have issues with how secure that program is, and whether you can understand how secure it is; hacking, you know, is going to be an issue relevant to that. Are they storing the stuff that we’re putting in there? And then I would imagine also, I know in a lot of cases we deal with confidentiality stamps on documents the parties have agreed to protect, whether it’s trade secrets or other private business information, and that’s going to have an effect. If there’s a confidentiality order filed with the court, are you going to have to file a motion to get permission to use these things? These are all things that are going to have to be figured out and argued about as we go down this road, I would imagine. What about the idea of brainstorming, for example? How is it useful? Is it useful? What are ways that an attorney can utilize it in that regard?



Yeah, so one way an attorney might use it for brainstorming purposes might be, let’s say you have filed a complaint, there is a counterclaim that’s been filed, and you want to present arguments to attack the counterclaim and its arguments. You might put in one of the arguments from the counterclaim, and ChatGPT might be able to generate ideas on ways that you can attack that argument. Some of them are based on the law, but some of them might just be brainstorming, common-sense ideas, like “this doesn’t make sense.” And how you want to work with that is you would go to a legal database like LexisNexis or Westlaw and find cases that support the arguments that you’re looking for. Or, if you’re trying to think of a way you may want to draft a certain clause within a contract, and there’s a certain requirement or need that the client wants within that contract, it may give you ideas on how you might be able to draft that in a way that fits the client’s objectives. That could be another way. Now, again, it’s not a source where you want to copy and paste. I think you want to use it for ideas or strategies, but as far as using it as the end-all-be-all, it’s not there yet. Also, it doesn’t have the same capabilities an attorney has in understanding the client’s needs and having that face-to-face interaction. And it doesn’t always give accurate information or accurate legal information.



We’re going to talk about that a little further on, and we’ve certainly seen the headlines where people have gotten in trouble for that. I want to talk about that because I think it has the potential for some serious abuse; you can find your license in a little bit of trouble, and the court is not going to care that you say, “I was relying on it.” Yeah. What about things like discovery review, one of the tasks that can consume a lot of an attorney’s time and the client’s money, especially in a case like the ones we have with, again, six figures in terms of the number of documents that have to be reviewed? Is that something where AI can ease the burden?



I think so. We haven’t seen it as much yet; it’s still very new, and it’s developing. But what we are seeing is that some of the discovery platforms that law firms use are starting to incorporate AI, which will allow both the discovery platform and the law firm to sift through material using search terms much faster. So, in the cases, for example, where we have over 500,000 documents that we have to go through, I can see the potential benefit of using AI in the discovery process by being able to review discovery much faster, which will save time and money for the client. And in fact, FR Law Group may be implementing AI into its discovery process even as we speak. We’re looking into that right now.



And that does bring up an interesting point. I don’t know if we’re going to touch on this later, but I’ll mention it now, and we can circle back if there’s more to discuss. It is interesting, in thinking about the use of it, and if it’s saving your client money: will there come a point where there’s a professional obligation to utilize these things? Because I can imagine the difference between having somebody go through each document versus... I mean, we’ve kind of already seen that with electronic discovery services. But I would imagine at some point there could be an obligation, where if the difference in a bill between somebody who’s using AI and somebody who’s not is, you know, tens of thousands of dollars, I imagine that’s going to be an issue. Absolutely.



I think it is still so new that there aren’t answers to these questions yet. So the legal world right now is trying to figure out what the best use of AI is, and where we are going to put regulations on this. You know, when is it considered an abuse of AI?



Which obviously raises the specter of using it for motion writing and argument writing. Before we get into some of the abuses we’ve seen and some of the pitfalls of relying on it at this point, going back to what you were talking about with respect to its use for brainstorming: Google is something that we all use outside of the legal field to get started somewhere or to point us in the right direction, and this reminds me of that, as a starting point for brainstorming. I could see where that would be really helpful, if you’re using the checks and balances that we’ll talk about. But what about its use in terms of, you know, summarizing cases, or using it when you’re trying to synthesize a bunch of research that you’ve done? Yeah,



absolutely. I think its best use right now within the legal field is summarizing information. It’s very good because it’s all about input and output, right? Garbage in equals garbage out. Whatever you put into it, it’s going to be able to conceptualize much better, I think, than it would just coming up with its own ideas. It’s able to do that, but it’s a work in progress. So if you give it a specific case and then ask it to synthesize it, it will be able to summarize the legalese and put it into plain language that’s easier for the layperson to understand. You do have to be careful about this, though, because there have been instances where it has interpreted case law incorrectly. I mean, that’s why lawyers go to law school: they learn how to interpret the law, and there are still arguments about how the law is interpreted. That’s why there are two sides to a case. So it’s not always clear-cut, and there have been instances where AI has interpreted a case incorrectly. So you always have to cross-reference: check with your attorney or lawyers, check with a legal database, and, you know, maybe consult with other attorneys on their interpretation. But as far as summarizing information goes, for the most part, it’s pretty good at what it does. Yeah.



And a really good beginning, yes, but not the end of the story. All right. So, obviously, we have kind of touched on this: it’s certainly already raised, and will continue to raise, ethical considerations. So let’s go through some of the ways we’ve seen this rear its ugly head already and some things to think about for the future. What are some of the ethical issues in practicing law and using AI? Sure, I



think the first one that comes to mind is confidentiality. You want to be very careful, because if you are going to use ChatGPT, you’re still giving information to a third party, and you still have confidentiality and ethical requirements and obligations. So you may not want to provide party information, as mentioned earlier, or specific case information. Using it for generalities, or for legal research, or to get ideas, or even to check grammar, you could do that. But you do have to be careful about the information that you put in there, because you have to understand that you’re giving it to a third party. Another ethical consideration would be misinformation. There are federal rules and ethical obligations to not lie and to not provide incorrect information, to the best of your ability, and to make sure that the information you’re providing is based on something verifiable.



And “the best of your ability” isn’t going to be appearing before a judge and saying, “Well, I pulled this from ChatGPT and didn’t bother to check a single quote or case citation.” Absolutely. That’s not going to cut it. Absolutely.



And we’ve seen that happen in a case in the Southern District of New York. There was an attorney who used ChatGPT to find cases for his brief. There were six cases in that brief that were incorrect, and when the court asked about it, saying, “Hey, we can’t find these cases. Can you please provide us more information on these cases?”, the attorney went back to ChatGPT and asked ChatGPT to find the cases. ChatGPT then created excerpts of cases that were fake, that didn’t exist. What it was doing there was hallucinating, and we can talk more about what that means. But the attorneys essentially doubled down on their ChatGPT research and presented those excerpts to the court. Well, the court did its own verification. Verification, yes. And it found that these cases did not exist, and those attorneys were sanctioned and fined $5,000.



I think that’s such an important point. You know, as much as the facts of that seem crazy, that somebody would not verify and then, as you said, double down and use the same thing that got him into the problem in the first place to verify his work. Well, not his work. And that is interesting, because I think there are certainly unethical people in every profession, and sometimes it’s also just a matter of, you know, we all know what it’s like to be really busy and have deadlines, and you just think, “Oh, this is a crutch I can use to save time and get this work out,” and you have to be really careful. I think it also points to where AI is at this point: it’s this huge new fun toy, but it has serious limitations that you really have to be aware of if you’re going to use it in any capacity. Absolutely. And



I think it’s a balancing act for lawyers and for law firms right now, because you also don’t want to be left in the dust, and you don’t want not to adapt. Because I think at this point, AI is inevitable. It’s not going away unless something drastic happens. I mean, imagine a law firm that didn’t use Google, or imagine a law firm that only used books to do all of its legal research, or only reviewed physical documents to do its discovery; it would just set that law firm back ages. And so I think you have to adapt with the times, but you have to be very careful and cautious as you’re moving forward. And I think moving forward slowly is how lawyers are successfully utilizing this tool.



And I think there are already a number of good resources for lawyers who are trying to educate themselves beyond Google, especially within the realm of legal practice: CLEs. Look for those CLEs that are out there; I know there are a number of them, to start getting yourself educated. I do think it’s an interesting point, and it goes back to what we discussed before with billing and the savings to your client. When you talk about not getting left in the dust, it is kind of like moving forward with electronic evidence, presenting evidence on an overhead screen instead of in physical form, doing Zoom hearings instead of in person. I mean, we are in the technological age, and AI is here to stay in one form or another. You mentioned hallucinations. This is such a fascinating thing. Let’s talk about hallucinations. Yes.



So, again, going back to ChatGPT, hallucinations refer to instances where the model generates incorrect, misleading, or entirely fabricated information. There was a very interesting study recently done by Stanford, where I think they ran 200,000 legal questions on OpenAI’s ChatGPT 3.5, Google’s PaLM 2, and Meta’s Llama 2, all general-purpose models not built for specific legal use, but they are large language models. And they found that these large language models hallucinate at least 75% of the time when answering questions about a court’s core ruling. That’s



a terrible number. Yeah, awful, awful. It’s not a workable number. Yes.



So, it definitely goes to show the risk of using AI and how it can give you incorrect, misleading, or fabricated information. So definitely don’t rely 100% on AI. But, you know, having this information and having this study done, I think it’s going to give the developers of AI a starting point on how they can improve the technology as well. Even what we’re hearing about AI right now is just the magnitude and the significance of how quickly it can learn and adapt, obviously much faster than a human brain. So, you know, who’s to say that in a year these types of studies won’t show a much smaller rate of error?



Yeah, yeah. It’s frightening. But I think that is an interesting point: as with any kind of new technology, the developers are going to have to learn from the person who has the unfortunate experience of relying on a hallucination in court. And I



think, you know, an interesting question here is, if somebody is harmed by the hallucination of AI, who’s held accountable? Yeah. Because right now, we don’t, you know, as far as I know, there’s no case law out there that provides damages for someone as a remedy if AI creates something that ends up being false. So is it the developers that are held accountable? And if so, that will, I think, incentivize those developers to have, you know, clear constraints on what AI can and cannot do.



You know, this is just a side note, not necessarily related to the law, but it reminds me of something. When you talk about who’s to be held responsible: it was interesting, I think many of us watched the writers’ strike that took a long time to resolve, and one of the issues that came up there was the use of a person’s image or identity. And there are so many issues there, whether it’s being compensated for that, or the right to use it, or, you know, is there an endless ability to use that image? Go ahead. Yeah.



And it will be really interesting, especially within entertainment law, because AI is essentially compiling all of the information that’s input into it. Copyright, you know, and intellectual property law: how is that going to be applied when people use AI-generated information in the material that they put out into the world? I think that was one of the concerns of the strike: are we going to use real writers who create content out of their imagination? There already are restrictions in place against them copying other people’s material and producing it as their own. To what extent is AI using other people’s material to produce its ideas? Yeah. How protected



are our original ideas from ChatGPT? Exactly. And so, yeah, I think it’s overwhelming, honestly, when you think about the amount of litigation that’s coming and the laws that will have to be put on the books as this all continues to develop. But I think we’re all familiar with some pretty big headlines where AI has gone wrong for businesses. Let’s talk about some of those examples. Yes.



So in 2016, Microsoft came out with an AI chatbot called Tay. It was designed to develop conversational understanding by interacting with humans through Twitter. Essentially, it would take other people’s posts and communicate with them to learn how to communicate with other people.



if you’re learning how to communicate from Twitter.



Not exactly a sterile environment, yes. And Microsoft had to shut it down after one day of use, because the chatbot started saying racist, sexist, and offensive things on the Twitter platform. Yeah. So that was one example of where AI went wrong for Microsoft. Another example: in 2021, Zillow, the online real estate marketplace, announced to shareholders that it was going to wind down its Zillow Offers operations and cut 25% of the company’s workforce, which was about 2,000 employees. The reason for the layoffs was the error rate of the machine-learning algorithm it had used to predict home prices. Zillow said that the algorithm had led it to unintentionally purchase homes at higher prices than its current estimates of future selling prices, and this resulted in a $304 million inventory write-down in the third quarter. Wow. And



that’s, again, similar to the attorney who doesn’t double-check or verify. You wonder what kind of processes they went through, or did they just rely on, you know, the algorithm? Right, the



calculation? That’s a significant business loss.



Right. And the case where the attorneys were fined $5,000, I think it was a federal tort case. You know, I don’t know how significant the damage was to their client; I think they ultimately did end up losing that one. But on these high-profile cases, if somebody makes a mistake, to what extent could the damages be because of somebody using AI? And



yeah, you’re talking about damage to a business that you’ve self-inflicted, you’re talking about damage to a client when you have relied on law that isn’t sound, or law that doesn’t even actually exist, and it’s damaging to your reputation. I think that’s what we’re seeing too, whether it’s in the context of your reputation as an attorney making, like, quick decisions, or your reputation as a business. You know, if you are making decisions based on something that



doesn’t have a good outcome, it can have a significant impact.



What about something that I think kind of captured everybody’s imagination: deepfakes? What is a deepfake? Yeah,



deepfakes are fake images of real people. And, oh, it’s happening.



I remember the first one I think I saw. There was a Tom Cruise deepfake, and you couldn’t tell that it, you know, wasn’t authentic. It was as if you were looking at the actual person. Right,



and I think a recent example was Taylor Swift. Yeah, there was an image of her that went viral that was explicit, obviously something that she didn’t want out there in public, and it harmed her. But it wasn’t her. No, it wasn’t. It was a deepfake image of her that AI had generated, and it went viral on X’s platform. It took 19 hours before they were able to take that image down, even though it went against the platform’s social media policy. Eventually, the account was suspended, but it’s definitely going to lead, I think, to some litigation as far as who’s responsible for that image getting out there into the world and being allowed to stay up for 19 hours. Yeah, I think it



implicates a lot of different areas. We just saw, if you watched, the heads of some of our big social media companies called to task on their lack of, I suppose, protections within their platforms for protecting children. But I think this is another area: what kind of responsibilities are businesses going to have, whether you’re talking about a giant company like Meta or Twitter or TikTok, or a local business that hires somebody to create content for them, and they willingly use a deepfake? It reminds me of when identity theft crime first started, and people were pulling their credit reports and seeing a mortgage that they hadn’t taken out or a credit card that they had never opened. It seems to me that the potential for identity theft on another level is going to be there, and I wonder how hard it will be to get to the bottom of it, prevent it, prosecute it, because it’s all done behind the screen. So that’s going to be, I think, a headache for years to come as well.



Absolutely. I think the only way we even know how to understand more about this is by asking questions of the developers of artificial intelligence and saying, you know, what information? What constraints are you putting on artificial intelligence? And they’re gonna have to track all of that and be able to give their reasons for why they did or did not put a constraint into AI that potentially caused harm to another person. I think it’s, you know, we’re



talking a lot about images, and one of the things I’ve seen discussed in terms of scams, you know, “don’t fall for the scam,” is that they’re taking your voice as well. So you have this potential to take somebody’s physical image that looks perfectly accurate, and then they can get just a clip of your voice from answering a phone call and attach that to it. It’s going to be really difficult to catch some of these. And I think, in terms of protecting yourself, hiring an attorney to address some of these issues makes sense if you are in any sort of entertainment or social media field, or if you use social media for your business. Are you doing a podcast? Are you advertising? Do you have an Instagram page to advertise your business? And if you are hiring somebody to do work for you, do you understand what they’re using, how they’re creating their images, how they’re creating, you know, the tagline they’re going to use for you? And what are the limitations if you’re, say, a family-owned business and they’re going to use your photographs or images or likenesses? What does that contract say about the limitations on the use and who owns the image? It’s a minefield of legal issues, but it’s something that you really want to stay on top of. You know, it’s interesting: we’ve talked about the ways it’s gone wrong for businesses, and we’ve already talked about a couple of examples of how the use of AI has gone wrong for lawyers, but I know there are a couple of other big ones out there. So let’s get into some of the other stories.



Sure. I mentioned the one in the Southern District of New York; that’s the most famous one right now. There’s another recent one, and it just goes to demonstrate why it’s important for lawyers not only to check the work that they do on ChatGPT but also to verify the information that they get from their clients. In another sort of famous example, Michael Cohen gave his lawyer bogus legal citations concocted by the AI program Google Bard. The fictitious citations were used by the lawyer in a motion submitted to a federal judge. Mr. Cohen is quoted as saying he had not realized that the lawyer filing the motion on his behalf, David Schwartz, would drop the cases into his submission wholesale without even confirming that they existed. And Michael Cohen’s lawyer acknowledged that he had not independently reviewed the cases, and he ended up having to get his own lawyer. So that goes to show that even with information you get from your clients, whether they’re sophisticated clients or not, you always need to verify those cases in what have proven to be sources that can be relied on by attorneys, like LexisNexis or Westlaw, or, if your bar offers some type of option to look up cases within its program, you can use something like that. But don’t rely blindly on information; not only with case law but with any type of information, make sure to verify your sources and make sure that you know where the information is coming from and that it is reliable.



And I think, again, it goes back to a theme that has to be playing through your head at all times when you’re using this stuff: at this point anyway, in the legal field, it’s a starting point, it shouldn’t be your end point. It’s



a tool in your toolbox of everything that you have out there. But it definitely should not solely be relied on.



And I think we’ve kind of touched on this in terms of what governing bodies are doing to regulate this kind of stuff, if anything at all. I think what we’ve seen with the use of a lot of technology is that the technology just moves at a faster rate than the law can catch up, so there’s inevitably going to be a lag. Is that the kind of thing you’ve seen? Yeah, I



think so. I think that right now, it seems like what the governing bodies are trying to wrap their heads around and figure out how to regulate is social media in general. I mean, while AI has been discussed, I don’t think there has been much legislation to regulate it, because it’s so new that people don’t know what they don’t know. And that’s probably how it’s going to be for a while. You know, right now there are some stronger regulations coming out of European countries as far as protections on social media: who can post, what can be posted, how it can be posted, can people use filters, and if they do use filters, do they need to let the public know? These kinds of things are being talked about, and legislation is being drafted and worked on in different governing bodies. As far as AI, the biggest example out there right now, I think, was the writers’ strike in Hollywood, where they were able to negotiate, and there actually is some language within the contract that was negotiated that discusses AI. I think that was the first example where that had happened. But as far as governing bodies having regulations on this, it’s just not there yet. So I think right now what companies can do is put it in their corporate policies, as far as how they want their employees or their company utilizing AI and what they’re comfortable with. And law firms, I think it’s going to be a topic of discussion for them as far as how much they implement AI into their discovery process, or putting that in their contracts. It’s protection. Protections, exactly. But it’s still just so new. There’s



more to come. There’ll be a part two for sure. So I think maybe to wind things down and sum up, it’s just good to go over, from the perspective of a law practice, how AI can be good for your client. Yeah,



I think that’s a great idea. So, you know, despite instances where AI has gone wrong, there are so many examples of how AI is benefiting every industry, but specifically within the legal industry, I think it’s going to be most useful, as we talked about earlier, in the discovery process. If we can sift through discovery much faster than we had been able to at solely the speed at which a human can work, I think that will be better for the client, because it will save them time and it will save them money. The whole goal would be to mitigate costs for the client. Yeah. And then, as far as the client’s own information, they can use ChatGPT as a tool or a starting point to understand maybe some of the complex legal issues that are being discussed within their own case. And they can present their questions maybe in a much more sophisticated way to their attorneys to get the best answers that they’re looking for. So it can be very helpful in synthesizing legalese, putting it into plain terms, so that it’s a language that everybody can speak together. Yeah,



that’s great. Well, I think, as you kind of discussed, there’s gonna be a lot more to come on this issue. We’re gonna see a lot more headlines, I have a feeling. But that’s it for today. Thank you for joining us, and we’ll see you next time.