‘AI’ Dumps Lawyer in the Soup

Why Do A.I. Chatbots Tell Lies and Act Weird? Look in the Mirror. - The New York Times

Just askin’–can the robot pass a lie detector test?

A New York lawyer is in hot water for using “Artificial Intelligence,” aka ChatGPT, to write the briefs he would submit to the court. The robot referred to over a dozen other cases. The problem was… all those cases were fictitious (https://www.breitbart.com/politics/2023/05/29/sweet-little-ai-lies-new-york-lawyer-faces-sanctions-after-using-chatgpt-to-write-brief-filled-with-fake-citations/). They never existed.

Uh-oh.

I don’t know how the robot just makes up stuff like that, or why anybody would need a robot to do it. We used to call it “lying.” The lawyer says he was “unaware of the possibility” that AI might be citing fictitious sources. (I did that once, for a high school term paper. All by myself, without a robot.) Well, boyo, the judge sounds like he means to make you very keenly aware of that.

How badly do we need an Artificial Intelligence that lies to us? Our politicians and educators need absolutely no help in that department. That goes for a lot of our scientists, too.

13 comments on “‘AI’ Dumps Lawyer in the Soup”

  1. A time of incredible folly. A lawyer who uses ChatGPT is, in my humble opinion, worthless.

    1. Something tells me that ChatGPT will become far less fascinating as time goes by.

  2. I do not see a really big problem with using AI to find court cases or even help write a brief. However, I would take the time to review those it finds, and any other references, just to make sure they did not come from la la land.

    A few months ago, a friend took the book manuscript I was working on and, using a new AI program, had it write a few more pages. Drawing on its ability to search internet archives of thousands of books, along with what I had already written, it produced a very interesting few more pages that were well-written, followed the thrust, ideas, and intent of my book, and really seemed written by a human being. I was impressed.

  3. I would really like to be able to edit my posts on this blog after I post them, for sometimes I see mistakes, or other things I would like to change. Thus, I would like to add something to my previous comments. It seems the AI program used by the lawyer was just writing a story, just like it had for my book. It had no concept of facts, or lies, or truth. Not knowing all the facts, I am assuming the error was in its programming. It seems it wasn’t programmed to reference actual court cases.

    Nevertheless, it is amazing how much progress AI has made, and what it has become, within the last few years. What could go wrong with this fast-paced progress of AI? I still remember “Westworld,” the 1973 science fiction Western film starring Yul Brynner as an android gunslinger in an amusement park. “Where nothing, nothing can possibly go wrong.”

    1. I almost never edit readers’ comments, lest I put words in their mouths; but I can make any changes you ask me to make.

  4. Thanks, but you do not need any more work to do. And I think the readers forgive any mistakes I make; and if there was something else I wanted to say and forgot, I will do just as I have with my comments on this post. And thus far with my comments, I don’t think I have made such a horrendous error or comment that it must be corrected or changed…so far, so good…

    And yes, I agree, beware.

    1. I don’t know of a way a reader can edit his own comments. It’s like they’re carved in stone. But I thank you for not giving me more work to catch up on.

  5. AI depends on who is feeding the data into the computer. Christians need to jump into this technology to compete with the humanists’ version – but then that is true of all of society’s institutions. The body of Christ has her work cut out for her.
