Justice and computers–it won’t work
We had a case just like this one in New York a few days ago. There’ll probably be more of them–because lawyers are as lazy and shifty as anybody else, and they’d rather let a robot prepare court briefs than do it themselves.
A Texas judge has banned the use of “Artificial Intelligence,” aka AI, in preparing briefs for court cases (https://www.cbsnews.com/news/texas-judge-bans-chatgpt-court-filing/). No more ChatGPT, says Judge Brantley Starr.
Why not? Because, he explains, the robot can and does “invent facts”–invented facts aren’t facts at all–and has “a tendency to hallucinate.” Oh, spiffy. Imagine losing a major lawsuit, or even going to prison, because some machine invented “facts” against you. I wonder if that’s happened yet. “AI” is coming on too fast to be monitored. We’ve heard lots of warnings about that, including some from the same tech wizards who developed AI in the first place.
And anyway, do we really need robots to help us lie? We’ve always done a fabulous job of that ourselves.