Just askin’: can the robot pass a lie detector test?
A New York lawyer is in hot water for using “Artificial Intelligence,” aka ChatGPT, to write the briefs he submitted to the court. The robot cited more than a dozen other cases. The problem was… those cases were fictitious (https://www.breitbart.com/politics/2023/05/29/sweet-little-ai-lies-new-york-lawyer-faces-sanctions-after-using-chatgpt-to-write-brief-filled-with-fake-citations/). They never existed.
Uh-oh.
I don’t know how the robot just makes up stuff like that, or why anybody would need a robot to do it for him. We used to call it “lying.” The lawyer says he was “unaware of the possibility” that AI might cite fictitious sources. (I did that once, for a high school term paper. All by myself, without a robot.) Well, boyo, the judge sounds like he means to make you very keenly aware of that possibility.
How badly do we need an Artificial Intelligence that lies to us? Our politicians and educators need absolutely no help in that department. That goes for a lot of our scientists, too.