Tag Archives: Asimov’s 3 Laws of Robotics

The Tech Fair Horror! Robot Attacks, Injures Man

Robots v Humans: AI machine ‘attacks’ visitor at Chinese tech fair (PHOTOS)

Oh, we shoulda listened to Isaac Asimov! Instead, it’s going all Terminator on us.

At the China International Hi-Tech Fair recently, a robot that was supposed to be an “educational tool” launched itself through a glass display case and made a frenzied attack on an innocent bystander, who was slightly injured (https://www.rt.com/viral/367426-robot-attack-china-technology/). We are unable to confirm reports that the robot growled “Die, human, die!”

I dunno, it doesn’t look so fierce to me. It looks kinda like R2D2 from Star Wars. Which reminds me–What does R2D2 take when he has a cold? Robotussin! But I digress.

It should be pointed out that some killjoy who doesn’t want us to have any fun with the nooze says the robot crashed through the display case because somebody mixed up the “forward” and “back” buttons, hit the one when he should’ve hit the other. We would rather read that the robot’s Artificial Intelligence took it upon itself to add a blood-lust program. “He must’ve programmed himself to do that!” Just because those immortal words originated in Godzilla vs. Megalon doesn’t mean they aren’t true.

Just to be on the safe side, steer clear of hi-tech fairs and bring no robots into your home. ‘Cause you never know when they might program themselves to be smarter than you and take away your stuff.


‘Robots as Persons’–Is This for Real?

[Image: cover of Isaac Asimov's I, Robot]

The European Parliament has approved a draft proposal to grant “legal rights” to robots as “electronic persons” ( https://www.rt.com/viral/373450-robot-kill-switches-status/ ). It will also include “obligations” for robots to “make good any damage they may cause”–an obligation, by the way, which does not seem to apply to “asylum seekers” in Germany and France who have done a lot of damage which is not made good. But I digress.

Are they jiving us? I’m having a hard time believing this story. Take this quote, for instance:

“AI [Artificial Intelligence] developers will have to ensure their creations follow a set of rules that prohibit them from harming or allowing a human to come to harm through their inaction. AI can protect their own existence under the rules, if this does not harm any humans.” And just in case, it is proposed that a “kill switch” be added so that “any rogue robots can be turned off easily.”

Hold on a minute, there! Isn’t this “Asimov’s Three Laws of Robotics”? In his 1942 short story, “Runaround,” science fiction great Isaac Asimov proposed “three laws,” which he applied to his stories about robots from then on and which were adopted by many other science fiction writers. I grew up reading those stories and novels, and I know those “laws” by heart.

One) A robot must not, by any action or inaction, cause or permit any human being to come to harm.

Two) A robot must obey all commands given to it by any human being, unless that would conflict with the First Law.

Three) A robot may do whatever it needs to do to protect its own existence, except when that would conflict with either the First or Second Law.
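For what it's worth, the priority ordering in those three laws is simple enough to scribble down in a few lines of code. Here's a rough, purely hypothetical Python sketch (the names, like Action and robot_may_act, are made up by me, not by Asimov or the EU) just to show how the First Law outranks the Second, the Second outranks the Third, and the proposed "kill switch" overrides the lot.

```python
from dataclasses import dataclass

@dataclass
class Action:
    would_harm_human: bool   # First Law territory: never permitted
    ordered_by_human: bool   # Second Law: obey, unless it breaks the First
    protects_self: bool      # Third Law: lowest priority

def robot_may_act(action: Action, kill_switch_engaged: bool = False) -> bool:
    """Return True if the action is allowed under the three-law ordering."""
    if kill_switch_engaged:
        return False          # the EU's "rogue robot" override trumps everything
    if action.would_harm_human:
        return False          # First Law outranks all other considerations
    if action.ordered_by_human:
        return True           # Second Law: obey human commands
    if action.protects_self:
        return True           # Third Law: self-preservation comes last
    return False              # otherwise, do nothing

# An order to harm someone is refused; plain self-preservation is allowed.
print(robot_may_act(Action(True, True, False)))    # False
print(robot_may_act(Action(False, False, True)))   # True
```

Of course, the sketch leaves out the hard part: deciding what actually counts as "harming a human being" in the first place. That check is the whole ballgame, and nobody has written it yet.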

Some of the robots in Asimov's fiction, including a few in the I, Robot stories, were almost indistinguishable from real human beings. The EU seems to be concerned that people may wind up confusing a robot's simulation of human drives and emotions with the real thing. That could get kind of sticky.

And so the secular humanist God-playing project goes on and on, from one folly to the next. It is he that hath made us, and not we ourselves; we are his people, the sheep of his pasture–that’s what the Bible says (Psalm 100:3), and the secular whoopee crowd has a real problem with it.

As for me, I don’t see how people who don’t have all that much intelligence themselves can be so confident in their ability to create artificial intelligence in electronic persons.

Artificial Stupidity–yeah, I think they can manage that.

These people are very seriously deluded.


Robots Being Designed ‘to Hunt Prey’

[Image: The Most Dangerous Game (1932)]

In the 1932 movie classic, The Most Dangerous Game, a homicidal madman gets his thrills hunting human beings.

In a little science project underway at the University of Zurich, scientists–heh-heh–are trying to design robots “to hunt prey” ( https://www.engadget.com/2016/07/05/robots-hunt-prey/ ).

Oh, they assure us that this new technology will only be used for thoroughly benign and constructive purposes, while at the same time really souping up our knowledge of robotics. Do you believe that? I don’t.

Imagine a gaggle of super-rich Davos types getting together to see whose robot will be the first to pounce on a Climate Change denier.

What does that say about our times, that this is not at all difficult to imagine?

For the time being, let’s take a little peek back into the history of science fiction: Isaac Asimov’s “Three Laws of Robotics,” which for many years set the standard for robot stories.

One) A robot must not, through any action or inaction, allow a human being to come to harm.

Two) A robot must obey any and all commands given to it by a human being, except where such commands would conflict with the First Law.

Three) A robot must do whatever is necessary to protect its own existence, except where doing so would conflict with either of the first two Laws.

I don’t think they’re gonna build those laws into the system–do you?

