Today's New York Times has an interesting article on the near-future possibility of autonomous robots on the battlefield. Part of the appeal is that robots would be hardwired with rules of engagement and would avoid things like emotionally fueled war crimes. This part in particular caught my eye:
In a report to the Army last year, Dr. Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, no tendency to lash out in fear. They can be built without anger or recklessness, Dr. Arkin wrote, and they can be made invulnerable to what he called "the psychological problem of 'scenario fulfillment,'" which causes people to absorb new information more easily if it agrees with their pre-existing ideas.

Coincidentally, I'm currently reading (well, listening to) The Forever War, by Joe Haldeman. It's a Hugo/Nebula award winner (although it's showing its age now) about a soldier who fights a war on far-off planets while, thanks to the magic of time dilation, the Earth he's fighting for undergoes fundamental changes.
In the part I listened to on the way home from work yesterday, the narrator explains why, several hundred years in the future, robots did not prove useful as ground troops: precisely because they lack a self-preservation instinct. That would seem counterintuitive, but maybe that will be the outcome after all?