Monday, March 7, 2011

Control the bad actors


A few days ago, my son introduced me to the online game "Moonbase Alpha". I was struck by its similarity to Second Life. In fact, it looked like a primitive ancestor.
I examined the Second Life scripting language to see if I could develop a similar game to run under it. I found that such a development would be straightforward. Unfortunately, when anything like Second Life is established, bad actors appear as quickly as fruit flies on rotten fruit on a hot summer day. An example occurred at the beginning of the course, when Dr. Calongne faced a hacker attack on Acheron. She responded by making the island private, which solved her immediate problem at the cost of making it unavailable to good people as well as evildoers.
I then started speculating on improvements to Second Life that would protect against bad actors. The question is: how do you do that?
I could create scenarios designed to entrap specific kinds of bad actors and lead them down various paths. In other words, experiment on them! However, there is an elephant in the room ... Ethics! What right do I have to experiment on people without their knowledge or permission, even though they do not respect my rights at all?
After the Manhattan Project in the Second World War, Robert Oppenheimer said, “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb.” Many Project scientists had misgivings about releasing the genie from the bottle. I believe that this genie would eventually have escaped its bottle even if there had been no Manhattan Project. If some other country had been the first to develop the bomb, for instance the USSR, it might have meant annihilation for the free world. Certainly, if Stalin or Hitler had possessed the ability to destroy North America by touching a button, they would not have hesitated.
It may seem silly to compare Second Life with the atomic bomb, but remember that the first nuclear reactor was just a pile of graphite blocks and uranium rods. It could still produce dangerous levels of radioactivity, and it was a critical step in the development of the bomb. Whenever there is technological development, the people involved focus on the technology first and the ethics after the fact. Technological and ethical forces are often intertwined and sometimes at odds with each other.
I have always believed that choices between good and evil are obvious and simple to make. Is that because of some universal logic, or does it result from my Judeo-Christian background? If a malicious hacker makes the island of Acheron unusable to others, I consider the act evil, but perhaps from the hacker’s point of view, the concept of evil does not apply. When Hitler ruled, did he know that his deeds were evil? Was his awareness of good and evil suppressed? Did he even have a concept of good and evil?
If I have the technical skill to cause either evil outcome A or good outcome B, I am compelled to choose outcome B, but there may be difficult ethical questions along the way.