Defending Security by Obscurity

Almost as soon as Google launched its “Social Graph API”, the discussions began. As with other innovations in the field of social networking, the Google social graph is a potential new threat to privacy – and, like everything else produced by Google, it is well-packaged and presented in a non-threatening manner.

So what is the social graph and why is it important?

Basically, the social graph is a way to take existing data and use it in new ways. By analyzing the information available, the social graph presents relationships between data and people online. One of the examples used in the instructional video (found here) is this:

social graph by Google

The user Brad joins Twitter and searches for friends. The social graph knows that b3 belongs to Brad (perhaps his blog); from the blog, the social graph knows that Bradfitz is also Brad. Bradfitz is friends with Jane274, who is also known as Jane on Twitter. Since they are friends on LiveJournal, Brad can ask Jane to be friends on Twitter.
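The linking step described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not the actual Social Graph API: the account names are taken from the example, the `@twitter` handles and the `suggest_twitter_friends` helper are invented here, and a real system would crawl XFN/FOAF markup rather than use hand-written dictionaries.

```python
# Hypothetical sketch of cross-site identity linking and friend suggestion.
# Accounts the graph has concluded belong to the same person:
identities = {
    "Brad": {"b3", "Bradfitz", "Brad@twitter"},   # blog, LiveJournal, Twitter
    "Jane": {"Jane274", "Jane@twitter"},          # LiveJournal, Twitter
}

# Friendships observed on individual sites (here: LiveJournal).
friends = {("Bradfitz", "Jane274")}

def suggest_twitter_friends(person):
    """Suggest Twitter accounts whose owners are already friends
    with one of this person's accounts on another site."""
    suggestions = set()
    for a, b in friends:
        for mine in identities[person]:
            # Does this friendship edge touch one of the person's accounts?
            if a == mine:
                other = b
            elif b == mine:
                other = a
            else:
                continue
            # Find the owner of the other account, then their Twitter handle.
            for owner, accounts in identities.items():
                if other in accounts:
                    suggestions |= {acc for acc in accounts
                                    if acc.endswith("@twitter")}
    return suggestions

print(suggest_twitter_friends("Brad"))  # {'Jane@twitter'}
```

The privacy problem discussed below is visible right in the data structure: the suggestion works precisely because the `identities` table collapses the compartments Jane may have wanted to keep separate.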

The criticism against this model is that Jane274 may accept Bradfitz on LiveJournal while Jane may be trying to avoid Brad on Twitter – even though they are the same person. Maybe Jane is trying to avoid Brad altogether but has failed on LiveJournal? Who knows? Whatever the reason, Jane may be using different names to create watertight compartments of her online life. This model of security is not particularly strong, but it works reasonably well, and it is known as security by obscurity.

Tim O’Reilly argues that the weakness – the false sense of security – created by security by obscurity is dangerous, and that social graphs should therefore be implemented. He realises people will get hurt when the obscurity is lost but considers this a necessary cost of evolution:

It’s a lot like the evolutionary value of pain. Search creates feedback loops that allow us to learn from and modify our behavior. A false sense of security helps bad actors more than tools that make information more visible…But even here, analogies to living things are relevant. We get sick. We develop antibodies and then we recover. Or we die.

Basically, to Tim, it’s evolve or die.

This is fine if you can be fairly sure you will be among those who survive the radical treatment. But what about those who are hurt by the treatment – what about those who die? Danah Boyd at apophenia writes:

…I’m not jumping up and down at the idea of being in the camp who dies because the healthy think that infecting society with viruses to see who survives is a good idea. I’m also not so stoked to prepare for a situation where a huge chunk of society are chronically ill because of these experiments. What really bothers me is that the geeks get to make the decisions without any perspective from those who will be marginalized in the process.

The problem is that the people who get hurt in large-scale social experiments such as these are never those responsible for carrying them out. The costs will be carried by those who are not techie enough to defend themselves. The experts will go on with their lives because they will always have the means (time, money, knowledge) to defend themselves.

Those in a position of privilege should remember that with great power comes great responsibility. In other words, those who have the ability to create systems such as these should think hard about the social implications of the tools they are creating – not as seen from their positions of privilege, but from the perspective of the users who may be hurt.
