Expressions in Code and Freedom: Notes from a lecture

Being invited to give an opening keynote is both incredibly flattering and intimidating. Addressing the KDE community at their Akademy is even more intimidating: I want to be light, funny, deep, serious, relevant, insightful and create a base for discussion. No wonder I couldn’t stop editing my slides until long after sundown.

Tweet: doubly useless

The goal of my talk was to address the problem of the increased TiVo-ization of life, democracy and policy. Stated simply, TiVo-ization is following the letter of rules and principles while subverting them by changing what is physically possible (see Wikipedia for the origins and deeper meaning of the term).

In order to set the stage I presented earlier communication revolutions. Reading and writing are 6000 years old, but punctuation took almost 4000 years to develop and empty spaces between words are only 1000 years old. What we see here is that communication is a code that evolves: it gets hacked and improved. Despite its accessibility it retained several bugs for millennia.

The invention of writing is a paradigm shift, but it's taken for granted. Printing, on the other hand, is seen as an amazing shift. In my view Gutenberg was the Steve Jobs of his day: he built on the earlier major shifts and worked on packaging – he gets much more credit for revolution than he deserves.

Tweet: Gutenberg

Communication evolves nicely (telegraphs, radio, television) but the really exciting and cool stuff occurs with digitalization. This major shift is today easily overlooked, together with the Internet, and we focus on the way in which communication is packaged rather than the infrastructure that makes it possible.

The WWW is one of these incredible packages, and it was created with an ideal of openness. We could transmit whatever we liked as long as we followed the protocol for communication. So far so good. Our communications follow the Four Freedoms of Free Software: communication is accessible, hackable and usable.

Tweet: Stallman

Unfortunately this total freedom inevitably creates the environment that invites convenience. Here corporations provide this convenience but at the cost of individual freedom and, in the long run, maybe at the cost of the WWW.

The risk to the WWW emerges from the paradox of our increasing use of the Web. Our increased use has brought with it a subtle shift in our linking habits. We are sending links to each other via social media on an unimaginable level. Sharing is the point of social media. The early discussion on blogging was all about user generated content. This is still important, but the focus of social media today is not on content generation but on sharing.

Focusing on sharing rather than content creation means we are creating less and linking less. Additionally, the links we share are all stored in social media sites. These are impermanent and virtually unsearchable – they are, in effect, unhistoric. Without the links of the past there is no web “out in the wild” – the web of the future will exist only as the manicured and tamed versions kept within social network nature preserves (read more in Will the web fail?).

On an individual level the sharing has created a performance lifestyle. This is the need to publicize elements of your life in order to enhance the quality of it. (Read more Performance Lifestyle & Coffee Sadism).

Tweet: coffee

This love of tech is built on the ideology that technology creates freedom, openness and democracy – in truth technology does not automatically do this. Give people technology and in all probability what will be created is more porn.

The problem is not that social media cannot be used for deeper things, but rather that the desire of the corporations controlling social media is to enable shallow sharing as opposed to deep interaction. Freedom without access to the code is useless. Without access to the code what we have is the TiVo-ization of everyday life. If you want a picture then this is a park bench that cannot be used by homeless people.

image from Yumiko Hayakawa essay Public Benches Turn ‘Anti-Homeless’ (also recommend Design with Intent)

Park benches like this are specifically designed to prevent people from sleeping on them. In order to exclude an undesirable group of people from a public area the democratic process must first define a group as undesirable and then obtain a consensus that this group is unwelcome. All this must be done while maintaining the air of democratic inclusion – it’s a tricky, almost impossible task. But by buying a bench which you cannot sleep on, you exclude those who need to sleep on park benches (the homeless) without ever needing to enter into a democratic discussion. Only homeless people are affected. This is the TiVo-ization of everyday life.

The more technology we embed into our lives the less freedom we have. The devices are dependent on our interaction as we are dependent upon them. All too often we adapt our lives to suit technology rather than the other way around.

In relation to social media the situation becomes worse when government money is spent trying to increase participation via social networks. The problem is that there is little or no discussion concerning the downsides or consequences of these technologies for society. We no longer ask IF we should use laptops/tablets/social media in education but only HOW.

Partly this is due to the fear of exclusion. Democracy is all about inclusion, and pointing out that millions of users are “on” Facebook seems to be about inclusion. This is naturally a con. Being on/in social media is not democratic participation and will not democratize society. Why would you want to be Facebook friends with the tax authority? And how does this increase democracy?

The fear of lack of inclusion has led to schools teaching social media and devices instead of teaching Code and Consequences. By doing this, we are being sold the con that connection is democracy.

Tweet: Gadgets

So what can we do about it?

We need to hack society to protect openness. Not openness without real function (TiVo-ization) but openness that cannot be subverted. This is done by forcing social media to follow law and democratic principles. If they cannot be profitable within this scenario – tough.

This is done by being very, very annoying:
1. Tell people what consequences their information habits will have.
2. Always ask who controls the ways in which our gadgets affect our lives. Are they accountable?
3. Read ALL your EULAs… Yes, I’m talking to you!
4. Always ask what your code will do to the lives of others. Always ask what your technology use will do to the lives of others…

 

The slides are here:

Could Facebook be a members only social club?

What is public space? Ok, so it’s important but what is it and how is it defined? The reason I have begun thinking about this again is an attempt to address a question of what government authorities should be allowed to do with publicly available data on social networks such as Facebook.

One of the issues with public space is the way in which we have taken its legal status for granted and tend to believe that it will be there when we need it. This is despite the fact that very many of the spaces we see as public are actually private (e.g. shopping malls) and many spaces which were previously public have been privatized.

So why worry about a private public space? Who cares who is responsible for it? The privatization of public space allows for the creation of many local rules which can actually limit our general freedoms. There is, for example, no law against photographing in public. But if the public space is in reality a private space there is nothing stopping the owners from creating a rule against photography. There are unfortunately several examples of this – only last month the company that owns and operates the Glasgow underground prohibited photography.

Another limitation brought about by the privatization of public spaces is the narrowing of the places where citizens can protest. The Occupy London movement did not choose to camp outside St Paul’s for symbolic reasons but because the land around the church is part of the last remaining public land in the city.

Over the last 20 years, since the corporation quietly began privatising the City, hundreds of public highways, public pathways and rights of way in place for centuries have been closed. The reason why this is so important is that the removal of public rights of way also signals the removal of the right to political protest. (The Guardian)

This is all very interesting but what has it got to do with Facebook?

In Sweden a wide range of authorities, from the tax department to the police, have used Facebook as an investigative tool. I don’t mean that they have requested data from Facebook, but rather that they have used it by browsing the open profiles and data available on the site. For example, the police may go to Facebook to find a photograph, and social services may check whether people are working while claiming unemployment benefits.

What makes this process problematic is that the authorities’ dipping into the Facebook data stream is not controlled in any manner. If a police officer would like to check the police database for information about me, she must provide a good reason to do so. But looking me up on Facebook – in the line of duty – has no such checks.

These actions are commonly legitimized by stating that Facebook is a public space. But is it? Actually it’s a highly regulated private public space. But how should it be viewed? How should authorities be allowed to use the social network data of others? In an article I am writing right now I criticize the view that Facebook is public, and therefore accessible to authorities without limitation. Sure, it’s not a private space, but what about a middle ground – could Facebook be a members only social club? Would this require authorities to respect our privacy online?

Regulation is everything, or power abhors a vacuum

Can we really control the Internet? This question has been around long enough to be deemed a golden oldie. But like a fungal infection it keeps coming back…

The early battle lines were drawn up in 1996. In an age when cyberspace was both a cool and correct term, lawyers like Johnson & Post wrote “Law And Borders: The Rise of Law in Cyberspace” and activists like John Perry Barlow wrote his epic “A Declaration of the Independence of Cyberspace”. These were the cool and heady days of the cyberlibertarians vs the cyberpaternalists. The libs believed that the web could not and should not be regulated, while the pats argued that it could and should. (I covered this in my thesis, pdf here.) Since then the terminology has changed but the sentiments remain the same.

I miss the term cyberspace. But more to the point, the “could/should” control argument continues. Nicklas has made an interesting point on the “could” part:

Fast forward twenty years. Bandwidth has doubled once, twice, three times. Devices capable of setting up ad hoc networks – large ones – are everywhere. Encrypted protocols are of state-defying strength and available to everyone. Tech savvy generations have grown up to expect access to the Internet not only as a given, but as unassailable. Networks like Anonymous has iterated, several times, and found topologies, communication practices and collaboration methods that defy tracking. The once expensive bottleneck technologies have become cheaper, the cost of building a network slowly approaching zero. The Internet has become a Internet that can be re-instantiated for a large swath of geography by a single individual.

So far so good. Not one internet but personal, portable, sharable spaces. The inability to control will lead to a free internet. But something feels wrong. Maybe it’s a cynical sadness from having heard all this before and seeing it all go wrong? From his text I get images of Johnny Mnemonic and The Matrix: basically the hacker-hero gunslinger fighting the anonymous, faceless, oppressive society. It’s cool, but is it true?

The technology is (on some level) uncontrollable (without great oppression) but the point is that it does not have to be completely controlled. The control in society via technology is not about having 100% surveillance and pure systems which cannot be hacked. Control is about having reasonable amounts of failure in the system (System failures allow dissidents to believe they are winning).

The issue I have with pinning my hopes on the unregulatable internets is that they are – in social terms – an end in themselves. Who will connect to these nets? Obviously those who are in the know. You will connect when you know where & how to connect. This is a vital goal in itself but it presents a problem for using these nets in wider social change: getting information across to a broader section of the population.

Civil disobedience is a fantastic tool. But if the goal is disobedience in itself, it is hardly justifiable in a group. If the goal is to bring about social change – i.e. for a minority to convince a majority – then the minority must communicate with the majority. If the nets are going to work we need to find ways for the majority to connect to them. And if the majority can connect to them, then so can the oppressive forces of regulation.

On the field of pats & libs I think I am what might be called a cynical libertarian. I am convinced of the social and individual power and value of the non-regulation of technology, but I don’t believe that politicians and lobbyists will leave technology alone. It’s an unfortunate truth: power hates a vacuum.

Empowered citizens or Digital dairy cows: Notes on a lecture

The purpose of today’s lecture was to familiarize the audience with social media and what they may need to know about it. The lecture began with examples of what the media reports when social media is mentioned. The interesting thing is that media today has turned from the previously optimistic position to being more openly critical. To exemplify this I used three recent examples from Swedish media where the papers reported that research showed: smart phones make us selfish, Facebook spreads unhappiness & the need to be connected causes insomnia among young people.

Generally speaking the extremes of the debate either view social media as revolutionary (and fundamental for the Arab spring) or trivial. Defining the Arab spring as a Facebook revolution degrades the pain, suffering and efforts of the individuals doing the work. My example of the trivial is a response from an older professor when he heard I was working on an article on Twitter:

“Twitter? Isn’t that where everyone talks about what they had for breakfast?” Just as with the revolutionary view of social media this may have a grain of truth. Social media can be used for trivial conversation but it would be incorrect to see social media as only trivial. It may also be important to remember that most conversation is trivial. Trivial conversation is what creates and maintains social relations.

The approaches to social media belong to a longer tradition of techno-optimism and pessimism. My first example of optimism is a quote from Wikipedia:

Social media…At its most basic sense, social media is a shift in how people discover, read and share news, information and content. It’s a fusion of sociology and technology, transforming monologues (one to many) into dialogues (many to many) and is the democratization of information, transforming people from content readers into publishers. (Wikipedia, May 2009)

What does “the democratization of information” even mean? My second optimism example is Time Magazine’s choice of YOU as person of the year in 2006.

My choice of pessimists began with a quote from Andrew Keen’s The Cult of the Amateur: How Today’s Internet is Killing Our Culture (2007):

“Out of this anarchy… what was governing the infinite monkeys now inputting away on the Internet was the law of digital Darwinism, the survival of the loudest and most opinionated.”

Say what you like about Keen, but he is extremely clear about his position. The second pessimist quote is from Baroness Professor Susan Greenfield:

“My fear is that these technologies are infantilising the brain into the state of small children who are attracted by buzzing noises and bright lights, who have a small attention span and who live for the moment.”

From here the lecture moved on to the developments that led to the social media decade and the changes our new toys have caused. Naturally there are profound changes occurring all around us, but the small stuff is fun to note.

The Wordfeud app is an interesting example. A couple of years ago admitting to regularly playing Scrabble may have been a form of social suicide – today things have changed and we happily boast of a high score. Similarly, a few years ago looking at pictures of your friends, enemies and other loose ties would have been voyeurism and maybe borderline stalking – today it’s just Facebook. Our use of technology has normalized abnormal behavior.

Our connectivity and our toys have also diminished our need for boredom – a feeling that may have filled an important purpose. I have written about boredom as a source of creativity earlier.

At this point the lecture moved on to some important points about what technology can do, beginning with my favorite example of the Tokyo park bench (read it here).

When we look at the effects of social media the most important point to begin with is the seminal quote by blue_beetle

If you’re not paying for something, you’re not the customer; you’re the product being sold

I like this quote but I have always felt that there was something missing. We are not really the product – we are the creators of the product, which is data. We are digital dairy cows and the product is digital milk.

A social change caused by social media is our relationship with our contacts. We are the stars in our own performance attempting to present our ordinary lives in extraordinary ways. We document our lives for the entertainment of others – or maybe for the creation of the image of a more exciting life. As an example I showed my coffee project (a mix of entertainment, amusement & sadism – to be explained in a later blogpost).

In order to understand more about what we are doing it is good to know what the controllers of the infrastructure think about. It is important to understand the digital dairy farmers.

One of the main players is Mark Zuckerberg, with his position on “radical transparency”:

“You have one identity… The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly… Having two identities for yourself is an example of a lack of integrity”

There are several things wrong with this position (not even focusing on the fact that his company profits from it). According to Zuckerberg those days may be coming to an end (which I seriously doubt), but what do we do now? The media is full of examples where individuals have been punished (socially, economically or worse) for information that may not have been illegal or even immoral.

In addition to this Zuckerberg has claimed that privacy is no longer a social norm. His goal also seems to be to create a personalized view of the world (check out Pariser’s Filter Bubble or some stuff on personalization I wrote here). In Zuckerberg’s own chilling words:

A Squirrel Dying In Your Front Yard May Be More Relevant To Your Interests Right Now Than People Dying In Africa.

It is worrying that Zuckerberg profits from pushing these positions at the same time as he develops a technology that promotes excessive sharing.

So if social media is not going to show social responsibility, then who will fix this problem?

Usually we turn to the law. However the law is all focused on concerns with Orwell’s view of surveillance via Big Brother. But today we are the ones giving away our information for the sake of convenience and entertainment – we are in the controlled world of Huxley’s Brave New World (check out the Orwell/Huxley paradox here).

So we are left to our own devices – in more ways than one. What can we expect of the future? First we will see an increased efficiency in personalization (as I have written earlier):

The same is true of information. The sweet and fatty information, in a long historical context, was an understanding of who was allied with whom, who was sleeping with whom, and how I could get my genes over to the next generation (obviously just a nicer way of thinking about getting laid!). This is why we today have a fascination with gossip. Which minor celebs are attempting to sleep with each other takes up an extraordinary part of our lives. But this was all OK as long as access to gossip was limited. Today, however, we are connected to the largest gossip engine ever conceived. Facebook may try to hide it in its spin, but part of our fascination is all about looking at each other. The problem is that there is only a limited amount of time in life and spending too much of it on gossip limits our capacity for more relevant information. We are becoming information obese, and the solution is to decrease our fatty information intake and go to the information gym regularly.

The development of walled gardens or information silos… Facebook (and other silos) is branding us like the cattle we are. By attempting to lock our behavior into their site and prevent us from leaving they are diminishing our freedom – a freedom which was originally created in the design of the Internet and is being subverted by the growth of social media (Read Long Live the Web by Tim Berners-Lee).

We are not going to be helped from our locked stalls by either law or corporations. We are left to practice thoughtful self-restraint and hope that the law will eventually catch up with our technology and needs.

The slides I used are here.

Looking for Orwell, missing Huxley, or Why privacy law is failing: Notes from a lecture

Being invited to talk somewhere else is always thrilling. Being asked to go to Berlin was even more so. The event was part of the Internet und Gesellschaft Co:llaboratory, which has been working on Internet & Human Rights. The event was a full day of talks (admittedly a lot in German, but I had wifi and work to do so I was happy) followed by an open seminar with three speakers. These notes are from the presentation I gave at the seminar.

The lecture opened with a look at three historical high points of privacy regulation and thought. The first was 1890, the year in which Warren & Brandeis published their seminal paper The Right to Privacy, which attempted to establish a new right in society. Today, living in a rights-focused society, arguing for rights seems natural (or even banal), but what was it like to be the first to argue for a right to privacy?

To exemplify the situation I showed the killer app of the 1890s: the Hollerith Tabulating Machine.

Hollerith Tabulating Machine

The legal protection of privacy did not immediately spring to life, and the next great step came in the 1970s, when first the Land of Hesse in Germany and then, in 1973, Sweden created data protection legislation. The idea was to protect against the abuses of data collection by the state and large corporations.

The killer app of the 1970s was the impressive UNIVAC computer.

UNIVAC image from Musée de l’Informatique

Kind of looks like the communal laundry room in my apartment building.

The next step was the European Data Protection Directive, which attempted to harmonize data protection across Europe. It came in 1995, a year whose killer apps were the Windows 95 operating system (couldn’t resist it!) and, more importantly, the first browser war between Netscape and Internet Explorer (Microsoft released versions 1 and 2 in 1995). The browser wars are incredibly interesting as they show that the ability to control the flow of information to the end user was not dependent on the hardware or operating system. They also show that power consists of inserting oneself between the information and the end user – but I digress.

Before continuing I wanted to remind the audience that the law (and the lawmaker) is behind the times, so I quoted the late great Douglas Adams from his book The Salmon of Doubt:

“Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.”

Following this I added a theoretical dimension to the lecture. The regulatory pyramid is intended to show that we focus on the law – this is what I was taught at law school. But the law is a self-sustaining system that ignores (or struggles to acknowledge) the realities of social norms and architecture. Social norms, not law, are what control most of our behavior; the law is often too expensive, too drastic and too formal to be an efficient mode of conflict resolution. When someone “steals” “your” parking space, you don’t sue or call the prosecutor. You apply social norms. Your reaction depends on your upbringing and context – you may smile sweetly, flip them off or become verbally or even physically violent. Architecture is how the world works. It controls us by the rigidity of its being.

Regulatory Pyramid

If you want to slow down speeding cars the law can be applied (a traffic sign will remind us of a pre-existing rule), or we can use social norms by reminding drivers of accidents or of children playing in the area. By implementing architecture we remake the physical environment – for example by adding bumps in the road – and then all cars must slow down. It is, however, important not to confuse this equal treatment with fairness. Architecture will slow down even an ambulance that may have good reason to drive faster in a slow area.

As an example of my theory I show this wonderful/awful park bench in Tokyo.

image from Yumiko Hayakawa essay Public Benches Turn ‘Anti-Homeless’ (also recommend Design with Intent)

The bench is an example of outdoor public furniture known as anti-homeless technology or anti-bum benches. In order to prevent an undesirable group of people from using a public space we could create a rule against it – but by creating a law we must accept the democratic constraints on rule-making. Someone could remind us that in a democracy excluding people is inherently wrong. By choosing a bench that is unsuitable for sleeping, the democratic process is bypassed. Additionally, park officials can always claim to have made an aesthetic choice (i.e. we like this bench) rather than a choice against homeless people. This is control through design choice – imagine the control that may be exercised by manipulating communications technology.

The next segment was surveillance theory. As individuals we constantly leak and spread information. Most of us attempt to create strategies of control for our information flows. The most common is compartmentalization, which means that we present different information to different groups; i.e. the account you give of what you did over the weekend may be different when presented to your boss, wife, mother, children, best friend or lover. This is not necessarily lying, but it is an attempt at controlling flows of information. Technology, and in particular social media, is all about losing the ability to practice this control.

Traditional surveillance theory is based upon Michel Foucault’s development of Jeremy Bentham’s plans for the Panopticon prison. The concept is basic: if we never know whether we are being watched, we internalize the surveillance and become our own jailers. This is the whole premise of George Orwell’s Nineteen Eighty-Four: Big Brother is watching you. People under constant surveillance can be controlled. But is this really true? State control is under constant refinement, and yet citizens still attempt to cheat and steal – violent crime in general remains constant despite CCTV. Could it be that Foucault (and Orwell) got it wrong?

The next step is technology. For me it’s the radical Huxleyian shift. What Orwell feared was totalitarian control via surveillance technology. But Huxley imagined a baser society: give people enough sex and drugs and they won’t care who controls them. Enter the convenient, comfortable, entertaining world of social media.

Social Media Timeline

Our newfound joy in communications technology has already changed our behavior in major ways. Patterns of behavior that were deemed amoral, antisocial or even illegal have now become acceptable. Spending an evening looking at pictures of your ex-partner’s new partner would once have been a textbook case of voyeurism and stalking. Today, it’s just Facebook. This reminds me of this early cartoon:

Additional changes in our behavior which should concern us include the fact that we can no longer refuse, ignore or exclude social media from our lives. Many claim they don’t have time for such nonsense, but this is not an efficient information control strategy. Even individuals who stay off social media are being photographed and tagged by users, and so identities are being created for them. These “friends” ensure that opting out is not a viable option.

The final level of surveillance is autoveillance. This is the self-chosen role of spreading information about ourselves – not the fact that my telephone stores and communicates my location and more, but the performance lifestyle, which has created a performance anxiety: a need to present interesting and inspiring activities drawn from an ordinary life.

This may be silly, but is it harmful? Here it is not enough to study the moves of individuals (even millions of individuals).

Basically we are being seduced by technology, locked in by licenses & killed by a lack of social responsibility. This creates four harmful outcomes that somehow need to be countered: the loss of privacy, personalization, information obesity and mind control.

As with the Japanese park bench above, understanding the users will not enable us to see the intentions of the manipulators. We must look to those with influence in social media – and who could be more influential than Mark Zuckerberg?

Zuckerberg has been quoted as saying privacy is no longer a social norm

This is interesting given that he has created a system which helps us forget our inhibitions about sharing personal information, and that his business model is premised on our sharing. He has a stake in the removal of privacy protections.

Zuckerberg on the topic of personalization of technology: “A squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa” (Pariser NYT)

Sure, there have always been gatekeepers choosing which information is important for me. But these gatekeepers did not create a personal information resource only for me. The daily newspaper is created to appeal to a society of readers. I may choose to ignore an article, but at least it’s there in front of me. In a personalized world I will no longer be confronted by any kind of information that does not fit my profile.

The Holy Grail of many Internet providers is to give us this kind of personalization. The problem occurs when this kind of convenience and service removes our ability to control our flows of information. We lose the ability to read information that we may need – because we are constantly being bombarded by the information Facebook thinks we want.

Information obesity: Our bodies crave sweet and fatty foods. One way of looking at this is through the lens of evolution. Finding fatty and sweet foods was key to our survival, but these were not to be found everywhere or every day. Today we are surrounded by fatty and sweet foods, so access is not the problem. The problem is overindulgence and obesity due to accessibility. This forces us to think about diet and exercise. Self-control is essential to our survival.

The same is true of information. The sweet and fatty information, in a long historical context, was an understanding of who was allied with whom, who was sleeping with whom, and how I could get my genes over to the next generation (obviously just a nicer way of thinking about getting laid!). This is why we today have a fascination with gossip. Which minor celebs are attempting to sleep with each other takes up an extraordinary part of our lives. But this was all OK as long as access to gossip was limited. Today, however, we are connected to the largest gossip engine ever conceived. Facebook may try to hide it in its spin, but part of our fascination is all about looking at each other. The problem is that there is only a limited amount of time in life and spending too much of it on gossip limits our capacity for more relevant information. We are becoming information obese, and the solution is to decrease our fatty information intake and go to the information gym regularly.

The final concern is mind control. This is all about what happens when a social media platform learns that you are interested in a certain thing. Say, for example, you take a secret pleasure in watching videos of kittens being kicked. You would never say this aloud – and if you did, your social group would correct you by telling you this is an unhealthy impulse. You may even manage to convince yourself that you have no sadistic urges in this area. However, social media knows the truth and will continue to give priority to information about kitten kicking. You may resist some of it, but if you have the urge you will probably click on some of it. By clicking you reinforce the information algorithm and you will be sent even more kitten-kicking material. A question of moral responsibility can now be posed: while your latent sadistic tendencies are being reinforced and enhanced, what is the moral responsibility of the provider? This is akin to asking whether a drug pusher has any moral responsibility towards his clients. In your answer, consider that many users of social media are very young and that there is no general awareness or discussion of the harms of social media.
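To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of engagement-driven feedback loop described above. It is an invented toy model, not any platform’s actual recommender: the item names, topic tags and the 1.5 reinforcement factor are all assumptions chosen for illustration.

```python
# A toy sketch of an engagement-driven feed (invented example, not any
# platform's actual code): clicking a topic raises its weight, and the
# next recommendation is drawn from the weighted profile, so the loop
# feeds the user more of whatever they could not resist clicking.

import random
from collections import defaultdict

# Hypothetical catalogue of items, each tagged with a single topic.
CATALOGUE = {
    "kitten-kicking clip #1": "kitten_kicking",
    "kitten-kicking clip #2": "kitten_kicking",
    "local election report": "politics",
    "friend's holiday photos": "friends",
    "charity appeal": "charity",
}

profile = defaultdict(lambda: 1.0)  # every topic starts out equally weighted

def recommend() -> str:
    """Pick an item with probability proportional to its topic's weight."""
    items = list(CATALOGUE)
    weights = [profile[CATALOGUE[item]] for item in items]
    return random.choices(items, weights=weights, k=1)[0]

def register_click(item: str) -> None:
    """Reinforce the clicked topic (the 1.5 factor is an arbitrary choice)."""
    profile[CATALOGUE[item]] *= 1.5

# Simulate a user who clicks every kitten video offered and nothing else.
for _ in range(20):
    shown = recommend()
    if CATALOGUE[shown] == "kitten_kicking":
        register_click(shown)

print(dict(profile))  # the kitten_kicking weight has grown; the feed skews accordingly
```

The point of the sketch is simply that the loop never questions the clicks it feeds on; it only amplifies them.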

So what about regulation? Well, the problem is that we are considered to be autonomous. In other words, we are old and wise enough to live our own lives. Indeed, we have all agreed to the terms of use of social media sites. We may not have read them, may not have understood them, and they may have changed drastically since we read them – but the legal fiction is that we have agreed to them.

This shouldn’t be a problem. If society deems an activity harmful enough it can, and should, legislate against it – even if some protest the regulation. There have been protests against motorcycle helmet laws, seatbelt laws, bans on hitting children and restrictions on smoking (enough to make you lose faith in human intelligence), but in each case the social cost was deemed greater than the loss of individual autonomy. The problem with social media is that the social costs are not particularly visible.

Finally, on the question of gatekeepers and Orwellian or Huxleyian control, it is interesting to note that typical Orwellian control is easier to see and therefore easier to protest against. The cost of maintaining it against the wishes of the people is therefore too high to bear in the long run. But Huxleyian control is based on making me happy and fulfilling my desires. Counteracting it requires that I first become aware and then exercise self-control. This is difficult on an individual level and close to impossible on a social level.

Here are the slides which accompanied the lecture.

Democracy cannot ignore technology: Notes from a lecture

I’m not really sure this should be called a lecture, as it was part of a panel presentation where we were allotted 15 minutes each, followed by questions. The setting was interesting as it was part of the Swedish Parliament’s annual conference about the future, and the people asking the questions were all politicians. So for my 15 minutes (of fame?) I chose to expand on the ill effects of politics ignoring technology (or taking it as a stable, neutral given).

The presentation began with a quote from Oliver Wendell Holmes Jr.:

It cannot be helped, it is as it should be, that the law is behind the times.

What I wanted to do was to explain that the law has always been seen as playing catch-up. This is not a bug in the law; it is a feature. Attempting to create laws ahead of their time would be wasteful, unpopular and quite often full of errors about what we think future problems will be. I wanted to include a quote from Niels Bohr:

Prediction is very difficult, especially about the future.

But time was short and I needed to avoid meandering down interesting – but unhelpful – alleyways.

Instead I reminded the audience that many of our fundamental rights and freedoms are 300 years old and that, despite being updated, they are becoming increasingly complex to manage or even outdated as the basic technological realities change. This was the time of Voltaire, who is today mainly famous for saying:

I disapprove of what you say, but I will defend to the death your right to say it

(Actually he never said this. The words were put in his mouth by the later writer Evelyn Beatrice Hall. But let’s not let the truth get in the way of a good story.)

The period saw the development of fantastic modern concepts: democracy, free speech and autonomy may all be seen as products of the Enlightenment. They remain core values in spite of the fact that our technological developments have totally changed the world in which we live. For the sake of later comparison I added that the killer app of the time was the quill. Naturally there were printing presses, but as these are not personal communication devices they provide easier avenues of control for states. In other words, evolving concepts of free speech must be seen in the light of what individuals had the ability to do.

As I had been asked to talk about technology and society, I chose to exemplify with copyright, which was launched by the Statute of Anne in 1710. In Sweden copyright was introduced into law in the 19th century, and the most recent thorough reworking came in the 1950s, with the modern (and present) Swedish Copyright Act entering the books in 1960. The law has naturally been amended since then but has received no major overhaul. The killer app of the 1960s? Well, it was probably the pill – but that’s hardly relevant, so I looked at radio and TV. The interesting thing about these is that they are highly regulated and controlled mass media. While they are easy for the consumer to access, they are hardly platforms of speech for a wider group of people.

Moving along, the Internet, the web, social media and the massive increase in personal devices have created a whole new ball game. They have created a new way of social interaction among citizens; the one-to-many mass medium is no longer the monopoly player. So what should the regulator be aware of? They must take into consideration the ways in which new technologies are changing actual social interaction on many levels, as well as the changes in fundamental social values that are coupled with our expectations of the justice system.

The problem is that all too often regulators (being ordinary people) tend to take their own user experience as their starting point. In order to illustrate what I meant I included one of my favorite Douglas Adams quotes (it’s from The Salmon of Doubt):

Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.

Therefore it is vital not to ignore the role of technology, or to underestimate its effects. Looking at technology as simply technology – i.e. as a neutral tool that does not affect us – is incredibly dangerous. If we do not understand this then we will be ruled by technology – naturally not by technology itself, but by those who create and control it. Law and lawmakers will become less relevant to the way in which society works. Or does not work.

In order to illustrate this, I finished off with a look at anti-homeless technology – mainly things like park benches which are specifically designed to prevent people from sleeping on them. In order to exclude an undesirable group of people from a public area the democratic process must first define a group as undesirable and then obtain a consensus that this group is unwelcome. All this must be done while maintaining the air of democratic inclusion – it’s a tricky, almost impossible task. But by buying a bench which you cannot sleep on, you exclude those who need to sleep on park benches (the homeless) without ever needing to enter into a democratic discussion.

If this is done with benches, then what power lies in the control of a smart phone?

Here are the slides used with the lecture.

Because we can: comments from a lecture

The weekend and FSCONS is now over. This year my presentation was the last talk of the final session. It’s a dirty job but someone’s got to do it?

My presentation was on the topic of privacy and raised the question of whether it is possible to maintain one’s privacy in a world of extreme technology dependence and broad social technology adoption. The answer is, dependably & depressingly, negative.

The talk was entitled Off the Grid: is anonymity possible? and focused on the different forms of surveillance that are in the hands of uncommon players today. This is not big brother society, nor little brother society. What we have is a society where privacy is lost because our contacts inform their contacts of interesting details from our lives. These details can then be spread further by my contacts’ contacts, potentially reaching the ends of the Internet. Whether or not this happens does not depend on anything I control, but on how interesting the information is.

To illustrate this I displayed this tweet:

Translation: Thing that can happen at #fscons: @Klang67 proclaims himself queen. A bit unclear over what.

This is a form of surveillance through acquaintances, so I have chosen to follow the French wording (surveillance comes from the French for watching from above) and call it connaivellance, after the fascinating word connaissance, or acquaintance. I find the French word more interesting than the English, as its root connai- comes from the verb for knowing. The French connaissance (acquaintance) is therefore someone who has knowledge of you. How very apt.

The next form of surveillance is the self-surveillance of the social media age where we tell the world of ourselves. Or as a professor I met earlier in the week protested, with absolute conviction: “Twitter? That’s only people telling each other what they had for breakfast!”

Another thing I find fascinating about social media is the way it shapes our communication. One part of this is the way in which we move towards the extremes. Few people online simply drink coffee, read books, or listen to lectures… We all seem to read fantastic/terrible books, drink great or awful coffee, and attend lectures that are either inspiring or snooze fests. All this with a shower of smileys too.

Both this autoveillance (which I have written more about here) and this connaivellance filled much of my lecture. As the law fails to protect, and our acquaintances and we ourselves enthusiastically push information, the last line of defense must be the attitudes and interests of the social media creators. What my lecture showed was that protecting us is not in their interest. Therefore we stand unprotected. The slides from my presentation:

This morning I came across a further example of surveillance which needs to be added to the list. The story comes from a Forbes article by Dave Pell, entitled Privacy Ends at Burger King. The short version of the story is that a man who heard a married couple argue at Burger King began live tweeting the event and added pictures and even video clips. He began his broadcasting with the tweet “I am listening to a marriage disintegrate at a table next to me in this restaurant. Aaron Sorkin couldn’t write this any better.”

Pell’s analysis:

In that Burger King, Andy Boyle thought he was listening to the disintegration of a couple’s marriage. He was really hearing the crumbling of his own ethics and self-restraint. We can’t stand by and let an alliance between technology and poor judgement disintegrate all decency, and turn every human exchange into another tawdry and destructive episode on a never-ending social media highlight reel.

This provided an interesting addition to my discussion on surveillance. For me, it shows yet another reason why any attempt to control social media (legally, socially or technically) will fail. People’s desire to communicate the interesting parts of their own (and others’) lives makes control a difficult affair.

FSCONS continued late into the night.

Dangerous Bits of Information: Notes from a lecture

Last week was an intense week of lecturing, which means that I have fallen behind with other work – including writing up lecture notes. One of the lectures was Dangerous Bits of Information, presented at the NOKIOS conference in Trondheim, Norway. Unfortunately I did not have much time in Trondheim, but what I saw was a wonderful sunny city with plenty of places to sit and relax by the river that flows through the center. There was, however, not much sitting outside on this trip.

The lecture was part of the session “Ny teknologi i offentlig forvaltning – sikkerhet og personvern” (New Technology in Public Administration – Security and Privacy). In the same session were Bjørn Erik Thon, head of the Norwegian Data Protection Authority (Datatilsynet), and Storm Jarl Landaasen, Chief Security Officer Market Divisions, Telenor Norge.

My lecture began with an introduction to the way in which many organizations fail to think about the implications of cloud technology. As an illustration I told of the process surrounding my university’s adoption of a student email system. When the university came to the realization that it was not really excellent at maintaining a student email system, it decided to resolve this.

The resolution was not to let individuals choose their own system. Instead a technical group (it was, after all, seen as a tech problem) was convened and presented with an either-or decision: do we go with Google or with Microsoft? The group chose Google out of a preference for the interface.

When I wrote a critique of this decision I was told that the decision was formally correct since all the right people (i.e. representatives) were present at the meeting. My criticism was, however, not based on the formality of the process but rather on the way in which the decision was framed and the lack of information given to the students who would be affected by the system.

My critique is based on four dangers of cloud computing (especially for public bodies) and the lack of discussion around them. The first issue is one of surveillance. The Swedish FRA legislation, which allows the state to monitor all communication, was passed with the explicit (though rather useless) understanding that only cross-border communication would be monitored. The exception is rather useless as most Internet communication crosses borders even when both sender and receiver are within the same small state. But this cross-border communication becomes even more certain when the email servers are based abroad – as those of Gmail are.
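As a small illustration of why the cross-border point is hard to avoid, here is a minimal sketch of how one might look up which mail servers receive a domain’s email. It assumes the third-party dnspython package is installed; it simply lists the MX hosts, which for an outsourced mail service typically sit on the provider’s infrastructure, wherever in the world that happens to be.

```python
# A minimal sketch (assumes the third-party dnspython package: pip install dnspython)
# that lists the MX hosts receiving mail for a domain. Mail addressed to the domain
# is handed to these hosts, so their location decides where the traffic ends up.

import dns.resolver

def mail_hosts(domain: str) -> list[str]:
    """Return the hostnames of the mail exchangers for the given domain."""
    answers = dns.resolver.resolve(domain, "MX")
    return sorted(str(record.exchange).rstrip(".") for record in answers)

if __name__ == "__main__":
    # gmail.com is used here only because the text mentions Gmail; substitute any domain.
    for host in mail_hosts("gmail.com"):
        print(host)
```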

The second problem is that some of the communication between student and lecturer is sensitive data. Also, the lecturer in Sweden is a government official – a fact most of us often forget but should not. Now we have sensitive data being transferred to a third party. This is legal since the users (i.e. the students) have all clicked that they agree to the licensing terms that Gmail sets. The problem is that the students have no choice (or a very limited and uninformed one – see below) but to sign away their rights.
The third problem is that nothing is really deleted. This is because – as the important quote states – “If you are not paying for it you are not the customer but the product being sold” – the business model is to collect, analyze and market the data generated by the users.

But for me the most annoying of the problems is the lack of interest public authorities have in protecting citizens from eventual integrity abuses arising from the cloud. My university, a public authority, happily delivered 40,000 new customers (and an untold future number, due to technology lock-in) to Google and, adding insult to injury, thanked Google for the privilege.

Public authorities should be more concerned about their actions in the cloud. People who choose to give away their data need information about what they are doing. Maybe they even need to be limited. But when public bodies force users to give away data to third parties – then something is wrong. Or, as I pointed out: smart people do dumb things.

The lecture continued by pointing out that European privacy law has a mental age of pre-1995 (the year of the Data Protection Directive). But do you remember the advice we gave and took about integrity and the Internet in the early days? It contained things like:

  • Never reveal your identity
  • Never reveal your address
  • Never reveal your age
  • Never reveal your gender

Post-Facebook, points such as these seem almost silly. Our technology has developed rapidly, but our society and law are still based on the older analogue norms – the focus in law remains on protecting people from an outer gaze looking in. This becomes less important when the spreading of information comes from us as individuals and from our friends.

The problem in this latter situation is that it is extremely difficult to create laws to protect against the salami method (i.e. where personal data is given away slice by slice instead of all at once).
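To spell out what the salami method looks like in data terms, here is a minimal sketch with entirely invented names and values: each post discloses one innocuous slice, but folding the slices together produces a profile that no single disclosure would have revealed, which is exactly what slice-by-slice regulation struggles to capture.

```python
# A toy sketch of the salami method: every individual slice looks harmless,
# but the accumulated profile is revealing. All names and values are invented.

from functools import reduce

posts = [
    {"first_name": "Anna"},                     # a birthday greeting
    {"city": "Göteborg"},                       # a check-in
    {"employer": "the university"},             # a work-related gripe
    {"gym_schedule": "Mondays and Thursdays"},  # a shared training app update
    {"holiday": "away 12-19 July"},             # a vacation photo caption
]

def merge(profile: dict, slice_: dict) -> dict:
    """Fold one more disclosed slice into the accumulated profile."""
    return {**profile, **slice_}

profile = reduce(merge, posts, {})
print(profile)  # individually trivial slices, collectively a detailed profile
```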

At this stage I presented research carried out by Jan Nolin and myself on social media policies in local municipalities. We studied 26 policies, ranging from less than one page to 20 pages in length. The policies made some interesting points but their strong analogue bias was clear throughout, and there were serious omissions. They lacked clear definitions of social media, and they confused social media use during work time with use during free time. More importantly, the policies did not address cloud issues or topics such as copyright. (Our work is published in To Inform or to Interact, that is the question: The role of Freedom of Information & Disciplining social media: An analysis of social media policies in 26 Swedish municipalities.)

Social media poses an interesting problem for regulators in that it is not a neutral infrastructure and it does not fall under the control of the state. The lecture closed with a discussion of the dangers of social media – in particular the increase in personalization, which leads to the Pariser filter bubble. In this scenario organizations tailor information to suit our needs, or rather our wants. We are increasingly getting what we want rather than what we need. To take a food analogy: we want food with high fat and high sugar content, but this is not what our bodies need. The same applies to information. I may want entertainment but I probably need less of it than I want. Overdosing on fatty information will probably harm me and make me less of a balanced social animal.

Is there an answer? Probably not. The only way to control this issue is to limit individuals’ autonomy. In much the same way as we have been forced to wear seat belts for our own safety, we may need to do the same with information. But this would probably be a political disaster for any politician attempting to suggest it.

Surveillance, Sousveillance & Autoveillance: Notes from a lecture

The theme for today’s lecture was about online privacy and was entitled Surveillance, Sousveillance & Autoveillance.

The lecture opened with a brief discussion of the concept of privacy and the problem of finding a definition that many can agree upon. Privacy is a strange mix of natural human need and social construct. The former is not easily identifiable and the latter varies between cultures.

It is not enough to state that privacy may have a natural component – sure, put too many rats in a cage and they start to kill each other – you also need the technology to enable our affinity for privacy to develop.

For example, in At Home: A Short History of Private Life, Bill Bryson writes that the hallway was absolutely essential for private life. Without the hallway, people could not reach the room they needed without passing through the other rooms. Our ideas of privacy were able to develop only after the “invention” of the hallway.

In order to settle on a definition I picked one from Wikipedia: privacy (from Latin privatus, “separated from the rest, deprived of something, esp. office, participation in the government”, from privo, “to deprive”) is the ability of an individual or group to seclude themselves or information about themselves and thereby reveal themselves selectively.

And to fix the academic discussion I quoted from Warren and Brandeis The Right to Privacy, 4 Harvard Law Review 193 (1890)

The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world…solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress…

I like this quote because it also points to the effects of modern inventions on the loss of privacy.

In closing the lecture introduction I pointed out that privacy intervention consists of both data collection and data analysis – even though most of the history of privacy focused on the data collection side of the equation. In addition to this I broke down the data collection issue by pointing out that integrity consists of both information privacy (the stuff that resides in archives) and spatial privacy (for example surveillance cameras & the “right” to be groped at airports).

For the next section the lecture did a quick review of the role of technology in the privacy discussion. Without technology the ability to conduct surveillance is extremely limited. The early origins of tax records and collections like the Domesday Book were fundamental for controlling society. However, real surveillance did not begin until the development of technology such as the wonderful Kodak No. 1 in 1888. The advantage of this technology was that it provided a cheap, easy to use, portable way of taking photographs. Photographs could be snapped without the subject standing still. A whole new set of problems was instantly born. One such problem was the kodakers (amateur photographers, see “’Kodakers Lying in Wait’: Amateur Photography and the Right to Privacy in New York, 1885-1915”, American Quarterly, Vol 43, No 1, March 1991), who were suddenly able to take photographs of unsuspecting victims.

Surveillance: A gaze from above

The traditional concerns of surveillance deal with the abuse of state (or corporate) power. The state legitimizes its own ability to collect information about its citizens. The theoretical concern with surveillance is abuse by the Big Brother state, and foremost in this area is the work of Foucault and his development of Bentham’s Panopticon (the all-seeing prison). Foucault meant that in a surveillance society the surveilled, not knowing if anyone was looking, would internalize their own control.

Sousveillance: A gaze from below

The concept of sousveillance was originally developed within computer science and “…refers to the recording of an activity by a participant in the activity typically by way of small wearable or portable personal technologies…” Wikipedia

But in the context of privacy the idea was that our friends and peers (especially tricky concepts in Social Media) will be the ones who collect and spread information about us online.

We are dependent upon our social circle, as Granovetter states: “Weak ties provide people with access to information and resources beyond those available in their own social circle; but strong ties have greater motivation to be of assistance and are typically more easily available.” (Granovetter, M.S. (1983). “The Strength of Weak Ties: A Network Theory Revisited”, Sociological Theory, Vol. 1, 201-233, p. 209).

This ability of others to “out” us in social media will become more interesting with the development of facial recognition applications. These have already begun to challenge social and legal norms (Facebook facial recognition software violates privacy laws, says Germany – The Guardian 3 August 2011).

Autoveillance: a gaze from within

The final level is autoveillance. This is obviously not about us literally watching ourselves; it is an attempt to address the problems of our newfound joy in spreading personal information about ourselves.

Is this a form of exhibitionism that enables us to happily spread personal, and sometimes intimate, information about ourselves? Is this the modern version of narcissism?

Narcissism is a term with a wide range of meanings, depending on whether it is used to describe a central concept of psychoanalytic theory, a mental illness, a social or cultural problem, or simply a personality trait. Except in the sense of primary narcissism or healthy self-love, “narcissism” usually is used to describe some kind of problem in a person or group’s relationships with self and others. (Wikipedia)

We have always “leaked” information, but most of the time we have applied different strategies of control. One such strategy is compartmentalization – the attempt to deliver different information to different groups. For example, my mother, my wife, my co-workers, my friends and my children do not need to know the same things about me. But social media technology defies the strategy of compartmentalization, as the sketch below illustrates.
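As a rough illustration of the strategy (an invented example, not any platform’s actual data model), compartmentalization can be thought of as attaching an audience list to each disclosure; the broadcast model that social media tends toward collapses those audiences into one:

```python
# A toy model of compartmentalization: each disclosure carries its intended
# audience, so different groups see different slices. The example data is invented.

disclosures = [
    {"info": "weekend hiking photos", "audience": {"friends", "family"}},
    {"info": "complaints about a deadline", "audience": {"friends"}},
    {"info": "called in sick on Monday", "audience": {"boss"}},
]

def visible_to(group: str) -> list[str]:
    """Return the slices of information the given group is allowed to see."""
    return [d["info"] for d in disclosures if group in d["audience"]]

print(visible_to("boss"))     # only what the boss was meant to see
print(visible_to("friends"))  # a fuller, less guarded picture

# The broadcast model social media pushes us towards: one audience sees
# everything, and the compartments disappear.
print([d["info"] for d in disclosures])
```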

At the same time, our social and legal norms have remained firmly in the analog age and focus on the gaze from without.

Then the lecture moved from data collection to data analysis. Today this is enabled by the fact that all users have signed away their rights via End-User License Agreements (EULAs). The EULA is based upon the illusion of a contract as an agreement between equals. However, most people do not read the license; if they read it, they don’t understand it; and even if they understand it, the license is apt to change without notice.

Today we have a mix of sur, sous & autoveillance. And again: regulation mainly focuses on surveillance. This is leading to an idea about the end of privacy. Maybe privacy is a thing of the past? Privacy has not always been important and it may once again fall into disrepute.

With the end of privacy – everyone may know everything about everyone else. We may have arrived at a type of Hive Mind. The hive mind is a concept from science fiction (for example Werewolves in Twilight, The Borg in Star Trek and the agents in The Matrix). An interesting addition to this line of thinking is the recent work by the Swedish philosopher Torbjörn Tännsjö who argues that it is information inequality that is the problem.

The problem with Tännsjö’s argument is that he is a safe person living in a tolerant society. He seems to really believe the adage: if you have done nothing wrong, you have nothing to fear. I seriously doubt that the stalked, the cyberbullied, the disenfranchised etc. will be happier with information equality – I think they would prefer the ability to hide their weaknesses and to choose when and where this information is disclosed.

The problem is that while we had a (theoretical) form of control over Big Brother we have no such control over corporations to whom we are less than customers:

If you are not paying for it, you’re not the customer; you’re the product being sold.

The lecture closed with reminders from Eli Pariser’s The Filter Bubble that with the personalization of information we will lose our identities and end up with a diet of informational junk food (the stuff we may want but should not eat too much of).

Then a final word of warning from Evgeny Morozov (The Net Delusion) to remind the audience that there is nothing inherently democratic about technology – our freedom and democracy will not be created, supported or spread just because we have iPods…

Great idea: Nordic Techpolitics

Thanks to the hard work of people like Bente Kalsnes the first Nordic Techpolitics conference will be held in Oslo on Friday 2 September. The event

…is a must-attend conference for everyone interested in how technology is changing politics, government and societies in the Nordic countries.

What:

Technological changes affect every aspects of society, and institutions and policy makers are struggling to catch up with the latest tools and possibilities. This conference will explore how we can use technology to improve politics and governance, increase participation and create smarter solutions in everyday life. We will also peek into some of the dark sides of how technology is changing society for worse.

Expect to learn about how transparency, innovation and collaboration are the driving forces for change in modern societies.

Check out the schedule here.