Design and Access to the City: Notes on a lecture

What is a city? Who gets to decide how it should be used, and by which groups? To address these questions I began with two examples intended to demonstrate the conflict; I purposely chose not to use large-scale examples.

The first example was from 2009, when the Harvard professor Henry Louis Gates Jr was arrested for breaking into his own home. Despite being able to identify himself and show that it was his own address, the police “…arrested, handcuffed and banged in a cell for four hours arguably the most highly respected scholar of black history in America.”

The second example was Oscar-winner Forest Whitaker being accused of shoplifting and patted down by an overzealous employee at the Milano Market on the Upper West Side in Manhattan. This latter example is interesting because the market apologized, saying of the employee: “He’s a decent man, I’m sure he didn’t mean any by wrong doing, he was just doing his job” and calling it “a sincere mistake”. An interesting thing about this is that if you search the term “Forest Whitaker deli”, most of the hits are for the apology and not for the action itself.

These two minor events would never have come to anyone's attention had they not happened to celebrities with the power to become part of the news. They demonstrate that even among sincere, well-meaning people there are groups thought to have less access to the city.

Ta-Nehisi Coates wrote an excellent op-ed called “The Good, Racist People”, which he ends by writing about the deli:

The other day I walked past this particular deli. I believe its owners to be good people. I felt ashamed at withholding business for something far beyond the merchant’s reach. I mentioned this to my wife. My wife is not like me. When she was 6, a little white boy called her cousin a nigger, and it has been war ever since. “What if they did that to your son?” she asked.

And right then I knew that I was tired of good people, that I had had all the good people I could take.

Following this introduction the lecture moved on to demonstrate the power of maps. I began with a description of the events leading up to Dr John Snow identifying the Broad Street pump as the source of the 1854 Soho cholera outbreak.

Dr Snow did not believe in the miasma (“bad air”) theory as the cause of cholera, and in order to prove that the cause was connected to the public water pump on Broad Street he began plotting the cholera deaths on a map. They formed a cluster around the pump.

With the help of this illustration he was able to show that the disease was local and to get the pump handle removed. Cholera cases decreased rapidly from that point.

The immediate cause of the outbreak was the introduction of human waste into the water system – most probably from a mother washing an infected child’s diapers. But the fundamental reason for the huge death toll was the lack of sewers and sanitation in this poorer area of the city. By insisting on the miasma theory, the city could claim to be free of responsibility.
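Snow’s method is easy to reconstruct today. A minimal sketch (my own, with invented coordinates rather than Snow’s actual data) simply assigns each recorded death to its nearest pump and counts the results:

```python
# Sketch of Snow's analysis: assign each cholera death to its nearest
# water pump and count deaths per pump. Coordinates are invented.
from collections import Counter
from math import dist

pumps = {"Broad Street": (0.0, 0.0), "Rupert Street": (3.0, 1.5)}
deaths = [(0.2, 0.1), (0.4, -0.3), (0.1, 0.5), (2.9, 1.4), (0.3, 0.2)]

def nearest_pump(point):
    """Name of the pump closest to a given death."""
    return min(pumps, key=lambda name: dist(point, pumps[name]))

counts = Counter(nearest_pump(d) for d in deaths)
print(counts)  # the deaths cluster around the Broad Street pump
```

A dense cluster assigned to a single pump is exactly what Snow’s dot map made visible to the naked eye.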

In the following part of the lecture I wanted to discuss how cities can maintain segregation and inequality of services even when the rules are presented as fair and unbiased. To do this I used a series of maps, created by Eric Fischer, demonstrating cities’ segregation by race and ethnicity.

Each map uses one dot for every 500 residents: red is White, blue is Black, green is Asian, orange is Hispanic, and yellow is Other. The images are licensed CC BY-SA. There are several maps of interest and they are well worth studying; here I will only present Chicago and Philadelphia:

Chicago: one dot for each 500 residents.

Philadelphia: one dot for each 500 residents.
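For those wondering how such dot-density maps are produced: the idea is to scatter one randomly placed dot per 500 residents of each group within each census tract. A minimal sketch (my own; the tract data is invented and this is not Fischer’s actual code):

```python
# Minimal dot-density sketch: one dot per 500 residents of each group,
# scattered randomly within its (invented) census tract's bounding box.
import random

tracts = [
    # (min_x, min_y, max_x, max_y, {group: residents})
    (0, 0, 1, 1, {"white": 4000, "black": 500}),
    (1, 0, 2, 1, {"black": 3500, "hispanic": 1000}),
]

PER_DOT = 500
dots = []  # (x, y, group) triples, ready to plot colored by group
for min_x, min_y, max_x, max_y, counts in tracts:
    for group, residents in counts.items():
        for _ in range(residents // PER_DOT):
            x = random.uniform(min_x, max_x)
            y = random.uniform(min_y, max_y)
            dots.append((x, y, group))

print(len(dots), "dots generated")
```

The segregation is not drawn in by hand; it emerges on its own because residents of different groups are concentrated in different tracts.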

As we are in Philadelphia, I also included a map of household income (see Demographics of Philadelphia).

Median household income in Center City and surrounding sections, 2000 Census.

At this point I moved the discussion to the distinction between public and private spaces, using definitions from Wikipedia:

A public space is a social space that is generally open and accessible to people. Roads (including the pavement), public squares, parks and beaches are typically considered public space.

To a limited extent, government buildings which are open to the public, such as public libraries are public spaces, although they tend to have restricted areas and greater limits upon use.

Although not considered public space, privately owned buildings or property visible from sidewalks and public thoroughfares may affect the public visual landscape, for example, by outdoor advertising.

As the distinction between private and public will be discussed in depth in a future lecture, I left this relatively vague and moved on to the problems of two of our rights as practiced in the “public space”.

Free speech: Not wanting to delve into the theory of this fascinating area, I jumped straight into the heart of the discussion with a quote from Salman Rushdie: “What is freedom of expression? Without the freedom to offend, it ceases to exist.” The point being that we don’t need protection to conform, but we do need it to evolve.

For this lecture I brought up outdoor advertising. This is an activity globally dominated by one corporation: Clear Channel Outdoor Holdings is probably the largest controller of outdoor communication in the world. They have the ability to decide which messages are transmitted and which are not. They have accepted advertising for fashion brands that transmit harmful body images, and even for brands accused of glorifying gang rape. For a look at this disturbing trend in advertising, see 15 Recent Ads That Glorify Sexual Violence Against Women.

The messages pushed out on billboards can arguably be seen as one-sided participation in the public debate. Changing the messages (adbusting), or even correcting willfully false information on billboards, is treated as vandalism. As a demonstration that something can be done, I showed a clip of a report about the Clean City Law, through which the city of São Paulo has forbidden outdoor advertising.

However, when Baltimore attempted to introduce a billboard tax in 2013, Clear Channel Outdoor argued that billboards should be protected as free speech under the First Amendment, and that the tax would therefore be a limitation of the corporation’s human rights.

In order to demonstrate the right of assembly, I used the demonstrations at Wall Street, where the desire to protest was supported (in theory) by Mayor Bloomberg:

“people have a right to protest, and if they want to protest, we’ll be happy to make sure they have locations to do it.”

Despite this sentiment, the parks of New York close (even the ones without gates) at dusk or 1 am. This prevents demonstrators from staying overnight. To circumvent this and continue the protests, the demonstrators went to the privately owned Zuccotti Park, where they could stay overnight. Eventually the protestors were dispersed when it was argued that the conditions were unsanitary.

Health hazard! by Seema Krishnakumar (CC BY-NC-SA)

The slides I used are here:


Technologies of Control & Desire: Notes from a lecture

The first class discussion & lecture of the Civic Media course began with the suitably vague title Technologies of Control and Desire. The purpose of this lecture was to introduce technology into the discussion of ethics and communication. The idea was to talk about the ways in which technologies have been seen both as a source of salvation and as a threat to the society in which they are introduced.

Unsurprisingly, in my eagerness I forgot to talk about the first slide, which was an advertisement for an early television remote control.

The invention of the remote control is often credited to Eugene Polley (1915–2012); it was an invention that had to happen. People didn’t want to have to stand up to change the channel. What we tend not to think about is that the invention of the remote control allowed for many changes. Thousands of channels would not be able to compete or exist without the remote control. Advertising, too, was forced to adapt once people could effortlessly change channels or lower the volume. Eugene Polley didn’t create the couch potato, but he certainly made life easier for this group.

The first section of the presentation was a very, very brief introduction to technology ethics in order to arrive at the discussion of whether or not we have free choice. Are we choosing to do what we do based on ethical decision making? Or maybe on chance? Or maybe something else? What is the role of technology in forming our worlds and “assisting” our choices?

I included a quote from the composer Stravinsky:

In America I had arranged with a gramophone firm to make some of my music. This suggested the idea that I should compose something whose length should be determined by the capacity of the record.

This is a nice illustration of how art is no longer necessarily a choice of the creator but rather a decision based on technological limitations. Keeping to the theme of technology, I also introduced the idea of technology enabling us to act – or, to put it more extremely, technology “forcing” us to act.

To illustrate this I showed them the web page for the iPod Classic, which has the line “Your top 40,000”. This refers to the capacity of the device to store 40,000 songs. But how would someone go about collecting so much music? Could it be done legally? Or does this tagline implicitly encourage piracy?
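A back-of-envelope calculation (my own arithmetic, assuming roughly $0.99 per track and four minutes per song – not Apple’s figures) shows how implausible the legal route is:

```python
# Back-of-envelope: what would filling an iPod Classic legally involve?
# Assumptions (mine, not Apple's): ~$0.99 per track, ~4 minutes per song.
songs = 40_000
cost = songs * 0.99                   # ~ $39,600 to buy every track
days_of_music = songs * 4 / 60 / 24   # ~ 111 days of continuous listening

print(f"Cost to buy: ${cost:,.0f}")
print(f"Continuous listening time: {days_of_music:.0f} days")
```

Almost $40,000 to fill a device costing a few hundred dollars: the tagline is selling a capacity that very few users could plausibly have paid for.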

From this point I introduced technological determinism and the idea of choice. Without denying that we always have a choice, I gave examples of social and technological mass choices that seem to indicate a high level of determinism.

From this position I pointed out that the way in which a technology is accepted depends on whether we see it as a threat or a benefit to our lifestyle. Using weird and wonderful advertisements and technical articles from the past, I demonstrated a utopian vision where farmers work from home, students learn without reading, and asthma is cured with cigarettes.

In order to demonstrate techno-pessimism I used quotes from Plato (against writing) and a snippet against paper books from Johannes Trithemius’s In Praise of Scribes (1494):

The word written on parchment will last a thousand years. The printed word is on paper. How long will it last? The most you can expect a book of paper to survive is two hundred years.

Referencing social media, I pointed to George M. Beard’s (1881) concern that newspapers and the telegraph create nervous disorders by exposing people to “the sorrows of individuals everywhere”.

In closing I reminded the audience of Postman’s comparison between Orwell’s and Huxley’s visions of the future: Orwell was concerned that we would be oppressed by a technology-wielding state; Huxley was concerned that we would all be sucked into the shallow pleasures offered by technology. I pointed out that it has become popular to say that Huxley has “won”, because social media seems to show people settling for shallow pleasures. However, this is not entirely true: states are increasingly using Orwellian means to control those who would engage in deeper discussions that threaten the state.

I finished off with a short video of Morozov’s work (which can be found online here) and a class discussion. The slides I used for the class are online here.


Police, Evidence and Facebook

One of the things I presented at IR13 was a 10-minute panel presentation on the regulation of the Internet by spaces such as Facebook. I wanted to use this all too brief time to enter into the discussion of a problem of police, policing, procedural rules and technological affordances – easy, right?

This is going to be a paper soon, but I need to get some of the ideas out so that I remember the order they are in, and so that people who know better can tell me how horribly wrong, ignorant and uninformed I am about the rules of evidence in different jurisdictions.

So the central argument is that computers have been used for a long time in police work and we have created safeguards to ensure that these computers and databases are not abused. In order to prevent abuse most countries have rules dictating when the police can search databases for information about someone.

Additionally, many countries have more or less developed rules surrounding undercover work, surveillance work, and the problem of what to do with excess information (i.e. information gained through surveillance but not relating to the investigation that warranted the surveillance). As you can tell, I need to do more reading here. These will all be in the article, but here I want to focus on a weakness in the rules governing which evidence may be presented to the courts. This weakness, I argue, may act as an encouragement to certain police officers to abuse their authority.

Facebook comes along, and many government bodies (not limited to the police) are beginning to use it as an investigative tool. The anecdotal evidence I have gathered suggests there are no limitations within the police on using Facebook to get better photos of suspects, to find suspects by “trawling” Facebook, and even to go undercover to become friends with suspects.

Now here is an interesting difference between Anglo-American law and Swedish law (I need to check whether this applies to most/all civil code countries): the Anglo-American system is much better at regulating this area in favor of individual rights. Courts routinely decide whether or not information gathered is admissible. If a police officer in America gathers information illicitly, it may not become part of the proceedings.

In Swedish law all information is admissible. The courts are deemed competent to handle the information and decide upon its value. If a police officer gathers information illicitly in Sweden, it is still admissible in court, but he may face disciplinary action from his employer.

So here’s the thing: if an officer decides he doesn’t like the look of me, he has no right to look me up in police databases. But there is no such limitation on going online.

He may then find out that some of my friends have criminal records (I have several activist friends with police records) or find politically incorrect, borderline illegal status updates I wrote while drunk (I have written drunk statements on Facebook).

This evidence may be enough to enable him to argue probable cause for a further investigation – or at least (and here is the crux of my argument) ensure that he will not be disciplined harshly in any future hearing (should such a hearing arise).

The way the rules are written, Facebook provides a tool that can be used to legitimize abuse of police power. And as the rules are written in Swedish law, they are much more open to such abuse.

Here are the slides I used for the presentation:

Is there an inverse Filter Bubble?

The whole concept of filter bubbles is fascinating. It’s the idea that services like Google & Facebook (and many more) live on collecting data about us. To do this more efficiently they need to make us happy: happy users keep using the service, ergo more data. To keep us happy they organize and filter information and present it to us in a pleasing way. Pleasing me requires knowing me. Or as Bernard Shaw put it: “Do not do unto others as you would that they should do unto you. Their tastes may be different.”

It’s this organizing that creates problems. At its most benign, Google attempts to provide me with the right answer for me. So if I search for the word “bar”, Google may, based on my previous interests (searches, mail analysis, YouTube views etc.), present me with drinking establishments rather than information about the unit of pressure. Maybe useful, maybe annoying. The problem occurs when we move on to more difficult concepts. The filter bubble argument is that this organization is in fact a form of censorship, as I will not be provided with a full range of information. (Some other terms of interest: echo chamber, daily me & daily you.)
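One way to picture the mechanism (a toy sketch of my own – real systems use far richer signals) is as re-scoring results against a profile of past interests:

```python
# Toy personalized ranking: re-score candidate results by how well they
# match a user's interest profile. A sketch of the idea only.

results = {
    "bar (drinking establishment)": {"nightlife", "food"},
    "bar (unit of pressure)": {"physics", "engineering"},
}

user_profile = {"nightlife", "music", "food"}  # inferred from past activity

def score(topics, profile):
    """Overlap between a result's topics and the user's interests."""
    return len(topics & profile)

ranked = sorted(results, key=lambda r: score(results[r], user_profile), reverse=True)
print(ranked)  # the drinking establishment outranks the unit of pressure
```

The point is that the physics result is not removed; it is merely ranked into invisibility, which is what makes the censorship argument interesting.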

Recently I have been experimenting with filter bubbles and have begun to wonder if there is also an “inverse” filter bubble on Facebook. The inverse filter bubble occurs when a social media provider insists on keeping a person or subject in your feed and advertising despite all user attempts to ignore the person or topic.

So far I am working with several hypotheses:

  1. The bubble is not complete
  2. The media provider wants me to include the person/topic into my bubble
  3. The media provider thinks or knows of a connection I do not recognize
  4. The person I am ignoring is interacting heavily with me (reading my posts, clicking my images, etc.)

This is a fascinating area and I need to set up some ways of testing the ideas; a minimal sketch of one such test follows. As usual, all comments and suggestions are appreciated.
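One simple test would be to log how often each person appears in my feed versus how often I actually interact with them; a persistently high appearance count despite zero interaction would be evidence of an inverse bubble. A minimal sketch (my own; the logging would have to be done by hand or by scraping, and the threshold is arbitrary):

```python
# Sketch of a test for an "inverse filter bubble": does someone keep
# appearing in my feed even though I never interact with them?
from collections import Counter

feed_appearances = Counter()  # person -> times seen in my feed
my_interactions = Counter()   # person -> times I clicked/liked/commented

def log_feed_item(person, interacted):
    """Record one sighting of a person in the feed."""
    feed_appearances[person] += 1
    if interacted:
        my_interactions[person] += 1

def inverse_bubble_candidates(min_seen=20):
    """People shown persistently despite my ignoring them entirely."""
    return [p for p, seen in feed_appearances.items()
            if seen >= min_seen and my_interactions[p] == 0]
```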

Tolerance is law

Enjoying the great feeling of seeing my latest article (together with Jan Nolin) in (digital) print! Please check out Tolerance is law: Remixing Homage, Parodying Plagiarism, which was published today in the open access journal SCRIPTed.

I would like to thank the reviewers for pointing out the flaws and helping us improve the article. But I still want more, so any and all comments are appreciated.

The abstract is boring but the article is (hopefully) much more interesting. Abstract:

Three centuries have passed since copyright law was developed to stimulate creativity and promote learning. The fundamental principles still apply, despite radical developments in the technology of production and distribution of cultural material. In particular the last decades’ developments and adoption of ICTs have drastically lowered barriers, which previously prevented entry into the production and distribution side of the cultural marketplace, and led to a widening of the base at which cultural production occurs and is disseminated. Additionally, digitalisation has made it economically and technically feasible for users to appropriate and manipulate earlier works as method of production.
The renegotiation of barriers and the increasing number of creators who publish their works has led to an increase in copyright violations and a pressure on copyright legislation. Many of these potential violations are tolerated, in some cases have become common practice, and created social norms. Others have not been so fortunate and the law has been rigidly enforced. This arbitrary application decreases the predictability of law and creates a situation where creation relies on the tolerance of the other copyright holders. This article analyses different cases of reuse that test the boundaries of copyright. Some of these are tolerated, others not. When regulation fails to capture the rich variation of creative reuse, it becomes difficult to predict which works will be tolerated. The analysis suggests that as copyright becomes prohibitive, social norms, power and the values of the copyright holder dominate and not law.

M Klang & J Nolin, “Tolerance is law: Remixing Homage, Parodying Plagiarism”, (2012) 9:1 SCRIPTed 7 http://script-ed.org/?p=476

Expressions in Code and Freedom: Notes from a lecture

Being invited to give an opening keynote is both incredibly flattering and intimidating. Addressing the KDE community at their Akademy is even more intimidating: I want to be light, funny, deep, serious, relevant, insightful and create a base for discussion. No wonder I couldn’t stop editing my slides until long after sundown.


The goal of my talk was to address the problem of the increased TiVo-ization of life, democracy and policy. Stated simply, TiVo-ization is following the letter of rules/principles while subverting them by changing what is physically possible (see Wikipedia on the origins and deeper meaning of the term).

In order to set the stage I presented earlier communications revolutions. Reading and writing are 6,000 years old, but punctuation took almost 4,000 years to develop, and empty spaces between words are only 1,000 years old. What we see here is that communication is a code that evolves: it gets hacked and improved. Despite its accessibility, it retained several bugs for millennia.

The invention of writing was a paradigm shift, but it is taken for granted. Printing, on the other hand, is seen as an amazing shift. In my view Gutenberg was the Steve Jobs of his day: he built on the earlier major shifts and worked on packaging – he gets much more credit for the revolution than he deserves.


Communication evolves nicely (telegraphs, radio, television) but the really exciting and cool stuff occurs with digitalization. This major shift is today easily overlooked, together with the Internet, and we focus on the way in which communication is packaged rather than the infrastructure that makes it possible.

The WWW is one of these incredible packages, and it was created with an openness ideal. We could transmit whatever we liked as long as we followed the protocol for communication. So far so good. Our communications followed the Four Freedoms of Free Software: communication was accessible, hackable and usable.


Unfortunately this total freedom inevitably creates an environment that invites convenience. Corporations provide this convenience, but at the cost of individual freedom and, in the long run, maybe at the cost of the WWW.

The risk to the WWW emerges from the paradox of our increasing use of the Web. Our increased use has brought with it a subtle shift in our linking habits. We are sending links to each other via social media on an unimaginable level. Sharing is the point of social media. The early discussion on blogging was all about user generated content. This is still important, but the focus of social media today is not on content generation but on sharing.

Focusing on sharing rather than content creation means we are creating less and linking less. Additionally, the links we share are all stored in social media sites. These are impermanent and virtually unsearchable – they are virtually unhistoric. Without the links of the past there is no web “out in the wild” – the web of the future will exist only within the manicured and tamed versions inside social network nature preserves (read more: Will the web fail?).

On an individual level, sharing has created a performance lifestyle: the need to publicize elements of your life in order to enhance its quality. (Read more: Performance Lifestyle & Coffee Sadism.)


This love of tech is built on the ideology that technology creates freedom, openness and democracy – in truth technology does not automatically do this. Give people technology and in all probability what will be created is more porn.

The problem is not that social media cannot be used for deeper things, but rather that the desire of the corporations controlling social media is to enable shallow sharing as opposed to deep interaction. Freedom without access to the code is useless. Without access to the code, what we have is the TiVo-ization of everyday life. If you want a picture, think of a park bench that cannot be used by homeless people.

Image from Yumiko Hayakawa’s essay Public Benches Turn ‘Anti-Homeless’ (I also recommend Design with Intent).

Park benches are specifically designed to prevent people from sleeping on them. In order to exclude an undesirable group of people from a public area, the democratic process must first define a group as undesirable and then obtain a consensus that this group is unwelcome. All this must be done while maintaining an air of democratic inclusion – a tricky, almost impossible task. But by buying a bench that cannot be slept on, you exclude those who need to sleep on park benches (the homeless) without ever needing to enter into a democratic discussion. Only homeless people are affected. This is the TiVo-ization of everyday life.

The more technology we embed into our lives, the less freedom we have. The devices are dependent on our interaction, as we are dependent upon them. All too often we adapt our lives to suit technology rather than the other way around.

In relation to social media, the situation becomes worse when government money is spent trying to increase participation via social networks. The problem is that there is little or no discussion concerning the downsides or consequences of these technologies for society. We no longer ask IF we should use laptops/tablets/social media in education but only HOW.

Partly this is due to the fear of exclusion. Democracy is all about inclusion, and pointing out that millions of users are “on” Facebook seems to be about inclusion. This is naturally a con. Being on/in social media is not democratic participation and will not democratize society. Why would you want to be Facebook friends with the tax authority? And how does this increase democracy?

The fear of lack of inclusion has led to schools teaching social media and devices instead of teaching Code and Consequences. By doing this, we are being sold the con that connection is democracy.


So what can we do about it?

We need to hack society to protect openness. Not openness without real function (TiVo-ization) but openness that cannot be subverted. This is done by forcing social media to follow law and democratic principles. If they cannot be profitable within this scenario – tough.

This is done by being very, very annoying:
1. Tell people what consequences their information habits will have.
2. Always ask who controls the ways in which our gadgets affect our lives. Are they accountable?
3. Read ALL your EULAs… Yes, I’m talking to you!
4. Always ask what your code will do to the lives of others. Always ask what your technology use will do to the lives of others…


The slides are here:

Cybercontrol 2.0

In a continuing discussion (original & response & reply) on the battles over Internet regulation, both Nicklas and I are taking points from the past and drawing lines into the future, while taking into consideration the changes created by new technologies. In his last post Nicklas summed it up beautifully:

But as technology becomes more and more powerful, the control over technology will slowly converge with control over people.

Actually, for me, control over technology has always been about control over people. Control over technology alone is unimportant. But Nicklas’ point is that our technology is creeping deeper into our lives and minds, and therefore control over this technology will control not only the bodies but also the minds of the populace.

The point where we disagree is where we are turning at the moment. For Nicklas:

The thing that sometimes worries me is that the alternative is not the status quo. It is not tinkering with the net as is. Because the net will continue to evolve and technology will make us even more powerful. The second time around the alternative to Barlow is not Lessig or even Wu&Goldsmith. It is Solzhenitsyn.

The thing is that Solzhenitsyn is too “easily” seen and eventually resisted. I fear a world where the alternative is Rupert Murdoch, an intelligent and powerful man who happily(?) feeds the world Fox News and other trash – knowing that by entertaining us with garbage he controls us and our incomes. Increasingly I think we do not need totalitarian states to control us; it’s much cheaper to feed us garbage, entertain us with varying levels of porn and gossip, and debase politics into punchlines. When the majority is busy with this, the minority of protesters will not have the power to engage us in major social change.

As an aside: I like the fact that online regulation retains the cyber prefix. It’s dated but ties nicely back to the period when the question was still hotly debated.

Regulation is everything, or power abhors a vacuum

Can we really control the Internet? This question has been around long enough to be deemed a golden oldie. But like a fungal infection it keeps coming back…

The early battle lines were drawn up in 1996. In an age where cyberspace was both a cool and correct term, lawyers like Johnson & Post wrote “Law And Borders: The Rise of Law in Cyberspace” and activists like John Perry Barlow wrote his epic “A Declaration of the Independence of Cyberspace”. These were the cool and heady days of the cyberlibertarians vs the cyberpaternalists. The libs believed that the web could not and should not be regulated, while the pats meant that it could and should. (I covered this in my thesis, pdf here.) Since then the terminology has changed but the sentiments remain the same.

I miss the term cyberspace. But more to the point, the “could/should” control argument continues. Nicklas has made an interesting point on the “could” part:

Fast forward twenty years. Bandwidth has doubled once, twice, three times. Devices capable of setting up ad hoc networks – large ones – are everywhere. Encrypted protocols are of state-defying strength and available to everyone. Tech savvy generations have grown up to expect access to the Internet not only as a given, but as unassailable. Networks like Anonymous has iterated, several times, and found topologies, communication practices and collaboration methods that defy tracking. The once expensive bottleneck technologies have become cheaper, the cost of building a network slowly approaching zero. The Internet has become a Internet that can be re-instantiated for a large swath of geography by a single individual.

So far so good. Not one internet but personal, portable, sharable spaces. The inability to control will lead to a free internet. But something feels wrong. Maybe it’s a cynical sadness from having heard this all before and seen it all go wrong? From his text I get images of Johnny Mnemonic and The Matrix: the hacker-hero gunslinger fighting the anonymous, faceless, oppressive society. It’s cool, but is it true?

The technology is (on some level) uncontrollable (without great oppression), but the point is that it does not have to be completely controlled. Control in society via technology is not about having 100% surveillance and pure systems that cannot be hacked. Control is about having reasonable amounts of failure in the system (system failures allow dissidents to believe they are winning).

The issue I have with pinning my hopes on the unregulatable internets is that they are – in social terms – an end in themselves. Who will connect to these nets? Obviously those who are in the know. You will connect when you know where and how to connect. This is a vital goal in itself, but it presents a problem for using these nets in wider social change: getting information across to a broader section of the population.

Civil disobedience is a fantastic tool. But if the goal is disobedience in itself, it is hardly justifiable as a group activity. If the goal is to bring about social change – i.e. for a minority to convince a majority – then the minority must communicate with the majority. If the nets are going to work, we need to find ways for the majority to connect to them. And if the majority can connect to them, then so can the oppressive forces of regulation.

On the field of pats & libs, I think I am what could be called a cynical libertarian. I am convinced of the social and individual power and value of the non-regulation of technology, but I don’t believe that politicians and lobbyists will leave technology alone. It’s an unfortunate truth: power abhors a vacuum.

Dangerous Bits of Information: Notes from a lecture

Last week was an intense week of lecturing, which means that I have fallen behind with other work – including writing up lecture notes. One of the lectures, Dangerous Bits of Information, was presented at the NOKIOS conference in Trondheim, Norway. Unfortunately I did not have much time in Trondheim, but what I saw was a wonderful sunny city with plenty of places to sit and relax by the river that flows through the center. But there was not much sitting outside on this trip.

The lecture was part of the session “Ny teknologi i offentlig forvaltning – sikkerhet og personvern” (New technology in public administration – security and privacy). In the same session were Bjørn Erik Thon, head of the Norwegian Data Protection Authority (Datatilsynet), and Storm Jarl Landaasen, Chief Security Officer Market Divisions, Telenor Norge.

My lecture began with an introduction to the ways in which many organizations fail to think about the implications of cloud technology. As an illustration I described the process surrounding my university’s adoption of a student email system. When the university came to the realization that it was not really excellent at maintaining a student email system, it decided to resolve this.

The resolution was not to let individuals choose their own systems. Instead a technical group (it was, after all, seen as a tech problem) was convened and presented with an either-or decision: go with Google or with Microsoft. The group chose Google out of a preference for the interface.

When I wrote a critique of this decision I was told that the decision was formally correct, since all the right people (i.e. representatives) were present at the meeting. My criticism was, however, not based on the formality of the process but on the way in which the decision was framed and the lack of information given to the students who would be affected by the system.

My critique is based on four dangers of cloud computing (especially its use by public bodies) and the lack of discussion. The first issue is one of surveillance. Swedish FRA legislation, which allows the state to monitor all communication, was passed with the explicit (though rather useless) understanding that only cross-border communication would be monitored. The exception is rather useless, as most Internet communication crosses borders even when both sender and receiver are within the same small state. But cross-border communication becomes even more certain when the email servers are based abroad – as those of Gmail are.

The second problem is that some of the communication between student and lecturer is sensitive data. Moreover, the lecturer in Sweden is a government official – a fact most of us often forget but should not. Now we have sensitive data being transferred to a third party. This is legal, since the users (i.e. the students) have all clicked to agree to the licensing agreements that Gmail sets. The problem is that the students have no choice (or very little, and an uninformed one – see below) but to sign away their rights.

The third problem is that nothing is really deleted. This is because – as the important quote states – “If you are not paying for it you are not the customer but the product being sold” – the business model is to collect, analyze and market the data generated by the users.

But for me the most annoying of the problems is the lack of interest public authorities have in protecting citizens from eventual integrity abuses arising from the cloud. My university, a public authority, happily delivered 40,000 new customers (and an untold future number, due to technology lock-in) to Google and, adding insult to injury, thanked Google for the privilege.

Public authorities should be more concerned about their actions in the cloud. People who choose to give away their data need information about what they are doing. Maybe they even need to be limited. But when public bodies force users to give away data to third parties – then something is wrong. Or as I pointed out: smart people do dumb things.

The lecture continued by pointing out that European privacy law has a mental age of pre-1995 (the year of the Data Protection Directive). But do you remember the advice we gave and took about integrity and the Internet in the early days? It contained things like:

  • Never reveal your identity
  • Never reveal your address
  • Never reveal your age
  • Never reveal your gender

Post-Facebook, points such as these become almost silly. Our technology has developed rapidly, but our society and law are still based on the older analogue norms – the focus in law remains on protecting people from an outer gaze looking in. This becomes less important when the spreading of information comes from us individuals and our friends.

The problem in this latter situation is that it is extremely difficult to create laws to protect against the salami method (i.e. where personal data is given away slice by slice instead of all at once).
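To see why, consider a toy illustration (my own, with invented data): each slice looks harmless on its own, but the combination can single a person out.

```python
# Toy illustration of the salami method: individually harmless slices
# of personal data combine into an identifying profile. Data is invented.

population = [
    {"age": 34, "city": "Gothenburg", "employer": "university", "name": "A"},
    {"age": 34, "city": "Gothenburg", "employer": "hospital",   "name": "B"},
    {"age": 51, "city": "Gothenburg", "employer": "university", "name": "C"},
]

def matches(disclosed):
    """Who in the population fits all the slices disclosed so far?"""
    return [p for p in population
            if all(p[k] == v for k, v in disclosed.items())]

disclosed = {}
for key, value in [("city", "Gothenburg"), ("age", 34), ("employer", "university")]:
    disclosed[key] = value  # one harmless slice at a time
    print(disclosed, "->", len(matches(disclosed)), "candidate(s)")
# After three slices only one person remains: identified, slice by slice.
```

No single disclosure here would trigger any protective rule, which is precisely what makes the salami method so hard to legislate against.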

At this stage I presented research carried out by Jan Nolin and myself on social media policies in local municipalities. We studied 26 policies, ranging from less than one page to 20 pages in length. The policies made some interesting points, but their strong analogue bias was clear throughout and there were serious omissions. They lacked clear definitions of social media, and they confused social media use during work with use during free time. More importantly, the policies did not address issues with the cloud or topics such as copyright. (Our work is published in To Inform or to Interact, that is the question: The role of Freedom of Information & Disciplining social media: An analysis of social media policies in 26 Swedish municipalities.)

Social media poses an interesting problem for regulators in that it is not a neutral infrastructure and it does not fall under the control of the state. The lecture closed with a discussion of the dangers of social media – in particular the increase in personalization, which leads to the Pariser filter bubble. In this scenario, organizations are tailoring information to suit our needs, or rather our wants. We are increasingly getting what we want rather than what we need. To take a food analogy: we want food with high fat and high sugar content, but this is not what our bodies need. The same applies to information. I may want entertainment, but I probably need less of it than I want. Overdosing on fatty information will probably harm me and make me less of a balanced social animal.

Is there an answer? Probably not. The only way to control this issue is to limit individuals’ autonomy. In much the same way as we have been forced to wear seat belts for our own safety, we may need to do the same with information. But this would probably be a political disaster for any politician attempting to suggest it.

Surveillance, Sousveillance & Autoveillance: Notes from a lecture

The theme of today’s lecture was online privacy; the lecture was entitled Surveillance, Sousveillance & Autoveillance.

The lecture opened with a brief discussion of the concept of privacy and the problem of finding a definition that many can agree upon. Privacy is a strange mix of natural human need and social construct. The former is not easily identifiable and the latter varies between cultures.

It is not enough to state that privacy may have a natural component – sure, put too many rats in a cage and they start to kill each other – you also need the technology to enable our affinity for privacy to develop.

For example, in At Home: A Short History of Private Life, Bill Bryson writes that the hallway was absolutely essential for private life. Without the hallway, people could not get to the room they needed without passing through all the other rooms. Our ideas of privacy were able to develop only after the “invention” of the hallway.

In order to settle on a definition I picked one off Wikipedia: privacy (from Latin privatus, “separated from the rest, deprived of something, esp. office, participation in the government”, from privo, “to deprive”) is the ability of an individual or group to seclude themselves or information about themselves and thereby reveal themselves selectively.

And to fix the academic discussion I quoted from Warren and Brandeis, “The Right to Privacy”, 4 Harvard Law Review 193 (1890):

The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world…solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress…

I like this quote because it also points to the effects of modern inventions on the loss of privacy.

In closing the lecture introduction I pointed out that privacy intervention consists of both data collection and data analysis – even though most of the history of privacy focused on the data collection side of the equation. In addition to this I broke down the data collection issue by pointing out that integrity consists of both information privacy (the stuff that resides in archives) and spatial privacy (for example surveillance cameras & the “right” to be groped at airports).

For the next section the lecture did a quick review of the role of technology in the privacy discussion. Without technology the ability to conduct surveillance is extremely limited. The early origins of tax records and collections like the Domesday Book were fundamental for controlling society. However, real surveillance did not begin until the development of technology such as the wonderful Kodak No. 1 in 1888. The advantage of this technology was that it provided a cheap, easy-to-use, portable ability to take photographs. Photographs could be snapped without the subject standing still. A whole new set of problems was instantly born. One such problem was kodakers (amateur photographers, see “‘Kodakers Lying in Wait’: Amateur Photography and the Right to Privacy in New York, 1885-1915”, American Quarterly, Vol. 43, No. 1, March 1991), who were suddenly able to take photographs of unsuspecting victims.

Surveillance: A gaze from above

The traditional concerns of surveillance deal with the abuse of state (or corporate) power. The state legitimizes its own ability to collect information about its citizens. The theoretical concern with surveillance is abuse by the Big Brother state, and foremost in this area is the work of Foucault and his development of Bentham’s Panopticon (the all-seeing prison). Foucault argued that in a surveillance society the surveilled, not knowing whether anyone was looking, would internalize their own control.

Sousveillance: A gaze from below

The concept of sousveillance was originally developed within computer science and “…refers to the recording of an activity by a participant in the activity typically by way of small wearable or portable personal technologies…” Wikipedia

But in the context of privacy the idea is that our friends and peers (especially tricky concepts in social media) will be the ones who collect and spread information about us online.

We are dependent upon our social circle. As Granovetter states: “Weak ties provide people with access to information and resources beyond those available in their own social circle; but strong ties have greater motivation to be of assistance and are typically more easily available.” (Granovetter, M.S. (1983). “The Strength of Weak Ties: A Network Theory Revisited”, Sociological Theory, Vol. 1, 201-233, p. 209.)

This ability of others to “out” us in social media will become more interesting with the development of facial recognition applications. These have already begun to challenge social and legal norms (Facebook facial recognition software violates privacy laws, says Germany – The Guardian 3 August 2011).

Autoveillance: A gaze from within

The final level is autoveillance. This is obviously not about literally watching ourselves, but an attempt to address the problems of our newfound joy in spreading personal information about ourselves.

Is this a form of exhibitionism that enables us to happily spread personal, and sometimes intimate, information about ourselves? Is this the modern version of narcissism?

Narcissism is a term with a wide range of meanings, depending on whether it is used to describe a central concept of psychoanalytic theory, a mental illness, a social or cultural problem, or simply a personality trait. Except in the sense of primary narcissism or healthy self-love, “narcissism” usually is used to describe some kind of problem in a person or group’s relationships with self and others. (Wikipedia)

We have always “leaked” information but most of the time we have applied different strategies of control. One such strategy is compartmentalization – which is the attempt to deliver different information to different groups. For example my mother, my wife, my co-workers, my friends and my children do not need to know the same stuff about me. But social media technology defies the strategy of compartmentalization.
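A toy model (my own) of why the flat feed breaks this strategy: each audience is meant to see its own slice of facts, and a single post to all audiences collapses the compartments into their union.

```python
# Toy model of compartmentalization: different audiences are meant to
# see different slices of personal information. A flat feed (one post,
# all audiences) collapses these compartments. Data is invented.

compartments = {
    "family":     {"holiday plans", "health worries"},
    "co-workers": {"holiday plans", "project gripes"},
    "friends":    {"party photos", "project gripes"},
}

# What each audience is meant to see:
for audience, facts in compartments.items():
    print(audience, "->", facts)

# What a single flat feed shows everyone: the union of all compartments.
flat_feed = set().union(*compartments.values())
print("everyone ->", flat_feed)
```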

At the same time, our social and legal norms have remained firmly in the analogue age and focus on the gaze from without.

Then the lecture moved from data collection to data analysis. Today this is enabled by the fact that all users have signed away their rights via End-User License Agreements (EULAs). The EULA is based upon the illusion of a contract as an agreement between equals. However, most people do not read the license; if they read it, they don’t understand it; and if they understand it, the license is apt to change without notice.

Today we have a mix of sur-, sous- and autoveillance. And again: regulation mainly focuses on surveillance. This is leading to an idea about the end of privacy. Maybe privacy is a thing of the past? Privacy has not always been important, and it may once again fall into disrepute.

With the end of privacy – everyone may know everything about everyone else. We may have arrived at a type of Hive Mind. The hive mind is a concept from science fiction (for example Werewolves in Twilight, The Borg in Star Trek and the agents in The Matrix). An interesting addition to this line of thinking is the recent work by the Swedish philosopher Torbjörn Tännsjö who argues that it is information inequality that is the problem.

The problem with Tännsjö’s arguments is that he is a safe person living in a tolerant society. He seems to really believe the adage: if you have done nothing wrong, you have nothing to fear. I seriously doubt that the stalked, the cyberbullied, the disenfranchised etc. will be happier with information equality – I think they would prefer the ability to hide their weaknesses and to choose when and where this information will be disclosed.

The problem is that while we had a (theoretical) form of control over Big Brother we have no such control over corporations to whom we are less than customers:

If you are not paying for it, you’re not the customer; you’re the product being sold.

The lecture closed with reminders from Eli Pariser’s The Filter Bubble that with the personalization of information we will lose our identities and end up on a diet of informational junk food (the stuff we may want but should not eat too much of).

Then a final word of warning from Evgeny Morozov (The Net Delusion) to remind the audience that there is nothing inherently democratic about technology – our freedom and democracy will not be created, supported or spread just because we have iPods…