Stallman lecture in Göteborg

Next week it's finally time for the annual FSCONS conference. This is the fifth year running and it keeps getting better all the time. This year brings an additional bonus: Richard Stallman will give a presentation at Runan in Gothenburg the day before the conference begins “for real”.

About the talk: Activities directed at “including” more people in the use of digital technology are predicated on the assumption that such inclusion is invariably a good thing. It appears so, when judged solely by immediate practical convenience. However, if we also judge in terms of human rights, whether digital inclusion is good or bad depends on what kind of digital world we are to be included in. If we wish to work towards digital inclusion as a goal, it behooves us to make sure it is the good kind.

Soon time for FSCONS 2011

It’s soon time for my favorite annual Free Culture event. This time it’s the 5th FSCONS conference, which will be held between the 11th and 13th of November. As usual it takes place in Gothenburg, Sweden.

FSCONS is the Nordic countries’ largest gathering for free culture, free software and a free society. The conference is organised yearly with 250-300 participants primarily from northern Europe. The main organiser is the Society for Free Culture and Software.

This year’s keynote speakers will be Richard Stallman and Christina Haralanova.

This year’s tracks are Building Together — Manufacturing Solidarity, Development for Embedded Systems, Development in Free Software Communities, Free Desktop Environments, Free Software in Politics, Human Rights and Digital Freedoms, Social Events, The Future of Money, and Universal Design — Aiming for Accessibility.

Since I am not a coder I am especially looking forward to attending Book scanning, proofreading, and advanced reuse; Bitcoin: decentralised currency; Policy issues around Free Software; Privacy or welfare – pick one: Cryptocurrencies, taxation, and the legibility of culture; WikiLeaks, Whistleblowing and the Mainstream Audience; Internet and Civil Rights In LATAM; and many more. Not to mention the great discussions and beer drinking nights.

Oh, and I will be giving the presentation Off the grid: Is anonymity possible?

Registration here.

Is user education a red herring?

The BBC podcast of The Media Show with Steve Hewlett is always interesting to listen to. The latest show I listened to (episode 28 September 2011) contained a segment on the recent changes to Facebook and what these may mean for privacy. Hewlett interviewed Facebook’s Christian Hernandez and attempted to get him to see the privacy effects of the new changes.

Basically the new changes mean that your friends will see what you are doing online – unless you opt out of showing those specific pages. In other words Facebook will happily announce to your “friends” that you have been looking at pages on weight loss (or whatever) and naturally let them draw their own conclusions from what they see of what you saw.

Hernandez was quick to stress the elements of user control over personal information. If you choose, you may opt out of showing friends the specific pages you are viewing right now. Additionally, if you forget, you can remove the pages after the fact.

My problem with the former is that I need to be aware that my Facebook friends are always looking over my shoulder – something I will easily forget. As for the latter – well, once my friends know what I have looked at, removing the links/pages/information is not effective… I have already outed myself.

When pressed for the reasoning behind the privacy-encroaching changes, Hernandez talked about Zuckerberg’s vision of a social net. When pressed further he returned to the concept of user control. Eventually he did accept that these changes will require user education.

In other words we, the users, need to learn new proactive, protective forms of behavior. The platform owner has washed its hands – it’s our problem now that they have given us the gift of freedom and control. Wonderful terms like freedom and control become red herrings in the world of data harvesting.

But if we are in danger from social media, shouldn’t we be able to expect the state to regulate somehow and protect us from our own behavior? It did so in areas such as smoking, seat belts and motorcycle helmets… Sure, there is a lot of interest in attempting to update privacy regulation from the pre-social-media age – but it’s tricky. Also, not everyone is in favor of regulation.

An example of this is Jeff Jarvis’ recent book Public Parts – Gordon Crovitz reviewed it in the Wall Street Journal:

“Congress is considering several privacy bills. But Mr. Jarvis calls it a ‘dire mistake to regulate and limit this new technology before we even know what it can do.’

“Privacy is notoriously difficult to define legally. Mr. Jarvis says we should think about privacy as a matter of ethics instead. We should respect what others intend to keep private, but publicness reflects the choices ‘made by the creator of one’s own information.’ The balance between privacy and publicness will differ from person to person in ways that laws applying to all can’t capture.”

Jarvis is right that it is complex to regulate what we do not fully understand, but this means that in the meantime we are losing our integrity rights every time the platform owners make changes – nominally to increase our freedom and control, but in reality to increase their control and profits. Let’s never forget what MetaFilter user blue_beetle wrote: “if you’re not paying for something, you’re not the customer; you’re the product being sold”.

Profiteers may act to protect access to raw material – not the rights of raw material.

Dangerous Bits of Information: Notes from a lecture

Last week was an intense week of lecturing, which means that I have fallen behind with other work – including writing up lecture notes. One of the lectures was Dangerous bits of information, presented at the NOKIOS conference in Trondheim, Norway. Unfortunately I did not have much time in Trondheim, but what I saw was a wonderful sunny city with plenty of places to sit and relax by the river that flows through the center. There was, however, not much sitting outside on this trip.

The lecture was part of the session “Ny teknologi i offentlig forvaltning – sikkerhet og personvern” (New Technology in Public Administration – Security and Privacy). In the same session were Bjørn Erik Thon, head of the Norwegian Data Protection Authority (Datatilsynet), and Storm Jarl Landaasen, Chief Security Officer Market Divisions, Telenor Norge.

My lecture began with an introduction to the way in which many organizations fail to think about the implications of cloud technology. As an illustration I told of the process that surrounded my university’s adoption of a student email system. When the university realized that it was not particularly good at maintaining a student email system, it decided to resolve this.

The resolution was not to let individuals choose their own systems. Instead a technical group (it was, after all, seen as a tech problem) was convened and presented with an either–or decision: go with Google or go with Microsoft. The group chose Google out of a preference for the interface.

When I wrote a critique of this decision I was told that the decision was formally correct, since all the right people (i.e. representatives) were present at the meeting. My criticism was, however, not based on the formality of the process but rather on the way in which the decision was framed and the lack of information given to the students who would be affected by the system.

My critique is based on four dangers of cloud computing (especially when adopted by public bodies) and the lack of discussion around them. The first issue is one of surveillance. The Swedish FRA legislation, which allows the state to monitor all communication, was passed with the explicit (though rather useless) understanding that only cross-border communication will be monitored. The exception is rather useless since most Internet communication crosses borders even when both sender and receiver are within the same small state. But cross-border communication becomes even more certain when the email servers are based abroad – as those of Gmail are.

The second problem is that some of the communication between student and lecturer is sensitive data. Additionally, a lecturer in Sweden is a government official – a fact most of us often forget but should not. Now we have sensitive data being transferred to a third party. This is legal since the users (i.e. the students) have all clicked that they agree to the licensing agreements that Gmail sets. The problem is that the students have no choice (or very little, and an uninformed one – see below) but to sign away their rights.

The third problem is that nothing is really deleted. This is because – as the important quote states – “If you are not paying for it you are not the customer but the product being sold” – the business model is to collect, analyze and market the data generated by the users.

But for me the most annoying of the problems is the lack of interest public authorities have in protecting citizens from eventual integrity abuses arising from the cloud. My university, a public authority, happily delivered 40,000 new customers (and an untold future number, due to technology lock-in) to Google and, adding insult to injury, thanked Google for the privilege.

Public authorities should be more concerned about their actions in the cloud. People who choose to give away their data need information about what they are doing. Maybe they even need to be limited. But when public bodies force users to give away data to third parties – then something is wrong. Or as I pointed out – smart people do dumb things.

The lecture continued by pointing out that European privacy law has a mental age of pre-1995 (the year of the Data Protection Directive). But do you remember the advice we gave and took about integrity and the Internet in the early days? It included things like:

  • Never reveal your identity
  • Never reveal your address
  • Never reveal your age
  • Never reveal your gender

Post-Facebook, points such as these seem almost silly. Our technology has developed rapidly, but our society and law are still based on the older analogue norms – the focus in law remains on protecting people from an outer gaze looking in. This becomes less important when the spreading of information comes from us individuals and our friends.

The problem in this latter situation is that it is extremely difficult to create laws to protect against the salami method (i.e. where personal data is given away slice by slice instead of all at once).
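To make the salami method concrete, here is a minimal sketch in Python. The slices, names and values are all invented for illustration; no real dataset or service is implied. Disclosures that look harmless one at a time merge into a surprisingly revealing profile:

    # Illustration of the salami method: invented example data, not a real dataset.
    # Each post or profile update leaks one harmless-looking slice of personal data.
    slices = [
        {"first_name": "Anna"},             # a comment signature
        {"city": "Gothenburg"},             # a check-in
        {"employer": "the university"},     # a profile field
        {"birthday": "14 March"},           # friends' congratulations
        {"gym_schedule": "Mon/Wed 18:00"},  # recurring status updates
    ]

    # No single slice identifies anyone; the merged profile very nearly does.
    profile = {}
    for s in slices:
        profile.update(s)
    print(profile)

A law can plausibly forbid demanding the whole profile at once; it is much harder to write one that forbids each individual slice.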

At this stage I presented research carried out by Jan Nolin and myself on social media policies in local municipalities. We studied 26 policies, ranging from less than one page to 20 pages long. The policies made some interesting points, but their strong analogue bias was clear throughout and there were serious omissions. They lacked clear definitions of social media, and they confused social media use during work with use during free time. More importantly, the policies did not address issues with the cloud or topics such as copyright. (Our work is published in To Inform or to Interact, that is the question: The role of Freedom of Information and Disciplining social media: An analysis of social media policies in 26 Swedish municipalities.)

Social media poses an interesting problem for regulators in that it is not a neutral infrastructure and it does not fall under the control of the state. The lecture closed with a discussion of the dangers of social media – in particular the increase in personalization, which leads to the Pariser filter bubble. In this scenario organizations tailor information to suit our needs, or rather our wants. We increasingly get what we want rather than what we need. To take a food analogy: we want food with high fat and high sugar content, but this is not what our bodies need. The same applies to information. I may want entertainment, but I probably need less of it than I want. Overdosing on fatty information will probably harm me and make me less of a balanced social animal.
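To illustrate the feedback loop, here is a toy simulation in Python. Everything in it is an assumption made up for the example – the topics, the 80% chance of clicking the top item – and it is not how any real recommender works; it only shows the loop in its simplest form:

    from collections import Counter
    import random

    TOPICS = ["celebrity gossip", "sports", "foreign policy", "science", "local news"]

    def recommend(clicks, n=3):
        # Rank topics by past clicks; topics never clicked sink out of the feed.
        return sorted(TOPICS, key=lambda t: clicks[t], reverse=True)[:n]

    def simulate(rounds=50):
        clicks = Counter()
        for _ in range(rounds):
            feed = recommend(clicks)
            # Most of the time the user clicks whatever the filter put on top.
            choice = feed[0] if random.random() < 0.8 else random.choice(TOPICS)
            clicks[choice] += 1
        return clicks

    random.seed(1)
    print(simulate())

Run it a few times: whichever topic gets an early lead is the one the loop amplifies – we are fed the want, not the need.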

Is there an answer? Probably not. The only way to control this issue is to limit individuals’ autonomy. In much the same way as we have been forced to wear seat belts for our own safety, we may need to do the same with information. But this would probably be a political disaster for any politician attempting to suggest it.

Surveillance, Sousveillance & Autoveillance: Notes from a lecture

The theme of today’s lecture was online privacy, and it was entitled Surveillance, Sousveillance & Autoveillance.

The lecture opened with a brief discussion of the concept of privacy and the problem of finding a definition that many can agree upon. Privacy is a strange mix of natural human need and social construct. The former is not easily identifiable and the latter varies between cultures.

It is not enough to state that privacy may have a natural component – sure, put too many rats in a cage and they start to kill each other – you also need the technology to enable our affinity for privacy to develop.

For example, in At Home: A Short History of Private Life, Bill Bryson writes that the hallway was absolutely essential for private life. Without the hallway people could not pass by other rooms to get to the room they needed – they had to pass through the other rooms. Our ideas of privacy could develop only after the “invention” of the hallway.

In order to settle on a definition I picked one off Wikipedia: privacy (from Latin: privatus “separated from the rest, deprived of something, esp. office, participation in the government”, from privo “to deprive”) is the ability of an individual or group to seclude themselves or information about themselves and thereby reveal themselves selectively.

And to anchor the academic discussion I quoted from Warren and Brandeis, “The Right to Privacy”, 4 Harvard Law Review 193 (1890):

The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world…solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress…

I like this quote because it also points to the effects of modern inventions on the loss of privacy.

In closing the lecture introduction I pointed out that privacy intervention consists of both data collection and data analysis – even though most of the history of privacy has focused on the data collection side of the equation. In addition I broke down the data collection issue by pointing out that integrity consists of both information privacy (the stuff that resides in archives) and spatial privacy (for example surveillance cameras and the “right” to be groped at airports).

In the next section the lecture gave a quick review of the role of technology in the privacy discussion. Without technology the ability to conduct surveillance is extremely limited. Early records, such as tax rolls and collections like the Domesday Book, were fundamental for controlling society. However, real surveillance did not begin until the development of technology such as the wonderful Kodak No. 1 in 1888. The advantage of this technology was that it provided a cheap, easy-to-use, portable ability to take photographs. Photographs could be snapped without the subject standing still. A whole new set of problems was instantly born. One such problem was kodakers (amateur photographers, see “‘Kodakers Lying in Wait’: Amateur Photography and the Right to Privacy in New York, 1885-1915”, American Quarterly, Vol. 43, No. 1, March 1991) who were suddenly able to take photographs of unsuspecting victims.

Surveillance: A gaze from above

The traditional concerns of surveillance deal with the abuse of state (or corporate) power. The state legitimizes its own ability to collect information about its citizens. The theoretical concern with surveillance is abuse by the Big Brother state, and foremost in this area is the work of Foucault and his development of Bentham’s Panopticon (the all-seeing prison). Foucault argued that in a surveillance society the surveilled, not knowing whether anyone is looking, internalize their own control.

Sousveillance: A gaze from below

The concept of sousveillance was originally developed within computer science and “…refers to the recording of an activity by a participant in the activity typically by way of small wearable or portable personal technologies…” Wikipedia

But in the context of privacy the idea is that our friends and peers (both especially tricky concepts in social media) will be the ones who collect and spread information about us online.

We are dependent upon our social circle, as Granovetter states: “Weak ties provide people with access to information and resources beyond those available in their own social circle; but strong ties have greater motivation to be of assistance and are typically more easily available.” (Granovetter, M.S. (1983). “The Strength of Weak Ties: A Network Theory Revisited”, Sociological Theory, Vol. 1, pp. 201-233, at p. 209.)

This ability of others to “out” us in social media will become more interesting with the development of facial recognition applications. These have already begun to challenge social and legal norms (Facebook facial recognition software violates privacy laws, says Germany – The Guardian 3 August 2011).

Autoveillance: A gaze from within

The final level is autoveillance. This is obviously not about us literally watching ourselves; rather, it attempts to address the problem of our newfound joy in spreading personal information about ourselves.

Is this a form of exhibitionism that enables us to happily spread personal, and sometimes intimate, information about ourselves? Is this the modern version of narcissism?

Narcissism is a term with a wide range of meanings, depending on whether it is used to describe a central concept of psychoanalytic theory, a mental illness, a social or cultural problem, or simply a personality trait. Except in the sense of primary narcissism or healthy self-love, “narcissism” usually is used to describe some kind of problem in a person or group’s relationships with self and others. (Wikipedia)

We have always “leaked” information, but most of the time we have applied different strategies of control. One such strategy is compartmentalization – the attempt to deliver different information to different groups. For example my mother, my wife, my co-workers, my friends and my children do not need to know the same things about me. But social media technology defies the strategy of compartmentalization.
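As a small illustration of what compartmentalization means in practice, here is a sketch in Python (the audiences and topics are invented for the example). Compartmentalized sharing answers “who may see this?”; a flat broadcast never asks:

    # Invented example: compartmentalized disclosure vs. one flat broadcast.
    audiences = {
        "family":    {"health news", "holiday plans"},
        "coworkers": {"project status", "holiday plans"},
        "friends":   {"party photos", "holiday plans"},
    }

    def may_see(audience, topic):
        # Compartmentalization: share a topic only with the group it is meant for.
        return topic in audiences[audience]

    print(may_see("coworkers", "party photos"))  # False: the compartment holds

    # A social-network broadcast ignores the compartments entirely:
    print(set().union(*audiences.values()))      # everyone sees everything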

At the same time, our social and legal norms have remained firmly in the analogue age and focus on the gaze from without.

Then the lecture moved from data collection to data analysis. Today this is enabled by the fact that all users have sold away their rights via End-User License Agreements (EULAs). The EULA is based upon the illusion of contracts as agreements between equals. In reality, most people do not read the license; if they read it, they don’t understand it; and even if they understand it, the license is apt to change without notice.

Today we have a mix of sur-, sous- and autoveillance. And again: regulation mainly focuses on surveillance. This is leading to an idea about the end of privacy. Maybe privacy is a thing of the past? Privacy has not always been important, and it may once again fall into disrepute.

With the end of privacy everyone may know everything about everyone else. We may have arrived at a type of hive mind – a concept from science fiction (for example the werewolves in Twilight, the Borg in Star Trek and the agents in The Matrix). An interesting addition to this line of thinking is the recent work of the Swedish philosopher Torbjörn Tännsjö, who argues that it is information inequality that is the problem.

The problem with Tännsjö’s argument is that he is a safe person living in a tolerant society. He seems to really believe the adage: if you have done nothing wrong, you have nothing to fear. I seriously doubt that the stalked, the cyberbullied, the disenfranchised etc. will be happier with information equality – I think they would prefer the ability to hide their weaknesses and to choose when and where this information is disclosed.

The problem is that while we had a (theoretical) form of control over Big Brother we have no such control over corporations to whom we are less than customers:

If you are not paying for it, you’re not the customer; you’re the product being sold.

The lecture closed with reminders from Eli Pariser’s The Filter Bubble that with the personalization of information we will lose our identities and end up with a diet of informational junk food (the stuff we may want but should not eat too much of).

Then a final word of warning from Evgeny Morozov (The Net Delusion) to remind the audience that there is nothing inherently democratic about technology – our freedom and democracy will not be created, supported or spread just because we have iPods…

Powerpoint and kittens

Not for the first time during a conference I sit thinking about Edward Tufte. He was a critic of slideshow presentations, and it is easy to understand why. Most of the time you find intelligent people failing to interact with their audience – not because the audience lacks the ability to comprehend, but because the technology used acts as an inhibitor rather than an enabler. The short version, from Wikipedia:

In his essay “The Cognitive Style of PowerPoint”, Tufte criticizes many properties and uses of the software:

  • It is used to guide and to reassure a presenter, rather than to enlighten the audience;
  • It has unhelpfully simplistic tables and charts, resulting from the low resolution of early computer displays;
  • The outliner causes ideas to be arranged in an unnecessarily deep hierarchy, itself subverted by the need to restate the hierarchy on each slide;
  • Enforcement of the audience’s linear progression through that hierarchy (whereas with handouts, readers could browse and relate items at their leisure);
  • Poor typography and chart layout, from presenters who are poor designers and who use poorly designed templates and default settings (in particular, difficulty in using scientific notation);
  • Simplistic thinking, from ideas being squashed into bulleted lists, and stories with beginning, middle, and end being turned into a collection of disparate, loosely disguised points. This may present an image of objectivity and neutrality that people associate with science, technology, and “bullet points”.

It is also easy to remember Edward Tufte from this wonderful illustration by Mark Goetz:

I have many kittens on my conscience – did Dante have a level for PowerPoint abusers?

Nomination period open for Nordic Free Software Award

About
The Nordic Free Software Award is given to people, projects or organisations in the Nordic countries that have made a prominent contribution to the advancement of Free Software. The award will be announced during FSCONS 2011 in Gothenburg.

Nominate
Send an email to award [AT] fscons.org (moderated mailing list) with the following information:

* Name of nominee
* Bio of nominee
* Website
* Contact info
* Motivation

The nomination period ends October 22.

Join the award committee
Send an email to award [AT] fscons.org (moderated mailing list) with the following information:

* Your name
* Your email
* Motivation why you want to join the award committee

List of nominees 2011
Will be presented in October

Previous Award winners
* 2010 Bjarni Rúnar Einarsson (more info)
* 2009 Simon Josefsson and Daniel Stenberg (more info)
* 2008 Mats Östling (more info)
* 2007 SkoleLinux (more info)

Increased surveillance is obviously cheaper than social change

“Everyone watching these horrific actions will be struck by how they were organized via social media. Free flow of information can be used for good. But it can also be used for ill. And when people are using social media for violence we need to stop them. So we are working with the police, the intelligence services and industry to look at whether it would be right to stop people communicating via these Web sites and services when we know they are plotting violence, disorder and criminality.”

Prime Minister David Cameron of Britain addressing Parliament during a special debate on the UK riots.

(via BoingBoing)

Increased surveillance is obviously cheaper than social change. Riots are bad, but they are an incredibly potent symbol that something is wrong in society. So far the focus has been on “bad kids”, “bad parents” and “bad social media”. It’s all about blaming individuals and preventing the possibility of rioting – nothing about the need to create a society where people don’t want to riot.

Death, Internet & Law PhD

This is so cool! Almost makes me want to do a second PhD… More info here.

PhD Studentship in Law

University of Strathclyde – Faculty of Humanities and Social Sciences – School of Law – Legal Aspects of Transmission of Digital Assets on Death

The School of Law in the University of Strathclyde invites applications for a PhD studentship which will research how the law regulates the transmission of digital assets on death, including notions of access, control, propertisation, and ownership. These assets might include: Facebook profiles, photos on Flickr, tweets, virtual assets in online game worlds such as Second Life, e-money, blog texts, eBay trading accounts, etc. This is a novel area where the student will be expected to research independently into appropriate areas of private law (eg property, succession, probate, contract) as well as intellectual property law, personality law and privacy law. A background in technology law is not essential, nor is a technology qualification, but an interest in the information society is probably essential.

Applicants from any jurisdiction (including non-UK EU jurisdictions) are welcomed, but English law will most likely form one of the jurisdictions of the study. Applicants should hold a first or upper second class Honours degree or equivalent in an appropriate discipline. A Masters qualification may be helpful. The studentship is funded by the Horizon Digital Economy Research Hub (https://www.horizon.ac.uk/), a major interdisciplinary centre for the study of the Internet and ubiquitous computing funded by the RCUK Digital Economy programme and based at Nottingham University; the successful candidate will be based within the expanding Centre for Internet Law and Policy at Strathclyde Law School, but will have opportunities to participate in Horizon activities. The student will be supervised by the Director of CILP, Professor Lilian Edwards.

Applicants should submit, by SEPTEMBER 16 2011, a full CV, two academic references, evidence of academic qualifications to date and a covering letter detailing interest in the area of research to:

Janet Riddell (Horizon Digital Economy Scholarship), Graduate School Manager, Faculty of Humanities and Social Science, Room LT205, Livingstone Tower, 26 Richmond Street, Glasgow, G1 1XH

Or by e-mail to: hass-postgrad@strath.ac.uk

Successful applicants will have their fees (at home/EU rates only, sadly) waived for three years, together with an annual maintenance award for three years of £13,590. The scholarship is for one year in the first instance and, subject to satisfactory progress, will normally be renewed up to a maximum of a further two years.

Visit www.strath.ac.uk/postgrad for general information on postgraduate research study at the University of Strathclyde and http://www.strath.ac.uk/humanities/courses/law/courses/lawbyresearch/ for further information on research degrees in the Law School.

Informal enquiries may be addressed to: lilian.edwards@strath.ac.uk