What is platform literacy?

Anyone who is trying to think about platforms and their impact should be following Mark Carrigan. His Platform Capitalism Reading Group includes key readings and is a discussion I wish I could have attended. And then there is this simple text, What is platform literacy?, written in connection with a call for reading materials. This is why it is such a fascinating question:

“In the last couple of years, I’ve found myself returning repeatedly to the idea of platform literacy. By this I mean a capacity to understand how platforms shape the action which takes place through them, sometimes in observable and explicit ways but usually in unobservable and implicit ones. It concerns our own (inter)actions and how this context facilitates or frustrates them, as well as the unseen ways in which it subtly moulds them and the responses of others to them.”

Nothing is more beautiful (and frustrating) for an academic than to read a simple paragraph that nails the questions rattling around in your own mind. Thanks Mark!

The goal of platform literacy is to be able to identify the subtle ways in which actions are directed, controlled, regulated and censored in the online environment. To mangle the great “Whereof one cannot speak, thereof one must be silent.” (Wittgenstein) – We cannot protest that which prevents us from protesting.

Seduction rather than policing

“The great majority of people – men as well as women – are today integrated through seduction rather than policing, advertising rather than indoctrinating, need-creation rather than normative regulation.”

Bauman, Z. (1998) On Postmodern Uses of Sex p.23

Being Passive Aggressive on Facebook

How do you know when you’ve made a faux pas on a social network? If you let slip a politically incorrect comment in real life you should be able to tell that you have crossed a line by the pained expressions and the nervous squirms – but how do people squirm on social media?

This social squirming is important. It is a way in which we are schooled and taught the social boundaries of our world. Naturally some overly boorish person may actually say “we don’t accept that behavior here” but this is really unnecessary. We are usually good at picking up cues, the squirms are enough.

So how do people squirm on Facebook? Well, they do so in the most passive aggressive way. Rarely do you find the boorish reproachful comment. Most often what we are met with is silence. Sure, offscreen silence is also a passive aggressive strategy, but online it is the most commonly used one.

Try it! Say something incorrect on FB and you will be frozen out of the social circle. Keep it up and people may begin to block you. Of course this means that the time nobody liked your post… it could have been because you crossed a social line.

Controlled by the path of least resistance

Technological systems leave their mark on the way in which we live our lives. An obvious example is this fascinating nighttime photo of North and South Korea taken from the International Space Station. It’s obvious because the two countries are separated by access to the basic supply of electricity.

North Korea is almost completely dark compared to neighboring South Korea and China. The darkened land appears as if it were a patch of water joining the Yellow Sea to the Sea of Japan. Its capital city, Pyongyang, appears like a small island, despite a population of 3.26 million (as of 2008). The light emission from Pyongyang is equivalent to the smaller towns in South Korea.

Astronaut photograph ISS038-E-38300 was acquired on January 30, 2014, with a Nikon D3S digital camera using a 24 millimeter lens, and is provided by the ISS Crew Earth Observations Facility and the Earth Science and Remote Sensing Unit, Johnson Space Center.

An even more brilliant (bad pun) illustration is the image of Berlin by night taken from the International Space Station. The photo, taken by Colonel Chris Hadfield, shows that the city still carries with it the heritage of the division. The Berlin Wall came down in 1989 and since then Berlin has been rapidly unifying and developing. Despite this, the East-West divide can be seen in the color of the street lights.

Colonel Chris Hadfield’s photograph of Berlin at night shows a divide between the whiter lights of former west Berlin and the yellower lights of the east. Photograph: Nasa

The technological systems follow the political and administrative lines of the past and cannot be removed as easily as the wall which divided them. The Guardian explains the different colors:

Daniela Augenstine, of the city’s street furniture department, says: “In the eastern part there are sodium-vapour lamps with a yellower colour. And in the western parts there are fluorescent lamps – mercury arc lamps and gas lamps – which all produce a whiter colour.” The western Federal Republic of Germany long favoured non-sodium lamps on the grounds of cost, maintenance and carbon emissions, she says.

These examples are traces of systems that have failed (or are going to fail). They work on the principle that by controlling users with force they can maintain power. In the end, systems like these will collapse because the effort of keeping control outstrips the ability to control. Real control is efficient when (1) the users internalize the surveillance/supervision (Foucault: Panopticon) AND (2) the users believe that they are acting out of their own convenience and desire.

What fascinates me with these examples is the way in which our technology use marks our surroundings. An obvious example of this is the desire path: the line that appears in the snow, or the bare track worn into the grass, that shows how the world is really used by people as opposed to how the designer believed the technology would be used.

The difference between expected use and actual use. Technology use leaves its traces in our consumption of, and adaptation to, the technology upon which we rely. However, it works both ways. By controlling the technology we rely on, we, the users, can be led to believe that we desire the features of control that are provided.

An example of this is the way in which the popularity of the iPhone is in no way diminished by the fact that, from a usability point of view, the Android operating system is far more adaptable to different needs. Or the way in which the collection of data from technology users is all but ignored by those same users in their desire for convenience.

If the iTunes/iPhone ecosystem is to be compared to a silo keeping its users locked in, then it can only succeed if (1) the users can be convinced that they are happy with the surveillance/control (Foucault: Panopticon) AND (2) any other alternative would be less convenient. If (1) fails then users would happily jailbreak their devices (on a much larger scale than now) and if (2) fails then the system will eventually collapse under its own weight when users realize that life is better on the other side of the wall.

We will all be controlled by the path of least resistance.

 

Expressions in Code and Freedom: Notes from a lecture

Being invited to give an opening keynote is both incredibly flattering and intimidating. Addressing the KDE community at their Akademy is even more intimidating: I want to be light, funny, deep, serious, relevant, insightful and create a base for discussion. No wonder I couldn’t stop editing my slides until long after sundown.

Tweet: doubly useless

The goal of my talk was to address the problem of the increased TiVo-ization of life, democracy and policy. Stated simply, TiVo-ization is following the letter of rules/principles while subverting them by changing what is physically possible (see Wikipedia on the origins and deeper meaning).
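To make the pattern concrete, here is a minimal sketch (my own, not from the talk) of the hardware trick that gave TiVo-ization its name: the letter of a free software licence is respected, but the device only runs vendor-signed builds. The key, the function names and the use of an HMAC in place of a real public-key signature are all illustrative simplifications.

```python
# Hypothetical sketch: the source code may be free (the letter of the licence
# is respected), but the device only boots images signed with a key that the
# vendor keeps to itself, so modifying the code achieves nothing in practice.
import hashlib
import hmac

VENDOR_KEY = b"held-only-by-the-manufacturer"  # illustrative stand-in for a signing key

def sign_firmware(image: bytes) -> bytes:
    """Vendor-side: sign an approved firmware build."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def boot(image: bytes, signature: bytes) -> None:
    """Device-side: refuse to run anything that lacks a valid vendor signature."""
    expected = hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        raise SystemExit("Modified firmware rejected: the freedom exists only on paper.")
    print("Booting vendor-approved firmware")

official = b"firmware v1.0 (source published under a free licence)"
patched = b"firmware v1.0 + user modification"

boot(official, sign_firmware(official))  # runs
boot(patched, sign_firmware(official))   # signature no longer matches; device refuses
```

The second call fails even though the user has every legal right to modify the code: the restriction has simply been moved from the rulebook into the hardware.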

In order to set the stage I presented earlier communications revolutions. Reading and writing are 6000 years old, but punctuation took almost 4000 years to develop and empty spaces between words are only 1000 years old. What we see here is that communication is a code that evolves: it gets hacked and improved. Despite its accessibility it retained several bugs for millennia.

The invention of writing was a paradigm shift, but it's taken for granted. Printing, on the other hand, is seen as an amazing shift. In my view Gutenberg was the Steve Jobs of his day: he built on the earlier major shifts and worked on packaging – he gets much more credit for the revolution than he deserves.

Tweet: Gutenberg

Communication evolves nicely (telegraphs, radio, television) but the really exciting and cool stuff occurs with digitalization. This major shift is today easily overlooked, together with the Internet, and we focus on the way in which communication is packaged rather than the infrastructure that makes it possible.

The WWW is one of these incredible packages, and it was created with an openness ideal. We could transmit whatever we liked as long as we followed the protocol for communication. So far so good. Our communications follow the Four Freedoms of Free Software: communication is accessible, hackable and usable.
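As a small aside of my own: the openness is visible at the protocol level, where nothing checks who you are, only that you speak the agreed format. The sketch below sends a bare HTTP request by hand; the hostname is just a placeholder.

```python
# A minimal sketch of protocol-level openness: nothing in HTTP itself cares
# who you are, only that you follow the agreed format. Hostname is illustrative.
import socket

def fetch(host: str, path: str = "/") -> str:
    """Speak plain HTTP/1.1 over a raw socket and return the raw response."""
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    with socket.create_connection((host, 80)) as sock:
        sock.sendall(request.encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# Anyone who formats the request correctly gets an answer.
print(fetch("example.com").splitlines()[0])  # the status line
```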

Tweet: Stallman

Unfortunately this total freedom inevitably creates the environment that invites convenience. Here corporations provide this convenience but at the cost of individual freedom and, in the long run, maybe at the cost of the WWW.

The risk to the WWW emerges from the paradox of our increasing use of the Web. Our increased use has brought with it a subtle shift in our linking habits. We are sending links to each other via social media on an unimaginable level. Sharing is the point of social media. The early discussion on blogging was all about user generated content. This is still important, but the focus of social media today is not on content generation but on sharing.

Focusing on sharing rather than content creation means we are creating less and linking less. Additionally, the links we share are all stored in social media sites. These are impermanent and virtually unsearchable – they are, in effect, unhistoric. Without the links of the past there is no web “out in the wild” – the web of the future will exist only in the manicured and tamed versions kept within social network nature preserves (read more: Will the web fail?)

On an individual level the sharing has created a performance lifestyle. This is the need to publicize elements of your life in order to enhance the quality of it. (Read more Performance Lifestyle & Coffee Sadism).

Tweet: coffee

This love of tech is built on the ideology that technology creates freedom, openness and democracy – in truth technology does not automatically do this. Give people technology and in all probability what will be created is more porn.

The problem is not that social media cannot be used for deeper things, but rather that the desire of the corporations controlling social media is to enable shallow sharing as opposed to deep interaction. Freedom without access to the code is useless. Without access to the code what we have is the TiVo-ization of everyday life. If you want a picture, then think of a park bench that cannot be used by homeless people.

image from Yumiko Hayakawa's essay Public Benches Turn ‘Anti-Homeless’ (I also recommend Design with Intent)

These park benches are specifically designed to prevent people from sleeping on them. In order to exclude an undesirable group of people from a public area the democratic process must first define a group as undesirable and then obtain a consensus that this group is unwelcome. All this must be done while maintaining the air of democratic inclusion – it’s a tricky, almost impossible task. But by buying a bench which you cannot sleep on, you exclude those who need to sleep on park benches (the homeless) without even needing to enter into a democratic discussion. Only homeless people are affected. This is the TiVo-ization of everyday life.

The more technology we embed into our lives the less freedom we have. The devices are dependent on our interaction just as we are dependent upon them. All too often we adapt our lives to suit technology rather than the other way around.

In relation to social media the situation becomes worse when government money is spent trying to increase participation via social networks. The problem is that there is little or no discussion concerning the downsides or consequences of technologies on society. We no longer ask IF we should use laptops/tablets/social media in education but only HOW.

Partly this is due to the fear of exclusion. Democracy is all about inclusion, and pointing out that millions of users are “on” Facebook seems to be about inclusion. This is naturally a con. Being on/in social media is not democratic participation and will not democratize society. Why would you want to be Facebook friends with the tax authority? And how does this increase democracy?

This fear of exclusion has led to schools teaching social media and devices instead of teaching Code and Consequences. By doing this, we are being sold the con that connection is democracy.

Tweet: Gadgets

So what can we do about it?

We need to hack society to protect openness. Not openness without real function (TiVo-ization) but openness that cannot be subverted. This is done by forcing social media to follow law and democratic principles. If they cannot be profitable within this scenario – tough.

This is done by being very, very annoying:
1. Tell people what consequences their information habits will have.
2. Always ask who controls the ways in which our gadgets affect our lives. Are they accountable?
3. Read ALL your EULAs… Yes, I’m talking to you!
4. Always ask what your code will do to the lives of others. Always ask what your technology use will do to the lives of others…

 

The slides are here:

Regulation is everything, or power abhors a vacuum

Can we really control the Internet? This question has been around long enough to be deemed a golden oldie. But like a fungal infection it keeps coming back…

The early battle lines were drawn up in 1996. In an age when cyberspace was both a cool and correct term, lawyers like Johnson & Post wrote “Law And Borders: The Rise of Law in Cyberspace” and activists like John Perry Barlow wrote his epic “A Declaration of the Independence of Cyberspace”. These were the cool and heady days of the cyberlibertarians vs the cyberpaternalists. The libs believed that the web could not and should not be regulated, while the pats argued that it could and should. (I covered this in my thesis, pdf here.) Since then the terminology has changed but the sentiments remain the same.

I miss the term cyberspace. But more to the point, the “could/should” control argument continues. Nicklas has written an interesting piece on the could part:

Fast forward twenty years. Bandwidth has doubled once, twice, three times. Devices capable of setting up ad hoc networks – large ones – are everywhere. Encrypted protocols are of state-defying strength and available to everyone. Tech savvy generations have grown up to expect access to the Internet not only as a given, but as unassailable. Networks like Anonymous has iterated, several times, and found topologies, communication practices and collaboration methods that defy tracking. The once expensive bottleneck technologies have become cheaper, the cost of building a network slowly approaching zero. The Internet has become a Internet that can be re-instantiated for a large swath of geography by a single individual.

So far so good. Not one internet but personal, portable, sharable spaces. The inability to control will lead to a free internet. But something feels wrong. Maybe it’s the cynical sadness of having heard all this before and seen it all go wrong? From his text I get images of Johnny Mnemonic and The Matrix: basically the hacker-hero gunslinger fighting the anonymous, faceless, oppressive society. It’s cool, but is it true?

The technology is (on some level) uncontrollable (without great oppression) but the point is that it does not have to be completely controlled. The control in society via technology is not about having 100% surveillance and pure systems which cannot be hacked. Control is about having reasonable amounts of failure in the system (System failures allow dissidents to believe they are winning).

The issue I have with pinning my hopes on the unregulatable internets is that they are – in social terms – an end in themselves. Who will connect to these nets? Obviously those who are in the know. You will connect when you know where & how to connect. This is a vital goal in itself but it presents a problem for using these nets in wider social change: getting information across to a broader section of the population.

Civil disobedience is a fantastic tool. But if the goal is disobedience in itself it is hardly justifiable in a group. If the goal is to bring about social change – i.e. for a minority to convince a majority – then the minority must communicate with the majority. If the nets are going to work we need to find ways for the majority to connect to them. If the majority can connect to them then so can the oppressive forces of regulation.

On the field of pats & libs I think I am what could be called a cynical libertarian. I am convinced of the social and individual power and value of the non-regulation of technology, but I don’t believe that politicians and lobbyists will leave technology alone. It’s an unfortunate truth: power hates a vacuum.

Democracy cannot ignore technology: Notes from a lecture

Not really sure if this should be called a lecture as it was part of a panel presentation where we were allotted 15 minutes each and then questions. The setting was interesting as it was part of the Swedish Parliament’s annual conference about the future, and the people asking the questions were all politicians. So for my 15 minutes (of fame?) I chose to expand on the ill effects of politics ignoring technology (or taking it as a stable, neutral given).

The presentation began with a quote from Oliver Wendell Holmes Jr.

It cannot be helped, it is as it should be, that the law is behind the times.

What I wanted to do was to explain that the law has always been seen as playing catch-up. This is not a bug in the law, it is a feature of the law. Attempting to create laws that are ahead of their time would be wasteful, unpopular and quite often full of errors about what we think future problems will be. I wanted to include a quote from Niels Bohr:

Prediction is very difficult, especially about the future.

But time was short and I needed to avoid meandering down interesting – but unhelpful – alleyways.

Instead I reminded the audience that many of our fundamental rights and freedoms are 300 years old and, despite being updated, they are prone to becoming increasingly complex to manage, or even outdated, when the basic technological realities have changed. This was the time of Voltaire, who is today mainly famous for saying:

I disapprove of what you say, but I will defend to the death your right to say it

(Actually he never said this. The words were put in his mouth by the later writer Evelyn Beatrice Hall. But let’s not let the truth get in the way of a good story.)

The period saw the development of fantastic modern concepts: democracy, free speech and autonomy may all be seen as products of the Enlightenment. They remain core values in spite of the fact that our technological developments have totally changed the world in which we live. For the sake of later comparison I added that the killer app of the time was the quill. Naturally there were printing presses, but as these are not personal communication devices they provide easier avenues of control for states. In other words, developing concepts of free speech must be seen in the light of what individuals had the ability to do.

As I had been asked to talk about technology and society I chose to exemplify with the concept of copyright, which was launched by the Statute of Anne in 1710. In Sweden copyright was introduced into law in the 19th century and the most recent thorough re-working of the law was in the 1950s, with the modern (and present) Swedish Copyright Act entering the books in 1960. The law has naturally been amended since then but has received no major reworking. The killer app of the 1960s? Well, it probably was the pill – but that’s hardly relevant, so I looked at radio and TV. The interesting thing about these is that they are highly regulated and controlled mass mediums. While they are easy to access for the consumer, they are hardly platforms of speech for a wider group of people.

Moving along: the Internet, the web, social media and the massive increase in personal devices have created a whole new ball game. They have created entirely new forms of social interaction among citizens. The one-to-many mass medium is no longer the monopoly player. So what should the regulator be aware of? Well, they must take into consideration the ways in which new technologies are changing actual social interaction on many levels, and also the changes in fundamental social values that are coupled with our expectations of the justice system.

The problem is that all too often regulators (as they are ordinary people) tend to take their own user experience as their starting point. In order to illustrate what I meant I included one of my favorite Douglas Adams quotes (it’s from The Salmon of Doubt):

Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.

Therefore it is vital not to ignore the role of technology, or to underestimate its effects. Looking at technology as simply technology – i.e. as a neutral tool that does not affect us – is incredibly dangerous. If we do not understand this then we will be ruled by technology. Naturally not by technology itself, but by those who create and control technology. Law and law makers will become less relevant to the way in which society works. Or does not work.

In order to illustrate this, I finished off with a look at anti-homeless technology – mainly things like park benches which are specifically designed to prevent people from sleeping on them. In order to exclude an undesirable group of people from a public area the democratic process must first define a group as undesirable and then obtain a consensus that this group is unwelcome. All this must be done while maintaining the air of democratic inclusion – it’s a tricky, almost impossible task. But by buying a bench which you cannot sleep on, you exclude those who need to sleep on park benches (the homeless) without even needing to enter into a democratic discussion.

If this is done with benches, then what power lies in the control of a smart phone?

Here are the slides used with the lecture.

Dangerous Bits of Information: Notes from a lecture

Last week was an intense week of lecturing, which means that I have fallen behind with other work – including writing up lecture notes. One of the lectures was Dangerous bits of information, presented at the NOKIOS conference in Trondheim, Norway. Unfortunately I did not have much time in the city of Trondheim, but what I saw was a wonderful, sunny city with plenty of places to sit and relax by the river that flows through the center. But there was not much sitting outside on this trip.

The lecture was part of the session “Ny teknologi i offentlig forvaltning – sikkerhet og personvern” (New Technology in Public Administration – Security and Privacy). In the same session were Bjørn Erik Thon, Head of the Norwegian Data Protection Authority, and Storm Jarl Landaasen, Chief Security Officer Market Divisions, Telenor Norge.

My lecture began with an introduction to the way in which many organizations fail to think about the implications of cloud technology. As an illustration I told of the process that surrounded my university’s adoption of a student email system. When the university came to the realization that it was not really excellent at maintaining a student email system, it decided to resolve this.

The resolution was not a decision to let individuals choose their own system. Instead a technical group (it was, after all, seen as a tech problem) was convened and presented with an either–or decision: whether to go with Google or with Microsoft. The group chose Google out of a preference for the interface.

When I wrote a critique of this decision I was told that the decision was formally correct since all the right people (i.e. representatives) were present at the meeting. My criticism was, however, not based on the formality of the process but rather on the way in which the decision was framed and the lack of information given to the students who would be affected by the system.

My critique is based on four dangers of cloud computing (especially when used by public bodies) and the lack of discussion around them. The first issue is one of surveillance. The Swedish FRA legislation, which allows the state to monitor all communication, was passed with the explicit (though rather useless) understanding that only cross-border communication would be monitored. The exception is rather useless as most Internet communication crosses borders even if both sender and receiver are within the same small state. But this cross-border communication becomes even more certain when the email servers are based abroad – as those of Gmail are.

The second problem is that some of the communication between student and lecturer is sensitive data. Also, the lecturer in Sweden is a government official. This is a fact most of us often forget but should not. Now we have sensitive data being transferred to a third party. This is legal since the users (i.e. the students) have all clicked that they agree to the licensing agreement that Gmail sets. The problem is that the students have no choice (or very little, and an uninformed one – see below) but to sign away their rights.
The third problem is that nothing is really deleted. This is because – as the important quote states – “If you are not paying for it you are not the customer but the product being sold” – the business model is to collect, analyze and market the data generated by the users.

But for me the most annoying of the problems is the lack of interest public authorities have in protecting citizens from potential privacy abuses arising from the cloud. My university, a public authority, happily delivered 40,000 new customers (and an untold future number due to technology lock-in) to Google and, adding insult to injury, thanked Google for the privilege.

Public authorities should be more concerned about their actions in the cloud. People who choose to give away their data need information about what they are doing. Maybe they even need to be limited. But when public bodies force users to give away data to third parties – then something is wrong. Or, as I pointed out, smart people do dumb things.

The lecture continued by pointing out that European privacy law has a mental age of pre-1995 (the year of the Data Protection Directive). But do you remember the advice we gave and took about privacy and the Internet in the early days? It included things like:

  • Never reveal your identity
  • Never reveal your address
  • Never reveal your age
  • Never reveal your gender

Post-Facebook, points such as these become almost silly. Our technology has developed rapidly but our society and law are still based on older analogue norms – the focus in law remains on protecting people from an outer gaze looking in. This becomes less important when the spreading of information comes from us as individuals and from our friends.

The problem in this latter situation is that it is extremely difficult to create laws to protect against the salami method (i.e. where personal data is given away slice by slice instead of all at once).
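A toy sketch (entirely invented data, my own illustration) shows why the slices matter: no single attribute identifies anyone, but their intersection quickly narrows down to one person.

```python
# Toy illustration (entirely invented data) of the salami method: no single
# slice identifies anyone, but the combination of slices does.
people = [
    {"name": "A", "city": "Gothenburg", "birth_year": 1978, "employer": "University"},
    {"name": "B", "city": "Gothenburg", "birth_year": 1978, "employer": "Hospital"},
    {"name": "C", "city": "Gothenburg", "birth_year": 1985, "employer": "University"},
    {"name": "D", "city": "Stockholm",  "birth_year": 1978, "employer": "University"},
]

revealed = {}  # attributes the person gives away one slice at a time
for attribute, value in [("city", "Gothenburg"), ("birth_year", 1978), ("employer", "University")]:
    revealed[attribute] = value
    matches = [p["name"] for p in people if all(p[k] == v for k, v in revealed.items())]
    print(f"after revealing {attribute}: {len(matches)} candidate(s) -> {matches}")

# Each slice is trivial on its own; together they single out person A.
```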

At this stage I presented research carried out by Jan Nolin and myself on social media policies in local municipalities. We studied 26 policies ranging from less than one page to 20 pages in length. The policies made some interesting points but their strong analogue bias was clear throughout and there were serious omissions. They lacked clear definitions of social media, and they conflated social media use during work time with use during free time. More importantly, the policies did not address issues with the cloud or topics such as copyright. (Our work is published in To Inform or to Interact, that is the question: The role of Freedom of Information & Disciplining social media: An analysis of social media policies in 26 Swedish municipalities)

Social media poses an interesting problem for regulators in that it is not a neutral infrastructure and it does not fall under the control of the state. The lecture closed with a discussion on the dangers of social media – in particular the increase in personalization, which leads to Pariser’s filter bubble. In this scenario the organizations are tailoring information to suit our needs, or rather our wants. We are increasingly getting what we want rather than what we need. If we take a food analogy: we want food with high fat and high sugar content – but this is not what our bodies need. The same applies to information. I may want entertainment but I probably need less of it than I want. Overdosing on fatty information will probably harm me and make me less of a balanced social animal.
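A crude sketch of the mechanism (an invented catalogue and scoring rule of my own, not how any real platform ranks content) shows how every click narrows the feed a little more:

```python
# Hypothetical sketch: engagement-driven ranking feeds back on itself, so the
# feed drifts towards what the user wants rather than what they might need.
from collections import Counter

catalogue = [
    ("celebrity gossip", "entertainment"),
    ("cat videos", "entertainment"),
    ("budget negotiations", "politics"),
    ("climate report", "science"),
]

def rank(click_history: Counter):
    """Order items by how often their topic has been clicked before."""
    return sorted(catalogue, key=lambda item: click_history[item[1]], reverse=True)

history = Counter()
for round_number in range(3):
    feed = rank(history)
    print(f"round {round_number}: {[title for title, _ in feed]}")
    history[feed[0][1]] += 1  # the user clicks the top item, reinforcing its topic

# After a few rounds the entertainment items dominate the top of the feed.
```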

Is there an answer? Probably not. The only way to control this issue is to limit individuals’ autonomy. In much the same way as we have been forced to wear seat belts for our own safety, we may need to do the same with information. But this would probably be a political disaster for any politician attempting to suggest it.

Regulation by Norms: The no clapping rule

Since Lessig’s book Code came out in 1999, the discussion of Internet regulation has been increasingly popular. It’s not that Lessig started the field, but through the popularity of his work he made it a topic worthy of discussion – and it shows no sign of stopping. Briefly stated, Lessig’s point was that there are four things that regulate/control behavior: law, markets, norms and architecture. Since the point of Code was to argue that code is law, Lessig focused on architecture. If we simplify the world we could argue that tech lawyers tend to focus on architecture, environmental lawyers look to markets and black letter lawyers focus on the law as a regulatory instrument.

Many of the reasons for focusing on one regulatory instrument rather than another are beyond the control of the individual author. For example Christina Olsen-Lund, a colleague of mine doing environmental law, will be defending her doctoral thesis on emission trading – a riveting 700+ page analysis of market-based regulation.

But it is a shame that not many lawyers study norms. They are so interesting. However, the use of norms as regulatory instruments is both vague and incredibly complex. Take for example the no clapping rule.

In a fascinating lecture, Hold Your Applause: Inventing and Reinventing the Classical Concert, held in March, Alex Ross dissected parts of this rule and explained social regulation in concert halls. Ross expresses concern that the rule of not clapping during concerts is partly responsible for making classical music less accessible to beginners.

The origins of the no-clapping rule stem from the idea that the music should be received on an intellectual as well as an emotional level. For example, at the premiere of Parsifal in 1882:

Wagner requested that there be no curtain calls after Act II, so as not to “impinge on the impression,” as Cosima Wagner wrote in her diary. But the audience misunderstood these remarks to mean that they shouldn’t applaud at all, and total silence greeted the final curtain.

Wagner had no idea if the audience liked his work and attempted to instruct them that applause was appreciated. But…

…Cosima writes: “After the first act there is a reverent silence, which has a pleasant effect. But when, after the second, the applauders are again hissed, it becomes embarrassing.” Two weeks later, he slipped into his box to watch the Flower Maidens scene. When it was over, he called out, “Bravo!”—and was hissed. Alarmingly, Wagnerians were taking Wagner more seriously than he took himself.

Wagner is not the originator of the no-clapping rule but he was instrumental in providing the audience with a social standard which they gladly accepted and rigorously enforced. So much so that today attempts to applaud in the wrong place are still frowned upon:

Even worse, in my opinion, is the hushing of attempted applause. People who applaud in the “wrong place”— usually the right place, in terms of the composer’s intentions—are presumably not in the habit of attending concerts regularly. They may well be attending for the first time. Having been hissed at, they may never attend again. And let’s remember that shushing is itself noise.

The rule is not enforced by the divisions within the audience alone but also by the musicians:

At a performance of the Pathétique by the Sydney Symphony, in 2003, the conductor Alexander Lazarev became so irritated by his audience that he mockingly applauded back…Even if Lazarev’s tactic had succeeded, is “embarrassed silence” the right state of mind in which to listen to the final movement of the piece?

Here the regulation is created by etiquette, by an imagined idea of what is, and what is not, done. Too many of us are fearful of being seen as outsiders or frauds, undeserving of the perceived social standing that attending these events entails. But my sympathies lie with Arthur Rubinstein: “It’s barbaric to tell people it is uncivilized to applaud something you like.” – a wonderful sentiment and a brilliant quotation.

The idea that there is a right way in which to listen to music is strange, and the notion that the audience has a duty to pay up and shut up is decidedly odd:

During the applause debates of the 1920s, Ossip Gabrilowitsch spoke approvingly of “those countries in the south of Europe where they shout when they are pleased; and when they are not, they hiss and throw potatoes.” He then said something that deserves to be underlined: “It is a mistake to think you have done your part when you buy your tickets.”

Another reason for my appreciation of Ross’ lecture is that my own attitude towards applause has shifted gradually over time. My concern about “fitting in” is no longer strong, at least not strong enough to curtail my enthusiasm. I applaud happily when an actor, lecturer or speaker makes a point I appreciate & occasionally when music takes me. But I dislike the ritual of applauding over several curtain calls simply because it is expected. Refusing to applaud is more honest – like refusing to leave an extravagant tip at a bad restaurant. 

In order to better understand regulation through norms we require more studies and better cases. The largest part of social regulation has little or nothing to do with the law and everything to do with social norms – it is surprising then that so little study is carried out on the topic.

Three-strikes law is misguided

The three-strikes approach to internet regulation is a misguided way to address the problem. Read David Canton’s arguments on the topic:

The three-strikes law is misguided, even if you believe such activity should be controlled.

Whether someone has violated copyright is often not a black-or-white issue. Copyright law is complex, and knowing in any given instance whether an infringement happened isn’t easy.

To implement these policies on a mass basis, in a similar manner to handing out parking tickets, ignores this complexity. And the penalty is more than paying a few dollars in parking fines.