Sharing, oversharing and selfies: Notes from a lecture

What are we doing online? How did we become the sharing group that we are today? And what are the implications of this change? These were the questions that we addressed today in class.

To begin with, we discussed what online safety looked like in the early 2000s. The basic idea was that you should never put your real name, address, image, age or gender online. Bad things happened if you shared this openly online, and the media joyously reported on the horrors of online life.

When Facebook came along, everything changed. Real names and huge amounts of real information became the norm. Then we got cameras on phones (not an inevitable progression), and when smartphones were added to the mix, sharing exploded.

Sherry Turkle was one of the most prominent researchers involved in the early days of Internet life. In 1995 her book Life on the Screen was optimistic about the potential impact of technology and the way we could live our lives online. Following the development of social media, Turkle published a less positive perspective on technology in 2011 called “Alone Together: Why We Expect More from Technology and Less from Each Other”. In this work she is more concerned about the negative impact of internet-connected mobile devices on our lives.

In a discussion of her work, I presented some key quotes from her TED Talk on Alone Together.

The illusion of companionship without the demands of friendship…

Being alone feels like a problem that needs to be solved…

I share therefore I am… Before it was; I have a feeling, I want to make a call. Now it’s; I want to have a feeling, I need to send a text…

If we don’t teach our children to be alone, they will only know how to be lonely

The discussions in class around these quotes were ambivalent. Yes, there was a level of recognition of the ways in which technology was being portrayed, but there was also skepticism about the very negative image of technology.

Then there was the fact, which she mentions in her talk, that she was no longer just a young researcher; she was now the mother of teenagers. She looked at their use of technology and despaired. What did this mean? Was this a growing technophobia that comes with age? Was her fear and generalization a nostalgic memory of a past that never was?

The Douglas Adams quote from The Salmon of Doubt felt appropriate:

Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.

So is that what’s happening here? Is it just that technology has moved on to a point where the researcher feels it is “against the natural order of things”? A fruitful discussion was had.

From this point we moved the discussion over to the process of sharing: the ways in which – no matter what you think – technology has changed our behavior. One example of this is the way in which we feel the need to document what happens around us on a level we could not before.

The key question is whether we are changing, and if so, whether technology is driving this change. Of course, not all our behavior is a direct result of our technology. For example, the claim that we are stuck in our devices and anti-social can be countered with images such as this:

Commuters on trains were rarely sociable and talkative with each other, and therefore they needed a distraction. Newspapers were a practical medium at the time; now they are being replaced by other media.

However, the key feature of social media may not be what we consume but the fact that we are participating and creating the content (hence the term user-generated content).

What we share and how we share has become a huge area of study and parody. The video below is a great example of this. Part of what is interesting is the fact that most who watch it feel a sting of recognition. We are all guilty of sharing in this way.

This sharing has raised concerns about our new lifestyles and where we are headed. One example of this techno-concern (or techno-pessimism) can be seen in the spoken-word poem Look Up by Gary Turk.

Of course this is only one point of view, and it wouldn’t be social media if it weren’t met with another. There are several responses to Look Up; my favorite is “Look Down (Look Up Parody)” by JianHao Tan.

From this point I moved to a discussion of a more specific form of sharing: the selfie. The first thing to remember is that the selfie is not a new phenomenon. We have been creating selfies since we first learned to paint. Check out the awesome self-portrait by Gustave Courbet.

But of course, without our camera phones we would not be able to follow the impulse to photograph ourselves. Without our internet connections we would not have the ability to impulsively share. These things are aided by technology.

The Telegraph has an excellent short video introduction to the selfie which includes some of the most famous/infamous examples.

In preparation for this class I had asked the students to email me a selfie (this was voluntary), and at this stage I showed them their own pictures (and my own selfie, of course). The purpose was to situate the discussion of the selfie in their own images and not in an abstract ideology.

We discussed the idea of a selfie aesthetic: the way in which we take pictures is learned, and then we learn what is and is not acceptable to share. All this is a process of socialization into the communication of selfies.

Questions we discussed were:
– Why did you take that image?
– Why did you take it that way?
– Why did you share it?
– What was being communicated?

Then we moved to the limits of selfie sharing: what was permissible and not permissible. Naturally, this is all created and controlled in different social circles. We discussed the belfie as one possible outer limit of permissible communication.

But the belfie could be seen as tame compared to the funeral selfie, a subgenre which has its own Tumblr.

However, the selfie that sparked the most discussion was the Auschwitz selfie, which created a Twitter storm when it was first posted and continues to raise questions about what can and should be communicated, and the manner in which it should be communicated.

The whole “selfie as communication” phenomenon creates new ways of communication and innovation. One such example is the picture of a group of Brazilian politicians purported to be taking a selfie. This is cool because the politicians want to be current and modern and therefore try to do what everyone else is doing. They are following the selfie aesthetic, which has itself become a form of accepted communication online.

Here are the slides I used (I have taken out the student selfies)

Free & Open Source Software: Notes from a lecture

For a large period of computing history, software was not seen as the primary component. It was all about the hardware, the machine. The code that made the machine work and usable was simply seen as part and parcel of the machine.

One reason for this may be the way in which we tended to understand software. Another reason may have been that hardware of that size and complexity was not sold, it was leased. The “buyer” therefore was paying for a solution rather than a system. This was a very lucrative way of doing business.

The early punch card system that became the solution for the US Census was the Hollerith tabulating machine; these were leased to the Census Bureau. Hollerith’s company would later merge with others to become IBM, whose punch card tabulators were leased to governments and organizations around the world. One advantage of the leasing system was that the company could control which cards were used in the system and also charge for maintenance and training.

With digitalisation, many companies made source code available and engineers could make changes to the software. Improvements could be included in the code and sold on to the next company.

In 1969, IBM began to charge separately for (mainframe) software and services, and ceased to supply source code. By withholding the source code, only the company could make changes (and presumably charge their buyers for these changes).

The ability to “own” software, or at least control it through copyright, was beginning to become a discussion among programmers. For example, in 1976 Dr Li-Chen Wang released Tiny BASIC under a copyleft license which included the catchphrase “All Wrongs Reserved”.

It is fair to say that the history of free software (and copyleft) truly begins with Richard Stallman‘s attempts to create a “technical means to a social end.” The story behind the creation of free software starts with his attempts to make a printer work and the refusal of the company behind the printer to give access to the necessary code. He launched the GNU Project in 1983.

Free software is all about ensuring that we have access to, and control over, the basic infrastructures of our lives. It is not about having software at no cost – it’s about ensuring that our technology works in ways that suit our lives. To enact this, the software produced by teams and individuals around the world is licensed under the GPL (General Public License). Summing up the license is a bit tricky, but it is common to refer to the Four Freedoms. To be considered Free Software, a program must grant:

  • The freedom to run the program as you wish, for any purpose (freedom 0).
  • The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
  • The freedom to redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to distribute copies of your modified versions to others (freedom 3).

A precondition for these freedoms is that the code must be accessible to those who would want to read it. The importance of Free Software is much like the arguments for free speech or freedom of information. It is not that everyone wants, or has the competency, to use these rights but without them all of us are a little less informed about what is happening around us.

Once again it is important to stress that Free Software is not about price. Nor is it about doing whatever you like with the code. From the GNU Manifesto (1985):

GNU is not in the public domain. Everyone will be permitted to modify and redistribute GNU, but no distributor will be allowed to restrict its further redistribution. That is to say, proprietary modifications will not be allowed. I want to make sure that all versions of GNU remain free.

It is a gift with a very clear condition.
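In practice, that condition travels with the code itself: free programs conventionally carry a license notice at the top of each source file. A minimal, illustrative Python file header (the file name is invented; the notice text follows the FSF's standard wording) might look like this:

```python
# frobnicate.py -- illustrative file name, not a real GNU program.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
```

Anyone who redistributes the file, modified or not, must pass these terms on intact – that is the condition attached to the gift.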

Free Software is sometimes confused with Open Source software. The two are similar, but they come with different conditions:

The term “open source” software is used by some people to mean more or less the same category as free software. It is not exactly the same class of software: they accept some licences that we consider too restrictive…

A common difference, easily seen in many open source licenses, is the lack of a clear condition that nothing may be turned into proprietary software.

Here are the slides I used.

The Dangers of the Success Myth

This is taken from an excellent article about the social network Diaspora and its tragic end, What Happened to the Facebook Killer? It’s Complicated. Aside from telling this story, the article also has an excellent critique of the myth of success in Silicon Valley, where survivor bias and the need to create “strong man” myths dominate to an incredible degree.

These creation myths not only prevent us from seeing blatantly obvious truths but actually work to prevent us from understanding what success is and how it is achieved.

In Silicon Valley, where college dropouts go on to become billionaires and take over the world, a deadly myth propagates. “As long as you’re over a certain threshold of intelligence, what matters most is determination,” evangelizes Paul Graham, founder of the legendary startup incubator Y-Combinator, which would later back Diaspora in a last gasp effort to keep the project alive. It’s a beautiful thought and fundamental to the American Dream. It’s a delusion that drives starry-eyed youngsters to quit school and head West, living off ramen and moving into hostel communities, “not so different from crowded apartments that cater to immigrants.” In Silicon Valley, they believe that if you do whatever it takes, eventually, you’ll get there too. There, everyone is on the cusp of greatness. And if you haven’t yet made it to the land of milk and honey, it’s only because you aren’t working hard enough. Or worse, you’ve given up.

Success, however, is never quite so straightforward: a layered concoction, equal parts good idea, perseverance and a whole lot of serendipity. It’s for this reason that many of the industry’s biggest rock stars remain one-hit wonders. Marc Andreessen has struggled to match the triumph of Netscape Navigator. Twitter co-founders Ev Williams and Biz Stone left their company a year ago to work on something called Obvious, but so far have only a single blog post to show for it. Then there’s Sean Parker of Napster fame. After wiggling his way into Facebook, his latest celebrity-endorsed venture, the Chatroulette clone AirTime, has yet to take off, if it ever does. Even with their credibility, confidence and cash, repeating past success eludes Silicon Valley’s finest.

Yet the myth propagates because survivor bias rules. Failure just isn’t part of the vocabulary; startup honchos prefer terms like “pivot” over more straight-forward words for a coming-to-terms. It’s not something winners acknowledge, nor is it something the media often reports. For every Mark Zuckerberg, there’s thousands of also-rans, who had parties no one ever attended, obsolete before we ever knew they existed.

Then there’s the issue of money. In the early stages of a tech startup, there are few measurable achievements and progress is abstract. At the height of Silicon Valley’s second great tech bubble, new players defined themselves not by what they’d done, but how much money they raised. While raising capital is fundamental, too much too soon can be a death sentence. All that cash hangs like an albatross around your neck, explains Ben Kaufman, who just raised $68 million for his company, Quirky.

“In the eye of the public, and specifically the tech community, funding is thought to mean much more than it actually does,” Kaufman writes. “The world views funding as a badge of honor. I view it as a scarlet letter.” This is the age of Kickstarter, where you can earn press and raise millions on the back of just an idea, undermining the tech scene’s supposed love affair with execution. It reinforces a false sense of success, Kaufman says, remembering the first time he raised his first $1 million at the age of nineteen. “My grandfather called me to congratulate me on building a successful company,” Kaufman recalls. “We still hadn’t done shit. We just got some dude to write a check.” In other words, when the money is flowing, it’s easy to feel like you’ve made it, before you’ve actually made it.

Disobedience Technology: Notes on a lecture

This lecture had the goal of introducing the theories and methodologies behind civil disobedience, in order to give the class the tools to distinguish legitimate acts of civil disobedience from mere lawlessness.

We began with the example of Socrates whose principled stand was that the law must be obeyed. In Plato’s text Crito we find Socrates in jail awaiting execution. His friends argue that he should escape.

But Socrates argues that the Laws exist as one entity: to break one would be to break them all. He cannot choose to obey the rules that suit him and disregard those of which he does not approve.

The citizen is bound to the Laws like a child is bound to a parent, and so to go against the Laws would be like striking a parent. Rather than simply break the Laws and escape, Socrates should try to persuade the Laws to let him go. These Laws present the citizen’s duty to them in the form of a kind of social contract. By choosing to live in Athens, a citizen is implicitly endorsing the Laws, and is willing to abide by them. (Wikipedia)

This principled stand cost Socrates his life. However, most proponents of civil disobedience argue that there must be a way of following some rules while disobeying others. This disobedience must find legitimacy in other sources.

Greek mythology dealt with this issue in the story of Antigone, where, after a battle, King Creon decreed that the dead were not to be buried. Antigone defied the law and buried her brother. She knew of the law and defied it knowingly, arguing that she was bound by a superior divine law.

Continuing on this theme we looked at some of the classics of disobedience. Thoreau’s arguments that we are sometimes obliged to defy the government, Gandhi’s belief that we have a duty to disobey the unjust leader (and the example of the salt march), and Martin Luther King’s words that an unjust law is against God’s law.

“For years now I have heard the word ‘Wait!’…We must come to see…that ‘justice too long delayed is justice denied.’…One may well ask, ‘How can you advocate breaking some laws and obeying others?’ The answer is found in the fact that there are two types of laws: just and unjust…One has not only a legal but a moral responsibility to obey just laws. Conversely, one has a moral responsibility to disobey unjust laws.” (King Letter from Birmingham Jail)

These positions all argue that there is a higher moral authority that would make it legitimate to disobey rules. Indeed, King underscores that disobedience in such cases is a moral responsibility.

The argument against disobedience remains in the area of the social contract and the question of who could legitimately argue for the rules to be upheld or broken. In his A Theory of Justice, John Rawls agreed that there are situations where laws should not be followed, and attempted to prevent “simple” lawlessness by stressing that disobedience is:

…a public, nonviolent, conscientious yet political act contrary to the law usually done with the aim of bringing about a change in the law or policies of the government.

H. A. Bedau argued in Civil Disobedience in Focus that in order for disobedience to be legitimate it should be

“committed openly…non-violently…and conscientiously…within the framework of the rule of law…with the intention of frustrating or protesting some law, policy or decision…of the government.”

While Peter Singer stressed

…if the aim of disobedience is to present a case to the public, then only such disobedience as is necessary to present this case is justified…if disobedience for publicity purposes is to be compatible with fair compromise, it must be non-violent.

These positions can be summed up with the idea that certain acts of disobedience are necessary in order to bring a minority position to the attention of the majority. However, in order to maintain its legitimacy, acts of disobedience must be carried out openly, non-violently, purposely, aimed at a specific rule or policy, by people prepared to accept the consequences.

Despite this, there are still critiques aimed at groups that attempt to disrupt via acts of civil disobedience. Often the arguments against disobedience are:

  • CD is not defensible in a democracy as the social contract is established and maintained by the people for the people.
  • CD is illegitimate as it subverts the equality embedded in the democratic process itself.
  • CD can only be acceptable if ALL other (democratic) methods have been exhausted.

These critiques are easily enough met if we look at the American civil rights movement. The activists chose not to trust the democratic process, since the process is an endless one which does not necessarily promote change, but can be used to reinforce established ideas. As King writes: ‘justice too long delayed is justice denied.’ The outlook for social change brought about from within the system was bleak. By challenging the rules, it became more and more clear to the majority that the rules were harmful and needed to be changed.

We then spoke of moving disobedience online, discussing the ways in which technology can be used to support activism. At the same time, our technology use has also created a system in which our activism is trivialized and subverted. Social media is efficiently used to promote and spread information about injustice. However, social media is also used to trivialize political acts. We click on LIKE icons, re-tweet links, and share videos – but what does it all mean?

Is this Postman’s dystopia (Amusing Ourselves to Death) in action?

The slides

Regulating Online Public/Private Spaces: Notes from a lecture

The presentation yesterday dealt with moving regulation from the physical world to the digital environment. My goal was to show the ways in which regulation occurs, and in particular to go beyond the simplistic “wild west” ideology of the online world – at the same time I wanted to demonstrate that online behavior is controlled by more elements than technological boundaries.

In order to do this, I wanted to begin by demonstrating that we have used tools for a long period of time and that these tools enable and support varying elements of control. And since I was going to take a historic approach, I could not resist taking the scenic route.

In the beginning was the abacus. Developed around 2400 BCE in Mesopotamia, this amazing tool extended the power of the brain to calculate large numbers (which is basically what your smartphone does, but much, much more). The fascinating thing about the abacus is that, despite the wide range of digital devices, it remains in use today (though in deep decline).

On the decline of the Chinese abacus, the suanpan:

Suanpan arithmetic was still being taught in school in Hong Kong as recently as the late 1960s, and in Republic of China into the 1990s. However, when hand held calculators became readily available, school children’s willingness to learn the use of the suanpan decreased dramatically. In the early days of hand held calculators, news of suanpan operators beating electronic calculators in arithmetic competitions in both speed and accuracy often appeared in the media. Early electronic calculators could only handle 8 to 10 significant digits, whereas suanpans can be built to virtually limitless precision. But when the functionality of calculators improved beyond simple arithmetic operations, most people realized that the suanpan could never compute higher functions – such as those in trigonometry – faster than a calculator. Nowadays, as calculators have become more affordable, suanpans are not commonly used in Hong Kong or Taiwan, but many parents still send their children to private tutors or school- and government- sponsored after school activities to learn bead arithmetic as a learning aid and a stepping stone to faster and more accurate mental arithmetic, or as a matter of cultural preservation. Speed competitions are still held. Suanpans are still being used elsewhere in China and in Japan, as well as in some few places in Canada and the United States.

Continuing the story of ancient technology, I pointed to the Antikythera mechanism, an analogue computer from around 100 BCE designed to predict astronomical positions and eclipses. The knowledge behind this machinery would be lost for centuries.

In the 17th century Wilhelm Schickard and Blaise Pascal developed mechanical addition and subtraction machines, but the more durable development was the slide rule.

The Reverend William Oughtred and others developed the slide rule in the 17th century based on the emerging work on logarithms by John Napier. Before the advent of the pocket calculator, it was the most commonly used calculation tool in science and engineering. The use of slide rules continued to grow through the 1950s and 1960s even as digital computing devices were being gradually introduced; but around 1974 the electronic scientific calculator made it largely obsolete and most suppliers left the business.
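The principle behind the slide rule can be sketched in a few lines of Python (a modern illustration, not part of the lecture): because log(a) + log(b) = log(a·b), sliding two logarithmic scales against each other adds lengths and therefore multiplies numbers.

```python
import math

def slide_rule_multiply(a, b):
    """Multiply two positive numbers the way a slide rule does:
    add their logarithms, then read the result back off the log scale."""
    length = math.log10(a) + math.log10(b)  # sliding the scales adds lengths
    return 10 ** length

print(round(slide_rule_multiply(3, 7), 6))  # 21.0
```

A physical slide rule does exactly this mechanically, which is also why it only ever gives about three significant digits: the precision is limited by how finely the scale can be read.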

Despite its almost three centuries of dominance, few of us today even remember the slide rule, let alone know how to use one.

While the analogue calculating devices were both useful and durable, most of the machines were less so. This is because they were built with a fixed purpose in mind. The early addition and subtraction machines were simply that: addition and subtraction machines. They could not be used for other tasks without being completely rebuilt.

The first examples of programmable machinery came with the Jacquard loom, first demonstrated in 1801. Using a system of punch cards, the loom could be programmed to weave patterns. If the pattern needed to be changed, the program was altered. The punch cards were external memory which was fed into the machine. The machine did not need to be rebuilt for changes to occur.
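As a toy illustration (not period-accurate), the loom's card chain can be modeled as data fed to a fixed machine: swap the cards and the woven pattern changes, with no rebuilding of the machine itself.

```python
# Each card is a row of holes: 1 lifts a warp thread, 0 leaves it down.
# The "machine" (the weave function) never changes; only the cards do.

def weave(cards, rows):
    """Cycle the card chain through the loom for the given number of rows."""
    return [cards[r % len(cards)] for r in range(rows)]

cards = [
    [1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1],
]

for row in weave(cards, 4):
    print("".join("X" if hole else "." for hole in row))
# X.X.X.
# .X.X.X
# X.X.X.
# .X.X.X
```

Replacing the two cards with a different chain produces a different cloth from the same loom – which is exactly the separation of program from machine that the Jacquard loom introduced.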

The looms inspired both Charles Babbage and Herman Hollerith to use punch cards as a method for inputting data into their calculating machines. Babbage is naturally the next famous point in our history. His conceptual Difference Engine and Analytical Engine have made him famous as the father of the programmable computer.

But as his devices remained for the most part theoretical constructs, I believe that the more important person of this era is Ada Lovelace, who not only saw the potential in these machines but, arguably, saw an even greater potential than Babbage himself envisioned. She was the first computer programmer and a gifted mathematician.

Few scientists understood Babbage’s breakthrough, but Ada wrote explanations of the Analytical Engine’s function, its advantage over the Difference Engine, and included a method for using the machine in calculating a sequence of Bernoulli numbers.
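Lovelace's Note G laid out how the Analytical Engine could compute Bernoulli numbers. A modern sketch of the same computation – using the standard recurrence rather than her exact sequence of engine operations – might look like this:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0..B_n (convention B_1 = -1/2),
    via the recurrence sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-Fraction(1, m + 1) *
                 sum(comb(m + 1, j) * B[j] for j in range(m)))
    return B

print([str(b) for b in bernoulli(6)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42']
```

The Analytical Engine, of course, had no recursion or exact fractions; Lovelace had to unroll the computation into a table of individual engine operations, which is precisely what made Note G the first published computer program.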

The next step in this story is Hollerith’s tabulating machine. While the level of computing was not a major step, the interesting part is the way it came to be and the solutions that were created. The American census of 1880 took eight years to conduct, and it was predicted that the 1890 census would take 13 years. This was unacceptable, and the Census Bureau looked for technical solutions. Hollerith built machines under contract for the Census Office, which used them to tabulate the 1890 census in only one year.
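What the tabulator did can be sketched as a simple counting job (a toy model: real Hollerith cards encoded attributes as punched hole positions, and the field names here are invented). Each card encodes one person, and the machine advances a dial for every hole it senses.

```python
from collections import Counter

# Toy census cards: one tuple per person.
cards = [
    ("NY", "farmer"), ("NY", "clerk"), ("OH", "farmer"),
    ("NY", "farmer"), ("OH", "clerk"),
]

# "Tabulating" a run of cards is just counting holes per category.
by_state = Counter(state for state, _ in cards)
by_occupation = Counter(job for _, job in cards)

print(by_state)       # Counter({'NY': 3, 'OH': 2})
print(by_occupation)  # Counter({'farmer': 3, 'clerk': 2})
```

The machine is trivial; the value lay in running millions of cards through it quickly and reliably – which is why Hollerith could sell the service rather than the hardware.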

Hollerith’s business model was ingenious. He did not sell the machines, he sold his services. The governments and corporations around the world that came to rely on his company had no control but had to pay the price for his technical expertise. Hollerith’s company eventually became the core of IBM.

The point is that Hollerith positioned his company in the key role between the user and the data.

The progress in machinery, and in thinking about machinery, moved forward at a steady pace, then made rapid progress during the Second World War with names like Bletchley Park, the Colossus (the world’s first programmable digital electronic computer) and Alan Turing.

While most people could hardly comprehend the power of a computer, Vannevar Bush wrote his famous article on the memex, “As We May Think”, in 1945. Here were visions of total information digitization and retrieval – ideas that have become possible only after half a century of modern computing history.

And with this we leap into the modern era, first with the Internet, then personal computers, and the advent of the world wide web.

The fascinating thing here is that the business model became more clearly what Hollerith envisioned: it was about becoming the interface between the user and the data. This is where the power lay.

When IBM was at its height, Bill Gates persuaded them to begin using his operating system. He also persuaded them to allow him not to be exclusive to them. The world realized that it wasn’t the hardware that was important – it was what we could do with it that counted. Other manufacturers came in, and IBM lost its hold on the computer industry.

When Tim Berners-Lee developed the web and the first web browser and released them both freely online, he created a system which everyone could use without needing licenses or payment. The web began to grow at an incredible rate.

Microsoft was late to the game. They still believed in the operating system, but the interface between the user and the data was shifting. No matter which operating system or hardware you used, it was all about accessing data online.

With Windows 95, Microsoft took up the fight for the online world against its then-biggest competitor, Netscape. Microsoft embedded its browser, Internet Explorer, in the operating system and made it increasingly difficult for users to remove it. This was the beginning of the browser wars: a fight for control of the interface between the user and the data.

The wars eventually lost their relevance with the development of a new type of company offering a new version of a search engine. When Google came on the scene it had to compete with other search engines but after a relatively quick battle it became the go-to place where Internet users began their online experiences. It had become the interface between the users and the data. It didn’t matter which hardware, software, or browser you used… everyone began with Google.

At this point I introduced the four modalities of regulation used by Lawrence Lessig, presented in his 1999 book Code and Other Laws of Cyberspace.

Contrary to what many believe, regulation takes many forms. We regulate with social norms, with market solutions and with architecture (as well as with laws). Naturally, none of these modalities occurs in isolation, but we often tend to forget that much of our regulation is embedded in social, economic and physical contexts. If any of these contexts change, then the law must adapt to encompass the change.

Using the offline problem of slowing down traffic, I pointed to the law, which hangs up speed signs, and to market regulation in the price of a speeding ticket and the time it takes to negotiate its payment. Social attempts to slow traffic occur when people in the neighborhood hang signs warning drivers of children in the area; they are appealing to the drivers’ better nature.

And then there is the architecture of the road. If we want to slow down cars, it is much more efficient to change the road than to hang up a sign. Make it curvy, make it bumpy, change its colors – there is an array of things that can be done to limit or slow access. The problem with using technology (or architecture) is that it is absolute. If we put speed bumps in the road, then not even someone driving with good cause can speed. Even someone attempting to drive a heart attack victim to the hospital must slow down.

The more we move from the analogue into the digital world, the less control is afforded through the law and the greater our ability to change the realities in which we live. Architecture, or technology, is more pliable as a form of regulation.

In closing I asked the class to list regulatory examples which occur when attempting to access information online via their smartphones. The complex interface between them and the data included new levels like the apps they use, the apps that their phones allow, their payment plans, social control online, social control offline and a whole host of other regulatory elements.

And here are the slides I used:

Public/Private Spaces: Notes on a lecture

The class today was entitled Public/Private Spaces: Pulling things together, and had the idea of summing up the physical city part of the Civic Media course.

But before we could even go forward I needed to add an update to the earlier lectures on racial segregation. The article The Average White American’s Social Network is 1% Black is fascinating and not a little sad because of its implications.

In the meantime, whites may be genuinely naive about what it’s like to be black in America because many of them don’t know any black people.  According to the survey, the average white American’s social network is only 1% black. Three-quarters of white Americans haven’t had a meaningful conversation with a single non-white person in the last six months.

The actual beginning of class was a response to the students' assignment to present three arguments for, and three arguments against, the Internet as a human right. In order to locate the discussion in the context of human rights I spoke of Athenian democracy and the death of Socrates, and the progression from natural rights to convention-based rights. The purpose was both to show some progression in rights development, but also to show that rights are not linear and indeed contain exceptions to what their words imply. The American Declaration of Independence (1776) talks of all men

We hold these truths to be self-evident; that all men are created equal, that they are endowed by their creator with certain unalienable rights, that among these are life, liberty and the pursuit of happiness.

but we know that this was not true. Athenian democracy included “all” people with the exception of slaves, foreigners and women. So we must see rights for what they are without mythologizing their power.

In addition, rights cannot be seen in isolation. For example, the Declaration of the Rights of Man and of the Citizen (1789), which came as a result of the French Revolution, includes many ideas that appear in similar rights documents:

  • Men are born and remain free and equal in rights.
  • Liberty consists in the freedom to do everything which injures no one else.
  • Law is the expression of the general will
  • No punishment without law
  • Presumption of innocence
  • Free opinions, speech & communication

The similarities are unsurprising as they emerge from international discussions on the value of individuals and a new level of thought appearing about where political power should lie.

The discussion then moved to the concept of free speech and modern-day attempts to limit speech by using the concept of civility. An interesting example of this is explained in the article Free speech, ‘civility,’ and how universities are getting them mixed up:

When someone in power praises the principle of free speech, it’s wise to be on the lookout for weasel words. The phrase “I favor constructive criticism,” is weaseling. So is, “You can express your views as long as they’re respectful.” In those examples, “constructive” and “respectful” are modifiers concealing that the speaker really doesn’t favor free speech at all.

Free speech is there to protect speech we do not like to hear. We do not need protection from the nice things in life. Offending people may be a by-product of free speech, but it is a by-product that we must accept if we are to support free speech. Stephen Fry states it wonderfully:

At this point we returned to the discussion of private/public spaces in the city and how these may be used. Up until this point we have covered many of the major points, and now it was time to move on to the vaguer uses. Using Democracy and Public Space: The Physical Sites of Democratic Performance by John Parkinson, we can define public as:

1. Freely accessible places where ‘everything that happens can be observed by anyone’, where strangers are encountered whether one wants to or not, because everyone has free right of entry

2. Places where the spotlight of ‘publicity’ shines, and so might not just be public squares and market places, but political debating chambers where the right of physical access is limited but informational access is not.

3. ‘Common goods’ like clean air and water, public transport, and so on; as well as more particular concerns like crime or the raising of children that vary in their content over time and space, depending on the current state of a particular society’s value judgments.

4. Things which are owned by the state or the people in common and paid for out of collective resources like taxes: government buildings, national parks in most countries, military bases and equipment, and so on.

and we can define private as:

1. Places that are not freely accessible, and have controllers who limit access to or use of that space.

2. Things that primarily concern individuals and not collectives.

3. Things and places that are individually owned, including things that are cognitively ‘our own’, like our thoughts, goals, emotions, spirituality, preferences, and so on.

In the discussion of spaces we needed to get into the concept of The Tragedy of the Commons (Hardin 1968), which states that individuals all act out of self-interest, and that any space that isn't regulated through private property is lost forever. This ideology has grown to mythological proportions, and it was very nice to be able to use Nobel prize-winning economist Elinor Ostrom to critique it:

The lack of a human element in the economists' assumptions is glaring, but still the myth persists that common goods are impossible to sustain and that government regulation will fail – all that remains is private property. In order to have a more interesting discussion on common goods I introduced David Bollier:

A commons arises whenever a given community decides that it wishes to manage a resource in a collective manner, with a special regard for equitable access, use and sustainability. It is a social form that has long lived in the shadows of our market culture, but which is now on the rise

We will be getting back to his work later in the course.

In closing I wanted to continue problematizing the public/private discussion – in particular the concepts of private spaces in public and public spaces in private. In order to illustrate this we looked at these photos:


Just a Kiss by Shutterpal CC BY NC SA

The outdoor kiss is an intensely private moment and it has at different times and places been regulated in different manners. The use of headphones and dark glasses is also a way in which private space can be enhanced in public. These spaces are all around us and form a kind of privacy in public.

The study of these spaces falls under proxemics, a subcategory of the study of nonverbal communication, which Wikipedia describes as follows:

Prominent other subcategories include haptics (touch), kinesics (body movement), vocalics (paralanguage), and chronemics (structure of time). Proxemics can be defined as “the interrelated observations and theories of man’s use of space as a specialized elaboration of culture”. Edward T. Hall, the cultural anthropologist who coined the term in 1963, emphasized the impact of proxemic behavior (the use of space) on interpersonal communication. Hall believed that the value in studying proxemics comes from its applicability in evaluating not only the way people interact with others in daily life, but also “the organization of space in [their] houses and buildings, and ultimately the layout of [their] towns.”

The discussions we have been having thus far have been about cities and the access and use of cities. How control has come about and who has the ability and power to input and change things in the city. Basically the “correct” and “incorrect” use of the technology. Since we are moving on to the public/private abilities inside our technology I wanted to show that we are more and more creating private bubbles in public via technology (our headphones and screens for example) and also bringing the public domain into our own spaces via, for example, Facebook and social networking.

We ended the class with a discussion on whether Facebook is a public or private space? If it is a private space what does it mean in relation to law enforcement and governmental bodies? If it is a public space when is it too far to stalk people? And finally what is the responsibility of the platform provider in relation to the digital space as public or private space?

here are the slides I used:

Digital Divides & Net Neutrality: Notes from a lecture

As this is the last week before the Scottish referendum, which will decide whether Scotland becomes an independent country, I could not help but begin the lecture with a shout out to this monumental coming date. I find it hard to believe that the world is talking about anything else.

But the real point of today’s class was to talk about digital divides and net neutrality. I began by explaining how the Internet became the amazing thing it is today. We tend to take its coolness for granted because it is so cool (I recognize that this is circular reasoning, but that is the way it is often explained).

One often-overlooked reason the Internet became cool is the business model that was used. In the early days we paid for the time we were connected. This pay-as-you-go model works, but it has a dampening effect. Since you are constantly paying, the impetus is to be quick. Being quick means that content must be light and fast to be usable.

This is the same business model the telephone had for most of its history. We paid per minute and by distance. We were taught to be brief, and idle chatting was discouraged. This was also because the infrastructure was originally highly wasteful and could only serve one user per line.

If the telephone had moved to a monthly charge instead, we could have seen a great deal more innovation and use of the system – we could have created the Internet much, much earlier. This counterfactual is not totally strange. The early ideas for the telephone included such oddities as dial-up concerts, such as the one reported in Scientific American (February 28, 1891):

In a lecture recently delivered in the Town Hall at Newton, Mass., Mr. Pickernell described the methods employed in the transmission of music by telephone. His remarks were very forcibly illustrated by the reception in the lecture hall of music transmitted over the long distance lines from the telephone building, at No. 18 Cortlandt Street, New York, and our engraving, made from a photograph taken at the time, shows the arrangement of the performers.

Scientific American, February 28, 1891


But as we all know this was not the way the telephone evolved. The Internet on the other hand did move in that direction. Rather quickly we moved from dial-up modems to fixed connections. Speed was important – but even more important was the fact that the user never had to worry about the time she was online. Downloading large files, streaming, idle browsing and most all of our online lives stems from the point where we stopped worrying about the cost of access to the Internet.

Another point that needs to be stressed is that we often confuse the Internet, the Web, and what we do on our mobile devices. Put very simply, the Internet is the cables and servers – the infrastructure upon which several applications (such as email, Netflix and the Web) run. What we do on our mobile devices is mostly done in apps, which run on the Internet (but not necessarily on the Web).

The Web, which was developed by Tim Berners-Lee, became huge because he chose to give the system away without trying to patent or close it. It is now shrinking because we are becoming more dependent upon our mobile devices. For the longest time we said “the Internet” when we really meant “the Web”, and now we say “the Web” when we really mean “the Internet” (via apps on our devices).

These may seem like pedantic distinctions, but they are important, as each of these technologies has different strengths and weaknesses, different affordances and different control mechanisms.

Once this was established we looked at this map illustrating:

What you’re looking at is a map of nearly every device that was connected to the internet on August 2. Or, at least, a map of ones that responded to a ping request from John Matherly, an internet cartographer. Motherboard

When we say everybody uses the Internet this is the everybody to which we are referring. The large dark areas are those without this, for us, basic technology. Additionally there are small places with more connectivity than the areas we would normally see as technology dense. The map also raises interesting questions about divisions created by culture and language and the problems of measurement when countries such as China are behind a firewall.

We also looked at an array of charts illustrating OECD statistics on broadband penetration per capita, average monthly subscription price, and average download speeds.

The USA has average broadband penetration among OECD countries, but it also has by far the highest total number of internet subscribers in absolute terms.

In order to have some form of consensus for our discussion on the digital divide I put forward this description:

… a gap between those who have ready access to information and communication technology, and the skills to make use of those technologies, and those who do not have the access or skills to use those same technologies, within a geographic area, society or community. It is an economic and social inequality between groups of persons.

The factors persistently pointed to as the root causes of the digital divide are:

  • Cost (technology and connection)
  • Know-how (how to connect, how to use devices, what to do when something goes wrong, overcoming cultural divides etc)
  • Recognizing the benefit

The latter is very interesting: most users never need to explain why they benefit, while non-users manage to make their lives work without access. It is difficult to demonstrate to non-users that they would benefit from using the technology – indeed, that they would benefit so much that it is worth struggling to overcome the barriers of cost and know-how.

Then we moved the discussion over to the Pew research report African Americans and Technology Use, which showed

African Americans trail whites by seven percentage points when it comes to overall internet use (87% of whites and 80% of blacks are internet users), and by twelve percentage points when it comes to home broadband adoption (74% of whites and 62% of blacks have some sort of broadband connection at home). At the same time, blacks and whites are on more equal footing when it comes to other types of access, especially on mobile platforms.

All things being equal there should be no difference in technology use. And yet there is a gap of seven percentage points. Considering most countries' desire to transfer more business and services online, this is a worrying number of outsiders. Remember, both groups should have users who don't see a need for the technology – this gap is not about them.

When it came to smartphone ownership the difference was not significant (53% of whites and 56% of blacks), but I found this interesting taken in conjunction with the earlier numbers. Were some users choosing mobile devices over broadband? What were the consequences of this? Stephanie Chen was interviewed in Salon:

“You can’t do your homework on a smartphone; you can’t help your kids with their homework on a smartphone; you can’t write your résumé on a smartphone. You can’t do any of that on a smartphone… As a test, I went through the process and tried to apply for a job at Walmart on a phone. It was an arduous process.”

Once again it is vital to remember that each device has its affordances, which enable and discourage behavior.

Following this we touched briefly on the concepts of digital natives, digital immigrants and digital tourists. I can only refer back to an earlier rant of mine on the subject:

During the discussions one of the topics that came up was the digital divide which is claimed to exist between young and old (whatever do these epithets mean?) and then it was only natural to bring up the horrible terms digital natives, digital immigrants and digital tourists. All these terms were popularized by Marc Prensky and are completely horrific. And of course very popular. There were voices of reason among the crowd but at the same time the catchy phrase seemed to win over intelligent discussion.

There are several problems with the metaphor, not to mention the built in racism. In most languages, calling someone a native smacks of arrogance, a touch of racism and good old fashioned colonialism.

Who is the native? So who is the native and how does one become one? Obviously the idea here is that the youth of today are all tech-savvy and understand technology while the older generation is good at saying stuff like “I remember when…” and handling analog technology. Seriously, what a load of dog doodoo. The fact that we lack common areas of interest is not a digital divide. Young people tend to have different tastes in music, love, hobbies, work, films and books than older people. Even Beethoven’s father probably complained about his son’s taste in music.

Are they a group? The young are not a homogeneous group, but then again the question could be put forward whether homogeneous groups actually exist at all. Does the Englishman really exist? What is it the natives are supposed to understand? This is the biggest problem with the metaphor. Yes, there are hordes of young folk who can easily send hundreds of text messages per day, but does that identify them as digital? Does this mean that they are fundamentally different from those who can hardly use their mobile telephones?

The problem is that the idea of the digital native seems to be that they are (1) comfortable using all digital technology and (2) understand all digital technology. This is most obviously wrong. The ability to be on Facebook does not prepare you for editing Wikipedia, blogging or Twitter. The ability to use Wikipedia has nothing to do with being popular on Twitter. And none of these abilities has anything to do with the ability to use most of the functions in the simplest word processors.

The understanding of technology, how it works, what it means – in addition to its social, economic and cultural impact is quite often totally lost on these so-called natives. I mean no disrespect (even though saying this usually makes things worse) but being an enthusiastic user has no relation to understanding technology.

Metaphors are supposed to exist to help us understand complex ideas. When they do not fulfill this basic purpose they are useless – or worse, harmful to our understanding. A misguided metaphor is worse than no metaphor at all. And the concept of digital natives does not aid understanding – it only creates barriers.

It was then time to deal with net neutrality and in order to do this in a more entertaining manner I showed a part of Last Week Tonight with John Oliver: Net Neutrality (HBO) 

And finally closed by appealing to them to go and read up on net neutrality, and in particular, check out the website Battle for the Net since there is still time for those who feel it to be important to react and to show politicians that the open net is something ordinary people are passionate about.

Here are the slides I used for this presentation:

Laughing or crying at Le Corbusier: Every action has consequences

Le Corbusier is one of those names: many have heard of it but few know why. (This is based on a totally unscientific poll I took at a party; it reflects the poor quality of knowledge among my chosen friends, and bad science on my part in generalizing it this way.) Anyway, we vaguely associate him with something to do with design and architecture.

Even if we were to ignore his impact on a generation of architects and urban planners, we could still turn to the furniture line created by him and his designers and introduced in the 1920s and 1930s: gorgeous creations of chrome tubes and leather cushions that have been featured in magazines and films for decades, usually signifying luxury or the future, but not always. Here is an example of one of his chairs in The Big Lebowski (Joel Coen, 1998).

Screen capture from The Big Lebowski

For designers, hipsters, and furniture nerds this is all great. But for us copyright geeks, it takes a lot longer before it all begins to get interesting. Le Corbusier died in 1965, but naturally his designs and thoughts are still influential several decades on. We think differently about design because of him. This is all well and good.

The part that makes it difficult to know whether to laugh or cry is the news that Le Corbusier’s heirs (and the holders of his copyright today), after discovering that some of their relative’s work was included in Getty Images enormous photo collection, have sued Getty for making the images available online. The copyright holders won the case: Fondation Le Corbusier v. Getty Images (Paris Court of Appeals, Pole 5, 2nd chamber June 13, 2014). Read more about it over at The 1709 Blog.

Copyright is important and the images involved in the case were not pictures of other things with some furniture in the background. They were clearly identifiable as Le Corbusier, and in the foreground. Additionally the photos did not make any reference to Le Corbusier as having anything to do with the chairs!

So the Le Corbusier family gained some money and can argue that they defended the family honor. But at what expense? Since Getty Images has 80,000 images online, will they have to act in some way to guard against other families eager to profit from the remains of their dead ancestors?

Will cases like this scare other archives away from digitizing images and making them available online? The aftereffects of this sort of thing have the potential to drag us back from the cultural bonanza of online archives. Today we go online and find what we want – should the relatives of dead designers have the power to prevent this?

Does sharing the same DNA as a creator make you well suited to decide the fate of her creations?

This post originally appeared here.

Cities and Suburbs: Notes on a Lecture

The lecture began with a short piece on population. The future of the world lies in urban centers, and the population of the world will arrive at 10 billion people. This has nothing to do with large families: as Hans Rosling explains in this talk, we are not having more children, but the population is aging. He talks of “the big fill up”.

We are moving from the countryside to the cities, and have been doing so for a remarkably short period of time. While urban settlements appeared around 3,000 B.C. in ancient Mesopotamia, Egypt, and the Indus Valley, we were mostly country dwellers until about 2008.

In 1800, only 3% of the world’s population lived in cities, a figure that rose to 47% by the end of the twentieth century. In 1950, there were 83 cities with populations exceeding one million; by 2007, this number had risen to 468. (Wikipedia).

Despite this, there has been a long tradition of viewing the city as a bad place and the countryside as a good place. In poetry we can see this trend as far back as the bucolic poetry of Theocritus (c. 270 BC). Basically, the city is unhealthy for both mind and soul.

Today the city is conceived of as a space divided into an inner zone, which usually matches the boundaries of the old industrial city, and suburbia, which was designed for the automobile, beginning in the 1920s.

One of the creators of suburbia (both as a concept and a reality) in the USA was the property developer William Levitt, whose massive construction and development of whole regions spawned copies all across the country. In Levittown, home construction began in 1952, and 17,311 homes had been built by 1958. At its peak, through an intense division of labor, the workers were building a home every 16 minutes.

A home in 16 minutes. Perspective – how long did it take for you to wake up and get dressed this morning?
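As a rough sanity check of those numbers (a sketch; the working-calendar assumptions are mine, not from the source), we can estimate the average pace over the whole building period:

```python
# Back-of-the-envelope check of the Levittown building pace.
# Figures from the text: 17,311 homes built between 1952 and 1958.
# Assumptions (mine): ~250 working days per year, 8-hour working days.

homes = 17_311
years = 6
working_minutes = years * 250 * 8 * 60   # total working minutes available

minutes_per_home = working_minutes / homes
print(f"average: one home every {minutes_per_home:.0f} minutes")
# -> average: one home every 42 minutes
```

Under these assumptions the average over the whole period is one home roughly every 42 minutes, which puts the peak pace of one home every 16 minutes in perspective.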

Levittown was also a highly regulated space, designed to conform to the ideal of the American family. Among the rules were things like: no laundry hanging outside on Sundays, and no fences between properties. More seriously, Levitt did not sell homes directly to African Americans. In 1957 an African-American family, the Myers, bought a home from its previous owners.

Their move to Levittown was marked with racist harassment and mob violence, which required intervention by state authorities. This led to an injunction and criminal charges against the harassers while Myers and their supporters refused to surrender and received national acclaim for their efforts. Wikipedia

The dream of suburbia is also reflected in the ideology of the time. Personal property was good for the individual and for the society. The words of Sen. Charles Percy (1966) are interesting here:

“For a man who owns his home acquires with it a new dignity… He begins to take pride in what is his own, and pride in conserving and improving it for his children. He becomes a more steadfast and concerned citizen of his community. He becomes more self-confident and self-reliant. The mere act of becoming a homeowner transforms him. It gives him roots, a sense of belonging, a true stake in his community and well being.”

Indeed, Percy is expressing what was to be considered the norm. This norm becomes that which is supported socially, economically and politically. If a society believes that home ownership is the keystone of society, then it will invest in tax incentives for the creation of a wider base of home owners.

The ideal of the suburban home has its most interesting expression in the front lawn. Naturally, our understanding of this artifact is colored by both our time and our place. But it is interesting to see that the large expanse of expensive green desert in front of people's houses (in American suburbia) is never used, highly maintained and costly. It is all about signalling. This resonated with many students, and stories were shared of the ways in which neighbors are judged by the appearance of this empty piece of land.

But the connection between private property and community involvement is under question. Salon reported on the increase in renting homes in the USA and the way in which this does not signal the end of community:

In Philly, a recent survey of renters conducted by the city found unexpected levels of social engagement. Planners were surprised by how many renters knew their neighbors, participated in neighborhood events and helped maintain the physical environment through volunteer work.

Indeed, renting is the norm in many other countries, and it is growing in the birthplace of suburbia. There is also a growing critique of the ways in which suburbs are problematic on many levels. One suburban critic is Charles Marohn, who is interviewed in The Suburbs Will Die: One Man's Fight to Fix the American Dream:

The “suburban experiment,” as he calls it, has been a fiscal failure. On top of the issues of low-density tax collection, sprawling development is more expensive to build. Roads are wider and require more paving. Water and sewage service costs are higher. It costs more to maintain emergency services since more fire stations and police stations are needed per capita to keep response times down. Children need to be bused farther distances to school.

The article was written by Leigh Gallagher whose book The End of the Suburbs came out in 2013.

Among the other critiques (environmental, social, economic) an interesting, and maybe counter-intuitive, study shows that suburbia may even be bad for your health. The Atlantic ran an article called “Do We Look Fat in These Suburbs?”

“Garrick and Marshall report that cities with more compact street networks—specifically, increased intersection density—have lower levels of obesity, diabetes, high blood pressure, and heart disease. The more intersections, the healthier the humans.”

In the last section of the presentation I moved on to the city and its users. Once again the point here is to show that there are “ideal” users, and that those who do not conform are not welcome in the city.

We talked about anti-homeless design, or uncomfortable design, in the last lecture, Control By Design. But what I wanted to get on to was the ways in which the city is being used – the ways in which our public spaces are most probably not public anymore: they are privately owned and therefore no longer need to conform to the rules of the public space. Or rather, they can be made to fit the ideals of the owner.

To show the ways in which spaces are used in alternative ways, I mentioned the case of the Hess Triangle:

In 1910, nearly 300 buildings were condemned and demolished by the city to widen the streets and construct new subway lines. David Hess battled the city to keep the Voorhis, his 5-story apartment building. He resisted eminent domain laws for years, but was ultimately forced to give up his property.

By 1914, the 500-square-inch concrete triangle was all that remained of Hess’ property. As if his loss wasn’t bad enough, the city asked him to donate the tiny portion of concrete to use as part of the public sidewalk. Out of spite, Hess refused the offer. On July 27, 1922, he had the triangle covered with mosaic tiles, displaying the statement, “Property of the Hess Estate Which Has Never Been Dedicated For Public Purposes.” Atlas Obscura

We also looked at the Seattle nail house that seems to have been the inspiration for the movie Up. Edith Macefield refused to sell her house while a mall was being built around it: in 2006 she turned down US$1 million for her home, which stood in the way of a commercial development in the Ballard neighborhood of Seattle.

We closed the lecture by talking about strange little remnants of architecture and city planning: desire paths and Thomassons. The latter is a term launched by Genpei Akasegawa in 1972, and refers to an architectural detail that is completely and utterly useless but is still being maintained. These steps in south Philly are an example:

steps

And here are the slides I used

Why is copyright law so weird?

When we came across an old Remington typewriter in a small curiosity shop in Manchester, Vermont (founded 1761), the 12-year-old looked at it with great curiosity and asked how it worked. He knew it was a writer's tool, but he was unable to figure out how text was produced.

So I explained how to load it with paper, pointed to the ribbon and explained that simply touching the keys would do very little – this was a classic machine where every key needed to be thumped hard to produce an imprint on the paper. The shopkeeper and the other customers (being older) all smiled at the idea that something so simple needed to be explained.

Naturally, everything imaginable has already been done on the Internet, so if you want to get an idea of what this conversation was like, check out the Typewriter episode of the adorable “Kids React to Technology” series:

One of my favorite quotes is that the machine “…types and prints at the same time”. Many of the kids seem to enjoy the tactile nature of typing but they all agree it’s too complicated.

Reminiscing about the typewriter is not only nostalgia. Understanding the technology of the past is vital to understanding the regulations and culture of the present. Take for example something simple like

Ctrl X – Ctrl V

These are, as most people know, the keyboard shortcuts for cut and paste on a computer. But how many know that the reason we say cut and paste is that, in the analogue world, moving a section of text could literally involve a pair of scissors and some glue? You cut the text out and pasted it into the right place.

This is easy enough but it gets even more complex when we talk about law (or culture, but I am limiting this to law). For the longest time, copyright law did not really need to address private copying because the process of copying involved hours of labor and low-quality final output. Physical reality acted as a barrier to the action and therefore legislation was unnecessary. We have no regulation prohibiting people from passing through walls – the very nature of walls makes it unnecessary.

The problem arises when we live through a period of rapid technological change. The law is, and always will be, a slow mover. Most legislators grew up in worlds where typewriters did not need to be explained. Their understanding of the physical realities of copying was formed in an analogue reality.

As Douglas Adams writes in Salmon of Doubt:

“I’ve come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you’re thirty-five is against the natural order of things.”

So what does this mean? Picture a legislator: they are often (unfortunately) older, wealthy men. For our example, picture Lex, a 60-year-old legislator. Lex was born in 1954, he was fifteen in 1969, and hit 35 in 1989.

Technology invented prior to 1969 is perfectly natural: obviously the typewriter, the radio and television were all natural. Email had been invented but most people were more likely to get a telegram than understand what an email was. The hottest new device – in this area – was the fax machine. Mobile telephones had been invented, but it was highly unlikely that anyone would ever hold one.

The development of technology between 1969 and 1989 was astounding – this era began with the first manned mission to land on the Moon: one small step and all that. But still Lex would be slowing down in his appreciation of technology; he would be able to use the VCR and he might even have considered buying the bulky Macintosh Portable introduced in 1989…but the Internet, smartphones, mobile devices and most things we now take for granted in communications were not even in his imagination. Few people in 1989 thought landlines would be disappearing.
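Adams’s three rules reduce to a trivial bit of arithmetic on the observer’s age when a technology appears. A minimal sketch (purely illustrative, not from the original post – the function name and example dates are my own) of Lex’s worldview:

```python
# Classify a technology under Douglas Adams's three rules, based on how
# old the observer was when it was invented. Illustrative sketch only.

def adams_category(birth_year: int, invention_year: int) -> str:
    """Return which of Adams's three categories a technology falls into."""
    age_at_invention = invention_year - birth_year
    if age_at_invention <= 15:
        # Rule 1: already in the world when you grew up
        return "normal and ordinary"
    elif age_at_invention <= 35:
        # Rule 2: invented between fifteen and thirty-five
        return "new and exciting"
    else:
        # Rule 3: invented after you turned thirty-five
        return "against the natural order of things"

# Lex, born in 1954:
print(adams_category(1954, 1960))  # television era -> "normal and ordinary"
print(adams_category(1954, 1984))  # the Macintosh  -> "new and exciting"
print(adams_category(1954, 2007))  # the iPhone     -> "against the natural order of things"
```

Run against Lex’s dates, everything from the smartphone era onward lands squarely in the third category – which is exactly the worldview the copyright discussion below assumes.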

Just because Lex is old doesn’t mean he cannot be innovative. However, the lens through which he interprets the world is formed by a set of technological tools that have, for the most part, been replaced completely or been upgraded beyond recognition.

When Lex talks about copyright, he uses the vocabulary of this era but often his mindset is interpreting the words through the lens of his established technological world. To make matters worse, he is probably interpreting a set of laws that were created in the 1970s by men whose technology visions were set in the thirties. Naturally all these laws have been updated and modernized – but their fundamental nature remains anachronistic.

So the next time you are puzzled by copyright law remember that it wasn’t built for your iPad…it was built by people who never even dreamed of iPads.

This post first appeared on Commons Machinery.