New York Times' prophetic 1983 warning about the NSA

The scary part about the whole NSA Prism story is its predictability, if not inevitability. The shock of the disclosure lies mainly in the hope that governments will not do what they have the power to do.

Via BoingBoing comes this 1983 article from The New York Times written by David Burnham: THE SILENT POWER OF THE N.S.A.

No laws define the limits of the N.S.A.’s power. No Congressional committee subjects the agency’s budget to a systematic, informed and skeptical review. With unknown billions of Federal dollars, the agency purchases the most sophisticated communications and computer equipment in the world. But truly to comprehend the growing reach of this formidable organization, it is necessary to recall once again how the computers that power the N.S.A. are also gradually changing lives of Americans – the way they bank, obtain benefits from the Government and communicate with family and friends. Every day, in almost every area of culture and commerce, systems and procedures are being adopted by private companies and organizations as well as by the nation’s security leaders that make it easier for the N.S.A. to dominate American society should it ever decide such action is necessary.

Wearable camera takes 2 photos per minute

Lifelogging has been a buzzword for some time now, but it’s still a cumbersome task for most of us. That is not going to last much longer.

One device that’s going to make this all too easy is the Memoto, which has the tagline “Remember every moment.”

The product is small and simple: clip it on and it takes two photos per minute until you take it off. In the promotional video Memoto says: “What if we could build a camera small enough to never be in the way, but smart enough to capture life as we live it.”

The mass of 5-megapixel pictures is stored with Memoto’s storage service, along with the time and location where each was taken. Via an app the photos are searchable by GPS position and time.

Once the images are stored in the cloud they are organized into moments, each represented by the image an algorithm selects as the most interesting.
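Memoto hasn’t published how this works, but the idea of splitting a day’s photos into “moments” can be sketched as simple time-gap clustering. The 30-minute gap threshold and the numeric “interest” score below are my own illustrative assumptions, not anything from Memoto:

```python
from datetime import datetime, timedelta

def group_into_moments(photos, max_gap_minutes=30):
    """Split time-sorted photos into 'moments' wherever there is a long
    gap, then pick the highest-scoring photo to represent each moment."""
    photos = sorted(photos, key=lambda p: p["time"])
    moments, current = [], [photos[0]]
    for photo in photos[1:]:
        if photo["time"] - current[-1]["time"] > timedelta(minutes=max_gap_minutes):
            moments.append(current)   # gap too long: close this moment
            current = []
        current.append(photo)
    moments.append(current)
    # represent each moment by its most "interesting" photo
    return [max(m, key=lambda p: p["interest"]) for p in [m for m in moments] for m in [p]] if False else \
           [max(m, key=lambda p: p["interest"]) for m in moments]

day = [
    {"time": datetime(2013, 1, 1, 8, 0),  "interest": 0.2},
    {"time": datetime(2013, 1, 1, 8, 1),  "interest": 0.7},
    {"time": datetime(2013, 1, 1, 12, 30), "interest": 0.4},
]
print([p["interest"] for p in group_into_moments(day)])  # [0.7, 0.4]
```

Two photos a minute apart form one moment (represented by the 0.7 shot); the photo four hours later becomes a moment of its own.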

Sure, this is a cool toy: it’s small, light and colorful. But it also raises several ethical questions, such as:

  • Many of the people around will have no idea they are being photographed by the device
  • People may object in general to having their time and location and image stored
  • What happens if the device carrier walks into sensitive areas such as hospitals, courts, police stations
  • Who controls the images
  • Who accesses the images (legally or illegally)
  • Copyright questions
  • Trade secrets

Despite all these questions the devices are available and will probably be commonplace soon. A single day will produce over 1000 pictures – which explains the need for the algorithm to help us sift through the garbage. But even then I suspect that most of us will realize that we live fundamentally boring lives, probably not worth documenting.
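The back-of-envelope arithmetic behind that figure, assuming the camera is worn for roughly ten waking hours (my assumption, not a Memoto spec), looks like this:

```python
# Memoto takes 2 photos per minute; assume ~10 hours of wear per day
# (an illustrative assumption, not a manufacturer figure).
photos_per_minute = 2
hours_worn = 10
photos_per_day = photos_per_minute * 60 * hours_worn
print(photos_per_day)  # 1200 - comfortably "over 1000" pictures a day
```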

 

Technology: older than we think

Technology is always older than we think. Recently XKCD published a wonderful series of quotes on how we perceive the changes technology brings to the pace of everyday life.

Then today I came across Mark Twain’s excellent use of the camera in King Leopold’s Soliloquy: A Defense of His Congo Rule published in 1905.

The kodak has been a sore calamity to us. The most powerful enemy that has confronted us, indeed… Then all of a sudden came the crash! That is to say, the incorruptible kodak — and all the harmony went to hell! The only witness I have encountered in my long experience that I couldn’t bribe… Then that trivial little kodak, that a child can carry in its pocket, gets up, uttering never a word, and knocks them dumb!

Surveillance and hi-resolution

Huge hi-res images are fascinating and the London Panorama from the BT Tower is no exception. But the resolution got me thinking that this is an excellent visualization of what surveillance can really look like. Surveillance is not only the barely visible images taken by cheap cameras on walls. Check out the zoom on this baby…

Do you see the man with the red shirt and glasses?

Slut Shaming: Notes from a panel

My university has decided that it must act more quickly to join larger social debates on current events, and to this end it arranged an open event on cyber bullying. The topic was well chosen, as in December Göteborg experienced “slut shaming riots” when groups of youths attempted to catch and punish the person they thought was behind a local slut shaming account on Instagram.

The event was in the form of a panel with psychology professor Ann Frisén, police commissioner and chief of the youth section Birgitta Dellenhed, and myself. The university vice chancellor Helena Lindholm Schulz moderated the panel, and three thoughtful and perceptive school teenagers were given the role of questioning the panelists before the audience was given time for questions.

The event was held in the old university main hall and was very well attended.

Professor Frisén opened with a presentation of the concept of cyber bullying and the findings from her research. Her work confirmed that many children and young people experience cyber bullying. I was next, and the presenters’ session was completed by commissioner Dellenhed, who explained how the youth section works and the basics of the recent slut shaming riots.

My role was to talk about the technological side of the problem. As the panel was a response to the slut shaming riots, I focused my talk on technology’s role in slut shaming. I began by restating the idea of technology as neutral, using the well-known “Guns don’t kill people” argument. From this perspective I explained that technology is not misogynistic per se, but it is important not to forget that technology is embedded with the values of its creators and adapted by its users.

I used a timeline of the last decade’s social media innovations to show that, in a remarkably short time, we have evolved a whole new communications infrastructure. This infrastructure has enabled us to do things we previously could not. This enabling has created new behaviors that may previously have been unacceptable.

The ability to do new (and maybe unacceptable) things through technology means that it is our use that brings the rightness or wrongness of a situation into question. Users need social cues and guidance to know the ethics of their actions. Technology at times minimizes these ethical and social cues, making behavior online morally complex.

As the whole event was focused on slut shaming and the riots, there was a call for order and justice underlying everything that was being said. So I tried to restore some balance by pointing out that freedom, and freedom of expression, are important to our lives and societies. Yes, I raised a warning finger against moral panic.

What is freedom of expression? Without the freedom to offend, it ceases to exist.
– Salman Rushdie

The questions from the students were very interesting and deep. They reflected a need for both space and security. The complexity of this paradox (surveillance and control) was not lost on them. The questions from the floor were mostly good, but towards the end there was a gruff man demanding more surveillance, law and order: if we know who did it, why don’t we prosecute and punish? His comments were applauded, which made me think that some of the finer points were lost on the crowd.

The police commissioner explained that they do not ignore prosecutions, but finding the guilty is not easy. She also pointed out that the person behind the account is also (in some ways) a victim. I tried to argue that catching the guilty in the way he proposed would entail surveillance of all the innocent and was not compatible with a free and open society. But he denied that he was talking about surveillance.

Most of the questions carried the discussion along nicely and the whole event seemed to be enjoyed by all.

The panel and the venue

Slut Shaming, misogyny and technology

This evening I shall be participating in a panel on slut shaming. The university has been quick to organize this panel in response to the slut shaming riots in December. The panel has the Swedish title NÄTMOBBNING – vad är det och vilken roll spelar den nya tekniken? which places the focus on two things (1) what is cyberbullying (2) what role does the new technology play.

Obviously the technology is vital. You just can’t have cyberbullying without the cyber. But there is an interesting undertone to the second question, and my role will be to try to strike a balance: explaining how the technology can create or aggravate human behavior, while at the same time ensuring that the technology itself is not treated as the problem.

Misogyny is not created by technology. BUT… the social norm systems embedded in the technology and the technology users MAY create misogynistic socio-technical systems. Therefore it would be strange not to place some of the (moral) social responsibility on systems developers.

Guns don’t kill people. But gun designers develop superior killing machines, and placed in the hands of people with intent, guns become much more efficient at killing than a bag of soft toys. (Gotta love an odd metaphor…)

So that’s the plan. Please drop in, if you happen to be in the neighborhood. It’s at 6pm in the university aula at Vasaparken.

Why we use technology: Chekhov's gun & expiry dates

This tweet by @Asher_Wolf at 4:25 am on 25 December contains a photo of a tear gas canister used by the police to try to control the Delhi rape riots. The interesting thing here is that the tear gas has an expiry date of May 2009.

The picture got me thinking about different motivating factors for using a certain technology. This post is an exploration. It is not a critique of the decision by the police to use tear gas in this specific situation.

Chekhov’s gun

Chekhov’s gun is a metaphor for a dramatic principle, a certain inevitability: if a loaded gun is shown at the beginning of a play, it will be used before the play is over. Otherwise the gun should not have been shown.

In this case Chekhov’s gun is the fact that the police have tear gas in their supplies. Any technology we have at our disposal does not simply provide us with an opportunity for action but also creates a demand for action. Possessing the technology creates a desire for its use. Chekhov’s gun is particularly true of new technology.

The desperation of technology

Spending Christmas in Stockholm this year provided an excellent example of this. The days before Christmas saw large amounts of powdery new white snow fall on the city, so Christmas day naturally saw many kids playing with new winter gear. My home city of Göteborg was less fortunate: much of the snow had melted due to rain. Despite this, many kids were trying to use sleighs on the few icy patches available. They had new technology and were driven to use it.

The frugal cook

One of the common complaints in the days after Christmas is that many of us are forced to continue eating Christmas food. We may be tired of the taste, but we cannot bring ourselves to throw away good food. There is another reason. The Christmas season is a particularly expensive one. So after the main event, after the wrapping paper is cleared away, it is natural that our more frugal natures rise to the fore.

We are not necessarily eating Christmas leftovers because we like them, nor because we cannot afford alternative food – we are eating them as a form of punishment for our excess: the term “waste not, want not” is, in this case, a form of puritanical punishment.

Frugal Riot Control

Hence the case of the outdated riot gear.

(1) Since the tear gas has been bought, it must be used (Chekhov’s gun).

(2) If no legitimate situation arises we will redefine reality to legitimize use. (Desperation of technology)

(3) Stockpiles of old technology prevent us from buying new technology. Therefore we must use the old in order to be allowed to buy the new (frugal cook).

So what?

Attempting to understand why people act is very interesting – but it is also quite impossible to know for certain. While I am sure that all official records of the use of tear gas during the riots will show that the situation warranted its use – the nagging question always rests in my mind: Why did they use this technology? Why now?

Technology drives human action. It’s not deterministic; we have choice. But many of the reasons we decide to use, or not to use, a technology may have less to do with us than with the technology.

Police, Evidence and Facebook

One of the things I presented at IR13 was a 10-minute panel presentation on the regulation of the Internet by spaces such as Facebook. I wanted to use this all too brief time to enter into a discussion of a problem of police, policing, procedural rules and technological affordances – easy, right?

This is going to be a paper soon, but I need to get some of the ideas out so that I remember the order they are in and so that people who know better can tell me how horribly wrong, ignorant and uninformed I am about the rules of evidence in different jurisdictions.

So the central argument is that computers have been used for a long time in police work and we have created safeguards to ensure that these computers and databases are not abused. In order to prevent abuse most countries have rules dictating when the police can search databases for information about someone.

Additionally, many countries have more or less developed rules surrounding undercover work, surveillance work and the problem of what to do with excess information (i.e. information gained through surveillance but not relating to the investigation that warranted the surveillance). As you can tell I need to do more reading here. These will all be in the article but here I want to focus on a weakness in the rules of evidence, which may be presented to the courts. This weakness, I argue, may act as an encouragement to certain police officers to abuse their authority.

Facebook comes along and many government bodies (not limited to the police) are beginning to use it as an investigative tool. The anecdotal evidence I have gathered suggests there are no limitations within the police on using Facebook to get better photos of suspects, finding suspects by “trawling” Facebook, and even going undercover to befriend suspects.

Now here is an interesting difference between Anglo-American law and Swedish law (I need to check whether this applies to most or all civil code countries): the Anglo-American system is much better at regulating this area in favor of individual rights. Courts routinely decide whether or not information gathered is admissible. If a police officer in America gathers information illicitly, it may be excluded from the proceedings.

In Swedish law all information is admissible. The courts are deemed competent to handle the information and decide upon its value. If a police officer gathers information illicitly in Sweden it is still admissible in court, but he may face disciplinary action from his employer.

So here’s the thing: if an officer decides he doesn’t like the look of me, he has no right to check up on me in the police databases. But there is no limitation on going online.

He may then find out that some of my friends have criminal records (I have several activist friends with police records) or find politically incorrect, borderline illegal status updates I wrote while drunk (I have written drunk statements on Facebook).

This evidence may be enough to enable him to argue probable cause for a further investigation – or at least (and here is the crux of my argument) ensure that he will not be disciplined harshly in any future hearing (should such a hearing arise).

The way the rules are written, Facebook provides a tool that can be used to legitimize abuse of police power. And the way the rules are written in Swedish law is much more open to such abuse.

Here are the slides I used for the presentation.

Is there an inverse Filter Bubble?

The whole concept of Filter Bubbles is fascinating. It’s the idea that services like Google & Facebook (and many more) live on collecting data about us. To do this more efficiently they need to keep us happy. Happy customers keep using the service, ergo more data. To keep us happy they organize and filter information and present it to us in a pleasing way. Pleasing me requires knowing me. Or as Bernard Shaw put it: “Do not do unto others as you would that they should do unto you. Their tastes may be different.”

It’s this organizing that creates problems. At its most benign, Google attempts to provide me with the right answer for me. So if I search for the word “bar” Google may, based on my previous interests (searches, mail analysis, YouTube views etc.), present me with drinking establishments rather than information about the unit of pressure. Maybe useful, maybe annoying. The problem occurs when we move on to more difficult concepts. The filter bubble argument is that this organization is in fact a form of censorship, as I will not be presented with the full range of information. (Some other terms of interest: echo chamber, the daily me & the daily you.)

Recently I have been experimenting with filter bubbles and have begun to wonder if there is also an “inverse” filter bubble on Facebook. The inverse filter bubble occurs when a social media provider insists on keeping a person or subject in your feed and advertising despite all user attempts to ignore the person or topic.

So far I am working with several hypotheses:

  1. The bubble is not complete
  2. The media provider wants me to include the person/topic into my bubble
  3. The media provider thinks or knows of a connection I do not recognize
  4. The person I am ignoring is associating heavily with me (reading posts, clicking images etc)

This is a fascinating area and I need to set up some ways of testing the ideas. As usual all comments and suggestions appreciated.
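As one way of making the ideas testable, hypothesis 2 can be turned into a toy ranking model: if the provider adds its own boost to a story’s score, that boost can outweigh all my signals of disinterest. The formula, weights, and parameter names below are pure illustration, not Facebook’s actual (undisclosed) algorithm:

```python
def feed_score(affinity, provider_boost, user_hides):
    """Toy model of the score a provider might assign a story.
    affinity:       how much I interact with the poster (0..1)
    provider_boost: provider-side weight pushing the story at me (0..1)
    user_hides:     how many times I have hidden/ignored this person
    All weights are illustrative assumptions."""
    return affinity + 2.0 * provider_boost - 0.1 * user_hides

# An "inverse bubble": even with zero affinity and repeated hides,
# a large enough provider-side boost keeps the ignored person's story
# ranked above a mildly liked friend's story.
ignored = feed_score(affinity=0.0, provider_boost=0.8, user_hides=5)
liked = feed_score(affinity=0.6, provider_boost=0.0, user_hides=0)
print(ignored > liked)  # True
```

If something like this is at work, the observable symptom is exactly the one described above: the person keeps surfacing no matter how consistently I ignore them.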

Neil Armstrong, nerdy engineer

I am, and ever will be, a white-socks, pocket-protector, nerdy engineer, born under the second law of thermodynamics, steeped in steam tables, in love with free-body diagrams, transformed by Laplace and propelled by compressible flow.

– Neil Armstrong (1930-2012)