This is a nice video infographic by To-Fu Design: “29 Ways to Stay Creative.”
The most useful tips: stay off the computer, go somewhere new, and surround yourself with creative people.
Digital Rights Management. It’s a term designed to put you to sleep and make you ignore what is happening around you. Wikipedia says: Digital Rights Management (DRM) is a class of technologies used by hardware manufacturers, publishers, copyright holders, and individuals with the intent to control the use of digital content and devices after sale. It’s most common on digital products, but things are getting more interesting and we should be paying more attention.
The DRM chair was a fun way of demonstrating the destructive elements involved in applying DRM – especially outside the world of software. The chair would self-destruct after being used 8 times. This was a perfect illustration of the way in which technology can be used to hobble the things we surround ourselves with. It was a thoughtful, illustrative mix of art, design and political commentary. It wasn’t supposed to be an instruction manual…
…CEO Brian Kelley says its new coffee makers will include technology that prevents people from using pods from other companies. The approach has been compared to DRM restrictions that limit the sharing of digital music and video online. But more than just curbing your coffee choices, Green Mountain’s protections portend the kind of closed system that could gut the early promise of the Internet of Things — a promise that hinges on a broad network of digital, connected devices remaking the everyday world.
Cory Doctorow comments
I think Keurig might just be that stupid, greedy company. The reason they’re adding “DRM” to their coffee pods is that they don’t think that they make the obviously best product at the best price, but want to be able to force their customers to buy from them anyway. So when, inevitably, their system is cracked by a competitor who puts better coffee at a lower price into the pods, Keurig strikes me as the kind of company that might just sue.
This is just coffee. Not even particularly interesting coffee, but what’s interesting is where we are heading. It is now easy and affordable enough for a seller of coffee to consider DRM: to limit consumers’ ability to change products, to buy a more affordable or better-tasting brand. If it’s cheap enough to do this stupidity with coffee, why would we imagine a world where this does not happen with everything else? Imagine a future where the spices you have will not blend with your lunch because they are sold by different corporations.
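Keurig never disclosed how its pod check would actually work, so to make the mechanism concrete here is a minimal, purely hypothetical sketch: a brewer that only accepts pods carrying a valid cryptographic tag issued by the manufacturer. Every name and value here is my own assumption, not Keurig’s design.

```python
import hmac
import hashlib

# Hypothetical manufacturer key, assumed to be baked into every brewer.
SECRET_KEY = b"manufacturer-secret"

def sign_pod(serial: str) -> str:
    """At the factory: tag a pod with an HMAC of its serial number."""
    return hmac.new(SECRET_KEY, serial.encode(), hashlib.sha256).hexdigest()

def brewer_accepts(serial: str, tag: str) -> bool:
    """In the kitchen: the brewer refuses to brew unless the tag verifies."""
    return hmac.compare_digest(sign_pod(serial), tag)

official = ("pod-001", sign_pod("pod-001"))   # manufacturer's own pod
knockoff = ("pod-002", "0" * 64)              # competitor's pod, no valid tag

print(brewer_accepts(*official))  # True
print(brewer_accepts(*knockoff))  # False
```

The point of the sketch is how little it takes: a few lines of verification code and a secret key turn an open consumable into a locked one, which is exactly why it is now cheap enough to DRM coffee.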
Technological systems leave their mark on the way in which we live our lives. An obvious example is this fascinating nighttime photo of North and South Korea taken from the International Space Station. It’s obvious because the two countries are separated by access to the basic supply of electricity.
North Korea is almost completely dark compared to neighboring South Korea and China. The darkened land appears as if it were a patch of water joining the Yellow Sea to the Sea of Japan. Its capital city, Pyongyang, appears like a small island, despite a population of 3.26 million (as of 2008). The light emission from Pyongyang is equivalent to the smaller towns in South Korea.
An even more brilliant (bad pun) illustration is the image of Berlin by night taken from the International Space Station. The photo, taken by Colonel Chris Hadfield, shows that the city still carries with it the heritage of the division. The Berlin Wall came down in 1989 and since then Berlin has been rapidly unifying and developing. Despite this, the East-West divide can be seen in the color of the street lights.
The technological systems follow the political and administrative lines of the past and cannot be removed as easily as the wall which divided them. The Guardian explains the different colors:
Daniela Augenstine, of the city’s street furniture department, says: “In the eastern part there are sodium-vapour lamps with a yellower colour. And in the western parts there are fluorescent lamps – mercury arc lamps and gas lamps – which all produce a whiter colour.” The western Federal Republic of Germany long favoured non-sodium lamps on the grounds of cost, maintenance and carbon emissions, she says.
These examples are traces of systems that have failed (or are going to fail). They work on the principle that by controlling users with force they can maintain power. In the end, systems like these will collapse, because the effort of keeping control outstrips the ability to control. Real control is efficient only when (1) the users internalize the surveillance/supervision (Foucault: Panopticon) AND (2) users believe that they are acting out of their own convenience and desire.
What fascinates me about these examples is the way in which our technology use marks our surroundings. An obvious example of this is the desire path: the line which appears in the snow, or the bare track in the grass, that shows how the world is really used by people as opposed to how the designer believed the technology would be used.
The difference between expected use and actual use. Technology use leaves its traces in our consumption of, and adaptation to, the technology upon which we rely. However, it works both ways. By controlling the technology we rely on, we the users can be led to believe that we desire the features of control that are provided.
An example of this is the way in which the popularity of the iPhone is in no way diminished by the fact that, from a usability point of view, Android operating systems are infinitely more adaptable to different needs. Or the way in which the collection of data from technology users is all but ignored by the users in their desire for convenience.
If the iTunes/iPhone ecosystem is to be compared to a silo keeping its users locked in, then it can only succeed if (1) the users can be convinced that they are happy with the surveillance/control (Foucault: Panopticon) AND (2) any other alternative would be less convenient. If (1) fails then users will happily jailbreak their devices (on a much larger scale than now), and if (2) fails then the system will eventually collapse under its own weight when users realize that life is better on the other side of the wall.
We will all be controlled by the path of least resistance.
Lifelogging has been a buzzword for some time now, but it’s still a cumbersome task for most of us. This is not going to last long.
One device that’s going to make this all too easy is the Memoto, which has the tag line “Remember every moment.”
The product is small and simple: clip it on and it takes two photos per minute until you take it off. In the promotion video Memoto says: “What if we could build a camera small enough to never be in the way, but smart enough to capture life as we live it.”
The mass of 5-megapixel pictures is stored by Memoto, together with the time and the location where they were taken. Via an app the photos are searchable by GPS position and time.
When the images are stored in the cloud they are organized into moments, each represented by the algorithmically chosen most interesting image.
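Memoto’s actual algorithm is unpublished, but the basic idea can be sketched: at two photos per minute a waking day yields thousands of timestamped pictures, which can be split into “moments” wherever there is a long pause between shots. The gap threshold and all names below are my own assumptions, not Memoto’s.

```python
from datetime import datetime, timedelta

# Two photos per minute over a 16-hour waking day:
# 2 * 60 * 16 = 1920 pictures -- comfortably "over 1000" a day.
photos_per_day = 2 * 60 * 16

def group_into_moments(timestamps, gap=timedelta(minutes=30)):
    """Split a sorted list of photo timestamps into 'moments':
    a new moment starts whenever the pause since the previous
    photo exceeds the gap threshold."""
    moments = []
    current = []
    for t in timestamps:
        if current and t - current[-1] > gap:
            moments.append(current)
            current = []
        current.append(t)
    if current:
        moments.append(current)
    return moments

# Hypothetical day: a burst of photos in the morning, a long pause, a burst at noon.
day = [datetime(2013, 1, 1, 8, 0) + timedelta(seconds=30 * i) for i in range(10)]
day += [datetime(2013, 1, 1, 12, 0) + timedelta(seconds=30 * i) for i in range(10)]

print(photos_per_day)                 # 1920
print(len(group_into_moments(day)))   # 2
```

Picking the single “most interesting” representative per moment is the harder, proprietary part; the grouping itself is almost trivial.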
Sure, this is a cool toy: it’s small, light and colorful. But it also raises several ethical questions.
Despite all these questions the devices are available and will probably be commonplace soon. A day will produce over 1,000 pictures – which explains the need for the algorithm to help us sift through the garbage. But even then I suspect that most of us will realize that we live fundamentally boring lives, probably not worth documenting.
This tweet by @Asher_Wolf at 4:25 am on 25 December contains a photo of a tear gas canister used by the police to try to control the Delhi rape riots. The interesting thing here is that the tear gas has an expiry date of May 2009.
The picture got me thinking about different motivating factors for using a certain technology. This post is an exploration. It is not a critique of the decision by the police to use tear gas in this specific situation.
Chekhov’s gun is a metaphor for a dramatic principle, a certain inevitability: if a loaded gun is shown at the beginning of a play, it will be used before the play is over. Otherwise the gun should not have been shown.
In this case Chekhov’s gun is the fact that the police have tear gas in their supplies. Any technology we have at our disposal does not simply provide us with an opportunity for action but also creates a demand for action. Possessing the technology creates a desire for its use. Chekhov’s gun is particularly true of new technology.
The desperation of technology
Spending Christmas in Stockholm this year provided an excellent example of this. The days before Christmas saw large amounts of powdery new white snow fall on the city. Christmas Day therefore naturally saw many kids playing with new winter gear. My home city of Göteborg was less fortunate: much of the snow had melted due to rain. Despite this, many kids were trying to use sleds on the few icy patches available. They had new technology and were driven to use it.
The frugal cook
One of the common complaints in the days after Christmas is that many are forced to continue eating Christmas food. We may be tired of the taste, but we cannot bring ourselves to throw away good food. There is another reason. The Christmas season is a particularly expensive one, so after the main event, after the wrapping paper is cleared away, it is natural that our more frugal natures rise to the fore.
We are not necessarily eating Christmas leftovers because we like them, nor because we cannot afford alternative food – we are eating them as a form of punishment for our excess: the term “waste not, want not” is, in this case, a form of puritanical punishment.
Frugal Riot Control
Hence the case of the outdated riot gear:
(1) Since the tear gas has been bought it must be used (Chekhov’s gun).
(2) If no legitimate situation arises we will redefine reality to legitimize use. (Desperation of technology)
(3) Stockpiles of old technology prevent us from buying new technology. Therefore we must use up the old in order to be allowed to buy the new (frugal cook).
Attempting to understand why people act is very interesting – but it is also quite impossible to know for certain. While I am sure that all official records of the use of tear gas during the riots will show that the situation warranted its use – the nagging question always rests in my mind: Why did they use this technology? Why now?
Technology drives human action. It’s not deterministic – we have choice. But many of the reasons we decide to use, or not to use, a technology may have less to do with us than with the technology.
To create a web, what is needed is links. The explosion of links and the growth of the web show how extremely effective users have been at creating a system of seemingly unlimited linked knowledge.
But is this still growing? Are we still linking items freely together? I don’t have any data, so this is pure speculation (what else is new here?).
My linking practice has changed radically. Sure I send tons of links out via Twitter and quite a few via Facebook and even a few via Google+.
Occasional blogging includes a few links, but nowhere near as many as before, and my blog includes few permanent links to other blogs & sites. Part of this is because of the annoyance with dead links, but mostly it’s because of the growth of social media.
What will the changes in linking habits mean for the open web outside the walled gardens of social media sites? Could it be that the wild web is slowly slipping into obscurity and all that is left will be the controlled versions – or will we see a revival?
These thoughts began when I read Do people still see blogs as networks? – does adding a link to this post defy the original question?
Is there anything more boring than reading instructions or manuals? OK, there is some sadomasochistic enjoyment in the frustration of attempting to decipher the badly translated or incomplete. What is interesting is the huge leap between the dry explanatory text and the emotional response when we use a piece of well-designed technology.
Came across an interesting quote on the nature of man from Buckminster Fuller (apparently from the chapter “The Phantom Captain” in Nine Chains to the Moon):
Man? Man is a self-balancing, 28-jointed adapter-base biped, an electro-chemical reduction plant, integral with the segregated stowages of special energy extracts in storage batteries, for subsequent activation of thousands of hydraulic and pneumatic pumps, with motors attached; 62,000 miles of capillaries, millions of warning signal, railroad and conveyor systems, crushers and cranes, and a universally distributed telephone system needing no service for seventy years if well managed, the whole extraordinary complex mechanism guided with exquisite precision from a turret in which are located telescopic and microscopic self-registering and recording range-finders, a spectroscope, etc.
Wonderful and precise, but lacking something essential to explain the way in which we behave when we are in love. It is not that I miss a reference to a soul or a deity, but it is difficult, if not impossible, to reduce people to the sum and function of their parts.
These vague thoughts can also be applied to technological systems. On paper they show their intent and purpose but once implemented into a social context they may warp and change into something that was not intended.
The BBC podcast of The Media Show with Steve Hewlett is always interesting to listen to. The latest show I listened to (episode 28 September 2011) contained a segment on the recent changes to Facebook and what these may mean for privacy. Hewlett interviewed Facebook’s Christian Hernandez and attempted to get him to see the privacy effects of the new changes.
Basically the new changes mean that your friends will see what you are doing online – unless you opt out of showing those specific pages. In other words Facebook will happily announce to your “friends” that you have been looking at pages on weight loss (or whatever) and naturally let them draw their own conclusions from what they see.
Hernandez was quick to stress the elements of user control over their information. If you choose, you may opt out of showing friends the specific pages you are viewing right now. Additionally, if you forget, you can remove the pages after the fact.
My problem with the former is that I need to be aware that my Facebook friends will always be looking over my shoulder – and I am easily going to forget this. As for the latter: once my friends know what I have looked at, removing the links/pages/information is not effective… I have already outed myself.
When pressed for the reasoning behind these privacy-encroaching changes, Hernandez talked about Zuckerberg’s vision of a social net. When pressed further he returned to the concept of user control. Eventually he did accept that these changes will require user education.
In other words we, the users, need to learn new proactive, protective forms of behavior. The platform owner has washed its hands – it’s our problem now that they have given us the gift of freedom and control. Wonderful terms like freedom and control become red herrings in the world of data harvesting.
But if we are in danger from social media, shouldn’t we be able to expect the state to regulate to protect us from our own behavior? It has done so in areas such as smoking, seat belts and motorcycle helmets… Sure, there is a lot of interest in attempting to update privacy regulation from the pre-social-media age – but it’s tricky. Also, not everyone is in favor of regulation.
An example of this is Jeff Jarvis’ recent book Public Parts – Gordon Crovitz reviewed it in the Wall Street Journal:
“Congress is considering several privacy bills. But Mr. Jarvis calls it a ‘dire mistake to regulate and limit this new technology before we even know what it can do.’
“Privacy is notoriously difficult to define legally. Mr. Jarvis says we should think about privacy as a matter of ethics instead. We should respect what others intend to keep private, but publicness reflects the choices ‘made by the creator of one’s own information.’ The balance between privacy and publicness will differ from person to person in ways that laws applying to all can’t capture.”
Jarvis is right that it is complex to regulate what we do not fully understand, but this means that in the meantime we are losing our privacy rights every time the platform owners make changes – nominally to increase our freedom and control, but in reality to increase their control and profits. Let’s never forget what MetaFilter user blue_beetle wrote: “if you’re not paying for something, you’re not the customer; you’re the product being sold”.
Profiteers may act to protect access to raw material – not the rights of raw material.
What happens when we finally reach a point of information saturation? Can we see information in the same way as food? Some food is healthy, some is unhealthy, but no matter what the food, overeating is never a good thing.
Imagine a world in which every single person on the planet is given free access to the sum of all human knowledge
This is, in essence, a wonderful idea – but imagine what would happen in a world where the sum of all human knowledge is available. I began to explore this in a presentation called Wikipedia & Dr Faustus?, where I discussed the effects of all the world’s information being made available.
The problem with wishing for access to information is that we today have an infrastructure that can provide all the information we desire, but the technology will not discriminate between healthy and unhealthy information. As part of my summer reading I began The Filter Bubble by Eli Pariser and came across an interesting quote from Danah Boyd, from her speech at the 2009 Web 2.0 Expo:
Our bodies are programmed to consume fat and sugars because they’re rare in nature… In the same way, we’re biologically programmed to be attentive to things that stimulate: content that is gross, violent, or sexual and that gossip which is humiliating, embarrassing, or offensive. If we’re not careful, we’re going to develop the psychological equivalent of obesity. We’ll find ourselves consuming content that is least beneficial for ourselves or society as a whole.
Maybe Boyd is making a value judgement on the different forms of information when she compares the “gross, violent, or sexual” to fatty foods – which would presumably make necessary facts and information (e.g. maps, statistics) high-protein or high-fiber. In relation to food we are programmed for fats and sugars, but in relation to information we are programmed for relationships. Information about which berries are edible varies from place to place, but information about relationships is universal. We are programmed to be wary of precisely the gross, the sexual, the humiliating and the embarrassing – our survival in the group depends upon it.
The problem is that our interest in these areas concerns other people – people to whom we are not related and upon whom we do not depend; they serve only as entertainment or simple diversion. The evolutionary role of diversion is unclear, but we certainly do seem to desire it – or at least fear boredom. So in our desire to avoid boredom we overindulge in our consumption of unhealthy information.
There are basically two ways of dealing with over-consumption: (1) more exercise, or (2) dieting. The former is not really efficient; it is more a method of coping with the effects of over-consumption. The latter is healthier, as it reduces the intake and avoids the negative side effects of over-consumption. Exercise is hard work, but dieting is harder still: it goes against our natural instinct to overindulge in preparation for the next information glut.
We need to learn healthy information habits right from the start and to ensure that we keep away from information binges. Staying information healthy may be important, but it sure sounds boring.