Live Facial Recognition: People, Power, and Privacy in the Surveillance Machine
Drs. Nora Madison and Mathias Klang
Live Facial Recognition (LFR) represents the next evolution in the surveillance society. This talk will provide an overview of the development and implementation of LFR systems, discuss the ethical implications of LFR, and briefly introduce techniques and technologies for avoiding or subverting LFR. The goal is to demonstrate the need for a deeper understanding and societal debate before uncritically accepting this far-reaching threat to our privacy.
Jul 30, 2020 04:00 PM in Eastern Time (US and Canada)
Algorithms are flawed. And yet they seem to be the best technology companies have to offer. How many products claim to “learn from your behavior”? But what happens when I am the weaker party in this information exchange? There is no way for me to know what gems are hidden in the database, so once again the products recommended to me are repetitive or shallow.
So it was great to stumble upon Susanna Leijonhufvud’s Liquid Streaming, a thesis on Spotify and the ways in which algorithmically selected streaming music not only learns from our experiences but, more interestingly, acts to train us into being musical cyborgs (à la Haraway).
Starting from the human, the human subject can indeed start to act on the service by asking for some particular music. But then, as this music, this particular track, may be a part of a compilation such as an album or a playlist, the smart algorithms of the service, e.g. the machine, will start to generate suggestions of music back to the human subject. Naturally, the human subject can be in charge of the music that is presented to her by, for instance, skipping a tune, while listening on a pre-set playlist or a radio function. Still, the option in the first place is presented through a filtering that the machine has made, a filtering that is originally generated from previously streamed music or analysis of big data, e.g. other networked subject’s streamed music. Added to this description; if an input derives from the subject’s autonomous system, then the analogy of an actor-network is present on yet other layers. The actor-network of the musical cyborg work both within the subject itself, as the subject is not consistent with an identity as an entity, as well as between the subject and the smart musical cicerones.
Leijonhufvud (2018) Liquid Streaming p. 274
We often forget this feedback loop. Since we are trained by the algorithms, the level of serendipity and growth is relatively low and we tend to get stuck in a seemingly narrow spiral, especially considering we are supposed to have access to an almost infinite amount of music.
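To make the feedback loop concrete, here is a toy sketch. Nothing in it reflects Spotify’s actual algorithm; the recommend function, the genre list, and the exploration rate are all invented for illustration. It only shows how a recommender that mostly feeds back what has already been streamed narrows what gets heard.

```python
import random
from collections import Counter

# A made-up genre catalogue for the illustration.
GENRES = ["pop", "rock", "jazz", "folk", "hip hop", "classical", "ambient", "metal"]

def recommend(history, explore=0.05):
    """Suggest a genre: usually one already streamed, occasionally something new."""
    if history and random.random() > explore:
        # Exploit: weight suggestions by what the listener already plays.
        return random.choice(history)
    # Explore: rarely surface something from outside the listening history.
    return random.choice(GENRES)

random.seed(1)
history = ["pop", "jazz"]             # the listener's first, self-chosen tracks
for _ in range(500):                  # every play feeds the next recommendation
    history.append(recommend(history))

# The most recent plays are dominated by a handful of genres.
print(Counter(history[-100:]))
```

The rich-get-richer dynamic is the point: whatever the listener happened to start with keeps getting reinforced, and serendipity only enters through the small exploration slice.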
As a newish Spotify user who is musically ignorant, I often find the algorithm to be laughably unhelpful since it does little to expand my horizons; as such it is less of a cicerone (a knowledgeable guide) and more of a frustrated and frustrating gatekeeper.
It would be nice not to have the things I already know recommended to me ad infinitum, but rather to be shown things I have not seen or heard. Sure, I may hate them, but at least I would have a chance of expanding my repertoire.
In a fascinating addition to the screen time debate (aka is social media hurting the kids), Przybylski & Orben have published a study in Nature Human Behaviour. The study is based on massive amounts of statistical data and has once again shown that we shouldn’t be freaking out about screens or social media. Since the market for fear-mongering books about technology that tickle parental paranoia is profitable, I doubt that this will settle the discussion.
Highlights from their study:
With this in mind, the evidence simultaneously suggests that the effects of technology might be statistically significant but so minimal that they hold little practical value.
While we find that digital technology use has a small negative association with adolescent well-being, this finding is best understood in terms of other human behaviours captured in these large-scale social datasets. When viewed in the broader context of the data, it becomes clear that the outsized weight given to digital screen-time in scientific and public discourse might not be merited on the basis of the available evidence.
More harmful than screens
For example, in all three datasets the effects of both smoking marijuana and bullying have much larger negative associations with adolescent well-being… than does technology use.
More important than reducing screen time
Positive antecedents of well-being are equally illustrative; simple actions such as getting enough sleep and regularly eating breakfast have much more positive associations with well-being than the average impact of technology use…
Best line in the paper…
Neutral factors provide perhaps the most useful context in which to judge technology engagement effects: the association of well-being with regularly eating potatoes was nearly as negative as the association with technology use…
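The point about statistical versus practical significance is easy to see with a back-of-the-envelope simulation. The sketch below uses made-up data, not the study’s: it assumes a tiny true correlation of about -0.03 in a sample of around 300,000 (numbers chosen only to echo the scale being discussed), and shows that the p-value becomes vanishingly small even though the effect explains almost none of the variance in well-being.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 300_000                                   # illustrative sample size
screen_time = rng.normal(size=n)
# Well-being is almost entirely noise; screen time contributes only a sliver.
well_being = -0.03 * screen_time + rng.normal(size=n)

r, p = pearsonr(screen_time, well_being)
print(f"r = {r:.3f}, p = {p:.2g}, variance explained = {r**2:.2%}")
# Typically prints r around -0.03, a p-value far below 0.001,
# and a variance explained well under 0.1%.
```

With samples this large, almost any non-zero association clears the significance bar, which is exactly why the effect size, not the p-value, is the number to argue about.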
In many spaces, mobile digital devices and social media are ubiquitous. These devices and applications provide the platforms with which we create, share and consume information. Many of us obtain much of our news and social information via the personal screens we constantly carry with us. It is therefore unsurprising that these devices also become integral to acts of social activism and resistance.
This digital resistance is most visible in the virtual social movements found behind hashtags such as #BlackLivesMatter, #TakeAKnee, and #MeToo. However, it would be an oversimplification to limit digital resistance to its most popular expressions. Video sharing on YouTube, Twitter, and Facebook has revealed abuses of police power, racist attacks, and misogyny. The same type of device is used to record, share, and view instances of abuse. The devices and platforms are also used to organize and coordinate responses, ranging from online naming and shaming and online protests to physical protests. The devices and the platforms are then used to share the protests and their results. More and more, the device and the platform are the keyhole through which resistance must fit.
Our devices and access to platforms enable the creation of self-forming and self-organizing resistance movements capable of sharing alternative discourses in advocating for diverse social agendas. This freedom shapes the individual’s relationship to both power and resistance, as well as their identities and awareness as activists. It is somewhat paradoxical that something so central to the activist identity and the performance of resistance is in essence created and run as a privatized surveillance machine.
Digital networked resistance has received a great deal of media attention recently. The research field is developing, but more needs to be understood about the role of technology in the enactment of resistance. Our goal is to explore the role of both digital devices and platforms in the processes of resistance.
This special edition aims to understand the role of technology in enabling and subverting resistance. We seek studies on the use of technology in the acts of protesting official power, as well as the use of technology in contesting power structures inherent in the technology or the technological platforms. Contributions are welcome from different methodological approaches and socio-cultural contexts.
We are looking for contributions addressing resistance, power, and technology. This call is interested in original works addressing, but not limited to:
Problems with the use of Digital Resistance
Powerholders’ capacity to map Digital Resistance activists through surveillance
How does Digital Resistance differ and/or function compared with Non-digital Resistance?
Problems and advantages of combining Digital Resistance and non-Digital Resistance
Resistance to platforms
Hashtag activism & hijacking
Online protests & movements
The use of humor/memes as resistance
Selfies as resistance
Globalization of resistance memes
Ethical implications of digital resistance
Online ethnography (testimonials/narratives provided by online participants)
Issues concerning privacy, surveillance, anonymity, and intellectual property
Effective rhetorical strategies and aesthetics employed in digital resistance
Digital resistance: Research methods and challenges
The role of technology activism in shaping resistance and political agency
Shaping the digital protest identity
Policing digital activism
Digital resistance as culture
Virtual resistance communities
The affordances and limitations of the technological tools for digital resistance
Abstracts should be 500 – 750 words (references not included).
Anyone who is trying to think about platforms and their impact should be following Mark Carrigan. His Platform Capitalism Reading Group includes key readings and is a discussion I wish I could have attended. And then there is this simple text, What is platform literacy?, written in connection with a call for reading materials. Here is why it is such a fascinating question:
“In the last couple of years, I’ve found myself returning repeatedly to the idea of platform literacy. By this I mean a capacity to understand how platforms shape the action which takes place through them, sometimes in observable and explicit ways but usually in unobservable and implicit ones. It concerns our own (inter)actions and how this context facilitates or frustrates them, as well as the unseen ways in which it subtly moulds them and the responses of others to them.”
Nothing is more beautiful (and frustrating) to an academic than to read a simple paragraph that nails the questions rattling around in your own mind. Thanks Mark!
The goal of platform literacy is to be able to identify the subtle ways in which actions are directed, controlled, regulated and censored in the online environment. To mangle the great “Whereof one cannot speak, thereof one must be silent” (Wittgenstein): we cannot protest that which prevents us from protesting.
This lecture was about the ways in which the simulacrum is a model for surveillance. The idea was to present the ways in which surveillance can be seen as beginning with the juridical power of classical liberalism. This is best illustrated by the monarch’s right to wield power over life and death in order to enforce commands. An illustration of this can be seen in Bentham’s model prison. This classic surveillance was built into the architecture and focused on reducing the cost of monitoring the prisoners: in Bentham’s prison, efficiency was maximized when the few could easily and cheaply watch the many. Today, the norms we live by create the prison. Foucault writes in Volume One of The History of Sexuality:
Power was exercised mainly as a means of deduction, a subtraction mechanism, a right to appropriate a portion of the wealth, a tax of products, goods and services, labor and blood, levied on subjects… a right of seizure… it culminated in the privilege to seize hold of life in order to suppress it.
This is, of course, Foucault’s starting point when he sees the prison as a metaphor of control and power. In the panopticon it was the prisoners’ role to internalize their own surveillance and become their own guards. In the wider society this can be seen in the ways in which we all become our own guards, having internalized the social rules around us.
In order to illustrate this, I used art history to show the shift from a single dominant explanatory model to the complexity of regulatory and surveillance models. I started by showing them Diego Velázquez’s Las Meninas (1656). After describing what the image portrayed, we spoke of the positioning of the artist, the royal couple, the courtiers, the dwarves, and the dog. The meaning increased with the understanding of who everyone was. The next image was Thomas Gainsborough’s Mr and Mrs Andrews (1750), which portrays the wealthy couple showing off their wealth. It reminds me of the boastful elements of social media. The next portrait was John Singleton Copley’s Portrait of Sam Adams (1772). It shows Adams slightly disheveled, holding a petition and demanding change. His head is oddly sized for his body and it is hardly a flattering portrait. The image captures the high point of his activism rather than his physical prowess and wealth.
A major technological development was the camera. Once mechanical reproduction was possible, the question of what could be done in art was open for experimentation. From this point we see the development of different styles, from the expressionism of Edvard Munch’s The Scream (1893) to the Dadaism of Marcel Duchamp’s Fountain, Jackson Pollock’s abstract expressionism, and the pop art of Andy Warhol.
These forms of art are often criticized for being simple and hardly worthy of the praise and attention they receive. The goal was to explain the ways in which we have moved away from dominant explanations of the world and begun to accept multiple models of explanation that overlap and co-exist. The earlier works represent a single idea and identifiable people. Later art offers no such simple model of explanation; the works are there to be interpreted and can offer different answers to different people. It was a fun exercise.
It was particularly fun to ask the students whether something was art or not. Along with these famous images I also showed them an image of the prank in which someone placed a pair of glasses on the floor of a gallery and visitors began taking photos of them.
Asking the students if this was art led to an interesting discussion. If Marcel Duchamp could exhibit a urinal and everyone called it art, then what was different about the eyeglasses?
The goal was to discuss the world of surveillance without a dominant narrative and how power is redefined within it. Instead of centralized (juridical) power, we are left with our norms, and control takes the form of desire: the desire to belong to a group and to follow its norms. I had the students post questions on the readings in advance. Some of the questions were:
So is Bogard saying that being able to predict the actions of a population is a more effective form of control than making the population think it is constantly being watched?
Do you believe that people want to be ‘private’ in certain aspects of their life because of over-surveillance or has there always been an innate feeling that we are being watched?
Does constant surveillance morph people’s personalities over time?
Are simulations truly a way of surveilling, or something else? Are simulations a violation of privacy if they are not technically real?
With this I moved on to explaining the simulacrum as envisioned by Baudrillard, who asserted that,
as simulation ascends to a dominant position in postmodern societies, the sign’s traditional function of representation, i.e. its power to “mirror reality” and separate it from false appearances, comes to an end, along with its role in the organization of society.
and
The utopian goal of simulation…is not to reflect reality, but to reproduce it as artifice; to “liquidate all referentials” and replace them with signs of the real. The truth of the sign henceforth is self-referential and no longer needs the measure of an independent reality for its verification.
In explaining the simulacrum I turned to The Idea Channel and Mike Rugnetta to help illustrate the concept. His video “How Is Orphan Black An Illustration of the Simulacrum?” is a great and popular way to introduce the concept.
So Baudrillard shares Foucault’s sense that the panoptic model of enclosure and its disciplinary logic are historically finished. They are not enough to explain the ways in which norms are used as control and surveillance.
The discipline enforced by panoptic surveillance evolves into a general “system of deterrence,” in which submission to a centralized gaze becomes a general codification of experience that allows no room for deviation from its model. In post-panoptic society, subjectivity is not produced by surveillance in the conventional sense of hierarchical observation, but by codes intended to reproduce the subject in advance.
Not to mention that…
…power does not vanish, but becomes simulated power, no longer instantiated and invested in the real, but rather reproduced in codes and models.
In order to help explain the ways in which norms do the work of enforcement, I used Judith Butler’s idea that it is not that we are determined by norms, but rather that we are determined by the repeated performance of norms. In Gender Trouble she writes:
In a sense, all signification takes place within the orbit of the compulsion to repeat; ‘agency’, then, is to be located within the possibility of a variation on that repetition.
The next step was to introduce the concept of biopower. This is becoming more and more interesting with the increasing use of wearable devices and fitness apps. This mode of surveillance comes with the idea of measurement against ideas of the normal. Once we introduce concepts such as IQ, standardized testing, and BMI, we instantly measure ourselves against them. They are the basis for creating a “correct” way to be. After the readings, some of the questions asked by the students were:
Is invisibly guiding people towards information that reinforces their biases (presumably what they want) a form of corporate efficiency, informational slavery, or both?
Does this type of surveillance bother us? Why not?
How does personal technology increase the constant surveillance of our bodies?
But how can we really trust algorithms in surveillance?
Here are the PowerPoint slides I used in the class.
While writing on a totally unrelated topic I fell down a different rabbit hole, this one about the role of the bicycle in the liberation of women. I vaguely remember an example of how the bicycle enabled women to take their own letters to the post office and therefore escape the control of men, but I have been unable to find a good source for this. There are several other examples and quotes about the ways in which the bicycle enabled changes in fashion and the social order.
There is a new dawn … of emancipation, and it is brought about by the cycle. Free to wheel, free to spin out in the glorious country, unhampered by chaperones … the young girl of today can feel the real independence of herself and, while she is building up her better constitution, she is developing her better mind.
Louise Jeye (1895)
No girl over the age of 39 should be allowed to wheel. It is immoral. Unfortunately, it is older girls who are ardent wheelers. They love to cavort and careen above the spokes, twirling and twisting in a manner that must remind them of long-dead dancing days.
Kit Coleman (1889)
Then there is this anti-women-on-bicycles rant which states that “the bicycle is the devil’s advance agent”.
Or this one from The Sunday Herald (1891): “I had thought that cigarette smoking was the worst thing a woman could do, but I have changed my mind.” Now it’s riding a bicycle.
But Susan B. Anthony was a fan of the new mode of transportation and credited it with playing a large part in the emancipation of women.
Let me tell you what I think of bicycling. I think it has done more to emancipate women than anything else in the world. It gives women a feeling of freedom and self-reliance. I stand and rejoice every time I see a woman ride by on a wheel…the picture of free, untrammeled womanhood
Here is a Vox video on how bicycles boosted the women’s rights movement.
It’s a discussion that has been going on since the start of the Free Software movement in the 80s (and maybe even earlier), and it’s taking a more sinister and urgent turn. There are two parts to the problem, both addressed in Joshua Fairfield’s book “Owned: Property, Privacy and the New Digital Serfdom” and in the article he wrote for Quartz: A Roomba of One’s Own. The first part is the question:
If we are surrounded by devices we bought but do not control, do we really own them?
This is the challenge to the very idea of property that we are facing today. The books you buy for your Kindle are less yours than the books you have on your shelf (they are more leased than owned). The devices that you cannot repair are a clear example of the ways in which your stuff is really more of a rental than a possession.
The second part is all about the data our devices collect about us. We have always been under surveillance but the difference is that now we are the ones buying the surveillance devices AND providing all the data for surveillance. Recently there was a fascinating display of this when Netflix posted this tweet:
Some people thought it was amusing while others saw it as creepy. But it is a simple example of how everything we do is being mined for data. It was just a piece of humor, yet it is also an excellent visualization of the power of data collection, and it’s not even a complex example.
The new iPhone doesn’t have a fingerprint reader for unlocking.
Face ID on the iPhone X uses a “TrueDepth” camera setup, which blasts your face with more than 30,000 infrared dots and scans your face in 3D. Apple says this can “recognize you in an instant” and log you into your phone. (ArsTechnica)
While it is common to feel obstinate and antagonistic towards technical change, there is one thing that this particular change will force.
Think about all the times and social settings where your phone is lying flat on a table and it is socially awkward to pick it up. If you want to glance at the screen, maybe to check the time, a text, or even your email, you simply press the home button and glance at the phone.
Do you now need to pick up the device? This micro-movement is huge; it is open, obvious, and can even be a social slight.
I am reworking the media timeline that I use for teaching and this is what I have so far. What am I missing? Are there any glaring errors or omissions? The dates are notoriously hard to pin down for some things, so I have used the date of the earliest invention rather than the date when something became popular. The social media timeline at the bottom needs to be extended to include more years and maybe redesigned to better fit the style of the others.