Facial Recognition Webinar

Live Facial Recognition: People, Power, and Privacy in the Surveillance Machine

Drs. Nora Madison and Mathias Klang

Live Facial Recognition (LFR) represents the next evolution in the surveillance society. This talk will provide an overview of the development and implementation of LFR systems, discuss the ethical implications of LFR, and briefly introduce techniques and technologies for avoiding or subverting LFR. The goal is to demonstrate the need for a deeper understanding and societal debate before uncritically accepting this far-reaching threat to our privacy.

Jul 30, 2020 04:00 PM in Eastern Time (US and Canada)

https://us02web.zoom.us/webinar/register/WN_jZxQ36haSsyhPRvhZQVbnA

The algorithm is a bad guide

Algorithms are flawed, and yet they seem to be the best technology companies have to offer. How many products claim to “learn from your behavior”? But what happens when I am the weaker party in this information exchange? There is no way for me to know what gems are hidden in the database, so once again the products recommended to me are repetitive or shallow.

So it was great to stumble upon Susanna Leijonhufvud’s Liquid Streaming, a thesis on Spotify and the ways in which streaming music, selected by algorithm, not only learns from our experiences but, more interestingly, acts to train us into being musical cyborgs (à la Haraway):

Starting from the human, the human subject can indeed start to act on the service by asking for some particular music. But then, as this music, this particular track, may be a part of a compilation such as an album or a playlist, the smart algorithms of the service, e.g. the machine, will start to generate suggestions of music back to the human subject. Naturally, the human subject can be in charge of the music that is presented to her by, for instance, skipping a tune, while listening on a pre-set playlist or a radio function. Still, the option in the first place is presented through a filtering that the machine has made, a filtering that is originally generated from previously streamed music or analysis of big data, e.g. other networked subject’s streamed music. Added to this description; if an input derives from the subject’s autonomous system, then the analogy of an actor-network is present on yet other layers. The actor-network of the musical cyborg work both within the subject itself, as the subject is not consistent with an identity as an entity, as well as between the subject and the smart musical cicerones.

Leijonhufvud (2018) Liquid Streaming p. 274

We often forget this feedback loop. Since we are trained by the algorithms, the level of serendipity and growth is relatively low, and we tend to be stuck in a seemingly narrow spiral – especially considering we are supposed to have access to an almost infinite amount of music.

As a newish Spotify user who is musically ignorant, I often find the algorithm laughably unhelpful: it does little to expand my horizons and as such is less of a cicerone (knowledgeable guide) and more of a frustrated and frustrating gatekeeper.

It would be nice not to have the things I already know recommended to me ad infinitum, but rather to be shown things I have not seen or heard. Sure, I may hate them, but at least I would have a chance of expanding my repertoire.
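
To make the feedback loop concrete, here is a deliberately crude sketch of my own – a toy illustration, not how Spotify actually works – of a recommender that only ever draws suggestions from the genres a listener has already played. The catalogue, the genre labels, and the recommend function are all made up for the example; the point is simply that once the machine’s suggestions become the listener’s next inputs, the pool never widens.

```python
# Toy illustration only: a hypothetical catalogue of 10 "genres" with 100
# tracks each, and a recommender that suggests tracks solely from genres
# already present in the listening history.
import random

CATALOGUE = [(genre, track) for genre in range(10) for track in range(100)]

def recommend(history, k=10):
    """Suggest k tracks drawn only from genres the listener has already heard."""
    heard_genres = {genre for genre, _ in history}
    pool = [item for item in CATALOGUE if item[0] in heard_genres]
    return random.sample(pool, k)

# The listener picks one track herself; after that she only picks from what
# the machine suggests.
history = [random.choice(CATALOGUE)]
for _ in range(50):
    history.append(random.choice(recommend(history)))

print("genres ever played:", {genre for genre, _ in history})
# Prints only the genre of that very first track: the loop trains the listener
# at least as much as the listener trains the algorithm.
```

Real recommenders of course mix in collaborative filtering and some exploration, but the basic dynamic – yesterday’s plays constrain today’s suggestions – is the one Leijonhufvud describes.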

Susanna Leijonhufvud (2018) Liquid Streaming: The Spotify Way To Music, Doctoral Thesis, Luleå University of Technology. (Full text here: http://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1171660&dswid=-2263)

CFP Digital Ethics

It’s time for another symposium on digital ethics – this will be the 9th year running. Here is the call for papers:

We are looking for papers on digital ethics. Topics might include, but are not limited to, privacy, hate speech, fake news, platform ethics, AI/robotics/algorithms, predictive analytics, native advertising online, influencer endorsements, VR, intellectual property, hacking, scamming, surveillance, information mining, data protection, shifting norms in journalism and advertising, transparency, digital citizenship, or anything else relating to ethical questions raised by digital technology. This is an interdisciplinary symposium, and we welcome all backgrounds and approaches to research.

Researchers can either submit a proposal as a team (consisting of one junior and one senior scholar) or individually. In the latter case, the organizers will match submitters with a partner based on the compatibility of the proposals. Five teams will be selected to present completed research at the symposium and critique each other’s work during five 75-minute sessions. After further review, the articles will be eligible for inclusion in a special issue of the Journal of Media Ethics.

Abstracts should propose original research that has not been presented or published elsewhere. The abstract should be between 500 and 1,000 words in length (not including references) and should include a discussion of the methodology used. Please also submit a current C.V. of all authors with the abstract.

Important dates:

Abstracts due: May 20
Notifications of acceptance: by June 5
Completed papers due: October 15

For more information check out the Call for Abstracts: 9th Symposium on Digital Ethics

Post Panopticon & Me

Teaching privacy and surveillance is a great reason to return to the theories that underpin everything, and I do enjoy introducing students to the history, function, and metaphor of the panopticon – while also making myself rethink how it actually works.

The basic panopticon is nicely summarized by Jespersen et al. in their five-point list about the character of the panopticon in Surveillance, Persuasion, and Panopticon:

  1. The observer is not visible from the position of the observed;
  2. The observed subject is kept conscious of being visible (which together with the principle immediately above in some cases makes it possible to omit the actual surveillance);
  3. Surveillance is made simple and straightforward. This means that most surveillance functions can be automated;
  4. Surveillance is depersonalized, because the observer’s identity is unimportant. The resulting anonymous character of power actually gives Panopticism a democratic dimension, since anybody can in principle perform the observation required;
  5. Panoptic surveillance can be very useful for research on human behaviour, since it due to its practice of observing people allows systematic collection of data on human life.

So last week I focused on privacy and surveillance in situations of “invisible” panopticons. Invisible panopticons could still be covered by point 2 above: in the panopticon we internalize the rules for fear of being watched, and ultimately punished, for transgression. But I was trying to explain why there are situations of self-surveillance where we could easily “misbehave” and nobody would punish us – a misbehavior that nobody cares about aside from maybe myself. If I binge cookies for dinner, drink wine for breakfast, watch trash TV, ignore my work, etc., nobody cares (unless it’s extreme), but I may punish myself. Where is the panopticon/power that controls my behavior?

In this case the panopticon (if we can claim there is one) is… my self-image? We really have to contort Foucault’s ideas to make this fit under the panopticon. As he says in Discipline and Punish:

the Panopticon must not be understood as a dream building: it is the diagram of a mechanism of power reduced to its ideal form; its functioning, abstracted from any obstacle, resistance or friction, must be represented as a pure architectural and optical system: it is in fact a figure of political technology that may and must be detached from any specific use.

The power over ourselves, in settings where there may be no real social harm if we were found out, is more about the conditioning and identities to which we conform. Our ability to act beyond them, to break free of the constraints of power, represents the scope of agency we have.

Behaving outside the norms that reside within me requires that I am aware of those norms and comfortable breaking them: that I recognize there may be other actions I could be taking, and that I am comfortable enough to take them. This is the sense in which Butler argues that we are not determined by norms themselves but by the repeated performance of norms. As Butler argues in the conclusion of Gender Trouble, “…‘agency’, then, is to be located within the possibility of a variation on that repetition.”

Human Surveillance & Agency

Therefore I am being surveilled by the idea of me. How that me would behave in any given situation is limited by my ability to see myself behave.

Digital Resistance Call for Papers

Digital Resistance: Call for papers

Special thematic issue of the Journal of Resistance Studies

Editors: Nora Madison & Mathias Klang

This call is available as a PDF here.

In many spaces, mobile digital devices and social media are ubiquitous. These devices and applications provide the platforms with which we create, share, and consume information. Many of us obtain much of our news and social information via the personal screens we constantly carry with us. It is therefore unsurprising that these devices also become integral to acts of social activism and resistance.

This digital resistance is most visible in the virtual social movements found behind hashtags such as #BlackLivesMatter, #TakeAKnee, and #MeToo. However, it would be an oversimplification to limit digital resistance to its most popular expressions. Video sharing on YouTube, Twitter, and Facebook has revealed abuses of police power, racist attacks, and misogyny. The same type of device is used to record, share, and view instances of abuse. The devices and platforms are also used to organize and coordinate responses, ranging from online naming and shaming to online protests and physical protests. The devices and the platforms are then used to share the protests and their results. More and more, the device and the platform are the keyhole through which resistance must fit.

Our devices and access to platforms enable the creation of self-forming and self-organizing resistance movements capable of sharing alternative discourses in advocating for diverse social agendas. This freedom shapes the individual’s relationship to both power and resistance, in addition to their identity and awareness as an activist. It is somewhat paradoxical that something so central to the activist identity and the performance of resistance is in essence created and run as a privatized surveillance machine.

Digital networked resistance has received a great deal of media attention recently. The research field is developing, but more needs to be understood about the role of technology in the enactment of resistance. Our goal is to explore the role of both digital devices and platforms in the processes of resistance.

This special edition aims to understand the role of technology in enabling and subverting resistance. We seek studies on the use of technology in the acts of protesting official power, as well as the use of technology in contesting power structures inherent in the technology or the technological platforms. Contributions are welcome from different methodological approaches and socio-cultural contexts.

We are looking for contributions addressing resistance, power, and technology. This call seeks original works addressing topics including, but not limited to:

  • Problems with the use of Digital Resistance
  • Powerholders’ capacity to map Digital Resistance activists through surveillance
  • How does Digital Resistance differ and/or function compared with Non-digital Resistance?
  • Problems and advantages of combining Digital Resistance and non-digital Resistance
  • Resistance to platforms
  • Hashtag activism & hijacking
  • Online protests & movements
  • The use of humor/memes as resistance
  • Selfies as resistance
  • Globalization of resistance memes
  • Ethical implications of digital resistance
  • Online ethnography (testimonials/narratives provided by online participants)
  • Issues concerning privacy, surveillance, anonymity, and intellectual property
  • Effective rhetorical strategies and aesthetics employed in digital resistance
  • Digital resistance: Research methods and challenges
  • The role of technology activism in shaping resistance and political agency
  • Shaping the digital protest identity
  • Policing digital activism
  • Digital resistance as culture
  • Virtual resistance communities
  • The affordances and limitations of the technological tools for digital resistance

Abstracts should be 500 – 750 words (references not included).

Send abstracts to noramadison@gmail.com

Important Dates

Abstracts by 15 January 2019

Notification of acceptance 15 February 2019

Submission of final papers 1 April 2019

  • Max 12000 words (all included)

Statistical Noise, Self Harm, Social Media

Social media gets blamed for a lot of ills, but sometimes the results are exaggerated. Here is an interesting quote from The truth about the suspected link between social media and self-harm:

While some studies have found a link between social media and suicide, Przybylski’s colleague Amy Orben has noted that the correlation with mental health issues is tiny. In one study, social media use explained only 0.36 per cent of a girl’s depressive symptoms. That figure is so low, it could just be statistical noise.
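
A quick back-of-the-envelope note of my own (not from the article): if that 0.36 per cent figure refers to variance explained (R²), it translates into a correlation of roughly 0.06, which is the kind of number worth keeping in mind whenever a headline talks about a “link”.

```python
# Variance explained (R²) of 0.36% corresponds to a correlation coefficient
# r = sqrt(R²) = sqrt(0.0036) = 0.06: a very weak association.
r_squared = 0.0036
r = r_squared ** 0.5
print(f"r = {r:.2f}")  # r = 0.06
```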

New Activism Writing Project

Yesterday we got the good news that the book proposal by Nora Madison and myself has been accepted for Rowman and Littlefield’s Resistance Studies series. The working title is “Everyday Activism: Technologies of Resistance” (though this will probably change later), and the book looks at the ways in which technology assists, mediates, and hampers acts of resistance. Tentatively, the book will be published at the end of 2019. We are really excited about this project and happy to be able to focus on a long-term project.

In conjunction with this I shall be using the blog to throw out ideas/updates about the project and generally return to using the blog as a more integral writing tool.


Updating my Media Timeline

I am reworking the media timeline that I use for teaching, and this is what I have so far. What am I missing? Are there any glaring errors or omissions? The dates are notoriously hard to pin down for some things, and I have used the date of the earliest invention rather than the date something became popular. The social media timeline at the bottom needs to be extended to include more years and perhaps redesigned to better fit the style of the others.


The original ppt slides are available here if you want to reuse them.

New Job, New Teaching

The beginning of term is just around the corner and I am really excited to begin my new job at Fordham, where I am starting as Associate Professor in Digital Technology and Emerging Media. My teaching this semester is one of the reasons for my excitement, as I will be offering two courses: one is the Introduction to the Digital Technology and Emerging Media major (syllabus here) and the other is the endlessly thrilling Digital Cultures (syllabus here).


Aside from this cool teaching, I get to work at Fordham, a ridiculously gorgeous university with open spaces and classical buildings in New York.

AoIR 2017 Dissertation Award

The Association of Internet Researchers calls for submissions for the 2017 AoIR Dissertation Award. To be eligible for the AoIR Dissertation Award, a PhD dissertation in the area of internet research must have been filed in the 2016 calendar year. Nominations (self and other) must be received by 15 April 2017. All methods and disciplines are welcome.

Submission Details:

  • A nomination letter that explains why the dissertation is deserving of the award
  • How it contributes to internet research
  • A PDF copy of the dissertation should be emailed
  • The graduate or their supervisor must be a member of AoIR
  • Self-nominations are permitted
  • Filed in 2016 (meaning fully defended, all edits complete, filed/published with a 2016 copyright)

The recipient of this award will be announced this summer. In addition to winning a cash prize, the individual will also be invited to present their research in a session at AoIR 2017 in Tartu, Estonia, 18-21 October 2017.

The committee this year comprises Jeremy Hunsinger, Daren Brabham, Jill Walker Rettberg, and Tim Highfield, and is chaired by Mathias Klang.

For any questions, please contact Mathias Klang (dissertationaward@aoir.org) or Jenny Stromer-Galley (prez@aoir.org).