I’ve got a new position as Book Series Editor at Fordham University Press! Yay!
We are looking to revitalize the McGannon Book Series and are seeking books that “…interrogate the ways in which media and networked communication technologies (1) constitute social, economic, cultural, and political arrangements and (2) affect the distribution, regulation, and control of information flows.”
If you have an idea you would like to discuss, reach out!
“When I was young there were beatniks. Hippies. Punks. Gangsters. Now you’re a hacktivist. Which I would probably be if I was 20. Shuttin’ down MasterCard. But there’s no look to that lifestyle! Besides just wearing a bad outfit with bad posture. Has WikiLeaks caused a look? No! I’m mad about that. If your kid comes out of the bedroom and says he just shut down the government, it seems to me he should at least have an outfit for that.”
Algorithms are flawed. And yet they seem to be the best technology companies have to offer. How many products claim to “learn from your behavior”? But what happens when I am the weaker party in this information exchange? There is no way I can know what gems are hidden in the database. So once again the products recommended to me are repetitive or shallow.
So it was great to stumble upon Susanna Leijonhufvud’s Liquid Streaming, a thesis on Spotify and the ways in which streaming music selected by algorithm not only learns from our experiences but, more interestingly, acts to train us into being musical cyborgs (à la Haraway).
Starting from the human, the human subject can indeed start to act on the service by asking for some particular music. But then, as this music, this particular track, may be a part of a compilation such as an album or a playlist, the smart algorithms of the service, e.g. the machine, will start to generate suggestions of music back to the human subject. Naturally, the human subject can be in charge of the music that is presented to her by, for instance, skipping a tune, while listening on a pre-set playlist or a radio function. Still, the option in the first place is presented through a filtering that the machine has made, a filtering that is originally generated from previously streamed music or analysis of big data, e.g. other networked subject’s streamed music. Added to this description; if an input derives from the subject’s autonomous system, then the analogy of an actor-network is present on yet other layers. The actor-network of the musical cyborg work both within the subject itself, as the subject is not consistent with an identity as an entity, as well as between the subject and the smart musical cicerones.
Leijonhufvud (2018) Liquid Streaming p. 274
We often forget this feedback loop. Since we are trained by the algorithms, the level of serendipity and growth is relatively low, and we tend to be stuck in a seemingly narrow spiral – especially considering we are supposed to have access to an almost infinite amount of music.
As a newish Spotify user who is musically ignorant, I often find the algorithm to be laughably unhelpful, since it does little to expand my horizons; as such, it is less of a cicerone (a knowledgeable guide) and more of a frustrated and frustrating gatekeeper.
It would be nice not to have the things I already know recommended to me ad infinitum, but rather to be shown things I have not seen or heard. Sure, I may hate them, but at least I’d have the chance of expanding my repertoire.
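This narrowing spiral can be sketched as a toy simulation. This is a minimal sketch in Python, not a description of Spotify’s actual system: the 90% re-serve rate, the catalog size, and the session counts are all invented for illustration.

```python
import random

random.seed(0)

CATALOG = list(range(1000))  # a toy catalog of 1,000 track ids

def recommend(history, k=10):
    """Toy recommender: most of the time, re-serve tracks the listener
    has already streamed; only occasionally surface something new."""
    if history and random.random() < 0.9:
        return random.choices(history, k=k)  # reinforce past listens
    return random.sample(CATALOG, k)         # rare serendipity

history = random.sample(CATALOG, 10)  # the listener's first picks
for _ in range(200):                  # 200 listening sessions
    history.extend(recommend(history))

unique = len(set(history))
print(f"distinct tracks heard: {unique} of {len(CATALOG)}")
```

Even with a thousand tracks nominally available, a recommender that mostly re-serves past listens leaves the bulk of the catalog unexplored – the “almost infinite amount of music” collapses into a small, repeating loop.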
It’s time for another symposium on digital ethics; this will be the 9th year running. Here is the call for papers:
We are looking for papers on digital ethics. Topics might include but are not limited to privacy, hate speech, fake news, platform ethics, AI/robotics/algorithms, predictive analytics, native advertising online, influencer endorsements, VR, intellectual property, hacking, scamming, surveillance, information mining, data protection, shifting norms in journalism and advertising, transparency, digital citizenship, or anything else relating to ethical questions raised by digital technology. This is an interdisciplinary symposium; we welcome all backgrounds and approaches to research.
Researchers can either submit a proposal as a team (consisting of one junior and one senior scholar) or individually. In the latter case, organizers will match submitters up with a partner based on compatibility of the proposal. Five teams will be selected to present completed research at the symposium and critique each other’s work during five 75-minute sessions. After further review, the articles will be eligible for inclusion in a special issue of the Journal of Media Ethics.
Abstracts should propose original research that has not been presented or published elsewhere. The abstract should be between 500 and 1,000 words in length (not including references) and should include a discussion of the methodology used. Please also submit a current C.V. of all authors with the abstract. Abstracts are due on May 20; notifications will be sent out by June 5. Completed papers will be due by October 15.
This course will explore the effects of surveillance technologies, from everyday devices to the most sophisticated. It will analyze the effects of technology on society, culture, and law. Students will gain insights into the impact of surveillance and technological empowerment on communication. Through the study, analysis, and application of privacy and surveillance theory, participants will develop a firmer understanding of the role of surveillance in society and its impact on privacy.
In order to fit the material into five weeks, the course will address one topic each week.
This is from an article arguing against our fetishization of counting calories and BMI but the conclusion contains an important truth that should be applied more broadly:
Humans come in many shapes and sizes. Some people can truly eat whatever they desire and not gain a pound; others chew on leaves and remain portly. The lengths we go to calorie count isn’t a sign of health; it’s orthorexia, which creates cortisol, another factor in weight gain.
Spring break is over and the commute has returned. My tired body is not used to the rigor of early mornings (how quickly we forget), made worse by the increased darkness of daylight saving time. But each trip is its own reward, and this one didn’t disappoint.
On the subway a heavily tattooed man helped a blind man who was losing his balance; despite a grey sky, children celebrated spring by running and smiling; even the panhandler gave me a heartier smile as he shook the coins in his hand.
A rat ran along the side of the street and around the corner. At the site of a former launderette a crowd gathered to watch the removal of debris from the recent fire that had ravaged the building.
A man in a black cowboy hat with a large gold brooch on the front was leaning against a wall while making a phone call. His snakeskin boots matched his brown snakeskin belt, whose buckle matched the gold on his hat.
There is always something happening here. It’s good to be back.
In a fascinating addition to the screen time debate (aka is social media hurting the kids?), Przybylski & Orben have published a study in Nature Human Behaviour. The study is based on massive amounts of statistical data and has once again shown that we shouldn’t be freaking out about screens or social media. Since the market for fear-mongering books about technology that tickle parent paranoia is profitable, I doubt that this will settle the discussion.
Highlights from their study:
With this in mind, the evidence simultaneously suggests that the effects of technology might be statistically significant but so minimal that they hold little practical value.
While we find that digital technology use has a small negative association with adolescent well-being, this finding is best understood in terms of other human behaviours captured in these large-scale social datasets. When viewed in the broader context of the data, it becomes clear that the outsized weight given to digital screen-time in scientific and public discourse might not be merited on the basis of the available evidence.
More harmful than screens
For example, in all three datasets the effects of both smoking marijuana and bullying have much larger negative associations with adolescent well-being… than does technology use.
More important than reducing screen time
Positive antecedents of well-being are equally illustrative; simple actions such as getting enough sleep and regularly eating breakfast have much more positive associations with well-being than the average impact of technology use…
Best line in the paper…
Neutral factors provide perhaps the most useful context in which to judge technology engagement effects: the association of well-being with regularly eating potatoes was nearly as negative as the association with technology use…
Teaching privacy and surveillance is a great reason to return to the theories that underpin everything, and I do enjoy introducing students to the history, function, and metaphor of the panopticon, all while making myself rethink how it actually works.
The observer is not visible from the position of the observed;
The observed subject is kept conscious of being visible (which together with the principle immediately above in some cases makes it possible to omit the actual surveillance);
Surveillance is made simple and straightforward. This means that most surveillance functions can be automated;
Surveillance is depersonalized, because the observer’s identity is unimportant. The resulting anonymous character of power actually gives Panopticism a democratic dimension, since anybody can in principle perform the observation required;
Panoptic surveillance can be very useful for research on human behaviour, since its practice of observing people allows systematic collection of data on human life.
So last week I focused on privacy and surveillance in situations of “invisible” panopticons. Invisible panopticons could still be covered by point 2 above: in the panopticon we internalize the rules for fear of being watched, and ultimately punished for transgression. But I was trying to explain why there are situations of self-surveillance where we could easily “misbehave” and nobody would punish us – a misbehavior that nobody cares about aside from maybe myself. If I binge cookies for dinner, drink wine for breakfast, watch trash TV, ignore my work, etc., nobody cares (unless it’s extreme), but I may punish myself. Where is the panopticon/power that controls my behavior?
In this case the panopticon (if we can claim there is one) is… my self-image? We really have to contort Foucault’s ideas to make this fit under the panopticon. As he says in Discipline and Punish:
the Panopticon must not be understood as a dream building: it is the diagram of a mechanism of power reduced to its ideal form; its functioning, abstracted from any obstacle, resistance or friction, must be represented as a pure architectural and optical system: it is in fact a figure of political technology that may and must be detached from any specific use.
The power over ourselves, in settings where there may be no real social harm if we were found out, is more about the conditioning and the identities to which we conform. And our ability to act beyond them, to break free of the constraints of power, represents the scope of agency we have.
To behave outside the norms that reside within me requires that I am aware of those norms and that I am comfortable breaking them: that I recognize there may be other actions I could take, and that I am comfortable enough to take them. This is how Butler argues that we are not determined by norms themselves; we are determined by the repeated performance of norms. As Butler argues in the conclusion of Gender Trouble, “…‘agency’, then, is to be located within the possibility of a variation on that repetition.”
Therefore I am being surveilled by the idea of me. How that me would behave in any given situation is limited by my ability to see myself behave.