The whole concept of filter bubbles is fascinating. The idea is that services like Google and Facebook (and many more) live on collecting data about us. To do this efficiently they need to keep us happy; happy users keep using the service, ergo more data. To keep us happy they organize and filter information and present it to us in a pleasing way. Pleasing me requires knowing me. Or as George Bernard Shaw put it: “Do not do unto others as you would that they should do unto you. Their tastes may be different.”
It’s this organizing that creates the problems. At its most benign, Google attempts to provide the right answer for me. So if I search for the word “bar”, Google may, based on my previous interests (searches, mail analysis, YouTube views, etc.), present me with drinking establishments rather than the unit of pressure. Maybe useful, maybe annoying. The problem occurs when we move on to more difficult concepts. The filter bubble argument is that this organization is in fact a form of censorship, as I will not be presented with the full range of information. (Some other terms of interest: echo chamber, the Daily Me, and the Daily You.)
Recently I have been experimenting with filter bubbles and have begun to wonder if there is also an “inverse” filter bubble on Facebook. The inverse filter bubble occurs when a social media provider insists on keeping a person or subject in your feed and ads despite all your attempts to ignore that person or topic.
So far I am working with several hypotheses:
- The bubble is not complete
- The media provider wants me to include the person/topic into my bubble
- The media provider thinks or knows of a connection I do not recognize
- The person I am ignoring is interacting heavily with my content (reading my posts, clicking my images, etc.)
This is a fascinating area and I need to set up some ways of testing these ideas; a rough sketch of one approach follows below. As usual, all comments and suggestions are appreciated.
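Since there is no sanctioned way to pull this kind of information out of Facebook, one low-tech test is simply to log every sighting of the ignored person or topic during normal browsing and see whether the frequency decays as I keep ignoring them. The sketch below is just that idea in Python; the file name, field names, and placement categories are my own assumptions, not anything Facebook provides.

```python
"""Minimal observation logger for testing the 'inverse filter bubble' idea.

Everything here is hypothetical: the CSV file, the field names, and the idea
of recording sightings by hand during ordinary scrolling sessions.
"""
import csv
from collections import Counter
from datetime import datetime, timezone

LOG_FILE = "feed_observations.csv"  # hypothetical local log
FIELDS = ["timestamp", "subject", "placement", "i_interacted"]


def log_sighting(subject: str, placement: str, i_interacted: bool) -> None:
    """Append one sighting of an ignored person/topic to the log."""
    with open(LOG_FILE, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # write the header the first time the file is used
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "subject": subject,
            "placement": placement,        # e.g. "feed", "ad", "people you may know"
            "i_interacted": i_interacted,  # did I click/read despite trying to ignore?
        })


def summarize() -> None:
    """Tally sightings per subject and placement to see whether they fade over time."""
    with open(LOG_FILE, newline="") as f:
        rows = list(csv.DictReader(f))
    counts = Counter((r["subject"], r["placement"]) for r in rows)
    for (subject, placement), n in counts.most_common():
        print(f"{subject:20s} {placement:25s} {n}")


if __name__ == "__main__":
    log_sighting("Person X", "feed", False)  # hypothetical example entry
    summarize()
```

If the hypotheses above hold, the counts for an ignored subject should stay flat (or grow) over weeks of deliberate non-interaction; if the bubble is simply incomplete, they should taper off.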