The state of surveillance

This is a conference report from the Cyber-surveillance in everyday life workshop that took place 12-15 May 2011 at the University of Toronto.

One of the great things about this international workshop was the conscious inclusion of a variety of actors besides academic scholars. Hackers, activists, lawyers, advocates and policy people concerned with the implications of near-ubiquitous surveillance, both online and off, made for a great launch pad for discussions. In fact, every academic paper presented was commented upon by an advocate or activist rather than by someone from the same academic community or perspective.

The workshop kicked off with a panel on codes, technologies and technologies of resistance. Finn Brunton and Helen Nissenbaum’s paper addressed the problem of resistance to regimes of everyday surveillance in which refusing these technologies is not really a practical option. Instead they proposed obfuscation as a strategy of last resort: actors producing misleading, false or ambiguous data with the intention of confusing the adversary. In my opinion, such a tactic of obfuscation presents a much more interesting and clever way of resisting surveillance technologies than merely leaving or refusing to use them. In the case of Facebook, for instance, many have argued that the only option is simply to quit and leave, but for most young people this is not really feasible, as they would miss out on too much of their social life (see Alice Marwick’s interesting conference contribution in this regard). Obfuscation, or feeding misleading data to these databases, would not only help question the authenticity and usefulness of these technologies but also constitute a clever way of turning their own means against them, as obfuscation is a common technique used in software development. As Brunton said: “If data is what they want you can give them enormous amounts, repeat yourself, say different things, fill the possibility space with points”.
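Nissenbaum’s own browser extension TrackMeNot (developed with Daniel Howe) puts this idea into practice by periodically issuing decoy search queries so that genuine ones drown in noise. As a rough illustration of the principle only, not of any actual tool, here is a minimal TypeScript sketch; the search URL, the list of decoy terms and the timing are placeholders I have made up:

```typescript
// Sketch of query obfuscation: fire meaningless searches at irregular,
// human-looking intervals so that real queries are buried in noise.
// Everything here (terms, URL, timing) is an illustrative assumption.

const DECOY_TERMS: string[] = [
  "weather forecast", "pasta recipe", "bicycle repair", "train timetable",
  "museum opening hours", "football results", "how to knit", "used cars",
];

function randomTerm(): string {
  return DECOY_TERMS[Math.floor(Math.random() * DECOY_TERMS.length)];
}

async function sendDecoyQuery(baseUrl = "https://search.example/search"): Promise<void> {
  const url = `${baseUrl}?q=${encodeURIComponent(randomTerm())}`;
  try {
    await fetch(url); // fire and forget: only the query logged server-side matters
  } catch {
    // network errors are irrelevant; the point is the trace, not the response
  }
  console.log("sent decoy:", url);
}

// Schedule the next decoy after a random 5-60 second pause, then repeat.
function scheduleNextDecoy(): void {
  const delayMs = 5_000 + Math.random() * 55_000;
  setTimeout(async () => {
    await sendDecoyQuery();
    scheduleNextDecoy();
  }, delayMs);
}

scheduleNextDecoy();
```

The point of the irregular timing and the varied terms is exactly what Brunton describes: filling the possibility space with plausible but meaningless points.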

It is important to discuss the different strategies, tactics and options people have when encountering surveillance technologies, especially because for some people surveillance becomes a very real encounter, the treatment of activists at G8 and G20 summits being a recurring example. But the situation of activists at such events and that of everyday life are hardly comparable. There was a lot of talk about privacy violations and surveillance technologies: silent SMS messages sent by the police to track someone’s whereabouts, deep packet inspection on the Internet, Google’s search engine query logs, Web cookies, RFID, eye tracking and advertising in public spaces, wiretapping, the storage of CCTV recordings. Again, for some these techniques may become very acute, while for others having to constantly take a conscious stance may become a burden in and of itself. Indeed, some of the more radical proposed solutions, such as ‘take the batteries out of your mobile phone’ or ‘just stop using digital technologies’, cannot be the answer – or at least not for ordinary people.

So what should be done then? Practices need to be context sensitive; most delegates seemed to agree on this. What might be an appropriate response in one situation is not necessarily appropriate in another. For example, not having a mobile phone or an Internet and phone connection can have exactly the opposite effect of what one intended. As one conference participant mentioned, Bin Laden’s compound stood out precisely because it had no Internet or phone line. In other words, not being or acting like everyone else is suspicious. Surveillance is based not only on what is there but, arguably more importantly, on what is missing.

A telling case that deserves attention is that of Andrej Holm and Anne Roth. On the morning of July 31st, 2007, a squad of special police forces raided their apartment. Holm was arrested along with six other men and kept in detention for weeks. The crime? Suspicious behaviour that had led the police to accuse him of being part of a terrorist organization. This suspicious behaviour included conspicuous search terms on Google, usage of the term “gentrification” in his academic articles (a word also used by the terrorist organization) and not always taking his mobile phone along. As Holm and Roth explain in a book chapter on their experiences: “A terrorism accusation in Germany allows extensive surveillance options: phones were tapped, emails read, access to websites registered and evaluated. GPS devices were installed in private cars to exactly monitor their movements. Video cameras pointed to house entrances, and police teams followed the accused to observe their daily life. Portable microphones recorded conversations in bars.”1 In an article written for the Guardian right after the arrest, the prominent sociologists Richard Sennett and Saskia Sassen likened the German case to Guantanamo and the war on terror, in which nameless fears and irrational responses get out of hand. It surely casts a rather dark light on academic freedom and free speech.

The other side of the equation, as exemplified in a couple of papers on the discourse of surveillance and ordinary people’s reactions to it, is that most people do not really care too much. As Arsalan Butt and Richard Smith’s paper title tellingly puts it, “I might not scratch my ass if I think there might be a camera taping it”. The question is whether we can live with this kind of effect that surveillance technologies impose on our behaviour. Another conclusion was that people are OK with being surveilled as long as they get some discounts. No big surprise there, I suspect. And here lies the crux of these technologies: it is always a matter of give and take, of balancing the positive and the negative. For example, people get to use Facebook for free; in turn they agree to give up their data. People generally think it is OK that CCTV is in use if it may help prevent or solve crime.

Discussions around surveillance often involve a good deal of paranoia.2 Bring together too homogeneous a group and the risk is that everything becomes black and white. This conference brought together quite a homogeneous group, although it did not necessarily set out to do so. Don’t get me wrong, the mix of activists, academics and advocates was great, but at the end of the day everyone was concerned. Why was nobody there in favour of surveillance? Why is surveillance almost always framed as a bad thing that involves a breach of privacy? Maybe it is symptomatic of entering a new field from the outside to be a little surprised by the high level of agreement and consensus about the topic of concern, as if it were a given. There is no denying that in a state of ubiquitous computing, digitalized public spaces, the growing use and dissemination of biometric techniques and the use of automated detection and recognition software, surveillance is not only an important but also a truly necessary topic. But then what? As Colin Bennett, one of the central scholars in the field of surveillance studies, asked in the discussion round after one of the last panels: what is surveillance anyway? Is it still (if it ever was) a useful framework, and why?

This expresses the problem with surveillance studies, or any other prefix to ‘studies’ for that matter. The prefix all too often becomes a given, treated as an ontological fact. I am not saying that one should constantly question what ‘media’ is when engaging in ‘media studies’. Surely something must be taken for granted at some level if we are to proceed at all. However, coming from the outside, meaning not being overly familiar with the surveillance studies literature, made me realize how unquestioned the term surveillance, and more importantly its usefulness, remained throughout the conference. Of course, as David Harper responded, it is perfectly OK for a sociologist to have a hard time defining the social, for instance; it is even part of the game. The same should go for surveillance. What could have become an interesting discussion unfortunately just faded away, and I got a nagging feeling it was because surveillance all too often gets treated as a word rather than a concept. The distinction being that a word merely describes, refers to something external to itself, while a concept also questions, embedded as it is with fundamental ontological and epistemological concerns. Surveillance, it seems to me, is very much used as a word, a buzzword even, describing a state of affairs often embedded in a set of concrete technologies, rather than questioned as a concept. Instead of always equating surveillance with CCTV, one should ask what surveillance really means, how it means, what it is, how we can use it as a valuable analytical framework, where we draw the boundaries if everything is becoming surveillance, and how we know it is surveillance at all.

1 Andrej Holm and Anne Roth (2010) ‘Anti-terror Investigations against Social Movements— A Personal Experience of a Preventive Threat’. In F. Hessdörfer, A. Pabst and P. Ullrich (eds) Prevent and Tame. Protest under (Self)Control. Berlin: Karl Dietz Verlag. URL: http://www.rosalux.de/fileadmin/rls_uploads/pdfs/Manuskripte/Manuskripte_88.pdf

2 See for instance conference delegate David Harper’s paper Paranoia and public responses to cyber-surveillance. URL: http://www.digitallymediatedsurveillance.ca/wp-content/uploads/2011/04/Harper-Paranoia-and-public-responses.pdf

Rob Kitchin and Martin Dodge’s suggestions for further research in Software Studies

  1. Detailed ethnographic studies of how developers produce code and the life of software projects (p. 247)
  2. Develop a subarea of software studies – algorithm studies – that carefully unpicks the ways in which algorithms are products of knowledge about the world and how they produce knowledge that is then applied, altering the world in a recursive fashion (p. 248). How? For instance, by conducting a detailed archaeology of how algorithms come to be constructed and how an algorithm then translates and mutates across projects to be reemployed in diverse ways (p. 255)
  3. Core agenda for software studies should be to produce detailed case studies of how software does work in the world (p. 249) for instance through observant participation (p. 256-257)
  4. Develop systematic examinations of how software seduces (or fails to seduce) people while simultaneously disciplining them (p. 250)
  5. In-depth case studies that examine how and why people adopt and submit to certain software products, outlining the complex and contingent ways that people understand and react to the discursive and affective fields surrounding software-enabled technologies (p. 250)
  6. Examining the discursive regimes surrounding software. How are discursive regimes assembled over time by a variety of vested interests? How are discourses promoted and countered, and how do they unfold to shape how software is developed, deployed, and received? (p. 251)
  7. Explore questions regarding the ethics of code: how should the world be captured in code to minimize negative impacts, and how might code do work in the world that is beneficial to the largest number of people? (p. 251)
  8. Using software to map out aspects of the work of code through data and info visualization techniques (p. 257)

Kitchin, R., and Dodge, M. (2011) Code/Space: Software and Everyday Life. Cambridge, Mass.: MIT Press.

Research visit to the Infoscape Research Lab, Toronto

I’ve been visiting the Infoscape Research Lab at Ryerson University, Toronto, this spring. I’ve had a really nice and productive three months here, and the people at Infoscape have been very welcoming towards this stranger from Norway who first wrote them an email almost a year ago expressing her admiration for their work. In fact, it all began when I read Ganaele Langlois’ excellent PhD dissertation while I was a visiting scholar at NYU last year. At that time I hadn’t come across many people doing this kind of critical, software studies inspired work on social media (and I still haven’t, for that matter). Immediately upon establishing contact with Ganaele and Greg Elmer, the opportunity to come here for a couple of months was discussed. Although Toronto greeted us with a completely flooded apartment, two weeks of hotel stays and endless rainy days, the environment at Infoscape was much more hospitable, with a welcoming dinner and drinks, and a goodbye BBQ last weekend.

The Infoscape Research Lab basically consists of three office spaces located on the third floor of the Rogers Communication Centre. Friday was usually presentation day, when someone would present their current work and the different people associated with the lab would come together and discuss it. I shared an office with Alessandra Renzi, a postdoc who is at Infoscape to contribute to the production of the collaborative/open source documentary film Open Sourcing Secrecy, based on the recently published book “Preempting Dissent” by Greg Elmer and Andy Opel. She is very engaged in media activist practices and wrote her intellectually engaging PhD dissertation on Telestreet, an Italian network of pirate community television channels. I had some very instructive conversations with her and learned a lot, on the philosophy of Simondon and the Italian Autonomist movement in particular.

It was also very nice to meet another PhD candidate who has exactly the same research interests as me, who reads and references the same books, and who shares my appreciation for good hip hop. Fenwick McKelvey is a research associate with the lab and is writing his PhD dissertation on transmissive control and Internet time, on “the politics of traffic management software – how it controls information and how it meets resistance”. He has published one of the few social science/media studies papers on algorithms that I am aware of, in which he studies the net neutrality controversy as a case for understanding the politics of algorithms in processes of networking. He also has an interesting article forthcoming in Fibreculture on programmability, platforms and transduction.

Ganaele Langlois and Greg Elmer, Associate Lab Director and Lab Director respectively, both somehow manage to make a rare blend of theory, critique and digital methods work that is both intellectually enriching and methodologically valuable. Whereas the lab and the research projects carried out there mostly hinge on cases from the political field (Canadian politics), the methods (traffic tags, using APIs, tracking the circulation of digital objects) and theoretical frameworks used (with a particular affinity towards Deleuze and Guattari) lend themselves to much broader use within media studies generally. Personally I have found their contributions on “Networked Publics” and “Mapping commercial Web 2.0 worlds” extremely useful. Also look out for their forthcoming article on traffic tags, to be published in Information Polity. Ganaele, who is an assistant professor at the University of Ontario Institute of Technology, also continues to work on meaning, Guattari’s a-signifying semiotics and participatory culture. Her latest work in this field can be found in the current issue of Culture Machine on digital humanities.

I also want to mention Steven, Kate, Erika and Yukari, whom I also had the pleasure of meeting during my stay at what is indeed a very productive and inspiring research environment.

Bye for now…

Interview with Julian Oliver

I met the Berlin-based media artist and programmer Julian Oliver in Toronto as part of the Subtle Technologies festival, where he taught a workshop on the Network as Material. The aim of the workshop reflects Oliver’s artistic and pedagogical philosophy nicely: not only to make people aware of the hidden technical infrastructures of everyday life, but also to provide them with tools to interrogate these constructed and governed public spaces.

Julian Oliver, born in New Zealand (anyone who has seen him give a talk will know not to mistake him for an Australian), is not only an extremely well-versed programmer but increasingly just as knowledgeable about computer hardware. His background is as diverse as the places he has lived and the journeys they have taken him on. Julian started out in architecture but became increasingly interested in electronic art after working as Stelarc’s assistant on ‘Ping Body’ in Auckland in 1996. He moved on to Melbourne, Australia, and worked as a guest researcher at a virtual reality center. Later he established the artistic game-development collective Select Parks, and in 2003 left for Gotland to work at the Interactive Institute of Sweden’s game lab. He then moved on to Madrid, where he had extensive involvement with the Media Lab Prado. Several countries, projects and residencies later, he made Berlin his preferred base, setting up a studio there with colleagues. Julian is also an outspoken advocate of free software and thinks of his practice not so much as art but more in terms of being a ‘critical engineer’, a term that he applies particularly to his collaborations with his studio partner Danja Vasiliev.

Their latest collaboration, Newstweek, was recently announced as the winner of the Golden Nica in the Interactive art category of the Prix Ars Electronica 2011. The project leverages the network as a medium for rigorous, creative investigation, exploring the intersection between the perceived trustworthiness of mass media and the conditions of networked insecurity.

His artistic practice clearly reflects his hacker and gaming background: playing around and messing with routers, capturing data from open wireless networks, visually augmenting commercial billboards in the cityscape, sonifying Facebook chats, visualizing protocols and otherwise manipulating networks for artistic purposes. I sat down with Julian on June 1st to talk about his most recent art projects and his reflections on software and digital media arts more generally.

To read the actual interview, go to Furtherfield.

Interview with Jens Wunderling

Check out my interview with German media artist Jens Wunderling published at Rhizome earlier this week.

In 2009, Default to Public, his graduation project at the University of Arts, Berlin, won an award of distinction in the Interactive art category at Ars Electronica. This artwork explores the discrepancy between people’s modes of self-revelation online and their simultaneous desire for privacy in the real world in three different modules, focusing on the microblogging site Twitter.

Obsolescence is the moment of superabundance

I’m in Toronto, home of renowned Canadian media theorist Marshall McLuhan. 2011 also marks the centenary of McLuhan’s birth. There are therefore numerous events this year around the globe celebrating his legacy. The most interesting one here in Toronto that I have come across so far is ‘Illuminated Manuscripts’, a photo exhibition showing the work of Canadian artist Robert Bean. While looking at the exhibition I had a small chat with the artist about his art, McLuhan and media technologies more generally.

Bean, an artist, writer and teacher living in Halifax, Nova Scotia, was commissioned to create a site-specific exhibition at the Coach House, the Centre for Culture and Technology at the University of Toronto. Tucked away behind various bigger and bolder buildings, the Centre for Culture and Technology was created on October 24, 1963, to keep Marshall McLuhan at the University of Toronto, as he had received various lucrative offers from foreign universities.

The House still remains more or less in the shape it was in when McLuhan held his famous seminars through the 1970s. “McLuhan and one or more guests shared dialogue, ideas, controversy and explored truth and awareness, while students and other participants sat around them on their floor.”1 Even much of the furniture still in the house belonged to McLuhan. The Coach House is indeed the perfect place for hosting the exhibition, which is curated by Bonnie Rubenstein.

Inside the Coach House

As Robert Bean explains, ‘Illuminated Manuscripts’ is a project about writing, archives and photography. It emphasizes the figure/ground relationship that is physically inscribed on the surface of Marshall McLuhan’s documents. Along with photographic works depicting the texture of McLuhan’s handwriting from the first draft of his seminal work ‘Understanding Media’, the exhibition also features a variety of obsolete technologies from the 1920s to the 1950s.

Most of the technologies that Robert Bean researched and photographed are from the collection of the Canadian Science and Technology Museum in Ottawa and include such long-gone devices as the ‘Monotype’, a typesetting machine, and the ‘Edison Voicewriter’, a voice-recording device intended to streamline the workflow of the contemporary office environment. With a McLuhanesque pop-cultural reference, Bean points out that these are the sort of technologies used in Mad Men. The social and cultural impact of these dictation technologies became manifest in the gendered division of labour in office environments.

'Edison Voicewriter' photographed by Bean in 2010

The collection of photographs also includes four intriguing works depicting some of the most important predecessors of modern-day computing: the SAGE computer system, punch cards and the UNIVAC computer. SAGE, or the Semi-Automatic Ground Environment, was the computerized command and control air defence system designed to protect North America from nuclear attack during the Cold War; it was the largest, heaviest and most expensive computer system ever built. First used in the 18th century to control automatic looms, machine-readable punch cards transformed how we store data — freeing it from hand entry into ledgers and, ultimately, from human input. Popularized by the 1890 Census, punch cards held sway for more than 100 years and gave rise to IBM.2 The UNIVAC, on the other hand, was the first commercially viable computer marketed to government and industry in the United States.

'Finger Apparatus' photographed by Bean in 2010
Bean depicts these obsolete technologies without any interfering noise. They are what they are; neither glorified nor depicted as nostalgic relics from a time long gone. They are shot in close-up without any context or other potentially distracting visual cues competing for the viewer’s attention. For the younger generation these objects may seem strange and rather archaic, and many will probably have no idea as to their original uses or workings (aha, that’s what they did on Mad Men!). The slightly older generation will be reminded of the passing of time that eventually leads to once new and exciting technologies losing their allure. Everything eventually becomes obsolete. While these photographs make the viewer think of technologies in a potentially new way by forcing forgotten ordinariness into the foreground, they can, by stylistically resembling a catalogue from some technical museum, seem a bit repetitive: one device here, one device there. Some of the photos, however, especially ‘The Noiseless Typewriter’ and ‘Finger Apparatus’, succeed in capturing some of the aura that only an original, or the residue of something left behind, may have – not just through the motif itself but also through the sheer scale, the whiteness of the background and the glossy paper used to show the obscure. The viewer is not given the chance not to notice. What is fascinating is how once very ordinary and new media technologies regain some of their initial aura as time passes.

Just think of the Sony Walkman, now proclaimed an obsolete technology or dead media. With a life span of 31 years, the Walkman was finally taken out of production in 2010. We get a smirk on our faces when we see someone with a Walkman on the bus, we commemorate the habit of making mix tapes, and we make art using cassette tapes as material. At what critical point does a technology transform from boring, mundane, even outdated, into something that regains a certain sense of aura, mysticism and curiosity? When will I be looking at a photograph of the mobile phone with the same sense of wonder that surrounds pictures of the ‘Audograph’? Let alone, what would be the significance of a blown-up depiction of a mobile phone today?

As Bean explains, these pictures are not about nostalgia. They are simply about understanding the present by means of exploring McLuhan’s observation that “obsolescence is the moment of superabundance”.

1 http://www.utoronto.ca/mcluhan/about_history.htm

2 http://www.wired.com/gadgets/gadgetreviews/magazine/test2007/st_best

Picks of the week

  • Two exciting new books that I’ve been waiting for are now just around the corner: Rob Kitchin and Martin Dodge with Code/Space: Software and Everyday Life on MIT Press; and David M. Berry with The Philosophy of Software: Code and Mediation in the Digital Age on Palgrave Macmillan
  • Rebooting the News: a weekly podcast on news and technology with Jay Rosen and Dave Winer
  • Another exciting new book: Cognitive Architecture: From Biopolitics to NooPolitics, edited by Deborah Hauptmann and Warren Neidich. Contributors to the publication include, amongst others, Deborah Hauptmann, Boris Groys, Paolo Virno, Ina Blom, Jordan Crandall, Maurizio Lazzarato, Keller Easterling, Bruce Wexler and Warren Neidich. Cognitive Architecture questions how evolving modalities – from bio-politics to noo-politics – can be mapped onto the city under contemporary conditions of urbanization and globalization. The book rethinks the relations between form and forms of communication, calling for a new logic of representation; it examines the manner in which information, with its non-hierarchical and distributed format, is contributing both to the sculpting of the brain and the production of mind. NB! Book launch today (Thursday 24 March 2011) at Almstadtstraße 48-50, Berlin, from 8:30pm – 11:30pm (thanks to Per Platou for pointing this one out)
  • Making WiFi visible: Timo Arnall, Jørn Knutsen and Einar Sneve Martinussen are based at the Oslo School of Architecture and Design, where they have made a film exploring the invisible structures of WiFi networks in urban space. It is part of an exciting new research project on social media, design and the city called YOUrban, in which several of my former colleagues from IMK also participate.

Interview with Liz Filardi

Check out my interview with performance artist Liz Filardi over at Furtherfield.

Liz Filardi is a New York City-based performance artist who often works in public space. She was recently awarded a Turbulence Commission for a networked performance piece called I’m Not Stalking You; I’m Socializing, exploring the anxieties of social networking in three modules. “Status Grabber,” the first module, is a satirical online service that extends the status update phenomenon to participation over the telephone. “Black & White,” the second module, is a Facebook-like website, consisting of two interlinked profiles, that tells the story behind one of the original cases of criminal stalking in America. “Facetbook,” the final module, is a performance piece in which the artist compiles a series of archives of her live Facebook profile to illustrate the tension of online identity – between the façade of a profile and the more telling story of how the profile changes over time.

Facebook resistance at Transmediale 11

Earlier this month I went to the Transmediale festival in Berlin. There I participated in the Facebook resistance artist workshop with Tobias Leingruber. Tobias is an artist and designer who is involved in a number of exciting Web-based art projects and interventions, including F.A.T. Lab and Artzilla.org. When faced with standardized and depersonalized cultural products that offer only limited possibilities for individual expression, it is time to do something about it. Facebook lets its users design their identity much in the way IKEA lets us design our apartments, as the workshop description put it: “The only individuality lies in the family pictures standing in your BILLY shelves.”

The workshop was a first attempt to experiment with ways to challenge, resist and circumvent the interface design and functionalities of Facebook. The purpose, as Tobias stressed, is not to resist Facebook altogether, but to find new creative ways to challenge some of the parameters through which we are meant to behave on these sites. I agree. Exiting and quitting cannot be the only options. Quitting is easy; “researching on ways to change its rules and functionality from inside the system”, as the workshop description puts it, certainly is not.

Tobias has extensive experience with experimenting with code, browser apps and interface design. Some of the projects he is best known for, Pirates of the Amazon and China Channel, are both impressive software-based art interventions into the politics of the Web. The Amazon pirate browser add-on linked the BitTorrent search engine of Piratebay.org to Amazon, installing a “download 4 free” button on Amazon.com and in principle letting you download Amazon stuff for free. The project was meant as a scholarly work on the Internet in the shape of a software hack, critically commenting on current media distribution models. For China Channel, Tobias made a Firefox plugin that reroutes your traffic through China, allowing you to browse the Web as if you were looking at it from within China and to check the state of the censored Web for yourself.

There was great interest in the Facebook resistance workshop, only confirming that people have been waiting for this kind of thing to happen. This time around, as it was the first workshop of its kind, the participants more or less threw out ideas on the kinds of interventions they would like to see happen: “I want an alert of people stalking me”, “I want to have more variety on buttons describing my emotional status”, and “I want to have a gender neutral button, and be able to have 4 wives!” Some even got around to actually coding something, as one group did on personalized ads. Other interventions could be to design a dislike button (someone already did this), make custom background images and visual CSS styling for your visitors, or simply change themes, as in changing Facebook’s color to yellow – a rough sketch of that last idea follows below. Check out the video below for some visual and audio (!!) impressions from the workshop itself.
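For readers wondering what such a re-theming intervention might look like in practice, here is a minimal, hypothetical userscript-style sketch, not code from the workshop; the CSS selectors and the colour value are placeholders, since Facebook’s actual markup changes constantly and would need to be inspected and updated:

```typescript
// Hypothetical sketch of a browser-side re-theming intervention (e.g. run as a
// userscript or extension content script on facebook.com). Selectors and colour
// are illustrative placeholders, not Facebook's real class names.

function injectTheme(css: string): void {
  // Append a <style> element so our rules override the page's own stylesheet.
  const style = document.createElement("style");
  style.textContent = css;
  document.head.appendChild(style);
}

injectTheme(`
  /* placeholder selectors -- adjust to whatever the current markup actually uses */
  [role="banner"],
  [role="navigation"] {
    background-color: #f7d426 !important; /* yellow instead of Facebook blue */
  }
`);
```

The same injection pattern would carry the other proposed interventions too, such as custom background images or extra buttons, which is what makes it a handy starting point for this kind of interface tinkering.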

Geert Lovink to IMK

Next week my supervisor Geert Lovink will visit Oslo and the Department of Media and Communication to give the lecture “From Blogs to Wikileaks – on Critical Internet Culture” and to hold a masterclass on all things Web 2.0. I am sure it will be a very interesting couple of days, and I strongly encourage those of you in Oslo to come to the open lecture on Tuesday, February 1, at the Oslo Innovation Center, where our department is located, from 14:15 to 16:00 in room 205.