The art of (making) mistakes: on glitch aesthetics

Upcoming: noise & new media festival, September 28 through October 3, 2010. The international conference and gathering will take place in Chicago to celebrate the aesthetics of the glitch.

A glitch is a short-lived digital or analog error. Such errors mostly occur when there is some sort of mistranslation in the transmission of data between different domains in a computational system. A visual glitch is not the error itself but the visual manifestation of it. Glitches appear as malfunctions (a voltage change or a signal of the wrong duration) in an electrical circuit. In software, a glitch is something unpredictable, something that changes the desired or expected output of the system. Things go wrong. I think Olga Goriunova and Alexei Shulgin describe nicely what a glitch is all about in their contribution to the Software Studies lexicon: “A glitch is a mess that is a moment, a possibility to glance at software’s inner structure…it shows the ghostly conventionality of the forms by which digital spaces are organized”.

[Video: Glitch #9]

The unexpected and dysfunctional nature of the glitch lends itself well to artistic exploration. Glitch aesthetics is the visualization, or making visible, of errors: a way of organizing perception that emphasizes the artificiality of representation. The aesthetics of the glitch makes both the functionality and the dysfunctionality of software appear. It interrupts the event and breaks down the expected.

[Video: Skyscraper #1]

Glitch art is an art form that plays with these manifestations of errors, these ruptures and cracks. According to artist Rosa Menkman, glitch art shows how destruction can turn into the creation of something original. Glitch art is not just about errors produced deliberately by artists, but also about a mode of expression that depends on multiple actors contributing to the creation of unexpected events in computational systems. Menkman describes her artistic practice of producing glitch art as uncanny and sublime. “The artist tries to catch something that is the result of an uncertain balance, a shifting, un-catchable, unrealized utopia connected to randomness and idyllic disintegrations”. She says: “I manipulate, bend and break any medium towards the point where it becomes something new”.

[Video: Glitch #19]

The artist Nick Briz compares glitch art to cubism. The logic of cubism, he says, is that of reducing natural forms to their basic geometric constituents; glitch art does something similar by attempting to expose algorithmic processes in an aesthetic form. Glitch art also resembles pop art in his view. Like pop art, glitch art shows an interest in popular culture by appropriating it. What is being appropriated are the errors occurring in software, video games, images, videos, audio and other forms of data. Unlike Menkman, Briz seems to think that artists primarily search the digital landscape in order to catch, grab and record glitches, rather than intentionally create them.

The question is whether a glitch is still a glitch, that is, an unexpected result of a malfunction, if it is intentionally created. Is it possible to make a mistake on purpose and still call it a mistake?

One of the most prominent glitch theorists, Iman Moradi, distinguishes in his dissertation between the “pure glitch” of the unexpected malfunction and the “glitch-alike”, which is the result of an intentional human decision. Just last year, Moradi, together with Ant Scott, Joe Gilmore and Christopher Murphy, published one of the first books (if not the first) solely devoted to glitch aesthetics – to the art of loss of information, the frozen uncertainty, and the revenge of the machine.
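For illustration, one common way of producing such a “glitch-alike” is databending: deliberately corrupting a few bytes of a media file so that the decoder stumbles in visible ways. The sketch below is a minimal, hypothetical example of the technique, not any particular artist’s method; the function name `databend` and its parameters are my own invention.

```python
import random

def databend(src_path: str, dst_path: str, n_flips: int = 20, header: int = 100) -> None:
    """Write a corrupted copy of a binary file by overwriting random bytes.

    Skips the first `header` bytes so the file's header usually survives
    and the file remains openable, glitches and all.
    """
    with open(src_path, "rb") as f:
        data = bytearray(f.read())
    for _ in range(n_flips):
        pos = random.randrange(header, len(data))  # pick a byte past the header
        data[pos] = random.randrange(256)          # overwrite it with a random value
    with open(dst_path, "wb") as f:
        f.write(bytes(data))
```

Run on a copy of a JPEG, this will often still produce an openable image, with smeared or discoloured bands wherever the corrupted bytes land – a controlled way of courting the uncontrollable that Menkman describes.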

Constantly changing features

In the social media world things change quickly. New features are constantly introduced, removed and changed, and the pace of transformation keeps increasing. Things that were there yesterday may not be there tomorrow. Users barely keep up with it all. For researchers, the object of study, be it Facebook or Twitter, is in constant flux as well. Social media are never finished objects; they are always in transition. Most often we, whether users or researchers (or both, of course), have to keep up with the object, run after it, not the other way round. When we finally catch up with it, the object runs wild again.

Some changes become more visible than others. These are often changes that are perceived to have an impact on privacy. Discourse flares up, debates take place, blogging intensifies, mainstream media cover the story, people become outraged, policy people start talking and corporations fire up their public relations machinery. Some changes gain a lot of attention; some pass almost unnoticed. What is almost guaranteed, however, is that no matter how serious the software update or change, a new change will soon take some of the bad attention away and replace it with new discourse.

What we are left with is a kind of bewilderment and nausea over what it is that we are actually dealing with. The ontogenetic logic of social media platforms certainly constitutes a modern-day power relation, contributing to modelling the user as flexible and adaptable. Constantly losing control over features, settings and functionality runs the risk that users and researchers gradually give up on keeping up. This is particularly worrisome as the constant rollout of new features and disappearance of previous ones also affect the default and privacy settings of these powerful media companies.

In principle, every change in interface devices could also mean a change in the layers beneath or beyond the primary pages. The problem is that it has become far too difficult and time-consuming to keep track of the increasingly complex social media environments that have become infused with platform politics.

The different social media platforms engage in constant competition with each other for the best and most relevant features, often by disregarding the wishes and opinions of users. The politics between platforms is to a large extent reactive, rather than innovative. Facebook reacts to Twitter, Twitter to Facebook, Facebook to Foursquare, Foursquare to Twitter and so on.

So in light of having lost track of the social media flow over the past month or so, I’ve tried to assemble some of the important changes, or changes to come, that have been announced, talked about and almost forgotten again.

– Twitter unveils new web interface

Twitter will over the course of the next few weeks roll out a new interface design that integrates multimedia into the stream. The technology blog Mashable calls this change the Facebookification of Twitter. However, according to Twitter the point is not to become more like Facebook. In fact, Twitter doesn’t see itself as a social network: “Twitter is for news. Twitter is for content. Twitter is for information”.

– Facebook launches its location feature “Places”

The Places feature is accessible through a web-enabled mobile device and is currently only available in a limited number of countries, including the UK but not yet other European countries. Basically, it lets you see where your friends are and share your location in the real world through check-ins. It had long been announced that Facebook planned to launch a location feature, and it was expected that they would do so at the f8 developers conference back in April. As a multimedia and all-encompassing social networking platform, Facebook needed to be able to compete with services like Foursquare. Unlike Foursquare, however, Facebook has decided not to add game-like mechanics to its location feature. More information about Places can be found in this guide.

– Foursquare moving beyond check-ins

The 2.0 version of Foursquare, for the time being only available for the iPhone, emphasizes “Tips” and “To-Dos”. These features are given much more prominent placement in the navigation bar, suggesting that Foursquare is more than a game of badges and mayorships. To-do lists imply an expanded connectivity to other parts of the web. Like Facebook and Twitter before it, with the Like and Tweet buttons, Foursquare has now launched an “add to my foursquare” button. So when reading about a cool restaurant in the New York Times, for instance, users can click the “add to my foursquare” button and the restaurant will be saved to the to-do list on their mobile device. Once saved and in use, Foursquare will visually alert you to nearby saved To-Dos.

– Diaspora releases the source code

Facebook competitor Diaspora is slowly developing and has just released its project code so that developers can start working on making it a viable alternative social networking site. The code is shared on GitHub.

Paying Attention part 2

A lot of the discussions during the conference concerned the distribution and circulation of attention in social media platforms and search engines: the role of advertising, audiences as eyeballs versus attention profiles, Facebook’s Like button as an attention-forming technique, and the general monetization strategies associated with user-generated content.

Although I am sceptical of the whole notion of an attention economy as something that has supposedly grown out of the development of new media technologies and an ever-expanding network of networks, I find the relationship between attention and economy a highly interesting one: that is, how attention flows in and out of institutions, marketplaces, commodities, consumer practices, desires, trade and the general organization or allocation of scarce resources.

It is important not to forget that worries about attention are nothing new. Attention has always been a scarce resource because it is by definition limited. Sometimes I get the feeling we too easily ignore the fact that the issue of attention has been around for all of media history. Just consider Plato’s Phaedrus, or twentieth-century media theory, which began with a theory of distraction. Recently Michael Newman published a great article on the discourses of attention around television and young people, highlighting how attention is mutually and discursively constituted as a scarce resource between media producers and popular discourse. Beliefs about attention are too easily reproduced and internalised by media professionals themselves, whereas psychological research on these issues is in fact far from showing any consensus as to the cognitive effects of media technologies on attentional habits.

When it comes to effects on violent behaviour or aggressiveness, computer games are considered important actors, albeit as scapegoats. In contrast, when it comes to issues of attention, a lot of research on computer games has highlighted their immersive and engaging affordances. As Guardian journalist John Harris points out in a recent article on Carr’s book: “The point is, to play successfully in an online role-playing game, you have to pay an incredible amount of attention to what your team-mates are doing, to the mechanics of the game. You can set up a thesis for The Depths, just as much as The Shallows”. When it comes to the supposed negative cognitive effects on attention, the Internet and social networking sites are cast as the perpetrators, while computer games, with their often immersive affordances, are to a large extent silenced in discourses around attention.

What I missed in the conference discussions were more critical voices on the notion of attention itself, other ways of thinking about and operationalizing attention, so to speak. That said, I enjoyed the talk by Huey Li Li in this regard. She pointed out how there is a primacy of speech in the educational context, whereas silence is degraded. Silence is often equated with non-participation and taken as a sign of disinterest, of inattentiveness. There are some interesting distinctions to be made between attentive silence, inattentive silence, the commodification of the voice and the power of silent resistance. As she pointed out, maybe we should not always compel the dispossessed to speak.

Especially in terms of social media and the celebration of participatory culture, I find revisiting the primacy of speech important. What these media platforms want is precisely participation: uploads, clicks, comments, chronic status updating. However, in an age where speech has been democratized online, we should also ask on whose behalf we are speaking, whom our speaking benefits, and who profits from our clicks and comments. Indeed, silence as a form of resistance and attention seems a timely topic in a culture that honours participation.

I also enjoyed Nadia Arancio’s video essay about adolescents’ identity performances on YouTube. Her work made clear how young people learn to play with attention online, developing skills to attract attention in a self-advertising fashion. Counter to worries about young people’s short attention spans, Nadia calls these YouTubers economists of attention, following Richard Lanham’s work on the economics of attention. These at times highly professional kids often seem to know how to represent themselves in such a way as to get attention. On YouTube, as Nadia pointed out, subscribers are the scarce resource, and many of the kids and adolescents actively and strategically try to manoeuvre the economy of subscribers, all in a quest to become popular.

Overall the conference was quite productive, with many great discussions. It definitely gave a lot of food for thought. Lastly, for those interested in art and digital media, you should check out two of the artist projects that were presented at the conference: Furtherfield, a platform for bringing together art and technology in creating, discussing and learning about experimental practices for social change, and the net artist Stanza, who experiments widely with issues of surveillance in the cityscape.

Paying Attention part 1

The blessed wonders of technology are overwhelming us. Yes, we do love our digital devices, Facebook is fun and Twitter does provide useful information – but it’s all too time-consuming and indeed distracting. Concentration and contemplation don’t seem to belong to the common vocabulary anymore.

The American technology writer Nicholas Carr calls the Internet an interruption system, the French philosopher of technology Bernard Stiegler worries that the new psychotechniques that are social networking sites will create a generation incapable of paying deep attention, and theories of the attention economy have made it clear that attention is the scarce resource in the information society.

Last week the European Science Foundation and the Digital Cultures Research Centre at the University of the West of England co-organized the conference Paying Attention: Digital Media Cultures and Generational Responsibility in Linköping, Sweden. The conference was concerned precisely with this growing sense of bewilderment about the stakes of attention in today’s media saturated world.

How can we make sense of the ways in which attention is mediated and cultivated in and through digital media? What kind of experiences do digital media promote today? What architectures of power are at work in the attention economy?

The great mixture of high-profile keynote speakers such as Tiziana Terranova, Bernard Stiegler and Michel Bauwens, young researchers and artists set the tone for a week of great discussions on the topic.

Tiziana Terranova started off the conference by revisiting several of the key players and texts associated with debates on the attention economy, such as Michael Goldhaber’s and Jonathan Beller’s work in this field. Both Goldhaber and Beller link attention first and foremost to economic discourse and see attention as a commodity that can be traded in a system of exchange. As more fruitful for theorising attention in economic terms, Terranova suggests a turn to the theories of the Italian philosopher Maurizio Lazzarato and the French sociologist Gabriel Tarde. Attention, in these perspectives, can be seen as the will to power of the brain, as an ontological and expressive force that is productive of desires, beliefs and affects.

Terranova moreover talked about the recent neuroscientific shift in understanding attention, or what she called the bios of attention. I too touched upon this in my talk. Several media and cultural commentators, scholars and theorists have recently deployed neuroscientific knowledge to make sense of the attention economy.

In his recent book The Shallows, Carr argues that we are experiencing a rewiring of the brain set off by our frantic multitasking with digital media. Stiegler, who was also one of the keynote speakers at the conference, argues somewhat along the same lines in his recent book Taking Care of Youth and the Generations.

In the book, Stiegler contends that the greatest threat to social and cultural development is the destruction of young people’s ability to pay critical attention to the world around them. The myriad of new social network media and technologies must be seen as psychotechniques of capitalist society, aimed at keeping hold of people’s attention by producing a consciousness directed towards an imaginary object of desire, feeding into and sustaining a system of consumption and marketing. The pharmakon (something that is poisonous and therapeutic at the same time) that these new psychotechniques constitute thus has the unfortunate consequence of producing a subject who identifies not with parents but with brands.

In his talk at the conference, Stiegler advocated the need for examining the attentional forms pertaining to the new forms of metadata and the processes of transindividuation that they set off. Stiegler’s conception of attention is hugely influenced by the writings of the French philosopher Gilbert Simondon. In Taking Care, Stiegler also enters into dialogue with Katherine Hayles’s work on what she refers to as a generational shift from modes of deep attention to hyper attention.

For Stiegler attention is both psychic and social; it acts as an interface for psychic and collective individuation. The psychic aspect of attention is seen as a modality of concentration on an object, which essentially pertains to a traditional view of attention. As the art historian Jonathan Crary has noted, we have been caught up in an imperative of concentrated attentiveness since the nineteenth century. The social aspect of attention is based on Stiegler’s reading of the British psychoanalyst Donald Winnicott. The basis of all forming of attention thus originates in the parent-child relationship and in the attention that the mother gives to the child. Hence the title Taking Care, which refers both to the taking care of the infant child as the basis for developing attentional faculties and to an overall responsibility for taking care of the generations to come by enhancing the therapeutic side of the pharmakon instead of letting the toxic side of these technologies take over.

It is this last point that should be emphasised: the crucial challenge lies in encouraging the therapeutic or productive aspects of these technologies, instead of becoming individuated into the never-ending marketing circuit that capitalizes on our attention and desires. One could easily read Taking Care as a pessimistic account of a supposed linear shift from deep to hyper attention, in which the latter has superseded the former. But as Katherine Hayles contends, there should be no doubt that hyper attention came before the mode of deep attention, and that hyper attentive modes are, so to speak, reappearing with the Internet and new media technologies. With all the references to the rewiring of the brain through our constant and frantic media behaviours, and with dominant voices within neuroscience voicing their fears, it easily looks as if there is a causal relation between (the rather recent knowledge about) the plasticity of the brain and diminished attention spans through extensive digital media use. However, as several people have pointed out, and as was also voiced at the conference, motherhood is fundamentally characterized by distraction. Caring for an infant child resembles a hyper attentive mode much more than anything else. Hyper attention, then, is by no means a new phenomenon. This seems quite paradoxical, especially in the context of Taking Care and Stiegler’s argument about the primacy of the mother-child relationship for developing attention. The experience of the mother in this relationship would indeed account for anything but deep attention.