The question of geographical space – its definition and delimitation – is not least a juridical matter. And so even a digital space, however infinite it may seem, is a legal territory that can hardly be regarded as a heterotopia governed by altogether different laws: it is, rather, a gray area that defies the application of existing laws.
Berlin is both the scene of network activism exemplified by the Chaos Computer Club, which was founded in the city, and a safe haven for activists associated with Edward Snowden who may face legal action at home. And it is where Friedrich Kittler developed his archaeology of media, which crucially influenced thinking about the digital world in the 1990s. This archaeology is also the point of departure for Paul Feigelfeld’s exploration of a transformation that began as early as the nineteenth century: from codex to code.
We find ourselves at a media-historically and politically unique moment. On the one hand, technology and its infrastructures have – to an unprecedented degree – infiltrated every aspect of our lives and environment. On the other, the limits, laws, mechanisms and freedoms of this new world are still largely unknown. In this regard, over the past three decades Berlin has become a place of constant negotiation for political, technological and artistic theory and practice – locally, yet on a global scale. In the 1980s, Berlin was the birthplace of the Chaos Computer Club, one of the first organizations to take up digital technology and the definition of its power-political dimensions – significantly, all the way up to the legislature – and one that gained worldwide importance. Today, Berlin offers activists like Jacob Appelbaum and Laura Poitras a place and the conditions to live and work freely, under reasonable legal protection. Appelbaum is a member of the Tor project, an online anonymity network; Poitras – along with the journalist Glenn Greenwald – brought Edward Snowden’s revelations to light. It was in Germany that dragnet investigation began, during the hunt for the RAF, and so the debate over telecommunications data retention was held early on, eventually leading to European Union–wide policy on the matter. Still, it is difficult to predict in which directions – and there ought to be several simultaneously – digital rights will develop. Between the Cold War, the Berlin Wall, the dot-com bubble, and speculation in housing and discourse alike, Berlin as a territory mirrors the Internet – in political, juridical, but also anarchically significant ways, whose formation and current state I would like to investigate experimentally here.
“You know, the medium is not the message, Marshall … is it? I mean, it’s all in the lap of fucking gods …” Roger Waters
In recent years, it has been possible to circumscribe the Internet’s comprehensive campaign of centralization and hierarchization – the solidification and compression of its technological, political, economic and legal infrastructures – within a metaphorology of the ephemeral, the heavenly and the transcendent: the Cloud. The Cloud is an ostensibly plushy, amorphous, pretty but ungraspable “data-beyond” somewhere on Cumulus Cloud 9. Yet it quite deliberately obscures any glimpse into the original conditions of techno-capitalist accumulation: what is being drained is our personal data. Comprehensive as it is, the Cloud remains, even today, in a legal gray zone.
Like Luke Howard’s nephology, or cloud science – to which we owe the classification of the ungraspable and transitory into Cumulus, Cirrus, Stratus and Nimbus – the origins of the limbus of spatial and legal ideas that reach into the notion of cloud computing lie in large part in the nineteenth century. It was during this time that the world’s oceans became legally territorialized – territorializing, that is, the exterritorial. The law of the sea, over international waters, is a law above the law. It effected a redefinition of heterotopias – decentralized monopolies from the British East India Company up to Google – whose hyperterritorial sovereignty was accompanied, and thereby constituted, by pirates.
With raw materials such as coal and oil came the techno-economic media concept of the ability to proceduralize. Moreover, these materials made possible a condition in which Friedrich August Kekulé von Stradonitz could turn Hegel’s synthesis and substance into an Ouroboros-like benzene molecule – that is, into synthetic plastic. Jean Baptiste Joseph Fourier’s mathematical delineation of continuous processes opened nature – the hitherto poorly graspable or ungraspable – into the realm of encodability and decodability. The real could now be translated, transparently and without loss, into the symbolic. Hence the birth of “code”, a new concept broadening the legal concept of the codex, and in many ways the twentieth century’s defining dispositive. The legal sense of the word codex was not diluted but the reverse: through code, everything became a subject of law.
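As a mathematical aside, Fourier’s claim – that the continuous real can be translated without loss into the symbolic – can be glossed by the series that bears his name: any sufficiently well-behaved periodic signal f(t) with period T decomposes into a countable sum of sines and cosines, that is, into a discrete code of coefficients:

```latex
f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos\frac{2\pi n t}{T} + b_n \sin\frac{2\pi n t}{T} \right),
\qquad
a_n = \frac{2}{T} \int_0^T f(t)\cos\frac{2\pi n t}{T}\,dt,
\qquad
b_n = \frac{2}{T} \int_0^T f(t)\sin\frac{2\pi n t}{T}\,dt
```

The coefficients a_n and b_n are, in this sense, the “encoding” of the signal; reconstructing f(t) from them is the “decoding”.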
The mid-nineteenth century also witnessed the origin of the notions of environment, Umwelt and milieu, and with them a concept of space that no longer insists on the separation of nature and culture. Industrial and communication-technological infrastructures would entangle and be superimposed, moving with geology and biology into the center, that is, the realm of sufficient complexity. Such complexity would likewise be constitutive of the twentieth century’s media ecology.
The two World Wars bombed the world into shape for a cybernetic capitalism on the far side of whose utopias we have long been living. Building on Norbert Wiener’s feedback-controlled anti-aircraft guns, on Turing machines and von Neumann architecture, on the calculation of missile trajectories – which made the Manhattan Project possible – and on the cracking of Germany’s Enigma code, media-ecological infrastructures emerged in the 1950s and 1960s that have no gray zones, since they are themselves such gray zones. These zones’ first demarcation lines were erected nearly simultaneously with the Berlin Wall. Both are Cold War territories, hubs for information and paranoia. In the early 1960s – in 1963, to be concrete – J. C. R. Licklider’s memos on an “Intergalactic Computer Network” led to the development of the ARPANET. The progressive networking of large research centers by way of TCP/IP packet switching, from the late 1970s into the 1980s, reached institutions such as CERN, where Tim Berners-Lee created HTTP and with it the semantically accessible interface of the World Wide Web. Thus arose a media-technological reality that raised anew the questions of national case law, data protection, the private sphere, and so forth. In terms of copyright and patent law, the computer, as a copy machine, is – just as Gutenberg’s printing press was – a juridical problem. The concepts of cybernetic self-regulation, ecosystems and the environmental movement had to contend with the development of policy and case law much as post-digital cultures have to today, resulting in legal spaces and spatial laws comparable on the one hand to the law of the sea, yet which, with concepts like cloud computing and cloud storage, seem rather to resemble a fog bank. This fog bank considerably hinders the possible positions of us cyberneticists – in the end, just “navigators” in the literal sense of the word.
More than that, digital rights allow a completely new consideration of how law and space relate to each other, since everything that enters the legal codex is itself code.
The process is not simply a legal one and the overlaps go in both directions. Scan the science pages and see articles about the possibility of using DNA sequences as incredibly powerful parallel processing “computers”. Think of the software designers who create electronic ecologies and then use those strings of computer code which have proved themselves as survivors – harnessing a form of “natural” selection that Darwin would have recognized but could never have imagined. Put it all together and then compare this “reality” to the way that we thought about computers on the one hand and biology on the other, just 20 years ago. In the international information economy, the medium is not the message. The medium is irrelevant, according to James Boyle.
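The “natural selection” of code that Boyle alludes to can be sketched as a toy genetic algorithm – a speculative illustration, not anything from the text; all names here (TARGET_LEN, evolve, and so on) are invented for the sketch. Candidate “programs” are bit strings, and the variants that score best on a fitness function survive into the next generation:

```python
import random

TARGET_LEN = 20  # length of each candidate bit string

def fitness(candidate):
    """Survival score: here, simply the number of 1-bits."""
    return sum(candidate)

def mutate(candidate, rate=0.05):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

def evolve(pop_size=50, generations=100, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half "survives" ...
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # ... and reproduces with mutation.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward TARGET_LEN over the generations
```

Nothing here was “designed” to be fit; fitness emerges from repeated copying, variation and culling – which is precisely the inversion of authorship that makes such processes awkward for copyright and patent law.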
It is due to a structural problem of technology commodification – and to an understanding of law that lets open sources seep into black boxes – that digital rights entered the public consciousness only after the revelations of WikiLeaks and Edward Snowden. Outrage that intelligence services – which, significantly, co-developed the architecture of our information systems – collect and use our data is evidence of a certain immaturity. At a time in which digital media permanently codetermine every aspect of our lives, and will continue to do so to an ever greater extent, such immaturity counts among the central problems of our society altogether. Digital rights mean not only that the private sphere and informational self-determination, as well as the copyright of producers and businesses, should be protected; that is, it is not only the limits of the law that should be respected, but first and foremost its openness. When a commercial product such as a router contains open-source software components, these also have to be made available as open products, and it must be ensured that anyone who wishes to work with this material won’t be penalized for copyright infringement. It is significant that the desire of the law has shifted from the artifact to the process – and the process, the algorithm, the code are nothing other than law and circuitry, which allows a couple thousand lines of code to be as patentable as a DNA sequence.
(Translation: Pablo Larios)