Building Worlds on Quicksand
Daniel Evans
Axiom: Something that is said to be true in order to be used as a starting point for reasoning.
World building: The act of constructing worlds, narratives and characters. These may be fictional, real or some liminal space between the two.
Incomplete: Not complete.
Why did God(el) build the world upon a set of formally contradictory axioms?
The prospect of building and understanding the world neatly through mathematics has been embraced by many throughout history, from the classical Pythagoreans to the logical positivists of the Vienna Circle; there is something ultimately comforting about the prospect of an internally coherent world. Questions of ‘coherence’ and ‘consistency’ are fundamental to number-centric world building: much more is at stake when your entire ontological and epistemological groundwork relies on mathematical consistency. To take an extreme example, around 450 BC a boat full of Pythagoras’ cultish followers was sailing away from Greece. The story goes that Hippasus of Metapontum spoke of his new mathematical discovery: that the square root of two is not a rational number, its decimal expansion continuing towards infinity without repetition or pattern. This perspective so angered the Pythagoreans that they threw him, and his 1.4142135623731..., overboard to die. Of course the idea of irrational numbers did not die with Hippasus; their acceptance into the mathematical lexicon began with the Egyptian polymath Abu Kamil Shuja’ ibn Aslam, over 1200 years after Hippasus’ supposed drowning. The moral of this fable is that any understanding of the world through numbers and formalised axioms has to be taken with a parallel acknowledgement of those axioms’ limitations, and by association the limitations of what we can know. This essay is firstly a speculative discussion of what it means to build worlds through formal axioms, via Borges’ fictional ‘Library of Babel’. Secondly it furthers this discussion through the lens of computation and epistemology, laying down a critique of new materialist world-building projects. Whilst laying my groundwork firmly in the formalism of mathematics and theoretical computer science, I want to demonstrate how these projects are world builders in equal capacity to fiction or contemporary philosophy.
I speculatively offer the conclusion that any world is built upon fractured ground, and although many debates around world building (be they mathematical, scientific, metaphysical or fiction) concentrate on the ontology of what a world is, it is just as important to fathom and investigate the epistemology of what we can or cannot know about a world.
Jorge Luis Borges’ Library of Babel is ‘composed of an indefinite and perhaps infinite number of hexagonal galleries, with vast air shafts between, surrounded by very low railings’ (Borges 1962). Within the remit of this world every gallery offers its inhabitants everything they need to survive, along with a single entrance/exit and a selection of books. The narrator of the story is overtly aware of their own insignificant positionality within the library: ‘I am preparing to die just a few leagues from the hexagon in which I was born. Once I am dead, there will be no lack of pious hands to throw me over the railing; my grave will be the fathomless air; my body will sink endlessly and decay and dissolve in the wind’ (Borges 1962). Each book contains a unique sequence drawn from the comma, the period, the space and twenty-two unique letters. Largely these books are gibberish, but a set of two fundamental axioms allows an important understanding to be extracted from this universe:
A. Ultimately the entire library is recursive. It is made entirely of identical hexagonal rooms, bar the content of the books themselves. This recursion is infinite.
B. There are only a set number of symbols within each book, twenty-five in total. The order and pattern of these symbols are chaotic and, seemingly, random.
From these two formalised axioms it can be presumed that the library contains ‘all possible books’ (Borges 1962). Even though the library is very much a regular architecture, the information provided by it is seemingly chaotic, completely lacking any formal structure or pattern. From this epistemological groundwork the mystics of the library have set out to deduce a formal system du monde (a system of the world). Despite this being a story of language and books, it is really one of mathematics and epistemology. It establishes the boundary of what can be known within the remit of the library and, just like mathematics, it is not just about the answering of questions but about how we formalise and axiomatise the questions themselves. Within the structure of this closed library world we can attempt not only to axiomatise a formal system, but also to speculate about potential inconsistencies or unknowabilities. Both these processes concurrently manifest as the act of world building.
The axioms Borges sets out for the books of the library tell us that there is definitely not an infinite number of different books. Each book has 410 pages, with 40 lines per page and 80 characters per line, drawn from 25 possible symbols. This means there are 25^1,312,000, or roughly 2x10^1,834,097, different possible books. If we take this to be true within the world of the Library then there are a few possible conclusions to be drawn from this:
a) The library is not infinite, the books within are all unique and consistent, and perhaps there is a discernible and knowable pattern after all. This is the Pythagorean dream, one of complete and consistent knowability. However, this conclusion contradicts axiom A.
b) The library is infinite, and the books contained within it are repeated over and over again. By this conclusion it is a possibility that we could never get our hands on all the knowledge in the books, despite the fact that we can theorise that the knowledge exists.
c) Finally, there may in fact be axioms and symbols unknown to the mystics: if the library were infinite, perhaps there are books containing symbols and axioms yet unknown. This conclusion contradicts axiom B.
Within all of these possibilities it is safe to assume that, at least within the language/symbols of the Library’s known books, all communicable knowledge will be present within all possible books. However there is no neat solution to the problem of the library; all solutions offer some unsatisfactory or unknowable end result. In the cases of a) and c) the conclusions we reach contradict the original axioms upon which the library is said to be built (and which we used to reason our way to those conclusions). In the case of b), even though it fulfills the original axiomatic conditions, the very conclusion is that we can never be sure that we have access to all the knowledge in the library. Let me be clear: this is not an exercise in undercutting or denying the wonder and importance of Borges’ world, but in emphasising that, as a complete world-building exercise, it is by its very nature rife with axiomatic inconsistencies, and that is a positive outcome.
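Borges specifies 410 pages per book, 40 lines per page and 80 characters per line, drawn from 25 orthographic symbols; the figure of roughly 2x10^1,834,097 possible books can be checked with a few lines of Python:

```python
import math

PAGES = 410
LINES_PER_PAGE = 40
CHARS_PER_LINE = 80
SYMBOLS = 25  # twenty-two letters plus the comma, the period and the space

# total character positions in one book
chars_per_book = PAGES * LINES_PER_PAGE * CHARS_PER_LINE  # 1,312,000

# the number of possible books is SYMBOLS ** chars_per_book; far too
# large to print directly, so express it as m x 10^e via logarithms
log_books = chars_per_book * math.log10(SYMBOLS)
exponent = int(log_books)
mantissa = 10 ** (log_books - exponent)
print(f"about {mantissa:.2f} x 10^{exponent} possible books")
```

A number vast beyond comprehension, yet emphatically finite, which is exactly the tension the conclusions above turn on.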
There are vague Gödelian rumblings in Borges’ fiction here: whether intentional or not, he alludes to ideas brought up by Kurt Gödel’s famous, and revolutionary, incompleteness theorem, which Douglas Hofstadter paraphrases as:
‘All consistent axiomatic formulations of number theory include undecidable propositions’ (Hofstadter 1979)
Gödel’s work came out in opposition to Bertrand Russell and Alfred North Whitehead’s Principia Mathematica (1910), which attempted to axiomatise all mathematical logic into a consistent and complete set. Hofstadter describes Principia as a ‘mammoth exercise in exorcising Strange Loops from logic, set theory and number theory’ (Hofstadter 1979). The ‘Strange Loops’ he is referring to are the contradictory, self-referential axioms of mathematics, the quicksand upon which worlds are built.
Hofstadter outlines, in lucid and imaginative terms, how Gödel’s theorem works through the dialogues of Achilles and the Tortoise. In one fable a gullible Crab is sold a phonograph which can reproduce any and all sounds (a metaphor for a complete system of axioms). However, the unfortunate Crab then finds that the phonograph, precisely because it can recreate all sounds, can also be made to reproduce sounds which physically break it (alluding to contradictory, paradoxical statements or conclusions). This phonograph is gifted by the Tortoise, who understands the fatal flaw in the leap of epistemological faith taken by the Crab. The Crab’s only way to ensure that his new record player stays intact is to get a low-fidelity player (a metaphor for an incomplete, weak system of axioms), but naturally this is only a half-satisfying solution, as the original intention, a perfect reproduction of sound, is made completely redundant (Hofstadter 1979).
This story offers us a nice fictional parallel to the world building of Borges’ Library. There are two levels of meaning to both worlds:
1) The level of language in the library, or of music in the fable of the Crab and the record player. This is the way that knowledge and experience of the world interact with the characters in it. This level of meaning is not so embroiled in questions of axioms, truth or contradiction. The transmission of linguistic or musical information is not jeopardised by these larger problems of axiomatic consistency. As the Tortoise says in the story, such systems ‘are only defective if you have an unrealistic expectation of what formal systems should be able to do’ (Hofstadter 1979).
2) The second level of meaning, however, concerns the relationship between the individual containers of knowledge in the story (the records and the books) and the structures which contain and translate them. It asks whether, for any given (fictional or otherwise) world (w), with a given set of axioms (a) and a truth about w (t), we have a ⊢ t: can we prove a truth about the world we are in from within a given set of axioms?
At its most basic, Gödel’s idea is that if a given system of axioms is complete then it cannot simultaneously be internally consistent and non-contradictory: conclusion a) about the library. By association, if a given system of axioms is internally consistent and non-contradictory, then it cannot be complete: conclusion b) about the library. Of course we can keep adding new axioms to our given system, but with the understanding that we can never grasp the boundary of that system: conclusion c) about the library. In the context of the library, this theory and the conclusions we have drawn establish the exacting boundaries of epistemology within this specific world, bounding a contradictory picture of truth and knowledge.
Within the realm of computation, notions of axiomatic inconsistency and unknowability manifest most apparently in the idea of undecidable or uncomputable functions. In computer science this is framed by the Church-Turing thesis: every effectively computable function from natural numbers to natural numbers is recursive and computable, in principle, by the Turing machine (Turing 1936). This thesis has so far held, and when we talk about the ‘Turing machine’ we are in effect talking about all modern computers, as they are all in some way exceptionally complex variations upon Turing’s original calculating machine. Connected to this we have another major problem: the ‘halting problem’, which, although postulated by Turing, was only later given this name by Martin Davis (Davis 1985). The problem asks: given a certain function (f) of a Turing machine with an input value (i), is it possible to tell whether the computation will halt or continue, potentially onwards towards infinity? To put it into pseudocode:
halts(f, i){
    if f(i) eventually stops → return true
    if f(i) runs forever → return false
}
Put simply, the answer is that there is no way for a Turing machine to tell us, in general, whether another machine will or will not halt. The implication is that if we offer a computer a given problem to solve, the calculation could take an exceptionally long time, and there is no way of knowing whether the problem is simply unsolvable/uncomputable or whether the truth merely lies an unknowable amount of time/calculation away. Given the Church-Turing thesis, this problem becomes the computational equivalent of Gödel’s incompleteness theorem. If we were to suspend disbelief and assume the existence of a complete and consistent set of mathematical axioms, then for any given algorithm we could, by mechanically enumerating proofs, eventually prove whether or not that algorithm halts. However, because the undecidability of the halting problem has been proven, and the existence of a complete and consistent set of axioms has not, we must conclude that no such set of axioms exists. We can assume that all computable functions are computable by Turing machines, but the question is whether the boundaries of the computational axioms, or indeed the limitation of time itself, allow any particular truth to be proven. This recalls the Library of Babel’s conclusion b) surrounding the uncertain truth of an infinite library. Now, however, our conclusions are not merely about fictional libraries or the abstraction of pure mathematical logic; they are about the tangible limitations of the computers which we utilise every day and which formulate a kind of virtual, simulative and experiential world building. Suddenly the abstract is no more. When building and inhabiting these new worlds, what does it mean to see information theory as epistemology (Chaitin 2005)?
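Turing’s proof of this impossibility can be sketched as a self-referential construction, here in the same pseudocode style (halts is the hypothetical decider from above, not a function anyone can actually write):

```
paradox(f){
    if halts(f, f) → loop forever
    else → halt
}
```

Feeding paradox its own description produces the contradiction: paradox(paradox) halts exactly when halts says it does not, so no universal halts can exist, just as the Crab’s phonograph can always be handed a record it cannot survive.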
Gregory Chaitin emphasises the importance of Turing’s and Gödel’s theses on computability and incompleteness because they are essentially epistemological. The logic goes that ‘we can only understand something if we program it … what we can compute depends on the laws of physics in our particular universe and distinguishes it from other possible universes’ (Chaitin 2005). The limits of what we can know and understand, not only in our own universe but, as we have discussed, in constructed/simulated worlds, are bounded by certain formal, mathematical limitations. When Chaitin talks of being able to ‘program’ or ‘compute’ a thing he is talking more generally (not necessarily in terms of digital computers) about whether one can reason towards some kind of truth/understanding given the axioms provided. I say ‘some kind of’ truth because, as discussed previously, achieving absolute provable truth in all cases is a mathematical impossibility, an error of world generation. Chaitin is famous for his constant Ω, the probability that a randomly chosen program will halt; in this essay I will utilise this constant to represent the very boundary of provable truth. In an interview he talks of how pure mathematicians need to abandon the idea of a mathematics which exists in the ‘mind of God’, a maths without Ω, and embrace the inconsistencies/limitations of our current systems. If a computable algorithm or problem hints towards a certain truth, but that truth seems unprovable, then we should still experimentally consider adding the corresponding axioms to the current mathematical lexicon (Chaitin 2019). From this we can ask how we ascribe truth value to any understanding of the world: if we accept that any set of formal axioms is inherently inconsistent or not wholly provable, then we must revel in those inconsistencies in order to produce commonly understood meanings about the world(s).
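For reference, Chaitin defines Ω over a universal prefix-free machine U, with each halting program p of bit-length |p| contributing 2^−|p| to the total (the notation here is mine; the essay only needs the symbol itself):

```latex
\Omega \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}, \qquad 0 < \Omega < 1
```

The sum converges below one precisely because the programs are prefix-free, and its digits are algorithmically random: knowing the first n bits of Ω would settle the halting problem for every program up to n bits long, which is why Ω makes a fitting emblem for the boundary of provable truth.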
Rather than attaching ourselves to absolute truth, we can recognise that even within a formal system there is always an interactive process of meaning creation occurring, one which establishes new kinds of truth.
In Sherry Turkle's book ‘Simulation and its Discontents’ she describes contemporary computation as dominated by simulative programming, a kind of digital world building. Utilising examples such as digital architecture and ocean mapping, she outlines how young coders are ‘drunk with code’ as they construct ever larger, more expansive cyberspaces (Turkle 2009). Turkle’s thesis relies largely on what she describes as the move from ‘rational’ computation towards ‘simulative’ computing (Turkle 1997). By this she is referencing the move from computers without GUIs, largely used as calculation tools for the sciences and mathematics, towards those which graphically simulate, inform and supplement experiential phenomena. Think of how I am writing this now: simulated words on a simulated piece of paper placed on top of my simulative desktop. Turkle’s observation is in a way a truism, and her analysis overly general. The overarching thesis of ‘Simulation and its Discontents’ largely reflects society’s historical understanding of computers, but I would like to consider and critique how we draw a distinct, structural thread between ‘rational’ computational/mathematical topics (those which we have spent much of this piece discussing) and the more simulative computing that Turkle describes. We can begin our thread by reiterating that all commonly used computers are essentially Turing machines, and thus bounded by the same axiomatic limitations (Ω) that also limit mathematics and the world building of Borges’ library. Following on from this, despite the fact that ‘rational’ and ‘simulative’ computation, as outlined by Turkle, are conceptually different methods of understanding how computers are used, they are essentially the same fundamental machines.
This emphasises the epistemological boundary over the ontological boundary: my concern is not with what computing is but with what the possibilities are for knowledge and understanding within the limit of Ω. When a digital ocean mapper produces new and unseen maps of the ocean floor, whilst simulating the ontological reality of the sea, the formal axioms of the computer it was built upon establish the Ω boundary of meaning and knowledge within that world. Yes, these newly complex types of computation are utilising their newfound complexity to build simulative digital worlds. But these worlds are built on the same axiomatic inconsistencies, the same mechanical quicksand, as all worlds and libraries.
The formally axiomatic yet inconsistent vision of world building I have set out in this essay so far has two specific philosophical purposes:
1. It is about epistemology over ontology in world building. Ultimately this is an essay about the limitations, boundaries and consistency of worlds real or otherwise; however, I am not asking what these worlds are (vis-à-vis matter) but instead what and how we understand (vis-à-vis meaning) knowledge and truth within these worlds. It is partly a negative epistemology, in that it is just as concerned with what we cannot know as with what we can.
2. I would like to utilise these ideas to critique some tendencies within the new materialist/post-human social theories associated with the computational/digital turn. I want to counter the tendency to reject questions of semantics/meaning, and to crystallise why this more formal speculation into computational world building does not naively re-inscribe human tendencies onto a so-called flat ontology. My axiomatic formalism can be just as rooted in conflict, contradiction and networking (rather than the commonly assumed roots in absolute a priori knowledge, consistency and absolute truth).
Many new materialist and post-human theories offer a position partly in reaction against the hyper-semiotics of post-structural/modernist thought, which was defined by fluid, decentred, language-centric perspectives on the world. Within the contemporary context of increasing technologisation, the rise of computation and environmental issues such as climate change, writers like Donna Haraway have done great work in attempting to redefine the (de-)centre. Her work is critical of post-structuralism’s (P-S) semiotic anthropocentrism and, whilst offering a networked and largely material ontology, remains considerate of discursive practices of power and meaning creation (in particular on issues of gender). However, there are wider tendencies within these theoretical turns, firstly towards a re-inscribing of anthropocentric tendencies onto objects and secondly towards a rejection of the importance of meaning, semantics and the epistemological.
To address my first critique I would like to turn to some trends within the Object Oriented Ontology (OOO) and Speculative Realism (SR) movements, particularly those in the vein of Graham Harman (see Harman 2018) and his followers. For me, many of the OOO questions of metaphysics are non-questions. They claim that in breaking down the barrier of Kantian correlationism they are ushering in a new era of metaphysical questioning, but I would argue they are essentially asking epistemological questions about knowledge and what we can (or cannot) know about the world. Firstly, in attempting to transgress the subject/object, human/non-human and discursive/material dualisms, OOO re-inscribes anthropocentric values upon a distributed ontology/network with little concern for the semantic processes that allow us to understand our relationship to said network. Secondly, OOO ends up ignoring the importance of socio-economic discursive practices and loses sight of the self as an actant. Many OOO thinkers end up superimposing their own perceptive apparatus upon all objects, and then essentially remove that self-apparatus entirely from their discussions. As Rosi Braidotti pertinently notes in her critique of OOO, you “cannot step outside the slab of matter that you inhabit”, and of course whilst the interconnected, complicated networks that we inhabit are “complex” and “multiple” they are certainly “not infinite” (Braidotti 2014). A completely metaphysics-obsessed OOO is almost like the quest for consistency in mathematics: it reifies an a priori material sense of what it is to be ‘human’, then flattens and re-inscribes that upon matter. It acts like the Crab of Hofstadter’s story, attempting to find that one object which can be used to reflect on all others, attempting the naive task of building worlds on stable ground.
In Karen Barad’s work she asks “how did language come to be more trustworthy than matter?” (Barad 2003), a reference to what is seen as the lingua-centrism of both post-enlightenment and P-S thought. Barad’s project is primarily ontological, in that it aims for the ‘materialisation of all bodies “human” and “non-human”’ towards an ‘agential realist ontology’ (Barad 2003). This post-human performativity stands in opposition, quite rightly, to a representationalist view of epistemology, but in turn also sidelines language and the part it plays in the creation of meaning. Barad offers us the vague material-discursive network of intra-action, which is an unsatisfying description of the complex plethora of relationships between all matter. These processes produce ‘phenomena’ (a purposeful redefinition of the Kantian term and an overt reference to phenomenological experience), but there is little discussion as to how these phenomena accrue meaning. I should acknowledge that my position also stands opposed to a representationalist perspective, in which meaning is taken as something simply denoted by language (a position also largely upheld by P-S thought). My consideration of meaning comes from Robert Brandom’s inferentialist pragmatist view, in which meaning (semantics) comes forth from the way the structure (syntax) of language is embedded in inferential relationships of social discursive practice (Brandom 1998). In this paradigm meaning (semantics) is not just beholden to structure (syntax); instead the two are concurrently produced through socio-discursive practices, which would take the place of networks of matter vis-à-vis new materialism.
This is distinct from Barad’s view in that she essentially rejects the vitality of the semantic, instead flattening the structural (syntax) to mean networks of all things as matter; in doing so she fails to touch upon not only how this structure gains meaning but also, as discussed previously, how her own positionality as creator of her own world informs the final position. This new pragmatic position presents matter as the mediator by which formal structures (syntax) and meaning (semantics) interact, rather than as simply ends unto themselves.
You may be asking how this connects back to the beginning of this essay and our discussions of formal axiomatic systems. It is because those discussions were essentially about how syntactical structures (the formal systems) inform the semantic boundaries of meaning and knowledge (undecidability, unknowability). Within these boundaries (Ω) we acknowledged that a certain level of discursive practice must exist in order to establish what is commonly understood to be true within the limits of those structures, given that universal provable truth is unreachable. Brandom’s perspective on the relationship between syntax and semantics asserts that if you say something is ‘red’ then you must by default be entitled to believe that it is also ‘coloured’, but not ‘green’ (Negarestani 2018). Any syntactical, formal system by default establishes the conflicts and boundaries of meaning within that system. Reza Negarestani goes one step further than Brandom and discusses this discursive interaction of syntax and semantics as computation. He presents a ‘computational framework within which role-based syntactic constituents can be handled as logical expressions which subsequently, through dialogical interaction, find semantic values or meanings’ (Negarestani 2018). These semantic values not only establish worlds of meaning but also, due to Ω limitations, the limits of what can be understood or known in those worlds. Similarly, the meanings created by computers are given the same standing as those of the human actant; to be more specific, there is no exclusivity to who or what can construct meaning. Meaning becomes established not only through formal calculation but also through an understanding of the Ω boundaries of knowledge. This offers a new way of disintegrating the human subject, one based not on a naive flat ontology but on the practical insights that new modes of computation offer us about the limits of knowledge and meaning.
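Brandom’s ‘red’/‘coloured’/‘green’ example can itself be treated computationally. The following is a toy sketch of my own devising, not Brandom’s or Negarestani’s actual formalism: a vocabulary modelled as a network of inferential entitlements and incompatibilities, from which the commitments and coherence of assertions are calculated.

```python
# Toy inferential network (illustrative only): asserting a claim
# entitles you to its entailments and precludes its incompatibles.
ENTAILS = {"red": {"coloured"}, "scarlet": {"red"}}
INCOMPATIBLE = {"red": {"green"}, "green": {"red"}}

def commitments(claim):
    """Transitively expand everything asserting `claim` commits one to."""
    out, stack = set(), [claim]
    while stack:
        c = stack.pop()
        if c not in out:
            out.add(c)
            stack.extend(ENTAILS.get(c, ()))
    return out

def coherent(claims):
    """A set of assertions is coherent if no commitment clashes with another."""
    all_commitments = set().union(*(commitments(c) for c in claims))
    return not any(INCOMPATIBLE.get(c, set()) & all_commitments
                   for c in all_commitments)

print(commitments("scarlet"))      # scarlet entails red, which entails coloured
print(coherent({"red"}))           # a lone assertion, no clash
print(coherent({"red", "green"}))  # red precludes green: incoherent
```

The point of the sketch is only that meaning here is relational: ‘red’ means what it does through what it licenses and excludes within the formal structure, exactly the syntax-to-semantics movement the paragraph above describes.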
This unbinding of language from human semiotics means accepting the formal dimension of syntactic structures. Formalised, axiomatic systems are not just a negative inheritance from enlightenment notions of consistency and rationality but can be understood (with the help of more contemporary thinkers) as playing a vital part not only in the construction of meaning but also in how we establish epistemological limitations about the world. They are the most vital creators of worlds.
The perspective I am offering is, granted, speculative and immature, but I think it provides new perspectives in several ways. Firstly, it attempts to reintroduce formalism and some acknowledgement of structure and logic into world building. It is not necessarily attached to a formal, absolute truth, but acknowledges the vital importance of meaning and semantics in creating different truths within the boundaries of certain worlds. I am not offering a vision of enlightenment truth and knowledge; this is not a pure or consistent vision of rational truth and mathematics. The very limitation of Ω acknowledges that any world is built upon some formal barrier to knowledge. This formal barrier is not necessarily linguistic but, as we saw with the Library of Babel, it certainly can include language. And as we saw via Brandom and Negarestani, language is itself a formal structure of knowledge, co-constructed out of an interactionist relationship with meaning and discursive practice. I wish to maintain a consideration for the sometimes contradictory and inconsistent way formal structures (maths, language, computation) construct both matter and meaning within the remits of Ω and internal self-contradiction. My perspective offers room for materiality, in that it both acknowledges the inextricable link between ontology and epistemology and considers networks of matter as mediators in the relationship of syntax and semantics. In exploring the formal limitations of knowledge within a given system we are undoubtedly exploring questions of what a world is as well as what we know about it. There is a central place for the self in this world: the self as a bearer of knowledge structures and as such also a builder of worlds. This ‘self’ does not simply have to be a human subject, and I have left room for computational systems to become ‘selves’, with their own limitations and contradictions inherent within them.
This is a clear demonstration of an alternative way to de-centre the enlightenment human subject. This entire essay merely skims the surface of what could be explored on this subject, but ultimately it attempts to playfully explore the conflict between what people assume to be complete, consistent formal systems and the inevitable contradictions and boundaries of Ω within those systems. The conclusion is that it is upon the site of this conflict that worlds are built: the quicksand of world building. If you struggle by committing too naively to the side of consistency and formalism, or of fluidity and anti-formalism, the sand will pull you in. If we revel in the conflict, the sand will let us settle, and let new worlds be born.
Annotated Bibliography
Reza Negarestani: Intelligence and Spirit
I am sad that I didn’t get to talk about this text more; I partly didn’t as I don’t think I have fully grasped its scope yet and have been trying to treat it with the respect it deserves. Honestly, this book has blown a massive hole in what I think I know about the world. Throughout my academic life I have moved from being deeply involved in reading post-structural theory to, more recently, engaging with many of the new materialist/OOO thinkers, but this book honestly feels like the most important work of theory I have ever read. The way he synergises continental thought with analytic and mathematical ideas is amazing, and his ability to fictionalise the theories he is discussing makes it a really compelling read. His thesis is essentially that we need to reformulate how we look at ‘rationality’ into a philosophy of intelligence. He is considerate of language and socio-discursive practices, but much of his thinking is rooted in formal logic. There is a very imaginative use of AI in this book, and he offers a great in-depth speculation on how to conceptualise AGI within his rational inhuman philosophical framework. The way he rethinks how we look at the (in)human, by integrating the computational as language, is for me a new and exciting way of addressing many post-human concerns without many of the critiques I have of that specific movement.
Jorge Luis Borges: The Library of Babel
This short story was actually sent to me by a friend not so long ago; he knows my affection for metamathematical problems and concepts and thought I would enjoy it, and I really did! Borges’ story is masterful in that it does not feel heavy-handed in how it deals with the concepts of mathematics, infinity and knowledge. It would be easy to write a piece of fiction like this that feels overly contrived, but he succeeds not only because of the detail with which he describes his library, but also because of the way in which he positions his narrator. The whole concept of the library is outlined via the limitations of what the narrator can know, and the narrator is depicted not as a grand traveller of the library but as an insignificant part of a greater whole. You become enraptured and entranced by the mysteries it holds, and by the childlike wonder of the narrative voice. It seemed like a perfect springboard for this essay, and as I began to write about it I realised how much there was to squeeze out of it (as you can probably tell).
Sherry Turkle: Simulation and its Discontents
I read this book during my undergraduate degree, for a project on New Media and Anthropology. I chose it partly because I knew it contained the discussion of simulative computing and the move from the rational to the simulative, but also because it reflects a personal shift for me. Her work is rooted in the study of media and what it does. When I first read it I largely identified with the thesis, but now I am highly critical of it. For me, using it was 1. a way of demonstrating an engagement with work outside of theory/mathematics and 2. a reflection of my personal journey towards acknowledging the vital importance of the technological structures of media and technology, rather than merely the societal implications so often emphasised in Anthropology, Sociology and Media Studies.
Bibliography
Barad, K., (2003), Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter, in Signs: Journal of Women in Culture and Society, vol. 28, no. 3, University of Chicago Press
Borges, J. L., (1962), The Library of Babel, in Labyrinths, New Directions
Braidotti, R., Interviewed by Vermeulen, T., (2014), Borrowed Energy. https://frieze.com/article/borrowed-energy
Brandom, R. B., (1998), Making It Explicit: Reasoning, Representing & Discursive Commitment, Harvard University Press
Chaitin, G. J., (2005), Epistemology as Information Theory: From Leibniz to Omega, arXiv:math/0506552
Chaitin, G. J. (2019), Is Mathematics Invented or Discovered?, https://www.youtube.com/watch?v=1RLdSvQ-OF0
Davis, M., (1985). Computability and Unsolvability, Dover Publications
Gödel, K., trans. Meltzer, B., (1962), On Formally Undecidable Propositions of Principia Mathematica and Related Systems, Dover Publications
Harman, G. (2018), Object-Oriented Ontology: A New Theory of Everything, Pelican
Hofstadter, D., (1979), Gödel, Escher, Bach: An Eternal Golden Braid, Basic Books
Negarestani, R., (2018) Intelligence and Spirit, Urbanomic
Russell, B., Whitehead, A. N., (1910), Principia Mathematica, Cambridge University Press
Turing, A. M., (1936), On Computable Numbers, with an Application to the Entscheidungsproblem, Proceedings of the London Mathematical Society
Turkle, S., (1997), Life on the Screen: Identity in the Age of the Internet. Simon & Schuster
Turkle, S., (2009), Simulation and its Discontents. The MIT Press.