Meanderings in the Vault
By Martine Rouleau, on 24 November 2016
Vault artist in residence Kara Chin and Dr Martin Zaltz Austwick from the Bartlett Centre for Advanced Spatial Analysis introduce a screening of Magnetic Rose, a Japanese animé that follows four space travellers who are drawn into an abandoned spaceship containing a world created by one woman’s memories, alongside It’s a Good Life, an episode of The Twilight Zone television series.
This double programme started with an exchange between Kara and Martin about themes found in science, urban planning, art, films and other cultural productions. The essence of this discussion can be found here. Kara Chin is hosting a screening of the Japanese animé Paprika, also discussed here, on the evening of the 29th of November.
Martin Zaltz Austwick:
I liked Magnetic Rose a lot; it reminded me of some of my favourite science fiction films made in the UK or US, but I can’t tell whether it was influenced by them, or whether it has its own Japanese antecedents. It struck me that, in trying to tell a very universal story, or one which was very fantastical, there was a lot of gender politics in it; the women in it were all mentally ill, silent, or dead. While “alive Eva” initially seems quite nice and independent and all that, her dead omnipotent computerised version plays into every young bachelor’s chauvinistic fantasies – women just want to trick you, trap you, and ultimately, kill you.
It seems a bit weird to talk about gender when the film is about a deranged omnipotent space computer, but maybe I’ve watched too many films about deranged omnipotent space computers and I’m looking for the differences rather than the similarities.
I’d be interested to know what inspired you to screen this movie, and how it relates to your exhibition. One of the things that struck me was that the Chinese Room thought experiment basically stems from a scepticism that computers can ever be more human than human – and, if you’re investing them with massive power, whether they can be trusted to behave in a compassionate way. Is it even a good idea to give them that level of power over others in the first place, even if they are empathic or compassionate? Then again, I wonder whether giving people that power is any more sensible.
Kara Chin:
The gender politics in the film is an interesting issue, and one I had not previously considered. You could certainly say that Eva’s character plays into that rather nasty stereotype of women; this may or may not have been intentional (I hope not), but it is undoubtedly present. It is, however, in stark contrast to the other film I’m screening by Satoshi Kon (Paprika), which features a strong female protagonist and, come to think of it, actually contains a fair few male stereotypes.
What I think is relevant to my work is the merging together of real and imaginary space. This is an aspect of the ‘Simulation Hypothesis’, which proposes that the universe exists as a virtual construct dependent on processing outside of our space-time – this could refer to either a computer simulation or an imagined simulation. If this is true, then real, imaginary and digital space become interchangeable. In the films, this crossover of fact and fiction is shown in a very theatrical and fantastical way, but it is the underlying theme I was interested in. The characters come to exist in a space created by a mind/memory/imagination [a simulation?], and what I have been considering is how or where this space takes form. I think it pertains somewhat to the old question: does mind give rise to matter, or matter give rise to mind? We see the characters experience this physical yet fabricated world; do we consider this reality to exist only within the characters’ minds, or manifested on the outside as well?
This could be a misinterpretation of the science on my part, but I think there are some interesting links between how the brain handles reality and the imaginary. For example, we access similar regions of the brain when we recall real events as when we imagine events. There is also evidence that when we dream, we reactivate the same neural pathways that were active when we were awake and having a conscious experience.
I very much enjoyed the It’s a Good Life episode of The Twilight Zone; it was refreshing to watch a horror that is so quietly sinister and eerie. The parallel I draw between this and Magnetic Rose (and also the Ellison story, actually) is this depiction of a kind of non-space; I see it as a fabrication of the child’s imagination such that, as in Magnetic Rose, the space and those trapped within it are under his control. It reminded me of Coraline by Neil Gaiman, another story involving an imagined, and therefore controllable and malleable, space being used as a sort of trap or prison [this is also in some episodes of Charlie Brooker’s Black Mirror]. This idea of being trapped in somebody else’s imagined space, and therefore within their mind, makes me consider that existing in our own reality is sort of like being trapped within our own minds…?
Martin Zaltz Austwick:
I reckon the sexism of Eva’s character need not have been deliberate; it could have been unconscious, or stemming from the culture. I find It’s a Good Life almost unbearable to watch – it’s so horrific and claustrophobic and unfree. Even the silly jack-in-the-box visual didn’t really take the edge off for me.
In my current field (not a cornfield, thankfully), which I would describe as being closely related to Smart Cities, Data Science and Visualisation, we think about cities and societies sensed and recorded to a fine degree; about social media sifting and self-driving cars. If we are capturing this fine-grained, up-to-the-minute view of what each of us is doing, where we are going, what we are consuming, what we are saying (and by proxy thinking and feeling) – if we are putting all of this together to get a corpuscular view of our cities (and societies) – who do you trust to use this information, to make decisions about how a city lives and breathes, how a society runs and fits together? The Mayor, or the city itself? Anthony, or Eva?
There are also “mirror neurons” that fire when we watch people carrying out a task. I’m not sure how that fits into our idea of our imaginations and reality. The idea that our thoughts can control or define reality is an interesting one; on one level, that’s the dream of the “self-actualised” adult – living the life we want to live. On a more basic level, tool use helps us to do that, extending the capabilities of our bodies to bend the world around us to our will (our mind). As a race, we’re changing the environment of the planet Earth almost without trying. I suppose we’re used to thinking that changing our physical reality with just our brains is magical, while changing it with our physical selves (building a house, or putting up a set of shelves) is prosaic, but I see it as a continuum. Through the mental effort of understanding the tube map I can propel myself vast distances with proportionally minuscule physical effort (well, walking between platforms) – it’s primarily mental effort. The aforementioned self-driving car is a problem of forward route planning and then staving off boredom – all mental effort, resulting in a change in our reality (we were somewhere; now we are somewhere else). Likewise, I keep coming back to the idea that living without agency in a world shaped by another’s vision is another way of expressing the notion of power.
One thing that many stories like Magnetic Rose have in common is the desire of the world-creator to deceive the protagonist into believing they are in the “real” world, when they are in a world constructed to meet the needs of the creator. This deception is necessary, because the power that this awareness brings shatters the reality that has been so meticulously created. It’s hard not to read this as the protagonist discovering a sense of self, of their place in the world, and their own power to control their destiny. This is where It’s a Good Life diverges: the characters know they are in someone else’s fantasy, but they do not or cannot take control of their world.
Kara Chin:
The question of who sees and uses our online data is one I find quite ominous. Not that I’m paranoid that some unfortunate operative is scouring through my emails – which I think is how my mother and grandmother see it – but it is, of course, well known now that companies collect and use information from your search history, online conversations and apparently even voice recordings from smart TVs (is this true?!), among other things. I don’t really mind some algorithm using my online activity to place me in a demographic, though it is always a little unnerving when all the ads that appear in your browser have been tailored to your recent searches – especially when they try to predict things about you based on your information. Women in my age group always seem to get ads pertaining to weight loss or childbirth, which is grim.
What I find ominous is the idea of this information being used as a representation of oneself. As you say, through the use of credit cards, Oyster cards etc. it’s possible to accurately map people’s movements, interests, routines and so on. More and more of ‘ourselves’ is moving online, propelled by the arrival of devices like the Apple Watch – our heart rate, blood pressure, how much exercise we do, how well and how much we sleep, what we are eating, what we are buying, who we are dating – it seems there’s an app for everything. One potential, controversial outcome is that this information will be made available to employers or insurance companies, who can then form assumptions and opinions about you based on your digital health profile.
In addition to this, we exist on so many different online platforms logging our social groups, opinions, feelings etc. Sites like LinkedIn, which are becoming imperative in the corporate world, log our achievements, employment and occupations – the list goes on and on. All of this information aggregates to form a digital representation of a person, though usually this representation is formed of hyperreal characteristics – we only upload the best of ourselves – so it is a very blinkered reflection of the self.
There is an episode of Black Mirror (mentioned earlier) in which the deceased husband of the protagonist is virtually recreated through analysing all of his past social media interactions. Like the rest of the series, this episode is pretty dark and sobering (but also really great – I highly recommend it). The idea seems far-fetched, but I was really surprised to discover that there is actually a growing number of companies online offering services that are not far removed from this. One in particular, called liveson.org, uses an algorithm that analyses your past Twitter feed and creates new posts ‘in your voice’ from beyond the grave. Another, called DeadSocial, offers a variety of post-mortem social media options, including one that creates a funeral playlist based on the deceased’s most-listened-to songs.
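[As an aside: liveson.org doesn’t publish how its algorithm works, but a toy version of a ‘posts in your voice’ generator can be sketched as a simple Markov chain – record which words have followed which in your past posts, then walk those statistics to assemble new ones. The function names and example posts below are purely illustrative:]

```python
import random
from collections import defaultdict

def build_chain(posts):
    """Map each word to the list of words that have followed it in past posts."""
    chain = defaultdict(list)
    for post in posts:
        words = post.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
    return chain

def generate_post(chain, seed, max_words=10):
    """Walk the chain from a seed word, picking each next word at random."""
    words = [seed]
    while len(words) < max_words:
        options = chain.get(words[-1])
        if not options:
            break  # no recorded continuation for this word; end the post here
        words.append(random.choice(options))
    return " ".join(words)

past_posts = [
    "i have no mouth and i must scream",
    "i must finish the vault installation today",
]
chain = build_chain(past_posts)
print(generate_post(chain, "i"))
```

[Even a crude chain like this will occasionally stitch fragments of old posts into eerily plausible new sentences; real services presumably use far richer models, but the principle of predicting your next word from your past words is the same.]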
Martin Zaltz Austwick:
I think I’m pretty complacent when it comes to these issues; I’m interested in the generational divide as well. I assume that anyone significantly younger than me is simultaneously more at ease with the sort of digital ubiquity you’re talking about, and more sceptical. Your comments above are actually pretty close to the sort of discussions I hear from people my age, which is interesting.
I went to an event about a year ago where the question was posed: “Imagine a future in which Google or [insert other big tech company here] made your life incredibly easy (playing you music based on your mood, automating your weekly shop, and booking you an Uber at the exact point you’ve had too much to drink) – but they have all your data to do this. Would you say yes to this world?” In a room of 30 or 40 people, I was the only person to vote “yes” – because, tbh, I think this is what people want. We know big tech platforms have colluded with the UK and US governments to mine our information and violate our privacy. And, despite handwringing about “where this could all lead”, we keep on using their products, presumably because they provide a service at no monetary cost – or, to put it another way: what we get from using them is worth more than what we think we lose.
All that aside, I don’t have a huge problem with how we represent ourselves digitally; I feel like a lot of the criticism of the brand-crafting aspect of social media is the typical reactionary response to new technology mixed with a generational attitude that young(er) people are shallow and vain. If people are aware that what they present online is one aspect of their personality or persona, I don’t see a huge problem. There are interesting conversations to be had around how employers use these versions of us, or how maintaining these representations affects our mental health or self-image, but I don’t think they can be had in a balanced way as long as old media is sticking to variations of its “the internet is scary and bad” line it’s been toeing since the mid-nineties. Manuel Castells talks interestingly on this topic.
Kara Chin:
One thing I found interesting is that much of our perception of reality is formulated from our brain’s predictions of reality, rather than actual information. Programs such as those of Liveson and DeadSocial use algorithms that create a prediction of what we might have said or posted, just as companies use algorithms to predict what we may be interested in and may want to buy. It’s a disturbing thought that you could be reduced to an algorithm, a kind of shadow of yourself made up from previous data; but is this what the brain does with our perception of reality anyway – making predictions based on previous experience? [This is another point of interest in the real-imaginary-digital-space-merging-together theme in my work.]
I think this is more where artificial intelligence has come to be of interest in my work – not so much the question of whether or not it’s really possible to manufacture consciousness. As we discussed, if an AI system passes the Turing test and is really indistinguishable from a human, or correctly performs a task that we have deemed to require a human level of understanding, then whether or not it truly understands that task, or is truly conscious, is sort of irrelevant (but for the ethical and ontological implications). It was more about thinking through the different ways in which a self or consciousness could be digitally imitated.
I was reading a book by Roger Penrose that had a few thought experiments for thinking about what can be considered a mind or consciousness, and whether it can be computed, which I found quite interesting. For example, he states that if we could write down an algorithm for a human mind – this would of course be impossibly long and complex – then it wouldn’t matter what form this information took. Whether it was processed by our brain, or by a supercomputer, or written down in an impossibly large book as a flow chart, the only difference would be the lag time between posing a question and extracting the answer. There would be no difference in the output derived from a given input, because the route through the algorithm to reach that answer is exactly the same. In which case, disregarding time, can the book be considered just as – I think conscious is the wrong word – but could it be considered the same as the supercomputer, and could that be considered the same as the person whose mind it is? Maybe this is quite a fanciful assessment, but I thought it sort of fit in with thinking about how the self can be reduced to data. If we could be computed, this would be like the ultimate data… is this, in some sense, an extension of how we currently exist on digital platforms, expressed in different forms of data?
Martin Zaltz Austwick:
I read some of Penrose’s stuff as a late teen, and came across some of these arguments. The criticism from some philosophers of mind seems to be that he’s not adding all that much. I’m not in a position to refute that, but I did find his work hard to understand – though that might be because my brain is a novella rather than a library.
The “book” argument seems to relate quite closely to John Searle’s Chinese Room argument, with an emphasis on the rules as written down as opposed to the rules as carried out. They’re both arguments against an algorithmic understanding of consciousness, as I understand it.
I don’t know how this works in the context of perpetuating our own existences. I guess we can create entities that act like us, and we can’t really tell the difference – culture generally regards the construction of things that take on the trappings of humans without being human as monstrous, from Frankenstein to Bodysnatchers to Replicants to Zombies. One thing I wonder is what version of us it creates. Human beings change as they age, form relationships, and are exposed to joy and trauma – would The Big Book of Martin be able to modify its responses and patterns based on the experiences it was exposed to – every possible experience it might be exposed to? Would Book Martin then end up being a very different person? And a consciousness in its own right?
If we’re linking this to ideas of control – how our brains shape the world around us, and are shaped by deterministic processes – all of the universe is connected through quantum processes, some of them very old. We tend to think about these entanglements and correlations being washed out by the background noise of the universe – which is one of the reasons why I can’t control the weather, or make planes fall from the sky, solely through mental effort – but if we consider the universe as a whole, implicit in this “washing out” is the idea that it connects everything to everything. It’s the idea that we’re all drinking molecules of Caesar’s urine writ large on a cosmic scale, and less urine-y. We are all made of stars, and urine.
Kara Chin:
I can’t decide where I stand on the consciousness/algorithm argument; I think I just like visualising the idea of a big book of a person, and trying to manually route through it to have a conversation [although you’re right – thinking about it, if every experience changes us, then we’d need a different, updated book for every moment of time, or maybe a big indefinite library of books of a person…?]. And I like the idea of my brain being a Chinese Room, with all my neurons as little people following instructions (but that is really just enjoying the thought of it, rather than the actual philosophical implications, which is maybe the wrong approach). One thing I did consider, though, which we talked a little bit about (and I think you disagree, but I’m interested to hear why/your thoughts!), was that although Searle’s Chinese Room is presented as a counterargument, there’s another argument that even if one neuron doesn’t understand what it is doing (or isn’t conscious), could we consider the collective population of neurons, all together, to understand (or be conscious)? I think Penrose disagrees with this too, although he said something about how it does seem feasible that as computations become extremely complicated they could take on some illusory characteristics of a ‘mind’. But people always say that ant colonies collectively act like a brain, even though each individual ant isn’t particularly intelligent – do you think that this could be applied to neurons too?
Martin Zaltz Austwick:
I really don’t know. I’m no philosopher, but I think I tend to regard consciousness as a bit magical. There’s a bit in The Emperor’s New Mind where Roger Penrose comments that, if consciousness is algorithmic, it could be executed on a series of bamboo pipes which fill and empty with water to express algorithmic processes; that just seems counter-intuitive to me. Similarly, if ant colonies were so smart, we wouldn’t be around (aside: Saul Bass’ brilliant Phase IV considers just such a scenario).
I think, in part, this is chauvinism about human intelligence, and in part it’s hard to divorce perceptions of intelligence and consciousness from communication, speed, and the senses. Communicating with a hyperintelligent collection of bamboo pipes, or a Chinese Room, would be weird and would take ages. So either you put that aside and say “sure, those pipes are conscious”, or you’re faced with the position that science hasn’t come up with an adequate explanation. I don’t see it as too different from the basic problem of determinism: if physical laws exist, and physical systems produce or determine consciousness, the concept of free will seems sort of nonsensical. And there’s the old problem of dualism: if the mind exists apart from the body, how does it act on or through the body? This is really old, and philosophers have done a lot of work getting past Descartes, but it’s way beyond me – I’m just a geographer/recovering physicist.
Oh, but you should read Jorge Luis Borges’ short story The Library of Babel – in this, the whole universe is a book. Or rather, the world is a library containing a series of books, perhaps an infinite number, one of which contains the truth of this world’s existence. Unfortunately, many contain untruths of the world’s existence, some contain partial truths with typos, and some are just random letters. It’s an amazing metaphor for truth, to me, or for trying to find it. I always remember that quotation about a thousand monkeys with a thousand typewriters producing the works of Shakespeare and think – that’s exactly what did happen! Maybe you have to replace “monkeys” with “playwrights” and “typewriters” with “years”, but it stands. Statistically, there pretty much had to have been a Shakespeare, in the same way that in a sufficiently crowded business field, some CEOs will have had an unbroken string of successes (or failures) just through dumb luck.
Kara Chin:
There’s a website called the Library of Babel! (It must be based on this book – I will have to read it.) Have you seen it? The idea is that it’s supposed to contain every possible combination of every character of every language, plus spaces and punctuation (although they haven’t got the money/memory space to complete it), and so in theory it should contain every book ever written somewhere inside. Apparently so far they’ve got every combination of 3200 characters, which equates to 10 to the power of 4677 books. It’s a fun site: you can search for any word or phrase or paragraph of up to 3200 characters, and it’ll find it somewhere amongst all the random combinations. It’s pretty mind-blowing – it makes the monkey typewriter story suddenly seem really tangible.
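[A quick order-of-magnitude check on those numbers: assuming the site’s pages use a 29-character set (26 letters plus space, comma and full stop) and are 3200 characters long, the count of possible pages can be computed exactly – the precise exponent depends on exactly what the site counts as a ‘book’:]

```python
# Order-of-magnitude check on the Library of Babel website's scale.
# Assumption: each page is 3200 characters drawn from a 29-character
# alphabet, so every distinct page is one of 29^3200 possible strings.
ALPHABET_SIZE = 29
PAGE_LENGTH = 3200

possible_pages = ALPHABET_SIZE ** PAGE_LENGTH  # exact big-integer arithmetic
digits = len(str(possible_pages))
print(f"about 10^{digits - 1} distinct pages")
```

[Python’s integers are arbitrary-precision, so the count comes out exactly rather than via logarithms – and it is, comfortably, a number with thousands of digits.]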
This is maybe slightly different to what you were saying, but the fact that we’re all made out of the same ‘cosmic stuff’ – atoms or quarks or vibrating strings etc. – is one of the arguments of the ‘Simulation Hypothesis’, which makes the comparison that these indivisible parts are the same as pixels or bytes or bits.
Some have suggested the Simulation argument as the most likely theory of our existence; if this were true, do you think it would count slightly in favour of the algorithmic argument? Last year I got really into this guy called Nick Bostrom’s theory of the Simulation Hypothesis, which sounded like science fiction to me, and I think it was quite presumptuous and very optimistic about our technological capabilities [but again, I don’t really have the knowledge to argue any position on any of this; I just found it a fun idea]. His approach, I think, pretty much relies on the ability to simulate consciousness. He proposes that there are three potential outcomes for the human race: 1) humans die out before reaching technological maturity (which he defines as the point at which we have the ability to run ‘ancestor simulations’ – simulations of the universe and our pre-technological-maturity lives); 2) humans reach technological maturity, but lose interest in running these ancestor simulations; 3) humans reach technological maturity, run ancestor simulations, and we are living in one right now.
Martin Zaltz Austwick:
I think it’s quite a fun theory. I’m not quite sure how it would make us act differently – since we can’t know for sure – except, perhaps, to believe in some entity that might be something like God. But we can’t really know their intentions or how not to piss them off, so that doesn’t help.
Kara Chin:
I think I find this theory appealing, as it evokes the thought that if we are in a simulation now, then what happens when we reach technological maturity and start to run ancestor simulations within ancestor simulations? It puts me in mind of Russian dolls or fractals – but of simulated universes, if that makes sense – which is quite fun.
I suppose this is one way we could act differently – if we believe we’re already in a simulation, there seems little motivation to create a new one. We already have the capacity for infinite duration, and infinite repetition, so why expend loads of energy extending our lifespans when we could instead focus on making the life we do experience better for everyone?
Thinking about the idea of control in this, there is a sort of weird implication of this argument: there is a danger of the ‘simulator’ getting bored of us, and thus terminating the program. It’s kind of advocating the existence of God, or a god-like figure, whom we should in some way keep entertained to prevent our destruction. This all sounds pretty implausible, but thinking about it again, it kinda reminds me of the It’s a Good Life episode.
Martin Zaltz Austwick:
I mean, in It’s a Good Life you’d just find a way out, wouldn’t you? I’d wish myself into the goddamn cornfield. Harlan Ellison’s character doesn’t get that option, which is worse; it’s pretty biblical, tbh.
Kara Chin:
Ah yeah, that’s true – the Ellison story is sort of like a sci-fi version of hell, really; it reminded me of the Fields of Punishment in Greek mythology. Yes, the Ellison situation is definitely much, much worse – well, infinitely worse. What’s also irksome is how annoying it would be to be trapped inside technology that had the potential to be so amazing, but not be able to control it. The whole point – in my mind, anyway – of developing VR systems is to be able to create your own world; the standard tagline of any arts and crafts product is ‘bring your imagination to life’ – well, what if you could actually do that, like, properly! Having the technology to create and control another reality would be like an extended lucid dream – literally living the dream. So yeah, it would be really crap to have all that but then have someone else, especially an evil computer, ruin it for you. (That, on top of the eternal torture.)
On the subject of eternal torture, I think it’s probably about time I stopped picking your brains over all of these various topics, and thank you very much for taking the time to discuss everything with me!
Shall we end this discussion by wishing our document selves into the computer version of the cornfield?
I thought it would’ve been really great if this had been a cornfield, but unfortunately it’s just a normal field.
So maybe that makes this joke less corny?
Martin Zaltz Austwick:
Your jokes are infinite torture. I have a mouth but it will not lol.
Thanks for inviting me to talk about these themes – I feel like I should have talked more about Smart Cities and Quantum Physics, or at least about things I know a bit about. I hope philosophers reading this will find ways to fill in the rather sketchy ways I’ve approached these arguments.
Additional resources: I Have No Mouth, and I Must Scream, Harlan Ellison; The Library of Babel, Jorge Luis Borges; Black Mirror [TV series], Charlie Brooker
Image: Kara Chin