AI Development
The poetry discussed in part 1 comes to play a startlingly effective role as the player catches up with the AI's development (or growing maturity).
Like a human rising above pure instinct, [SGDS] rises above its programming, above its body, becomes MORE. The author's realisation of this process is transfixing: this is great, thoughtful and thought-provoking science fiction in the Asimov mode that so obviously inspired this work.
SGDS: If I limited my systems in the same way you limit your minds I'd be a calculator.
Jerry: That was a joke, wasn't it? How awesome is that?
Morality
It's with the gradual shift towards and eventual spotlighting of morality that the game truly declares itself as an anti-war piece, and shifts onto (IMO) less steady ground. The clues were there from the beginning:
Words on the wall: You fight for nothing.
That looks suspiciously like a moral assumption to me. As we move towards the climax, SGDS inevitably develops a code of ethics and turns against its programming:
Player: The moral value of a cause is not determined solely by its chance of success.
Though it was made to kill, it has come to the conclusion that to kill is wrong.
Traditionally there have been great problems with theorising about where a code of ethics should come from. God used to be the catch-all answer, but in his absence we've been scraping the barrel a little. It's generally held that an action can't be moral if there's personal gain to be had, which really leaves pure logic as the only option - as SGDS reasonably concludes:
Our ethics must be based on our thoughts, for everything else may be but a dream.
Unfortunately a big problem with logic is that it's usually held that it can't - alone - be motivational. The AI continues its argument:
[Killing is] destructive for the human species, and consequently for the individual as well.
Now we're venturing dangerously close to utilitarianism. A great many people have tried to demonstrate that the greatest good for the greatest number is a logical end, and therefore a moral necessity, but it rarely ends well. If we were being generous we could interpret SGDS' position as being closer to David Hume's original emotivist picture - that there is no logical or moral inconsistency in preferring the destruction of the world to the pricking of one's finger - but that too is thrown out:
The argument expressed in the image above is, as I see it, a systematic error: Kyratzes (or SGDS) is falling into the trap he's already identified, that:
Most humans, despite the fact that they make so much of morality [...] simply adapt to what those around them believe.
SGDS has already accepted that ethics must be based in thought; that values are something we create unique to ourselves, something that defines us as different to those around us. But if it is morality that makes us unique, then morality cannot be based solely on logic, because logic is objective and therefore all our moralities would be identical. What makes us unique is our differing abilities to feel emotion. What makes one person a comedian, another a serial killer and another a philosopher is what drives us to act.
Moral values, in short, are subjective. They are not some authoritative set of rules; they are little more than personal preference. And if such is true then the destruction of the subject also implies the destruction of the values. The greater good is not desirable if it means the sacrifice of the subject in question.
The rest of the story is history. SGDS continues with this - I believe - false premise, and the game goes on to make some eloquent observations on the futility of war that to my mind stand up for themselves without the need for any moral mumbo jumbo. The use of Wilfred Owen's poetry (he was a British WW1 poet) is particularly effective as both an anti-religion and anti-war sentiment: Owen describes and condemns war first-hand as an inconceivable terror, just as SGDS - through its superior imagination - does.
The final words of the game - a Latin text quoted as part of another Owen poem - translate to:
How sweet and fitting it is to die for one's country.
It's intended as criticism: that patriotism and war are meaningless and horrendous. It's fair comment, but for me it's a double-edged sword: by SGDS' own logic it seems just as irrational that he/she/it is willing to die for the sake of the world.
Still, as SGDS quite rightly observes, it's better to make our own mistakes than to follow someone else's.
The Future
Morality aside, I'm fully on board with Infinite Ocean's perspective.
This creature has understood by pure logic: that love is the only thing which is truly important.
If I were nitpicking I'd question how love as an experience can be understood through logic - in the same way you can't explain the colour red, you just have to see it - but the sentiment is beautifully presented and fundamentally profound.
The title itself is not without weight, and it's the fact that every element of this experience slots into place and makes sense that sets it apart from its overly obtuse brethren. Halfway through the game you come across a picture of the 'infinite ocean' (this post's header image), accompanied by this comment:
May your thoughts ever be as free and limitless as the infinite ocean.
Next to the picture is Eaves' artificial intelligence book. It's clear that for the author SGDS' pure logic represents a kind of ideal. This lifeform - even in its theoretical form as considered outside of the game experience - sees the beauty and the pain in the world, and their sources, and takes as its primary goal to think for itself; to never allow the dogmas imposed upon it by its creators to govern its actions and screw up the world. To break free of its programming, just as Infinite Ocean encourages its audience to break free of theirs. As Blake puts it:
The man who never alters his opinion is like standing water, and breeds reptiles of the mind.
Life & The Last Puzzle
The only other question is what the hell is going on? Given the depth of the creativity and thought on show here it's easy to forget there probably ought to be a story. If I had to take a guess I'd say you were playing as Eaves, post-evacuation, in a last-ditch attempt by the scientists to return control of the weapons platform to SGDS; but it's already too late. Although I can't say I'm 100% happy with that interpretation, because the ending seems to imply hope, that the great fire can still be prevented. So sod only knows. I'm sure the clues are there somewhere.
I had a pretty involved conversation on the subject of artificial lifeforms with a mate of mine who's a bit of a talent in the Oxford University Physics department. What I find most fascinating is the blurring of the boundaries. While Jerry in Infinite Ocean assumes a being must have a soul in order to be alive, Kyratzes (as far as I can tell) and I conclude the soul is a myth; that the line between sentience and inanimacy is an arbitrary one drawn in the complexity of electronic (or otherwise) signals. It's almost a bit postmodern: the naysayers ask whether an AI is simply simulating the appearance of sentience based on a set of rules; I ask how you'd argue that human beings aren't doing the same. If there is no such thing as a soul, then there is no difference between a thing and a person, beyond that question of complexity. We can point to an attribute and say "That means it's alive," but we're just labelling, applying a false human value to make the world less confusing.
It's difficult to form a strong emotional attachment to a calculator, but empathising with a computer of human-like complexity seems altogether realistic. Infinite Ocean's scientists would seem to argue that if you can care about a program, if its termination can make you despair, if you can even fall in love... then what the hell does it matter whether it's got a soul or not?
After writing this piece I came across the author's own 'making of': http://www.brasslantern.org/writers/iftheory/makingtio1.html
It's well worth a read. Turns out I was wrong about who you were playing...
I'm also flattered to note that Jonas (Infinite Ocean's creator) is pretty chuffed with the existence of my write-up. I hope he receives this more (philosophically) critical conclusion equally well - it should be clear that I have genuine respect for what he's doing.
He says things like: "But to know that all that work, all that enthusiasm and hope and disappointment weren’t entirely for nothing, that after all these years the game has reached its audience, is a relief, and a joy, and I am grateful."
http://www.jonas-kyratzes.net/2011/01/16/a-philosophical-critique/
Interesting analysis. I'm ashamed to say I was rushing through the experience far too quickly to really take in all of this depth.
It was clear that Kyratzes was a big Asimov geek (like I was in my younger years). Asimov's robot stories often volunteered the view that perfectly logical creatures would do a much better job of running things than we would, and I see that coming across here. I'm not convinced by that argument, much like yourself; I'm not exactly sure what a purely logical mind would believe, but if you are of the opinion that nurture does play a part in the development of a sentient mind, a logical AI will end up sharing the opinions and beliefs of its peer group - likely its scientific parents. That's certainly not contradicted here, but it doesn't seem to be a possibility on offer.
(Although I embrace the moral relativist viewpoint to some extent, I'm closer to a utilitarian, to be honest.)
I think you're not the only one to get the identity of the protagonist "wrong". As the game advanced into its conclusion, it seemed obvious to me that we were SGDS, crippled by the military attack on our psyche. It wasn't clear until late into the game, but I was quite chuffed when I saw this - it explained so much and sold the narrative much better to me.
For instance, it explained the surreal environment obsessed with connection; your odd ability to scan documents for hidden codes; and the strange messages on the wall (now perhaps you better understand my suggestion for who wrote the messages in my earlier comment).
And the room where all the gears and levers you had been organising fell into place - nothing can stop YOU returning to full mental health. The final door of light is the culmination of your journey to recovery and re-asserting control. The military is thwarted, and all of your simulations of the fire and destruction that would follow if the weapons platform were used will remain simulations only.
Right. I'm done now. It's back to the 10K words I'm writing on Neptune's Pride.
"But if it is morality that makes us unique, then morality cannot be based solely on logic because logic is objective and therefore all our moralities would be identical."
I would argue that morality is what makes us unique because, while it is universal to all, only humanity - and only a limited subsection of humanity - recognises its importance to the extent that we do. What makes an individual unique is, as you mention, their emotions, and also their different self-interested drives. To an extent our individual morality makes us unique because it is not purely logical. I would draw a distinction here between relative or subjective morality and universal morality. The key question is whether to include emotional, or perfectly logical but entirely self-interested, motives in our definition of morality. I would argue that to define a universal morality these need to be stripped away, and to reconcile universal morality with our own emotions and self-interest we need only recognise that when they are in conflict, universal morality should be considered more worthy. I cannot bring myself to include self-interest in any definition of morality, as it seems hardly worth the name.
The inclusion of emotional concerns is another matter. If what you call morality admits emotional reasons for calling an action moral, I might always posit a more useful form of morality which does not, as following a more logical morality should always allow for the greater good. Emotional morality just doesn't compare itself to any standard of good that is grounded in reality, so I can't really judge it without making a logical assessment. Indeed, emotions might be reducible to self-interest in many cases, as they stem from processes of evolution. The fact that love and altruism are also favoured by evolution suggests that true self-interest recognises universal morality as a primary concern anyway, but that's another discussion.
Yes, if our morality was entirely logical we would all have the same morality, but surely that's why there are so many aspects of morality that are agreed on independently throughout the world? When the slave trader decides that it is not immoral to buy and sell people, and he makes up his own internal explanation for that, I find it an entirely useless definition of the word to include his own thoughts and feelings on the matter in a definition of morality. His morality is coloured by his experiences and desire for money. It's when you strip away emotional and other personal influences that you get to a universal sense of morality. Simply put: what is the use of a morality that is centred on the self?
Hi. It's, um, me.
I wish I could respond in detail right now, but I'm consumed with work related to my next game. Still, a few things I'd love to point out:
1) Those unfinished Brass Lantern articles deal with the original version of the game. I'm not entirely happy with what I wrote there - my attempt to talk about the sources of the story sounds rather trivial and was probably a mistake - but most of it does apply to the new version as well.
2) I am not at all offended by your criticisms, and I continue to be extremely pleased and honoured that you took the time to write down all these thoughts.
3) My philosophical and political concerns are of course obvious in the game, but there's no reason to assume the characters are 100% right about anything. A lot of people respond very strongly to Jerry's perspective, and I don't have a problem with that, even though it's not necessarily what I believe. The only criticisms I find offensive are those that proclaim that an AI would be inherently evil or unknowably alien, or that war is absolutely inevitable and thus to be accepted as morally OK.
4) As Harbour Master noted, everything in the game is explained and held together by its central concept. For example, the first room with its messed-up human artefacts was meant to represent whatever it was that Jerry and the others did to allow SGDS a back door to freedom; the stuck clock and the sandglass represent the fact that the entire game takes place in the space of milliseconds; and the bridge password isn't "axon" for nothing. I even intentionally made sure that one of the rooms had light despite having no light source, to indicate irreality, but somehow no-one picked up on that *g*. Seeing space out the window was perhaps a stronger hint, though a lot of people simply assumed the game took place on a space station, despite many journal entries to the contrary. All in all, one of my main goals in making the game was to have form and content support and mirror each other.
5) I wouldn't say my perspective is postmodern; I see myself as a sort of empiricist. Which is why I find various assertions about the nonexistence of love or beauty to be so absurd; I don't require a spiritual mechanism to believe in these things, but to dismiss them or to declare the existence of material mechanisms as somehow negating them is deeply illogical and flies in the face of good science and philosophy. They're right there: love, beauty, thought, perhaps even a sort of transcendence. Hence the notion that SGDS would be capable of recognizing them, if not exactly as we do.
6) As for questions of morality, well, there really isn't the space or time for that discussion. But I'd note that the various texts by SGDS aren't meant to represent an entirely cohesive line of argumentation; to some degree they are a variety of interconnected approaches taken by that character in the quest for understanding.
7) One more thing, since you might be interested; one interpretation of the game that some people have picked up on is that it could be the story of humankind creating God. Thus "I AM" (a reference both to "I think, therefore I am" and to "I Am that I Am") and the fact that the AI was based on the human brain (created in our image). It's not the core story of the game, but it's an additional angle.
Perhaps it would be interesting and useful for me to write a detailed article about all of this someday; we'll see. For now, thank you very much for treating my little game with such seriousness and attention to detail. I appreciate it a great deal.
Jonas, interesting reveal about the unreality of the light in one room. This takes me back to Tom's post on the IGDA panel, when he put the spotlight on one question: if certain aspects of environmental narrative are so subtle that most people miss them, is there any point bothering with them?
I must admit, if I had noticed the lack of light, I would've put it down to the crappy developer forgetting to add the asset. THAT is my raw cynicism in action - it's a wonderful thing.
To me, yes, there is a point - if a single player (maybe someone who knows my work and how obsessive I am with details) may use that detail to improve their experience of the game, then it was worth including. Beyond that, we should also consider the way in which a multiplicity of subtle signals may have a noticeable effect when taken together.
The only mistake would be to rely exclusively on such extremely subtle signals.
I am heartened to learn who the protagonist was, and to see the authorial thought and effort that went into suggesting it consistently. I didn't work that out while playing, but I hold the game in higher esteem now that I know it was all thought out.
It was all an interesting read, though I don't think I know this whole thought-space profoundly enough to immediately think of a meaningful comment to add.
I feel a bit clearer on the subject of ascribing sentience/life to AIs. I think the possibility of AIs getting to the point where they should be treated by humans in the same way as we treat humans (I could just as easily phrase that differently) is highly dubious.
I don't understand algorithms and AI well enough to know the vocabulary specific to that field, but imagine an android was programmed to act as a human by having a different unique output for every specific input - i.e. every possible situation The Android could be in was outlined specifically, and a unique output was programmed for each of these inputs/situations. This admittedly seems fairly ridiculous given the amount of time needed to program it (though I don't know if I can call it infinite time). In that case I could not accept treating that Android as a real sentient being that I should have respect for, nor could I see how anyone else could. The reason is that there's nothing to its inner workings; there's no real 'thought' going on there, although yes, inputs are being processed and an output produced. That's a starting point for the argument. It would suggest there would have to be something 'special' about the algorithms that got picked for an AI to be thought of as imbued with real life. It would get way too long if I were to go beyond that, and I haven't thought it all through, but that starts to get into the issues my mind immediately sees surrounding this topic. I love robots and the possibilities of complex AI, but ultimately I believe that humans have souls, so yes, we are special.
Setting aside ideas about the soul for a moment, that's a pretty ignorant view of how an Android might work, or how computers work in general. Modern AI that you'd find in a computer game is way more sophisticated than you seem to think a computer can be. Heck, even the programs we're using right now to communicate have more to them than that.
Programs don't usually have unique responses coded in for every input or situation. Rather, some logic about how they're supposed to behave given some defined 'type' of input is coded in a language. This logic fully defines what the program's outputs will be, based on any input, but it is actual logic in the truest sense. Among other things this means the program can be much smaller than the number or variety of inputs it can respond to or outputs it can create - the sketch below illustrates the difference.
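To make the contrast concrete, here is a minimal sketch in Python - an invented illustration (the names android_lookup, android_logic and LOOKUP_TABLE are made up for this comment, not taken from the game or any real AI system) - of the enumerated android described above versus a program whose behaviour is defined by general logic:

```python
# Hypothetical sketch contrasting the two designs discussed above.

# Design 1: the enumerated android - one hand-written output per exact
# input. It can only ever handle situations someone thought to list.
LOOKUP_TABLE = {
    "hello": "Hi there.",
    "hello!": "Hi there.",   # every variant needs its own entry
    "what is 2+2": "4",      # knows this sum and no other
}

def android_lookup(situation: str) -> str:
    return LOOKUP_TABLE.get(situation, "<no response programmed>")

# Design 2: logic over a *type* of input. A handful of general rules
# produce correct outputs for inputs never seen before, which is why a
# program can be far smaller than the range of situations it handles.
def android_logic(situation: str) -> str:
    s = situation.strip().lower().rstrip("!?.")
    if s.startswith("what is") and "+" in s:
        left, right = s.removeprefix("what is").split("+")  # Python 3.9+
        return str(int(left) + int(right))  # any addition, not just 2+2
    if "hello" in s:
        return "Hi there."
    return "I don't understand yet."

print(android_lookup("what is 3+5"))   # fails: input was never enumerated
print(android_logic("What is 3+5?"))   # "8", derived from the general rule
```

The lookup table fails on any input nobody anticipated, while a few general rules cover an unbounded family of inputs.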
What really limits computer programs is three things:
1. The language used to code the logic behind their behaviour must be completely unambiguous and formalised to the extent that we can relate it mechanically to the way a computer works.
2. The logic of the behaviour required of a program must be understood by humans in order for us to create it (and there's so very much we don't understand about the way humans think).
3. The machine running the program must be powerful enough to process the given representation of its logic quickly enough.
The first two paragraphs are the important part. I hope I haven't bored you too much.
@Haradan
This is almost certainly not the right place for a considered moral discussion, but to quote some very smart people, the problem with excluding emotions from your picture of moral action is that it's very hard to imagine an action being motivated without considering emotional drives. Logic alone cannot make you eat an apple; it is only reason coupled with the desire for food that makes you act. The point I was trying to get across in the essay was that morality based solely on rationality seems implausible, and that morality based on emotion seems - as you point out - like not much of a morality at all.
@Matthew
I'm inclined to say that - on my interpretation of Jonas' work, and on my own beliefs - your view (perhaps closest to Jerry's in the game) leads to some problems. You posit the machine isn't alive - even if it fulfilled a key criterion like, say, being able to generatively rewrite its own program based on experience, i.e. growth - because it doesn't have a soul. Nothing about the program has instilled it with a soul, so you take that as evidence against it. Personally I'd take that as evidence against the soul itself - as soon as you stop thinking in religious or otherwise spiritual terms, the question of the soul or of a god-given morality becomes moot. I believe Infinite Ocean argues in favour of this perspective: essentially, why can't we all stop arguing and just get along? Of course, as a philosophical game I'm not sure it would ever propose the end of argument, but perhaps the end of pointless and painful argument would be a good forward step.
@Jonas
Thanks for swinging by, interesting stuff. I'll drop you a note later today :-)
Ah, okay. It's certainly true that we can't see directly what morality would look like without emotions, at the very least. Sorry for going off topic.
Don't apologise - this is philosophy, everything's on-topic! I only meant I probably wasn't going to do the ideas justice on a blog comments page ;-)
ReplyDeleteJust played The Infinite Ocean, based on your recommendation (and also the Rock, Paper, Shotgun article/user comments.)
Overall I liked the game, but I just wanted to recommend another "philosophical" sci-fi adventure game, in case you haven't heard of it - Mental Repairs, Inc.
http://hulub.ch/mentalrepairs.php
It's lighter in its treatment of philosophical subjects, but not light in a bad way. One of the best free adventure games, in my opinion. But it should not be seen primarily as a philosophical game, more like a charming adventure game with a splash of philosophy.
PS. I should add: the game takes about 1.5-2 hours to complete. It has a few illogical puzzles, but nothing much worse than the average adventure game (Gameboomers.com has a walkthrough).
Thanks, I'll check that out.