Talkback: Facebook, Oculus & the future of VR/3D
OK, this isn’t so much a Talkback as it is a Talk-Along – though at least I’m not jumping on the bandwagon just to say that it’s weird, or for that matter either awful or wonderful, that Facebook has acquired Oculus.
It’s something else I have to say. It’s inspired by Wired’s article entitled “Why Facebook’s $2 Billion Oculus Buy Is a Bet Too Far”, it has to do with perception and technology, and it suddenly became relevant to talk about (again) after the deal: the question, as Wired quite rightly poses it, of whether this Virtual Reality bet Facebook has made is going to pay off.
As I said, it’s about perception, and I’m going to lump 3D in with VR because we’re seeing the same kind of problems with both. The primary problem is that the tech world doesn’t seem to understand how ‘being there’ and ‘experiencing something via a piece of technology’ are fundamentally different. A screen isn’t a hole in the world through which we look into another world, and it would suck if it were.
What the viewer would experience if the TV screen could actually replicate the notion of “looking through a hole in the world”, i.e. true 3D replication – obviously, nobody wants this
Take FB’s courtside-basketball VR example. If you’re actually at the ballgame, what you get is the whole mood: the immersion of being in a sea of excited people, smelling the corn dogs and shellac, feeling the thumping of a thousand other people’s feet on the rafters. The price you pay for that is the risk of looking the other way, or being in the bathroom, when something awesome happens.
Being present via VR headset, even if we pretend the experience is complete, with smell, tactile feedback/rumble etc. (which it isn’t, and won’t be anytime soon – VR scent alone has been attempted for years, but it’s difficult as hell since the sense of smell is a lot harder to fool than vision, and it also seems silly to investors), still means you’re not physically there. Jumping to your feet, wailing and fist-pumping at a score by your team is going to be utterly not the same, and you certainly won’t be high-fiving strangers or throwing beer or foam fingers – not unless you want to garner the ire of your mom, roommate or significant other.
But you still risk having your head turned the other way when something cool happens – or, slightly more broadly, you’re getting all the disadvantages of being there with none of the benefits.
It’s the same kind of problem we’re seeing with 3D storytelling – sure, maybe some day we’ll hit on a storytelling format in which this tech is indispensable, but so far all it does, storytelling-wise, is mess with the subjectivity of the camera.
What’s that, you ask? Well, in most scenes of most movies, the actual placement of the camera in space is not only not part of the story – it would seriously interfere with the story if it were. The “subjective camera as an in-story character point of view” is so rare it’s considered a trademark of the few directors who use it.
Not to mention that even those directors obviously won’t tell an entire story that way (or will do it maybe once in a whole career), which means that the camera can, and will, jump around between being scene-independent, omnisciently present, character-present, and back. With 3D, the camera will always be physically in the scene, even when it shouldn’t be.
In fact, one of the major selling points of 3D movies is along the lines of “it’s like being there” – but there are so many scenes you’re simply supposed to experience god-like (or “omniscient narrator”-like, if you’re being mundane), where “being there” has no bearing on anything and can, as I said, actually ruin everything. Again, we seem to mostly be trading away benefits and getting only disadvantages in return.
A 3D camera. Or something that fell off a Transformer
Also common to both is the disregard for the incredibly fine granularity of the senses. With VR headsets it’s both speed and sensitivity; even a fraction of a second of lag shatters the immersion completely, because if our real view of the world ever lagged even the slightest bit, we’d immediately be heading for the doctor (or be very, very drunk).
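To get a feel for why even tiny lag matters, here’s a back-of-the-envelope sketch. The numbers are my own illustrative assumptions (a brisk head turn of roughly 200 degrees per second, and a couple of plausible total motion-to-photon delays), not measurements from any particular headset:

```python
# Back-of-the-envelope: how far the rendered view trails your actual
# head orientation during a turn, for a given motion-to-photon latency.
# Head speed and latency values below are illustrative assumptions.

def angular_error_deg(head_speed_deg_s: float, latency_s: float) -> float:
    """Degrees the image lags behind the real head orientation."""
    return head_speed_deg_s * latency_s

# A brisk head turn (~200 deg/s) with 20 ms of total latency:
print(angular_error_deg(200, 0.020))  # 4.0 degrees of error
# Even at 5 ms, the same turn still leaves a full degree of error:
print(angular_error_deg(200, 0.005))  # 1.0 degree
```

Several degrees of the world sliding sideways every time you move your head is exactly the kind of mismatch the vestibular system never tolerates in real life – hence the shattered immersion (and the nausea).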
With 3D it’s a few other important details of human physiology:
– like cameras having pretty much their entire view in focus while your eyes have only a small area, smaller than your outstretched palm, in focus at any given time, or the conflict between real-world input from your peripheral vision and fake-world input from the screen. Also…:
The faux-3D scene is not seen through your two eyes; it’s seen by your two eyes looking at a flat picture, manipulated via two other “eyes” (the 3D camera rig). The focal point, depth of field, parallax and all the other things the brain and eyes have spent millions of years becoming expert on get profoundly messed with, in ways that are too subtle to address (not least because they’re different for each viewer, meaning a 3D TV or movie screen addressing them would have to show a different view to each individual looking at it, like in “Mission Impossible: Ghost Protocol”), but way too noticeable, even uncomfortable, to ignore.
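The “different for each viewer” point follows directly from textbook stereoscopy geometry. A sketch of the standard relation (my parameter values – screen distance, depths, interpupillary distances – are illustrative, not taken from the article): for eyes separated by distance e looking at a screen D metres away, making a point appear at depth Z behind the viewer requires an on-screen left/right separation of d = e · (1 − D/Z).

```python
# Standard stereoscopy geometry: the on-screen parallax needed to make
# a point appear at a given depth depends on the viewer's eye
# separation (IPD) and seat distance. All numbers are illustrative.

def screen_parallax_m(ipd_m: float, screen_dist_m: float, depth_m: float) -> float:
    """On-screen separation of the left/right images for a point
    perceived at depth_m from the viewer (depth_m >= screen_dist_m)."""
    return ipd_m * (1 - screen_dist_m / depth_m)

# Same intended depth (20 m), same seat (10 m from the screen), but an
# adult (IPD ~65 mm) and a child (IPD ~55 mm) need different parallax:
print(screen_parallax_m(0.065, 10.0, 20.0))  # ~0.0325 m for the adult
print(screen_parallax_m(0.055, 10.0, 20.0))  # ~0.0275 m for the child
```

Since a movie bakes one fixed parallax into the print, every viewer with a different IPD or seat distance gets a slightly different, slightly wrong depth – which is the per-viewer problem mentioned above.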
Now, am I saying “this is impossible, and also against nature, dag-nammit!” while chewing tobacco and shaking my fist like a proper luddite? No – I bet getting used to looking at flat pictures, and then at moving flat pictures, also took some doing (there are people in the world who cannot subjectively conceptualize a scene from a 2D picture, even though we must assume they can see it perfectly well in a strict physical sense).
What I’m trying to say is that,
1. VR and 3D tend to not work the way that our current uses (or proposed uses) of them assume they do,
2. the processes they’re supposed to (almost) replicate are simply much, much more complex and subtle than we give them credit for, which amounts to
3. failing to properly understand that users are human beings, and therefore highly complex, can be very expensive.
So yeah, like Wired I believe that Facebook is basically taking a 2 billion dollar stab in the dark here – but hey, they can afford it, and if I’m wrong you can come back in five years and laugh at me.
Better make that ten years. As I said, this shit’s harder than it looks.