Experienced Points: Why the Oculus Rift is a Big Deal

By Shamus Posted Tuesday Apr 8, 2014

Filed under: Column 114 comments

I should note that a great deal of the information in today’s column about the Oculus Rift came from the Dev Days presentation “What VR Could, Should, and Almost Certainly Will Be within Two Years” by Michael Abrash. That was a talk Abrash gave back in January, before he left Valve to join the Oculus team. I highly recommend watching it if you’re interested in VR.

I’m really excited about VR. I’ve actually downloaded the Oculus SDK and read the docs. Both look pretty solid. I can see Carmack’s hand in it. The API has a tight and intuitive interface that makes it easy to set up a scene and track the camera. If you’re going to develop for VR, using the SDK won’t be a challenge. (Rendering twice as many frames at 60fps without dropping frames? THAT’S the stuff that’s going to keep you up late at night.)
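To make that last worry concrete, here’s the back-of-the-envelope arithmetic (just math, nothing from the SDK docs): at 60fps the renderer gets about 16.7ms per displayed frame, and a naive stereo setup that shares no work between the eyes has to fit each eye’s render into half of that.

```python
# Back-of-the-envelope VR frame budget (illustrative arithmetic only,
# not taken from the Oculus SDK documentation).
REFRESH_HZ = 60   # target refresh rate
EYES = 2          # one render per eye

frame_budget_ms = 1000.0 / REFRESH_HZ       # time available per displayed frame
per_eye_budget_ms = frame_budget_ms / EYES  # naive split if the eyes share no work

print(f"{frame_budget_ms:.2f} ms per frame, {per_eye_budget_ms:.2f} ms per eye")
```

In practice engines share most scene traversal and setup between the two eyes, so the real per-eye cost sits somewhere between these two numbers.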

I can’t test it (I don’t have a Rift myself) but it makes for an interesting read. What’s clear is that VR-based games are going to have a huge number of new human interface concerns to worry about. Extensive reading and a complex HUD is bad. Lots of abrupt, rapid motion is bad. Full screen distortions are bad. Grabbing the camera to show the player something is REALLY bad. VR is not something we can just hack into an existing game. To really take advantage of VR we need to build a game around it.


From The Archives:

114 thoughts on “Experienced Points: Why the Oculus Rift is a Big Deal”

  1. Scerro says:

    Typically you link your article somewhere in the writing of this post. Today you didn’t. That’s how I normally get to your escapist article.

    Edit: It got fixed.

    Anyways, at first I was really concerned. After a little bit of thinking about it, I wasn’t. Sure, facebook acquires stuff for more money than you think they should pay for things, but they seem fairly hands-off.

    1. lucky7 says:

      He did. Start at “today’s column”.

      1. Shamus says:

        Actually, I forgot to link it when I first posted it, realized my mistake, and then added it a couple of minutes later.

        So, my bad. Fixed now.

        1. Steve C says:

          Hey, look: an early-adopter problem of First Post. That’s an amusing coincidence given the subject matter.

          1. Mistwraithe says:

            Yeah, to be honest I’m not sure Scerro got enough of a first mover advantage out of it to be worth the extra trouble.

        2. The Rocketeer says:

          You playing the Cat Game? What’s your record? :p

          1. At risk of, well, everything… Cat Game?

  2. lucky7 says:

    So, the Oculus is fairly stable. Now the only question is this: who’ll make games for it? VR Technology’s gotta be pretty hard to design for, and porting from Console to VR (to the unenlightened) sounds extremely troublesome. Who’ll take the risks?

    1. guy says:

      I’m betting mostly John Carmack and company. People who’ve already made enough money and then some and are in it for the creative challenge. Once the basic questions about how you even begin to do it are settled, and people have bought the system for the first wave, the big companies and the more typical indies will begin to move in.

      1. lucky7 says:

        Steam Machine, Oculus…lots of potential for innovation. I like it!

        I’m just worried about something along the line of “Oculus Sports Adventures” being the only thing made and no one else trying to make games for it.

      2. Ben says:

        I think you are right: initially it will be people who have little to nothing to lose and a lot to gain. Oculus’ first-party devs, Carmack et al., of course. And definitely companies who can throw money at a chance and not be hurt if it’s wasted.

        The other side that will be involved are indies: small teams with a tiny financial footprint (at least compared to large companies). If VR takes off, being first out of the gate will mean a massive return on investment. We may see some new large VR software firms grow out of the launch, and I also expect we’ll see a number of these dev companies purchased by large established groups who will use the indies’ knowledge to kickstart their own efforts.

        The only group I can see not being willing to become part of VR are those who feel they have too much to lose. Game publishers like Ubisoft and EA won’t touch it until there is a proven ROI in the range they are used to. I can also see medium-size firms being leery; a single big failure could crush a company (though I’m sure we’ll hear a story about how someone bet the farm and it paid off).

        It’s a new format, I can’t help but be excited to see what comes.

    2. Ranneko says:

      There are already a few games with experimental rift support. Team Fortress 2, Half-Life 2, Surgeon Simulator and AaaaAAaaa For the Awesome are all ones I have tried. Obviously they are not made only for the Rift but they are interesting experiences.

      One of the things you realise when playing TF2 is that the environments are tall. Seriously, in a lot of FPSes doors tend to be twice the height of a human, which ceases to look quite right when you are in there.

      AaaaAaa was actually the game that worked best for me. I find it hard to describe why, other than that it didn’t make me nauseous at all and the movement all felt right. I wish I had had a fan handy at the time so I could have had it below me blowing air into my face.

      There are also a bunch of indie titles being made specifically for the rift, I see them on /r/oculus every so often. The virtual movie theatre was the thing that convinced me to buy a share in a devkit with a friend so that we could continue to play around with it.

    3. Well, keep in mind that early games won’t need to be as feature-rich as existing games. You’ll get a lot of awesome just out of the fact that you’re using VR. Personally, I don’t anticipate using VR to play traditional games. (I’m not a fan of first-person viewpoint type games to begin with, and anything that’s not first-person view point is pretty much a non-starter.)

      My dad actually did his dissertation on presence in virtual reality back in 1996, so I got to play around with the equipment that existed back then. It was interesting, but we rapidly discovered that using VR equipment while sitting down at a desk pretty well ruins the best parts of the experience. The conceit of many first-person games where you can’t look down and see your own body also doesn’t work very well. It is exceedingly difficult for the user to match up head movements and controller movements. Most people wind up holding their head mostly still and moving predominately with the controller or vice versa.

      I really don’t expect VR to have any kind of major or lasting impact. It’s only really viable for a very small segment of games that are already staggeringly expensive to produce (big exploration-oriented first-person super-high-end-graphics games). This is a purely spectacle-oriented feature.

  3. I had the opportunity to try the Oculus Rift two years ago when I was working at Stoic.

    The demo was using an Unreal Engine first-person shooter of some stripe (that I don’t remember) in free-camera mode.

    Moving through the game’s interior industrial environment felt like a “flying” dream. It was amazingly immersive and fun.

    Until I accidentally got myself stuck in a floor!

    I was clipping through the floor at what appeared to be about waist-level. My brain immediately short-circuited from the synesthesia and I became nauseous.

    But it was awesome up until then!

    1. Steve C says:

      That’s a great point I never thought about before: people’s brains can’t process bugs. That means there will be an even higher quality-control bar to reach for the Oculus to succeed. That will be brutal on dev time and money.

      1. ET says:

        I think it’s also possible that somebody will make a game which is playable by people who don’t get nauseous easily, and make money off of that small market for a while.
        I know that most people get nauseous if you do weird stuff, like strap crappy VR headsets onto them or screw up the 3D movie projector, and there’s a smaller portion of people who are even more sensitive, but like most things, I bet it’s a bell curve.
        So, some sizable portion of people will be able to tolerate the early buggy, crappy software/hardware, and hopefully be big enough to fund the next generation of this stuff.

        I know that back when I played FPSs a lot in high school, I was the only one of my friends who never felt even queasy in fast elevators, roller coasters, etc.
        Now that I’m older my stomach is a bit “weaker”, so to speak, so I personally will probably need to wait for 2nd- or 3rd-gen hardware/games.

      2. Eruanno says:

        Oh man, what happens if you suddenly start dropping frames and the image freezes when you’re using VR? Won’t that be incredibly confusing/nauseating?

    2. MelTorefas says:

      I love neuroscience and neurobiology, and find the idea of the brain’s reactions to bugs *fascinating*. I hadn’t ever considered it until I read your post (like the poster above me). Now I am imagining all sorts of things and want to do some research…

      1. I’m a pretty tough gal, and I’ve never been susceptible to motion sickness, so I was surprised when it happened.

        Visually, it appeared as if I was “bisected”, so for a few seconds my brain actually conjured the feeling of being physically squeezed in the middle — and that’s when I became nauseous and had to remove the headset.

  4. guy says:

    The graphics challenge might not be as intimidating as it sounds. You’re rendering twice as many frames at 60 FPS, but they’re of the same scene. There are already methods for doing most rendering independent of the view, so I expect the default VR setup will be configured so rendering in two views is only mildly more expensive than rendering one view. The big problem is going to be advanced lighting; raycasting is view dependent.
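The two views are closely related, which is why so much per-frame work can be shared. A minimal sketch of the relationship (a hypothetical helper, not any engine’s real API): both eye cameras are derived from a single head pose by offsetting half the interpupillary distance along the head’s right vector.

```python
# Sketch: derive both eye positions from one head pose (hypothetical helper,
# not taken from any real engine or the Oculus SDK).
IPD = 0.064  # a typical interpupillary distance in metres (an assumed value)

def eye_positions(head_pos, right_vec=(1.0, 0.0, 0.0), ipd=IPD):
    """Offset the head position by +/- half the IPD along the head's right vector."""
    half = ipd / 2.0
    left_eye = tuple(h - half * r for h, r in zip(head_pos, right_vec))
    right_eye = tuple(h + half * r for h, r in zip(head_pos, right_vec))
    return left_eye, right_eye

left_eye, right_eye = eye_positions((0.0, 1.7, 0.0))
print(left_eye, right_eye)  # x coordinates differ by exactly one IPD
```

Everything that doesn’t depend on the exact camera position (animation, physics, shadow-map generation from the lights’ point of view) only has to run once for both eyes.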

    Unfortunately, no such luck for getting it to 60 FPS. You could try only updating some things every other frame, but I don’t know what you can do that for without causing problems and GPUs might not be set up for it.

    1. Zorn says:

      Maybe I’m a naive fool, but this sounds like a problem which will be perfectly solved by today’s version of Moore’s Law – we can’t make serial computation faster, but we can add more and more parallelism more and more cheaply.

      1. Mephane says:

        Which is true of 3D rendering in general. A significant portion of today’s advances in GPU performance comes from adding more and more cores to calculate stuff in parallel. I guess one day we will hit a performance ceiling when every single pixel effectively has its own processor. Albeit a very high ceiling: high enough for real-time raytracing, I suppose.

    2. Forget 60Hz; the DK2 goes up to 75Hz (and the expected consumer model might go even higher). Most existing stereoscopic rendering (barring a few ‘made for old consoles’ hacks) does render two scenes, so you need about twice the GPU resources. You can do other things, but it turns out that generating two point clouds (which end up as a screen buffer when you throw away the depth info) from the slightly different views isn’t a terrible plan for getting two renders without breaking anything. And not breaking anything is important: lots goes wrong when, say, the shadows your left eye sees come from lights in a slightly different place than the ones your right eye sees, because you didn’t take the offset into account properly with screen-space calculated shadows. Your brain can’t meld the two images correctly, starts screaming at you that something is broken, and your eyes flicker between which image it thinks is the real world and which is the buggy one. nVidia have a lot of experience with this from their stereoscopic work (starting in 1999 with the ELSA Revelators, now branded as 3D Vision).

      The Rift docs do point out that beyond 20m everything is about parallel-projected (there isn’t a meaningful difference in what each eye can see from the slightly different angles), and so there may well be games with a large world where the right idea is to render the ‘skybox’ world first, place it into both eyes’ frames, and then paint on the per-eye render of the near objects, paying close attention to making the meld point perfect. But that’s not the typical method games use for stereoscopic rendering today (games developed explicitly with stereoscopic concerns in mind: a few a year; games nVidia hacks with an offset camera that work basically perfectly: up to half of releases; and for many of the games that don’t work well, HelixMod is where the community rewrites the shaders that break with the offset second eye position).

      One of the nasty things about the distortion from the lenses is that it squashes the image from the screen, and this is both great and terrible. On the positive side, the DK1 may only be 1280 pixels wide for both eyes, but at the centre of the lens the effective resolution means you’re looking at needing to render a framebuffer 1820 wide (and scale up the vertical accordingly) to get a 1:1 pixel mapping after the distortion for the lens and the reverse distortion by the lenses (page 13 of that linked document). So you may be thinking of the DK1 as only a 720p render (split down the middle, half for each eye), but if you care about pixel accuracy then you actually want a significantly bigger render than that; care about sub-pixel accuracy (anti-aliasing concerns) and you scale up even more / add MSAA overhead.

      So the DK1 is asking for a render of two images (with a 2ms distort pass to fish-eye them for the lens to reverse) of 910×1024 into a 1820×1024 buffer (more work than a 1280×720 screen would lead you to expect, and it does mean you’re effectively wasting resources on making the AA better the nearer the image gets to the peripheral vision, just where it matters less) at 60Hz. The DK2, with that 1920×1080 screen, actually wants (assuming similar lens distortion) two 1365×1536 renders into a 2730×1536 buffer at 75Hz. How does 4.2 megapixels per frame sound to you? I can certainly see why this technology is talked about as a driver of GPU perf for the next decade (with far more practical changes to our experience of virtual environments than moving to 4K screens), and why the PS4 may only be able to render something with the scene complexity of a PS3 title.
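The arithmetic in that last paragraph is easy to check (figures copied from the comment above, not taken from Oculus’ specs directly):

```python
# Checking the DK2 pixel-budget figures quoted above.
buffer_w, buffer_h = 2730, 1536   # two 1365x1536 eye renders side by side
refresh_hz = 75

pixels_per_frame = buffer_w * buffer_h
pixels_per_second = pixels_per_frame * refresh_hz

print(f"{pixels_per_frame / 1e6:.1f} MP per frame")           # 4.2 MP
print(f"{pixels_per_second / 1e6:.0f} MP shaded per second")  # ~314 MP/s
```

Roughly 314 megapixels shaded every second, before any MSAA overhead, which is why VR is such a GPU workout.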

      1. The Rocketeer says:

        I wanna thank Shivoa: every time he comments, I learn something really cool.

        1. I am sure this one is interesting, but I usually skip his posts since I have learnt that I can’t really follow what he’s saying. I don’t want to sound like a tool, but could you possibly translate what he said into English? What is DK1? And DK2 for that matter? What is a point cloud? How about MSAA? Framebuffer?

          1. The Oculus Rift DK1 is the first dev kit, the one that came out of the Kickstarter. The DK2 is Dev Kit 2, on pre-order for this Summer; it is an evolution of the version first shown off at CES at the start of the year under the codename ‘Crystal Cove’.

            A point cloud is just a set of points in space. It’s easier to think of a screen render with the depth buffer (used for making sure things don’t draw over stuff that’s in front of them when rendering triangles) as a cloud of points when talking about stereoscopic stuff and lens distortions, because you’re sometimes resampling/reprojecting from a frame, and so that isn’t always just distorting the final image you send to the screen but rather working from the 3D version that includes the depth information. When people talk about SSAO (screen-space ambient occlusion, the darkening of stuff near occluded areas of geometry) they’re using the point cloud to estimate how shadowed a pixel on screen will be based on the other pixels, using their depth information to do so. It’s not core to the comment above, just a diversion using some of the lingo. When we do talk about the image as sent to the screen, that’s the frame buffer, although you can also use the term to mean all the data (so it’s a point cloud when you include the depth buffer), so it’s not the best terminology.

            MSAA stands for multisample anti-aliasing. Rather than painting a triangle to the buffer once per pixel, the edge can be drawn several times for multiple sample points in the pixel. Rather than deciding whether to draw the triangle by asking if it overlaps the middle of a pixel (painting the pixel if so and ignoring it if not), we can divide the pixel into sub-samples and do the test for each one. That means the buffer, when resolved down to the framebuffer for the screen, can blend the several sub-samples, so you get anti-aliased edges for your polygons without having to render the entire scene with sub-samples for every pixel. You can render 1080p without needing to calculate an entire 4K frame and then resolve down to 1080p output; at every point where a triangle edge exists you do calculate 4 different points, so you get the line-edge quality of a 4K render scaled down to 1080p without the rendering cost (pixels that are fully covered by one triangle are only painted once).
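The multi-sample test described above can be sketched in a few lines (a toy half-plane standing in for a triangle edge; the sample positions here are made up, as real GPUs use vendor-specific patterns):

```python
# Toy 4x MSAA: test four sub-pixel sample points against an edge and
# resolve by averaging coverage. Sample positions are illustrative only;
# real GPUs use vendor-specific sample patterns.
SAMPLES = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

def coverage(inside, samples=SAMPLES):
    """Fraction of sub-samples inside the primitive: the blend weight at resolve."""
    return sum(1 for p in samples if inside(p)) / len(samples)

edge = lambda p: p[0] < 0.5  # a vertical triangle edge splitting the pixel in half
print(coverage(edge))        # 0.5: the resolved pixel blends half the triangle's colour
```

Fully covered pixels resolve to 1.0 and are shaded once as usual; only edge pixels end up as a blend of the sub-sample results.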

            The post is possibly only interesting for people who already know the lingo, which is why I generally don’t try to expand every other word with the description of it or add in links to definitions.

            1. Abnaxis says:

              Incidentally, the first thing I do after starting a game where I’m on the edge of the hardware specs is turn down the resolution and turn up the AA. I greatly prefer fewer sampled pixels over more jagged pixels.

            2. Much obliged! And I do think this stuff is interesting, but I am too far removed from the field to really understand your posts normally (even if we discount the jargon). For instance, I thought I understood anti-aliasing, but clearly I really don’t… Still, thanks for taking the time to clarify, even if (as you pretty much say) I am not really the intended audience.

      2. Abnaxis says:

        Since you seem knowledgeable about this (I’ve done a bit with stereoscopic rendering, though not near enough to be an expert and nothing with VR), can you answer something for me? Namely, why use software for the fish-eye distortion–why not build a screen that automatically distorts its image?

        If you wanted to be *really* sexy with it, I would think you could manufacture a screen that varies pixel density instead of spacing pixels on a grid, but even just making a set of polymer lenses with the right focal length seems like a better idea than pushing all that calculation onto the render processors.

        1. The Rift designs all use off-the-shelf components. The screens are designed for phones. It is also a lot harder to make screens that are anything other than fixed-grid, rectangular-dimension affairs. We’re only just seeing smart-watches with round displays, and I’ve yet to see anyone building screens with different-sized pixels across the surface. Even with FB money, Oculus would probably have a hard time getting that R&D budget and production line up and running, and the Rift is all about realising that GPU perf is cheap now: it’s practical to do the warp in software and burn some GPU transistors on it. I’m not sure that equation will ever change, even if variable pixel-size screen production does get researched.
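For anyone wondering what “the warp in software” looks like, it’s typically a radial polynomial applied per pixel. A sketch with made-up coefficients (the real Rift distortion constants differ and come from the SDK):

```python
# Radial barrel-distortion sketch. K1/K2 are illustrative values only,
# not Oculus' actual distortion coefficients.
K1, K2 = 0.22, 0.24

def distort(x, y, k1=K1, k2=K2):
    """Push a normalised screen point outward as a polynomial in r^2."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

print(distort(0.0, 0.0))  # the lens centre is unmoved: (0.0, 0.0)
```

Points near the centre barely move while points near the edge get pushed outward, which is exactly the pre-warp the physical lens then undoes.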

          1. Abnaxis says:

            I thought about it for a few minutes and answered my own question–the distortion will have to depend on the actual distance of the vertex from the viewer thanks to focal length. A 1×1 box 3 feet away will distort differently than a 2×2 box 6 feet away, but they would look essentially the same on a 2-D render with a fish-eye lens in front of it. So, a static piece of hardware won’t do it right without software to back it up, though I still think it could help with pixel-density issues.

            It’s really exciting to look at these things and think about what could be possible with dedicated manufacturing facilities and engineering refinement.

  5. Steve C says:

    So Shamus I know this stuff pushes your happy buttons. What’s the high water mark? Procedurally generated virtual reality? What makes you go gawhhh?

    1. Bryan says:

      Dunno about Shamus for sure, but I suspect this was one of his happy buttons, and it’s definitely one of mine:

      > Grabbing the camera to show the player something is REALLY bad.

      OH MY GOODNESS YES. Stop with the camera-grabbing already! Having a setup where that’s going to make people sick seems like a *really* great way to kill it.

      Unfortunately, with this really-nearsighted pair of eyes (I can focus about eight inches without my glasses), I’m thinking it’ll be a long time before I’ll fit into one of these things. :-/

      1. BeardedDork says:

        As long as you can focus at the depth of the screen you should do as well as anybody else; the input is right in front of your eyes regardless of where it appears to be. The people who will be in trouble are the unfortunate farsighted folks. As an example: I am also nearsighted, and occasionally when I lose my glasses I can use the camera on my phone to find them. I hold the camera close to my face where I can focus on it and use it to see fine. The Oculus Rift should be very much the same.

        1. Ranneko says:

          I had never actually thought to try that trick using a phone’s camera. That is awesome! Thanks.

      2. perryizgr8 says:

        contact lenses would be helpful, methinks.

        1. Asimech says:

          There are people who need prescriptions so strong that you can’t make them into contact lenses. (Not applicable in this specific case, as BeardedDork mentioned, but it matters for people who are farsighted.)

          1. Ranneko says:

            I don’t know if this is terribly practical, but could the prescription be worked into the lenses inside the goggles for those people to use?

            It is also possible to wear your glasses and use the Crystal Cove version of the Rift, so that may end-run around the whole problem: people who need glasses can still wear them when using VR.

            Finally, I would like to point out that mainstream appeal and accessibility for people with disabilities (near/farsightedness may not be enough to get a pension or other benefits, but it still counts) are sadly not that linked. There are people I follow on Twitter who cannot use the gamepads for most of the current-generation consoles because they have disabilities that affect hand coordination and size; that locks them out of most game consoles until someone hacks up an alternate controller.

            1. Asimech says:

              Someone else commented here (can’t find them for some reason) that, searching for this, they found out the Oculus starts having problems with strong lenses. Something about the image getting too warped?

              Anyway, in the same comment line it was mentioned that the Oculus does, in fact, have integrated lenses, because people without nearsightedness already require reading glasses with it for comfort. So with farsightedness you’d need a stronger lens than usual.

              On market penetration:

              Don’t the accessibility problems pile up with VR, though? You’ll need a “no sight” controller you can use and a VR headset that works with your vision, so anyone who has problems in one area may not be able to use it at all. Of course, if there are accessibility gamepads already in circulation and those work well with the Oculus, then it won’t pile up.

              But that’s not taking into consideration that the Oculus’ popularity is also hindered by limited content and the fact that just importing games as-is is a bad idea. VR also isn’t suited to all types of content, which lowers the potential pool. And since niche audiences matter more the more niche the product itself is, this may turn into a problem that matters for the Oculus.

              There’s also the topic of price, but that’s a long, arduous topic that’s further complicated by a lack of official price.

              That is if we want to talk about Oculus Rift’s, or VR’s, potential to become mainstream, of course. Most of the discussion here seems to be more about “can I even use it/am I being accommodated?”

              Post Script: There’s also Hype Backlash and Hype Aversion, which don’t matter quite as much with established technology.

              Edit: Clarification, dropped words/changed sentence structure.

  6. Sorites says:

    I’m not sold on VR just because it’s been tried so many times and has a 100% failure rate. Eventually, the presumption has to be that it’s not a good idea.

    1. Scerro says:

      It failed because the technology wasn’t mature. It’s far more mature this time around. We can get real-time near photo-realism, which wasn’t possible 15 years ago.

    2. KremlinLaptop says:

      While these Wright fellows have certainly built a fine and impressive flying machine with their latest Wright Flyer, a century worth of failed attempts at flight leads me to presume that manned flight is simply not a good idea.

      And don’t even get me started on that Benz fellow and his preposterous horseless carriage.

      1. Asimech says:

        The past is not proof of the future. Even if a stance was wrong in hindsight, that doesn’t mean it was an illogical belief at the time.

        I can understand resentment when the people acting like that are the ones in power and who therefore stunt technological improvement, but I seriously doubt Sorites is holding the tech industry’s reins. And it is not the responsibility of the consumer to drive technology improvements.

    3. Shamus says:

      Did you read the article? Because I dedicated a significant portion of it to this exact point. It’s fine if you disagree, but if you’re not going to respond to the article then I’m not sure why you bothered posting.

      1. Sabrdance (MatthewH) says:

        I’m not so concerned about the technology part of it, but I suspect this will be like 3D in theaters. The technology arrived (brighter screens) and there was a rush of authentic 3D movies, then a second rush of knock-off 3D movies, and now it’s a niche. Some movies get the 3D treatment, but even those movies have 2D releases, and a lot of movies don’t get 3D at all. Ultimately, 3D for movies was a gimmick. A useful gimmick for certain films, but still a gimmick.

        I’d expect VR to fill the same spot: a peripheral. However, as a fan of flight simulators, I have no shortage of peripherals, and I’d love to play X-Wing with 360-degree look-down-shoot-down controls. I could also see a use in training pilots or racecar drivers without needing the massive simulators. The price would have to come down for me, though.

        1. ehlijen says:

          There are a few problems with the comparison.

          -3D movies are not as amazing a leap as the Rift would be. It’d be the difference from one graphics card generation to the next, or the addition of surround sound. You’re still in a big dark room watching a screen on the wall. It’s prettier, but still just a screen.
          -3D movies came to a beleaguered medium. Cinema was losing the fight against home entertainment systems and piracy. 3D is a boost that’s extending its life, not a revolution.
          -The presence of 2D-only movies doesn’t make a difference. Some movies don’t gain from 3D, just as some games won’t be suited to the Rift. Even if all first-person games go Rift-only (and 3rd-person ones disappear, for argument’s sake), that still leaves all the god-perspective games. I don’t think Civ, SimCity or StarCraft would suit the Rift at all.

          When 3D acceleration cards came onto the scene, PC gaming got a big boost. And yet, it took a long time for games that didn’t need it to go away (and they’re actually coming back to a degree in the form of indies).

          The Rift may not be here to stay; there are still dangers. Maybe too much of the population just isn’t tolerant enough of the nausea issues? Maybe the first wave of games will make mistakes bad enough to turn customers away? Maybe the logistics of basically blinding yourself for extended periods of time will prove too dangerous for it to become widespread?
          But some ‘gimmicks’ have proved fun enough to stick around. I think the Rift has a chance.

          1. Peter H. Coffin says:

            3D movies are also usually made in (IMNSHO) the absolute crappiest way possible.

            They often aim for spectacle. And not the IMAX-style “flying through canyons” way, but rather “ice-axe to the forehead”. It’s cheap sensation that expects an audience to appreciate only the biggest possible impact and chases that instead of expecting that they’ll respond to nuance. Audiences tend to live up to what you aim at with them.

            They don’t understand that 3D is different. It’s as different as photography is from painting. Flat movies use depth of field to help focus attention (“bokeh”). 3D movies instead should be using window-setting. Related to that: stuff going off the edge of the frame is pretty minor in flat work, but it’s CRITICALLY important to pay attention to in 3D, because what the brain processes as depth is the difference between two images and their relative positions, especially points of non-overlap (“rivalry”). If that non-overlap is huge chunks of something at the edge of the frame, THAT is what the viewer’s brain pays attention to. And it’s probably not what the director wanted.

            Edit: Another difference is how you mess with scale. Flat stuff makes things look big by pointing up at them, or down at them, or blurring things around the edges (“tilt-shift”). 3D messes with scale by changing the baseline between the images outside of a set proportion. You make someone feel small by setting the “inter-axial” to something smaller than usual AND putting the camera lower. Just putting it lower makes it feel like the viewer is sitting in a hole.

            1. False Prophet says:

              I have to assume that even if filmmakers wanted to take full advantage of 3D filmmaking, there are practical reasons they don’t. Home video and other secondary markets are still extremely important to film, and 3D TVs have not had extensive adoption. Making a gorgeous 3D film that doesn’t translate well to 2D could cripple those secondary sales.

              1. Peter H. Coffin says:

                That’s not wrong.

                If anything, the problem is bigger than it seems because the secondary market’s reached the point of size where it CAN BE (if not yet IS) the primary market. And the 3D part won’t grow until there’s more material that actually isn’t crap.

                MOST of the things that make 3D work well, though, are fairly transparent, or respond reasonably well to just applying only limited flat “tricks” (like bokeh) instead of going all in with them. It means some compromise in what one is doing artistically that way, but we’re coming off two decades of shooting in Panavisions and knowing that the VHS/DVD market’s going to sell mostly to people that still only owned 4:3 Trinitrons, so your black-to-white contrast was about 1:60 and they were gonna only be seeing half of the frame you were shooting anyway. The real trick is teaching cinematographers and directors that they don’t know everything about this new medium and half their instincts are wrong for it.

            2. Mephane says:

              I am not keen on 3D movies for all these very reasons. I don’t see it as inherently a gimmick technology, but it is mostly being used by filmmakers as if it were. The one movie I watched in 3D on the big screen was Gravity, and while it’s a good film (disregarding the gross scientific mistakes) I often had the feeling it was 3D for 3D’s sake, not because it actually enhances the experience that much. I remember one scene where stuff was floating about and of course they had to have a drop of water drift closely in front of the camera; I think they even changed the focus to render the background less sharply and draw attention to the drop. It just screamed “look at me, I can do 3D” to me.

              And besides that, what feels especially cheap about 3D movies is when you can actually see how they deliberately split the scene into merely a handful of layers. Like front, middle, back: clearly separated, with no continuous depth within or between them, as if the characters in the foreground were cardboard cutouts placed in front of a landscape.

              1. Peter H. Coffin says:

                Yup, the drop of water thing was the bokeh I mentioned. In 3D it wouldn’t have been (as) necessary to defocus the background. Merely bringing the drop right up to the window, or brushing it gently into negative space, would have been enough…

              2. Avatar was great in 3D. It was used well, and turned the screen into a portal into the movie world. The 3D technology and the movie were designed in tandem, so it makes sense. I haven’t seen another live-action film where the 3D was worth anything special, as it’s always just tacked on for a few cheap pop-out effects.

                Computer animated films can also look good in 3D, as you already have the 3D data sitting there for free anyway.

        2. Kamica says:

          Have you been able to try the Oculus Rift?
          In case you haven’t, it’s really hard to explain how it feels. The best explanation is basically: you feel like you are there, but with your senses of touch and smell turned off. It is a vastly more amazing experience than just playing a game. You can actually estimate the scale of things; a dark corridor goes from “Oh, there’s a dark corridor” to “Damn, a dark corridor. Did I just see something in it?!”

          From the time I used the Oculus Rift I remember playing a game called “Dreadhalls” or something like that. If you watch YouTube videos of it, it’s not scary at all; it can even be slightly funny. But with the Rift on and a good set of headphones, suddenly you’re scared of walking too fast, you don’t want to go around corners, and opening doors is so tense. And as soon as you actually meet a monster, it is actually in your face: not behind a screen about thirty centimeters (1 foot) away, but looking like it’s thirty centimeters from your actual face. I remember sprinting in the other direction (in game), playing for maybe a minute longer after I lost the monster, and then just quitting because I couldn’t handle being that scared =P.

          Apply this to other games, and it’d have a similarly immersive effect.
          I personally believe that VR is going to replace the monitor in gaming. It’s more versatile than a simple peripheral, as it can be used with any game, and the price shouldn’t be too high.

          Basically, to anyone who doubts VR will take off, I suggest holding on to your arguments, and trying out the Oculus Rift first.
          ’cause it’s almost like arguing nobody will love the colour green, without ever having seen it =P.

        3. Blake says:

          Pretty much this, it’s a neat trick (have used it), but I just can’t see many things I’d ever want it on.

    4. Alexander The 1st says:

      Eh, I feel like part of it is like building a castle in a swamp: the people who keep trying to make VR work don’t care how many times it fails – if they get it to work once, they’ll be the ones who got VR to work. Much like having the one castle that doesn’t sink into the swamp, they’ll have free rein over VR, because everybody else will have to spend time figuring out how they got it to work before they can even compete.

      This is probably why Sony decided to get Morpheus under development after the Oculus Rift started to pick up steam – they don’t want the Rift to have free rein over all VR tech. They saw the results of Nintendo having free rein over motion controls last generation, and any market they can stop a competitor from single-handedly running away with is one they really want a foothold in.

      See also: everyone who’s ever tried to fight Nintendo for the handheld gaming console market.

      1. Mephane says:

        While I appreciate the competition and very much hope Sony’s device will be successful (and good, the two are not necessarily related), I also hope that it won’t take too long before we get something like a universal API for VR and games supporting them regardless of vendor. I’d hate to have a situation where device A works only with nVidia cards and device B only on ATI cards that support Mantle.

        (For the same reason I am both afraid and looking forward to Mantle at the same time.)

  7. TMTVL says:

    How would VR work for a game like (say) Fallout 1? It seems to me like it would only be really interesting for 3D 1st person games.

    1. KremlinLaptop says:

      Without having tried the Rift and just based on my own thought processes? Nope.

      I can sort of imagine it working for 3D RTS games like Total War or the like, with some set of camera controls where the motion tracking scrolls the map and head movement moves the camera, etc. However, I can imagine a lot of people would just feel sick due to the visual feedback not corresponding to what they expect.

      I think the bigger problem would be that RPG, RTS, and other games of that nature tend to be played with a lot of keyboard and mouse input. Having the Rift on your noggin’ might get in the way. Like I touch-type when typing posts and such but if a game suddenly asks me to hit the L key? I’m probably going to end up looking down at my keyboard going, “Durrr, where’s the L at?”

      1. Kamica says:

        VR wouldn’t work for 2D, unless you make a scene where you are virtually sitting behind a monitor… but that sounds kind of silly (Unless you replaced your monitor with a VR thingy)

        With RTS and other top-down 3D games, I’d reckon it would be sort of like playing a board game or a tabletop game in real life. Successful VR is a new field, and people will have to experiment to find out what people find jarring and what not. Making assumptions at this stage would mostly just exclude possibilities that might be perfectly viable; prototype everything, I’d say =P.

        And as for controls, they’d have to be made so you don’t actually have to look at them. VR is so different from what we’re used to that it’d require quite a significant change in how we think about games and game design, especially input =P. Controllers might be preferable for VR.

        1. Peter H. Coffin says:

          Interestingly, tabletop is exactly where CastAR is starting. Not with full-on VR, but giving you a table-size space to play things in, like a more free-form version of the “let the Wookiee win” chess game in Star Wars.

          1. Disc says:

            Speaking of which, they’re not getting nearly enough coverage. Their “solution” to VR is pretty interesting and somewhat different from the Rift’s. The smaller, significantly more comfortable-looking form factor, and not having to deal with possible nausea, could definitely give it an edge in the market.

            For those who’ve never heard of CastAR; this video covers most of the important bits and a lot of the history behind it: http://www.youtube.com/watch?v=cc2NQVQK69A

            Alternatively, read about it on their old Kickstarter page: https://www.kickstarter.com/projects/technicalillusions/castar-the-most-versatile-ar-and-vr-system

          2. Asimech says:

            The key is in the name: AR is exactly about Augmented Reality (adding to environment) not Virtual Reality (replacing environment).

            I’m more interested in AR, but that’s partially because I’m far more interested in “virtual tabletop” than “immersion”. Unfortunately the CastAR is using active shutter technology which I can’t use. (Tried two separate 3D TVs. At least one was Sony’s. Couldn’t use the glasses for more than a few seconds before discomfort got too bad.)

            1. Disc says:

              It’s not limited to AR, though it’s what they seem to be focusing on commercially. But I don’t think they would be selling it as “castAR: the most versatile AR & VR system” if the potential for VR wasn’t there. The videos I’ve seen definitely support the idea of a limited form of VR, if not the full package that you can get with Oculus Rift.

              1. Asimech says:

                Well, yes. But back when I checked the Kickstarter I got the impression that AR was the focus, VR was there because they found a cheap-ish way of getting both. Which in itself is nothing to scoff about, of course. But it didn’t feel like the focus.

    2. ET says:

      A straight port would definitely fail, since you’d be moving your head around, and getting a static image – primo barf circumstances!
      Now, I think you could still do games like Fallout, but you’d need to partially 3D-ify it.
      Like, imagine that the Vault Dweller, and the enemies on the map are standing up 3D, like a giant chess board.
      (You’d probably still want it to not fill your whole field of view, so size it like one of those two-meter-wide strategy board games they sell to nerds. :)
      You move your head, and you get a different angle to look at.
      That’d work great, I think.
      Heck, I don’t even think you’d need to fully re-render the game into 3D models, if you just scaled the sprites, and used the forward/back/left/right facing poses that the game already shipped with.
      Sure, motion sickness sets in with things that fill your whole field of view, but I think that little guys on a map who act like weird digital paper cutouts* would be acceptable, and not vomit-inducing.

      * Kind of like the bad guys in Wolfenstein 3D.
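The “paper cutout” idea above is basically how the old sprite shooters did it. Here is a minimal sketch of picking which pre-rendered facing to draw, assuming eight facings and a simple angle convention – both are illustrative choices, not anything from Fallout’s actual data:

```python
FACINGS = ["front", "front-left", "left", "back-left",
           "back", "back-right", "right", "front-right"]

def pick_facing(actor_heading_deg, actor_to_camera_deg):
    """Choose one of 8 pre-rendered sprite facings.
    `actor_heading_deg` is where the actor faces; `actor_to_camera_deg`
    is the world-space direction from the actor toward the camera.
    The relative angle, bucketed into 45-degree wedges, decides which
    side of the actor the viewer sees."""
    rel = (actor_to_camera_deg - actor_heading_deg) % 360.0
    return FACINGS[int((rel + 22.5) // 45.0) % 8]

# Camera straight ahead of the actor: we see its front.
assert pick_facing(90.0, 90.0) == "front"
# Camera directly behind: we see its back.
assert pick_facing(90.0, 270.0) == "back"
```

With head tracking, `actor_to_camera_deg` just updates every frame, so the cutouts turn to show the correct pre-drawn side as you move around the board.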

    3. TMTVL says:

      I just thought of this: Rift + Street View = instant vacation.

  8. Amazon_warrior says:

    Every time I see a photo of someone wearing a VR headset, I think, “WOW, that looks awkward and uncomfortable. And also like somebody forgot to bring the ocean to their scuba-diving party.”

    Tbh, I don’t give much of a crap about VR currently and I couldn’t give two hoots that FB bought Oculus Rift. What nobody has yet explained (at least, as far as I’ve seen) is how those things will interface with some very important hardware I have. It’s so important that I can’t leave the house or function without it. Without it, I’m blind – yes, I’m talking about the glasses I have to wear to see anything, be it a bus or some shiny pixels. Those headsets look very incompatible with my current face furniture. I can’t see how glasses would fit in the goggles and that band looks like it would pinch around the arms something fierce if you had to wear it for any length of time. Do Not Want. I would love to be wrong about this, mind, but I’d still be pretty firmly in the “yeah, now PROVE it” camp even so.

    (Sorry about the mini-rant – I’ve wanted to say this on pretty much every OR/VR article I’ve seen recently and apparently this was the tipping point!)

    (Also, I will growl at anyone who says, “durrr, get contact lenses!” -_- They’re not cheap, not everyone can wear them, and I certainly strongly dislike wearing them if I’m staring at a screen for any length of time because they make my eyes even more dry and tired. I imagine it would be even worse with screens a few cm from my eyeballs.)

    1. Jan says:

      Yes, a thousand times this (being in the same situation).

      Here is some speculation on my part (I have no direct information on this; can anyone more in the loop comment?). Because the screen is so close to the eye, there should be lenses between the eyes and the screen to reduce the eye strain of focusing close all the time (which also induces headaches, due to the disconnect between focus distance and parallax distance). Think of them as a sort of extreme reading glasses, letting you focus on things very close to your eye. These lenses should be adjustable so that people with various corrective prescriptions can use the headset without glasses, because glasses are an obvious no-go with VR helmets. There is still the slight awkwardness of starting the thing up without your glasses on, but as soon as you put on the helmet, everything is good to go.

      What I’m really interested in (being very near-sighted myself, only able to focus up to 10 cm from my eyes) is this: how wide is the adjustment range? In fact, I could see myself doing without the extra lenses entirely, due to my extreme near-sightedness. Is this possible/supported? For extremely farsighted people the problem is even more pressing: they don’t even have the option of going lens-less.
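Some back-of-envelope thin-lens arithmetic suggests why extreme myopia could indeed mean needing less (or no) extra lens. This is a sketch only: it ignores the eye-to-lens distance, and the 5 cm screen distance is an assumed round number, not a Rift spec:

```python
def required_lens_power(screen_dist_m, far_point_m=None):
    """Diopters of lens needed so the viewer can focus on a screen
    `screen_dist_m` away. A normal eye wants the virtual image at
    infinity; a myopic eye wants it at its far point (the farthest
    distance it can still focus). Thin-lens approximation."""
    power = 1.0 / screen_dist_m          # image at infinity
    if far_point_m is not None:
        power -= 1.0 / far_point_m       # pull the image in to the far point
    return power

# Normal eye, screen 5 cm away: roughly a +20 D lens.
assert abs(required_lens_power(0.05) - 20.0) < 1e-9

# A -10 D myope (far point ~10 cm) only needs about +10 D of lens,
# so strong near-sightedness really does replace part of the optics.
assert abs(required_lens_power(0.05, 0.10) - 10.0) < 1e-9
```

By the same arithmetic, a farsighted eye (far point “beyond infinity”) needs even more than the baseline power, which matches the point above about farsighted users having no lens-less option.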

      1. Jan says:

        OK, so I googled some info, and it seems that currently they just have 3 different sets of lenses, going from no glasses needed down to -4 diopters (see for example: http://www.youtube.com/watch?v=QaWRBM7GrEU ).

        This is clearly unworkable for me, as I have -10 diopter. Also, it seems that the distortion at the higher diopters is giving problems, which makes the current system unworkable for me unfortunately.

        1. Amazon_warrior says:

          I confess, I’ve been so put off by the whole thing that I didn’t look further into it beyond dutifully reading the handful of articles on RPS/Escapist recently telling me that I should care about VR, but none of them, nor any of the comments I’ve read (and I did try to read a reasonable number), have mentioned anything about this. That’s interesting about the lenses thing, and it did cross my mind as a possible solution though clearly it’s lacking at the moment. I’m “fortunate” that my shortsightedness is nothing like as bad as yours, but your situation is the kind of thing I was thinking of, not to mention the problems farsighted people will have.

          It also depends what the “approximate” lenses are like, too. I learnt to scuba-dive relatively recently and ended up buying myself a facemask with broadly the right prescription for my eyes so that I could see all the fish/shrimp/corals/whatever – again, I’m “fortunate” that my eyes aren’t vastly different in their defectiveness, since the prescription lenses are only sold in matched pairs. The new lenses helped a lot and greatly improved my enjoyment of diving, but they were by no means perfect and I wouldn’t want to spend all day trying to see through them. I already get migraines, I don’t want more of them. :(

        2. kingmob says:

          Tbh, this technology is so compatible with the need for glasses that I don’t understand why this is a thing at all for some people.

          – First, the obvious thing: contact lenses. Most people who need glasses can and do wear contacts, so for the Rift this instantly solves a big problem with the potential user base.
          – Second, the lenses themselves can easily be switched out for different ones, and it has been that way from the start; it has clearly been designed with this in mind (since they are probably not stupid). The fact that only a few simple ones are available for these prototypes means nothing. At worst you cannot be an early adopter. Keep in mind that it is not the final product.

          I see comments that just drip with anger and it confuses me. I simply don’t see the problem. They can’t fix your eyes for you, this is literally the only way to do immersive VR now and they’ve thought of a solution. What more do you want them to do?

          1. Jan says:

            Hmm… Dripping with anger might be the wrong impression. I’m mostly just annoyed by the sparseness of information for the glasses thing.

            As Amazon_warrior stated: (almost) nobody seems to talk about it, and certainly not in the specifics I was looking for (the fact that everybody keeps talking about how amazing it all is, while consumer models are an unspecified time away in the future, also annoys me). A significant number of the people trying these things out must wear glasses, yet nobody seems to comment on it (not even a short “I had no problems at all while wearing glasses”), except for that YouTube video I found, which seemed to indicate that it currently *is* an issue.

            Certainly, the solutions seem obvious, but experience (pessimism) tells me they’re not certain to be implemented, or will become very expensive. Try finding good ski goggles that comfortably fit over glasses, for example. Or consider that prescription swimming goggles became affordable in my diopter range only recently (and they’re still a poor solution for me, due to the difference between my eyes).

            Lenses are certainly not an option for everyone. Apart from the obvious hygiene issues, contact lenses can only be worn for a limited time (my eye doctor recommended up to 8 hours a day). That means people will have to choose: wear your lenses to work/school, or wear them while gaming.

            Furthermore, the Oculus Rift does a lot of software correction in order to present a natural image to the eye. Different lenses require different software correction; it is currently an open question (again, nobody seems to talk about it) whether this is supported, and how difficult it would be for others to implement.

            Also, given how expensive my glasses are (the “designer” frame is an insignificant after-thought in the price total), I’d be afraid that good aftermarket lenses could double the price of my Oculus Rift.

            To conclude: I just want to know some information on what they’re doing about these issues. I can understand that they will not be able to solve these issues completely, or immediately, but just some information would be good to have.

          2. Amazon_warrior says:

            You don’t see the problem….? Perhaps you need glasses. ;)

            But anyway, all what Jan just said in response to you. As for “dripping with anger” – yeah, I’ll admit my OP was kinda shirty, but that’s mostly because I’ve read a number of these “VR is GR8! You should care about it!” articles now and I finally got annoyed enough to comment and it came out snarkier than perhaps it should have. *shrug* But it would be nice to have *some* acknowledgement of the issue given the number of people who need some form of eyesight correction. It is certainly *not* as simple as you seem to assume.

            Also, GRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR for not reading my section on the subject of contact lenses. NAUGHTY! -_-

    2. ET says:

      Yeah, I can see this being a big problem early on.
      Basically, anybody who needs glasses is going to be out of luck, at least for the first generation or two.
      Making lenses is hard.
      It’s basically one of those “cheap, small, good: pick two” problems, but really bad.
      It’s why pro cameras are still the size of a small animal: it’s hard to jam enough optics into the things, without exploding the price of an already-expensive piece of hardware. ^^;

      1. Amazon_warrior says:

        Heh, you sound like my best friend, who is a camera nerd as well as a computer and engineering nerd. :) From reading what Jan posted above, it’s pretty much confirmed that VR is firmly in the “this is irrelevant to me” category at the moment. I just can’t get my knickers in a knot over something that’s incompatible with me on such a fundamental level.

    3. urs says:

      I should be able to try out the Oculus Rift at some point within the next days – there’s a gamedev network thing in town and they are quite nicely equipped.
      But, yes, the main reason for me to go there and say “Hi” is to find out whether there’s any VR to be experienced at all for me. My eyesight as such isn’t too bad – but my stereoscopic vision is.

      1. Amazon_warrior says:

        Oooo, good luck and have fun regardless! :) It’d be very interesting to hear how you get on – you’re not the first person I’ve encountered who has problems with stereoscopic vision and I guess that’ll be a hindrance with VR.

  9. ehlijen says:

    What about the issue you have noted with increased graphic power in the past?

    How tolerant is the brain of having a presence inside a highly stylised, simplistic graphics world? Will VR increase the need for assets and graphics design effort? If so, given the expense problems AAA titles already claim to have, how profitable will designing games for the Rift ultimately be?

    Or will the effort be pretty much similar to the graphics levels we already have (or, hopefully, tolerant of less than that, since the current level is apparently pushing the limit hard)?

    1. Shamus says:

      This is my question as well: What sorts of stylized techniques work and which don’t?

      This is one of the reasons I want to play with an Oculus so bad. Everyone is working on getting good images to your eyeballs. I want to work on the images themselves.

      1. Peter H. Coffin says:

        bookmark dump of stuff relevant to this:

        Some differences between rendering in “3d” for flat display and rendering in 3D for real 3d displays of an arbitrary nature: http://www.it.uu.se/edu/course/homepage/vetvis/ht09/Handouts/Lecture05.pdf

        Intro to stereophotography (probably about half of the first 70% is about 3D presentation, the other half about stereo visual perception, can skip the last chunk if you don’t care about photography per se) http://www.shortcourses.com/stereo/samples.pdf

        Stereography with some OpenGL dabblings that may inspire: http://paulbourke.net/papers/vsmm2007/stereoscopy_workshop.pdf

        I’ve got a lot more but they’re more photography-leaning than theoretical… If you’re interested, I can get ’em out.

      2. Agreed.

        We have Valve showing off an expensive VR setup where users don’t want to stand near the edge of a ledge over a cube-shaped hole made of five quads, textured with nothing but an old website (Reddit or similar) – it could not more clearly be not a real space, yet it still makes people strongly feel a sense of presence. At the same time they’re talking about how normal maps are often obviously fake, and baking detail into models that way isn’t the clear answer when you can inspect them up close with two eyes. There’s a massive range of experiments we can look at doing, finding out what styles can be applied to reality without it ceasing to feel like a real place.

        Here’s a good question: does Antichamber work in the Rift? What about that bit in The Stanley Parable where you find the corridor that’s longer on the inside than the outside? We’re used to Euclidean space. We have games that clearly craft inspectable places without following those rules. Now we have the tech to make us feel like we are actually there. Yes, where do I sign up; I want to be one of the VRnauts who sail those uncertain seas.

        Even simple stereoscopic 3D shows us clear lines of attack for non-detailed rendering. World of Warcraft is a great papercraft world of simple objects that really do look folded if you configure your 3D just right, mapping it to your (held still) head position as a window into the monitor. Homeworld was the same way, only with great trails coming out the back as those small low-poly objects moved around in the void inside the monitor.

        The Rift seems to need low lag (hit that 75Hz and never miss an update, because a missed update means that for one frame the world looks like it rotated with your head, and that’s not a good thing to experience) and stability of the effect (matching the render FoV to the one the user experiences, using the interpupillary distance they recorded in the device config, and making sure the things you render are stable enough for the various image-processing stages the brain uses to perceive a 3D reality in front of us). But so far I’ve not heard much at all about requiring lots of assets or realistic rendering approaches. In fact, people being able to get engrossed in a virtual space and preferring to move at human speeds (not a 20-60mph super-run) would indicate content will probably last longer. On the other hand, correct lighting is a massive help for some of the cues we use to determine depth and shape, so a non-realistic lighting model may need a lot of work to get just right for people who primarily perceive depth with those cues.
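A couple of those constraints are easy to make concrete. A sketch, using the 75Hz figure from the comment and an assumed 64 mm IPD (real code would read the IPD from the user’s device profile, as noted above):

```python
REFRESH_HZ = 75.0
FRAME_BUDGET_S = 1.0 / REFRESH_HZ   # ~13.3 ms; miss it and the world
                                    # appears to rotate with your head
                                    # for one frame

def eye_offsets(ipd_m):
    """Horizontal camera offsets for the two eyes, derived from the
    user's interpupillary distance. Rendering both eyes from one
    centered camera kills the stereo; using the wrong IPD skews the
    perceived scale of the whole world."""
    half = ipd_m / 2.0
    return (-half, +half)   # (left, right) along the head's x axis

left, right = eye_offsets(0.064)    # 64 mm: a commonly cited average
assert right - left == 0.064
assert FRAME_BUDGET_S < 0.0134      # the whole stereo pair, every ~13.3 ms
```

The point of the pair of numbers: both eye views must finish inside that single ~13.3 ms budget, which is why “rendering twice as many frames without dropping any” is the hard part, not the API.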

        1. ET says:

          “What about that bit in The Stanley Parable where you find the corridor that's longer on the inside than the outside?”

          My hunch is that stuff like that wouldn’t make people barf, and it would definitely be really cool.
          Anything that needs to be thought about for longer than a second to sink in – i.e. “Oh, wow, this building is bigger than it looked from the outside, once I step inside!” – isn’t going to be a problem.
          Immediate stuff, like moving your head/body and not having the viewpoint move accordingly – that’s going to be barf-inducing.

          TLDR: The camera tricks in Antichamber shouldn’t be a problem. :P

          1. I think you’re almost certainly right. I’m really excited about getting to test it and be sure of it (though that won’t really be helped by simulation sickness being correlated with lack of exposure, so those of us who do a lot of stuff in VR become partially immune to the negative effects – I know my vergence-accommodation reflex disables for stereoscopic 3D; after 15 years of on-off exposure my lenses don’t even try to refocus* when I change my convergence angle looking at a 3D monitor, and it doesn’t feel at all unnatural when I flip into this 3D processing mode). Optical illusions suggest we have quite a high threshold for things that seem to ‘break’ when we move outside the angle of inspection where something looks weird and have to rebuild our internal representation of what is actually in front of us.

            * At least this is my experiential observation; for all I know, my eyes do rapidly lose and then re-acquire focus at ~3ft for the screen when I make the vergence switch, but I no longer perceive it. No idea where in the chain I’ve broken the reflex, but it no longer hits my perception layer.

        2. Peter H. Coffin says:

          Yes. That will be part of the How To Do 3D Properly: If you need to drop a frame, you MUST drop frames in pairs. Even if it means settling for a lower frame rate overall.

          1. I was actually talking about an update as an update for both eyes; obviously updating the screen for only one eye is very bad. (Although stereoscopic 3D at 120Hz, with updates alternating between eyes so a dropped frame means one eye keeps the old frame while the other retains a full 60Hz-per-eye schedule, suggests it isn’t as sickness-inducing to the player as you might expect – it does kill presence, from my limited experience.)

            I think current (DK2-looking) Rift best practice could actually be to replace dropped frames with black blanking frames. I’m not sure, because the current docs are DK1-focused, and there the LCD stays lit for the duration of a frame, so duplicating an old frame just fails to update for even longer (and the thing is smearing all over the place anyway because of this). The DK2 being low-persistence means you see black with a flash of an image every update. So a duplicated frame would mean moving your head and seeing the old position, for both eyes, as a fresh flash of light. That sounds bad. I’d certainly code to the expectation that once you know you’re going to miss a scan-out, you should render a black quad. Dropping to 37.5Hz updates isn’t ideal from a flicker perspective, but showing an old frame is probably worse for presence.

            By the time the DK2 is out, maybe we get a better solution. You’re going to stall/finish your render early to fit in the lens-distortion pass (best practice: also query the Rift position/orientation data again and warp your rendered frame to match it – at least the rotational part can be done without data loss beyond blacking out the peripheral area you missed; changing position in the warp exposes things the original render couldn’t see, and the Oculus guys are currently working on a least-bad answer for that). But what if you didn’t complete your render? You’ve got a point cloud from the last frame; unless the player really whipped their head around, you’ve got most of the data you need. It’s not much worse than the reprojection you’re already doing to reduce lag with a late reading of the device orientation. So if, say, 10ms in and 3ms before scan-out you can tell you’re going to drop a frame, flush and reproject the old frame with the new orientation data instead. You wanted to stop then anyway to run the warp on a new frame, so it’s just a simple if/else decision.

            No more black frames with their flicker issues, and it solves the same problems as the translation warps we already knew we wanted a least-bad solution for. Maybe set the next frame to render with slightly lower detail so you definitely don’t miss two frames in a row, because repeatedly reprojecting old data will quickly give us that horrible MPEG-smear effect of pushing around an old frame with only vector data. That won’t be good even if we do something very clever on top – like storing extra data in a new buffer for polygons that only just face away from the camera, saving the overdraw during the render so we can build a more complete point cloud to reproject from without gaps – on top of the marginally clever idea of rendering a larger frame (bigger FoV) than we end up using, so a rotational reprojection is less likely to rotate off the edge of the render and have to paint black into the peripheral vision.

            That’s a bit of a rushed explanation. You can read more on AltDevBlogADay (Carmack) and Valve’s VR blog (Abrash), where they talk about exactly this issue (some of it is mentioned in the Rift best-practices document too).
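The if/else at the heart of that scheme is small enough to sketch. This is a toy restatement of the idea in the comment above, not Oculus SDK code; `warp` stands in for whatever reprojection pass you already run, and the 0.75 back-off factor is an arbitrary placeholder:

```python
def late_latch(render_done, new_frame, prev_frame, fresh_orientation, warp):
    """Just before scan-out, re-read the headset orientation and warp
    *something* to it. If this frame's render finished, warp that;
    otherwise reproject the previous frame, rather than re-showing it
    at a stale orientation (the world turns with your head) or
    blanking to black (flicker). Also cheapen the next frame after a
    miss so we don't reproject twice in a row and get MPEG-style smear."""
    source = new_frame if render_done else prev_frame
    next_detail = 1.0 if render_done else 0.75
    return warp(source, fresh_orientation), next_detail

# Toy "warp": tag the pixels with the orientation they are shown at.
warp = lambda frame, orientation: (frame, orientation)

shown, detail = late_latch(False, "frame_41", "frame_40", "yaw=12.5deg", warp)
assert shown == ("frame_40", "yaw=12.5deg")   # old pixels, fresh orientation
assert detail == 0.75
```

Either branch ends in the same distortion/warp pass, which is why the comment calls it a simple if/else: the late orientation read happens regardless, only the source image changes.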

      3. ben says:

        Minecraft has been noted to work very well with the Rift.

    2. ET says:

      I, too, am curious about this.
      My hunch is that a lot of the nausea happens in your subconscious brain, but some of it might even come from the nerves right at the back of your eye, before the brain.
      So… maybe this stuff is mitigable with some art styles, but maybe not.
      Hopefully stylized art works better with this technology, since I don’t think the already-inflated budgets of AAA games could survive having all the extra burdens of VR tech on top. :S

      1. Mephane says:

        As far as I am aware, nausea happens entirely in the processing of all the sensory input combined, and is not connected to eyesight itself. The most common source, I think, is when vision mismatches the signal from the vestibular system, and you either see motion you don’t feel, or the other way round. (This is why closing your eyes when getting sick on a bus trip helps: turn off one side of the conflict and the brain happily accepts the other.)

        1. ET says:

          Hmm…then that pretty much means that art style won’t have an effect.
          So, good news for indies, after the prototyping and 1st-gen stuff is done, but bad news for people who need to make the hardware even better than what we’re already seeing from the trade show videos. :|

  10. Thomas says:

    >The screen you’re using to read this article – even if you’re using a high-end smartphone – is vastly superior to the screens of just four years ago

    Uh….my screen is 6 years old, a 1920×1200 Sceptre LCD. D..do I need to buy a new monitor?

    1. Sabrdance (MatthewH) says:

      Good point.

      Gosh, mine’s over a decade old… At least it’s still an LCD.

    2. Shamus says:

      Actually, that was sloppy of me. I was talking about how much better smartphone screens are getting: more pixels, higher refresh rates, better viewing angles, less persistence. That applies less to desktops.

      My desktop LCD is about the same age as yours. :)

  11. Ciennas says:

    The biggest problem I see was touched on above. To wit: how much do these weigh, and how long will it be before they can bring the center of gravity down to glasses level?

    Ooh, I got another good one. Can these headsets correct for terrible eyesight, or would we need contacts or glasses to go Literal Web Surfing?

    1. ben says:

      From the Wikipedia page: 379 g (13.4 oz) for the DK1, and that doesn’t include headphones. From various users’ experiences, the weight is barely noticeable.

  12. The Rocketeer says:

    Seems like the Oculus Rift needs two things:

    1: A camera on the front
    2: An “I’m trippin’ balls” button on the interface/controller to switch to that camera view on demand.

    Seems like a useful panic button for those times when the device starts throwing the user into a nauseous fit. But of course, I don’t know what that would do to the size/weight/cost…

    1. Ciennas says:

      Very little. My cellphone has two, and they barely contribute to the weight.

      So, you put two right where the user’s eyes would be, and then the panic button projects them instead. Maybe a couple of backup cameras to give you peripheral vision.

      You’ve added maybe half an ounce to the end product, not changed the balance by any measurable degree, and only increased the build cost by … Eleven bucks a camera.

      So, as far as it goes, the cost is negligible all around.

    2. Mephane says:

      Also related: invisible input devices. It works great with something you don’t ever look at, e.g. a gamepad, but when you have a game with a lot of keys to press and sometimes glance at the keyboard to make sure you will hit the right key, a VR device obviously prevents that.

      I can also imagine that VR will worsen the situation where the left hand in a WASD movement setup is accidentally shifted one key to the right. Normally, you immediately wonder why your character, for example, throws a grenade instead of strafing right; you glance down, see your mistake, and quickly adjust your hand. With VR you cannot do that, and the mistake itself might make you nauseous when the actual action doesn’t match the motion you expected.

    3. Arstan says:

      I’m pretty sure it wouldn’t be that expensive, but would the devs be interested in such an upgrade? Who wants users to stop VRing on a whim?

      1. Ciennas says:

        Users would go for it, for the reasons you’d expect.

        Wanting to quickly get a soda without getting out of a headset, or finding socks, what have you.

        1. Ranneko says:

          Given that the headset is not wireless, you would still need to remove it if you were getting anything out of arm’s reach, but it would be handy if you just want to take a drink.

      2. The Rocketeer says:

        That’s one of the things Chris said people were complaining about at GDC, though: the headset is time-consuming to adjust and recalibrate when you put it on, and you have no way of seeing the REAL environment when you’re trying to interact with it (I think dropping your controller and then not being able to find it was the big example). So being able to toggle between virtual and actual reality seems like a big boon, even before you take into account the Rift software’s tendency to bug out from time to time and make the user feel ill from vertigo.

        There are very, very few tech products that can accrue a significant number of reviews related to barf, and I bet the devs would be eager to mitigate that.

    4. Ranneko says:

      Adding cheap cameras won’t actually fix the nausea issue, because you have the latency of the camera input and processing. Remember, you need to take the camera input and distort it so it looks right through the lenses. It would be handy for finding controllers and things, but you will still have lag and tearing to contend with.
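      The “distort it so it looks right” step is the usual lens pre-warp. A minimal sketch of the idea, assuming a simple polynomial radial-distortion model; the k1/k2 coefficients here are made-up illustration values, not real Rift lens parameters:

```python
def barrel_predistort(u, v, k1=0.22, k2=0.24):
    """Given an output-pixel position (u, v) in lens-centered normalized
    coordinates, return the source-image position to sample from, using
    the radial model r' = r * (1 + k1*r^2 + k2*r^4). Pixels farther from
    the lens center get pushed farther out, pre-compensating for the
    pincushion effect the physical lens will apply."""
    r2 = u * u + v * v
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return u * scale, v * scale
```

      Doing this per pixel for the camera feed, on top of the camera’s own capture latency, is where the extra lag comes from.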

  13. RandomInternetMan says:

    I haven’t tried the Oculus Rift, but one of my biggest concerns I don’t see anyone ever talking about is comfort.

    I already shy away from audio headsets for this reason. No matter how comfy they claim to be, for me they always start off uncomfortable and eventually build up to physical pain.

    1. Amazon_warrior says:

      I touched on this a little further up, and I’m totally with you on the audio headset thing – no one likes numb pins’n’needles ears! I skimmed some of the following posts and saw a weight of ~375g or 13.5oz (ish? I was reading fast…) mentioned. The comment seemed to imply that that was a fairly negligible weight, but it doesn’t seem so to me – 13.5 oz is most of the way to 1 lb (about 0.45 kg), which sounds like a fair bit of weight to have strapped to your face for any length of time. I guess they’ll get lighter and more refined with time and development, though. *shrugs*

      1. Asimech says:

        I have a headset that’s 278 g and it’s been described as comfortable, though I’d argue its weight is at the upper limit. If I had to swap it for a 375 g weight on my head, I’d just give up. Not worth the potential neck pain.

        I have a friend who managed to try it who said it’s too heavy/front-heavy and mentioned that a counterweight might help, but I’m not sure that wouldn’t just cause a more subtle, and more serious, form of neck strain that creeps up on you.

        Not that they seem to be going for that. Anyhow, it seems Oculus is more of a “short burst of gaming” type interface. I imagine it would get disorienting to just about anyone after a couple of hours of continuous playing.

      2. Ranneko says:

        From my experience with Devkit 1 of the Rift, the weight only really bothered me when I was doing something inactive for a while. I honestly didn’t notice it while playing games where I was actively moving my head, such as TF2, Surgeon Simulator or AaaAAaa for the Awesome, probably because I was engaged enough that the game distracted me from the weight. After about 20-30 minutes I would start getting nauseous (in TF2) and would stop.

        But when I tried the movie theatre app, I really noticed the weight after a short time and it did affect comfort.

        Devkit 2 and the consumer version are meant to be lighter, which should help with that problem. It will be interesting to see how it works out.

  14. Asimech says:

    “Lots of abrupt, rapid motion is bad. Full screen distortions are bad. Grabbing the camera to show the player something is REALLY bad.”

    If these start disappearing from non-VR games, I’ll be one happy camper. However, that doesn’t cover “flicker” or “flash” type effects, so I doubt those will go away. And even if anything is done about them, I think they’ll just be toned down to “subtle enough not to bother the majority”.

    And based on Thief 2014, that won’t be enough for me. It has a white screen-edge flash whenever you step into light, which caused me enough discomfort in a YouTube video (not enlarged, the default “small” size) that I had to look away at times and ultimately couldn’t finish watching. Yet I haven’t run into a single person who’s had the same problem, or who even mentioned the flash until I brought it up. It also can’t be turned off.

    What this amounts to, for me, is that even if the Oculus Rift doesn’t have hardware problems that prevent me from using it, the content side more than likely will. If nothing else, I’ll have to vet each game extensively to find out whether I may or may not be able to play it.

    So it sounds like VR won’t be a thing for me for the foreseeable future (i.e. “until it’s established that devs don’t use flicker, or that critics mention it reliably”). So this worries me a bit:

    “VR is not something we can just hack into an existing game. To really take advantage of VR we need to build a game around it.”

    While I think it’s possible to make a game that works both with and without VR, I think we’d see a lot of “you can play with or without, but one of them is quite awkward,” like we’ve seen* with controllers. Since we got games that only really worked with mouse & keyboard or gamepad even when the genre already had established working controls for both, I suspect this split will be more serious and longer-lasting.

    * Well, see. But the problem seems to me peripheral enough by now that past tense is more accurate.

  15. Elilupe says:

    Talking about John Carmack leaving id for Oculus, I realized: does that mean he won’t be doing his signature keynote at QuakeCon anymore? Those were some of the most interesting talks about gaming I’ve ever heard.

    1. Shamus says:

      I am seriously saddened by this. And yet, I can’t imagine him doing the keynote. What would he talk about? He can’t cover id Tech anymore. I mean, I’m sure he could fill the time, but he couldn’t talk about what the audience really wants to hear (the next id game), and unless the next id game is going full-on Oculus, there’s only so much he can say about VR.

      1. The Rocketeer says:

        That said, the idea of an Oculus DOOM game is pretty badass.

        1. Asimech says:

          Fun titbit: back in the 90s I tried out a VR headset that was running Doom. I wondered why it looked so strange, and didn’t find out until a couple of years back that the headset in question had a lower maximum resolution than what Doom used.

          (It was also an interesting, but unusable, piece of tech. But that should come as no surprise to anyone who has experience with past VR devices.)
