
Why CES 2017 Will Set the Stage for the Next Year of VR


The annual Consumer Electronics Show (CES) has served as a good way-point for VR’s progress over the years. With CES 2017 kicking off next week, here we take a look back at the highlights (and low-lights) from 4 years of VR at CES to gauge how far the industry has come and look for clues as to where it goes from here.

Wedged somewhat inconsiderately at the very start of every year (it's OK CES organisers, no one in the tech industry has a family they want to spend time with), the annual Consumer Electronics Show held in Las Vegas is still the biggest hardware event in the world. A swirling mass of corporate marketing excess, it is the single platform showcasing the best (and worst) new gear from around the world expected to vie for our attention in 2017 and beyond. Virtual reality has of course figured prominently at the event in recent years, quickly rising to become one of the show's hottest technologies. With that in mind, and with CES 2017 imminent, we thought we'd take a look back at the notable VR events from past shows, charting VR's progress to the present day.

CES 2013 / 2014: The Early Years

Following its wildly successful 2012 Kickstarter campaign, Oculus attended the show for the first time in 2013 to show off a pre-production Rift headset prototype ahead of the DK1 launch. Press response to the closed-doors meetings was almost universally positive. Road to VR was still in its infancy at the time, but Tested.com went hands-on with an interim Rift prototype at the show, along with giving us a glimpse at the near-complete Rift DK1 design that would ship to Kickstarter backers later that year. The demonstration included the now familiar Unreal Engine 3-powered citadel scene, one which would become the setting for one of the most famous early VR applications of all time, Rift Coaster. The Rift had of course been covered by media before, most notably when id Software co-founder John Carmack showed an early, modified Oculus Rift prototype at E3 2012, sent to him by the device's inventor (and future Oculus VR founder) Palmer Luckey. CES 2013 however gave us the first glimpse of Oculus VR operating as a company.

Oculus' Pre-DK1 Prototype, shown at CES 2013

The following year at CES 2014, Oculus had to share the immersive technology limelight with a slew of new startups who had appeared in the wake of Oculus’ success. The unique (and formerly Valve developed) retro-reflective-powered CastAR system gave us a glimpse at one of augmented reality’s possible futures; Avegant turned up with their bizarre yet technically impressive personal media player the Glyph; PrioVR had their new entry-level motion tracking / VR input system to try.

But the star of the show remained Oculus who, having grappled with the DK1's biggest technical flaws, showed their latest prototype which resolved two of them in one fell swoop. The Crystal Cove headset featured a cluster of IR LEDs and a tracking camera to provide positional tracking, and also introduced a low-persistence display. Both advancements vastly improved the user experience and established the baseline technical platform for the consumer Rift when it appeared in 2016. A few months after CES 2014, Facebook would acquire the company for $2 billion.

The Oculus Rift Crystal Cove prototype VR headset and tracking camera as shown at CES 2014

CES 2015: Consumer VR Takes Shape

CES 2015 brought yet more impressive advancements from VR and AR fields, and some notable setbacks. With the huge uptick in interest surrounding VR technology it inevitably drew opportunistic businesses to join the bandwagon with minimal work.

The most infamous example is of course the legendary 3DHead system, a product which purported to offer a full-fledged VR experience with no lenses and mostly off-the-shelf technology. Backed by the eccentric billionaire Alki David, the product's aggressive (and, as it turned out, misleading) marketing had already drawn ire from the VR community, adopting as it did taglines like "Oculus killer" in its promotional material and then booking a booth directly next door to Oculus themselves sporting those same slogans. The headset itself was enormous – somewhat akin to the head of H. R. Giger's Alien, although somehow less attractive – and the advertising was painfully bad, but we nevertheless did our best to keep an open mind. Inevitably however, after Ben Lang tried 3DHead for himself, and subsequently interviewed the seemingly sincere James Jacobs (at that time COO of the operation), it was clear 3DHead was at best a terrible product and at worst a complete sham. Watch the interview for yourselves below (along with Ben's write-up of his experience), but needless to say Ben's original and succinct summary of his experience was right on the money; it was indeed "beyond bad".

Elsewhere however, things were looking much more promising. Oculus had once again brought along its latest prototype, the Oculus Rift Crescent Bay. Unveiled originally at the company's first developer conference, Oculus Connect, in September, Crescent Bay gave us what we now know to be a pretty good sneak peek at the device that would eventually ship in March the following year. It had integrated, high-quality headphones (supported by a custom inline DAC and amp), lightweight construction, and rear-mounted 'Constellation' infra-red LEDs for 360 degrees of positional tracking with a single camera sensor. We would also later learn the device (like the consumer version) sported dual OLED panels and Fresnel lenses, quite a departure from all Rift devices that had preceded Crescent Bay. For the first time, Oculus had shown a device that looked like a consumer product.

The Oculus Rift Crescent Bay Prototype

2015's CES was the first for Samsung's 'Oculus-powered' mobile VR headset Gear VR, which had been unveiled to an impressive reception a few months earlier at IFA Berlin. The Galaxy Note 4-powered device featured heavily at Oculus' booth, both front of house and behind the scenes. It was clear that Oculus, thanks in no small part to its CTO John Carmack, was serious about the future of untethered mobile virtual reality.

A new VR headset challenger also entered the ring at 2015's CES, one which promised to eschew the proprietary, walled-garden approach Oculus had adopted and open up both the hardware and software for developers to tinker with. The Razer-fronted Open Source Virtual Reality (OSVR) platform was announced along with its very first flagship hardware, the Hacker Developer Kit (HDK for short). This was a headset designed to be pulled apart, redesigned, put back together, and shared with the community. Built atop an open source set of APIs, the platform was a refreshing take on how to deliver immersive technology. Although it left a little to be desired in the overall experience compared with the Rift, it was encouraging to see such a fresh approach.

The OSVR HDK Headset

Unbeknownst to most (with nary a whisper uttered at 2015's CES), Valve and HTC were working in secret on a virtual reality system that would shake up the fledgling VR industry and present Oculus with its first serious competitor in the PC VR space. At MWC in Barcelona in March that year, HTC unveiled the Valve/SteamVR-driven Vive headset, and arguably went on to dominate the Game Developers Conference (GDC) show which followed. The Vive was powered by Valve's new laser-based room-scale tracking technology 'Lighthouse' and gave many people their first taste of presence, thanks to the system's then-prototype motion controllers, which demonstrated an unprecedented level of input fidelity for the time. Vive's entrance would help shape the conversation around what we should expect from consumer virtual reality throughout 2015 and beyond.

The HTC Vive (DK1), SteamVR Controllers and Laser Basestations

Sony however, having debuted its PlayStation 4-powered 'Morpheus' headset (later re-christened PlayStation VR) at GDC in March of 2014, was largely absent from 2015's CES, with the company focusing more heavily on its more traditional consumer electronics lines. Sony would nevertheless go on to push Morpheus hard at gaming shows throughout 2015 such that, by the close of that year, PlayStation VR had become one of VR's best hopes of reaching a mass market audience.

Sixense also showed off the latest iteration of their STEM motion controller. The company had run an extremely successful Kickstarter campaign in 2014, riding the wave of interest in VR and aiming to plug the gap for VR-centric controllers.

Ben Lang trying out the Sixense STEM at CES 2015

At CES 2015 the company demonstrated a new version which integrated IMUs to tackle the tracking drift and distortion inherent in the device's electromagnetic tracking system. It was impressive stuff at the time, and Sixense was then confidently contemplating shipping finalised devices to Kickstarter backers later in the year. Alas, at the time of writing this piece, and thanks to a series of frustrating delays, the company has yet to fulfil that promise.

CES 2016: Consumer VR Wars

All of that brings us to 2016, and a CES which marked the beginning of what would turn out to be VR's most important 12 months so far. Most industry observers (including us) had expected the first generation of consumer virtual reality to arrive in 2015. The hardware felt ready, and there had even been indications to that effect from the company largely responsible for VR's resurgence, Oculus. As it turned out, we had to wait until CES 2016 to learn when we could pre-order the world's first consumer desktop VR headset. Oculus announced that pre-order sales would go live during CES itself (which posed some logistical issues for those of us covering the event and hoping to get our hands on a Rift, let me tell you). On January 6th, 2016, Rift pre-orders went live, with headsets expected to ship a couple of months later in March. Also, as a nod to the company's roots, and as a (largely unprecedented) "thank you" to the original Rift Kickstarter backers that launched the company, Oculus announced that every backer would receive a free consumer Rift. Sadly, the Rift's launch would become mired in familiar shipping difficulties, blamed in part on component shortages and exacerbated by some seemingly poor logistics management.

The final Oculus Rift consumer edition

Throughout 2015, the Rift's biggest competitor, the HTC Vive, had made phenomenal gains in public awareness and word-of-mouth PR. Its room-scale credentials and those precisely tracked SteamVR motion controllers had been demoed around the world, and its particular flavour of immersive interactive entertainment was a big hit. Oculus' handicap – its resolutely seated/standing experience focus and (most importantly) its lack of dedicated out-of-the-box tracked motion controllers – helped Valve and HTC present the Vive as the first 'complete' VR solution, and people were buying into the idea that room-scale VR might be the future, albeit one which many may not have the room for. In any case, Vive's launch in April 2016 kickstarted the formation of Oculus and Vive factions, ushering in the dawn of the VR format wars with partisan arguments strongly reminiscent of every console generation past.

The Vive Pre (right) versus the Vive DK1 (left)

To highlight the rapidity with which the Vive was approaching consumer reality, HTC took the opportunity to demonstrate what would turn out to be the HTC Vive's final form. We'd already seen various iterations of the Vive developer kits – in fact Valve showcased the hardware's evolution as part of its unveiling at CES – but at the show HTC showcased the Vive Pre, sporting some significant hardware enhancements over its predecessors. The Pre packed in a new front-facing camera sensor which allowed users to glimpse the real world from within VR. It also came with Mura correction, a process to help minimise artefacts and disparity between the unit's OLED display panels, and it was notably smaller than what had come before. The Pre was, to all intents and purposes, what the retail Vive would turn out to be when it launched just a few months later in April. It was an encouraging show of readiness from HTC then, although perhaps a far cry from the "very, very big breakthrough" teased by the company's CEO just a couple of weeks prior.

The HTC Vive Pre at CES 2016

On the VR input side of things, Virtuix looked in bullish form at the event, with a generously sized stand and a dedicated multiplayer event featuring 4 Omni treadmills and a new, in-house developed FPS for people to compete in. We got our feet on the new, improved omni-directional treadmill Infinadeck and, despite its gargantuan size and weight, came away intrigued by its unique take on VR locomotion. Equally quirky, the intriguing Rudder VR made an appearance in its final form at the show, where it was announced it would go on sale in 2016. We also got hyped for an experimental input device which promised to bring electromagnetic field-powered positional input tracking to Samsung Gear VR. Alas, the Rink controllers were early prototypes and, once we found them, we were disappointed by their performance. We've not heard anything of them since.

Eye tracking finally began dovetailing with virtual reality at CES 2016 too, thanks to SensoMotoric Instruments and their impressive demonstration of both gaze-based input and an implementation of foveated rendering, all on a neatly hacked-up Oculus Rift DK2. Eye tracking, the main USP of the recently released FOVE headset, seems one of the most likely additional technologies to make its way into future generations of consumer virtual reality, given its obvious experiential and performance benefits.

Finally, Oculus brought their tracked motion controllers Touch to CES for the first time and we caught a glimpse of the latest design iteration which would prove to be near identical to the retail editions once they arrived. We’d have to wait almost a full year however to get our hands on the consumer version.

So that's it: a massively condensed history of VR at CES over the years. CES is by nature a hardware-focused event, so the huge leaps and bounds made by developers and industry leaders in VR content are beyond the scope of this piece. But in truth, as all major VR platforms are now in the hands of consumers, and Oculus, Sony, and Valve/HTC concentrate heavily on content production for their new babies, the focus for CES this year will likely be less about hardware revisions and more about a glimpse of technologies that may form part of the next generation of yet-to-be-announced VR hardware. We'll see the strides made by companies in the field of wireless VR, hopefully progress in the eye-tracking arena, and perhaps a handful of VR-centric input devices. That said, the joy of CES is that you never can tell what might happen. Either way, Road to VR will be there to find out, as it has been since the beginning.


Road to VR will of course be at CES 2017, and if you have something VR related you’d like to show or talk to us about, drop us an email at tips@roadtovr.com.

The post Why CES 2017 Will Set the Stage for the Next Year of VR appeared first on Road to VR.


Hands-on: Massless Wants to Bring High-precision Stylus Input to VR


Massless is developing a stylus designed specifically for high-precision VR input. We got to check out a prototype version of the device this week at GDC 2018.

While game-oriented VR controllers are the norm as far as VR input today is concerned, Massless hopes to bring another option to the market for use cases which benefit from greater precision, like CAD. Controllers like Oculus' Touch and HTC's Vive wands are quite precise, but they are articulated primarily by our wrists, and miss out on the fine-grained control that comes from our fingers—when you write on a piece of paper, notice how much more your fingers are in control of the movements vs. your wrist. This precision is amplified by the fact that the tabletop surface acts as an anchor for your finger movements. Massless has created a tracked stylus with the goal of bringing the precision of writing implements into virtual reality, with a focus on enterprise use-cases.

Image courtesy Massless

At GDC I saw a working, 3D printed prototype of the Massless Pen working in conjunction with the Oculus Rift headset. The system uses a separate camera, aligned with the Rift’s sensor, for tracking the tip of the stylus. With the stylus held in my left hand, and a Touch controller in my right, a simple demo application placed me into an empty room where I could see the tip of the pen moving around in front of me. I could draw in the air by holding a button on the Touch controller and waving the stylus through the air. I could also use the controller’s stick to adjust the size of the stroke.

Photo by Road to VR

Using the Massless Pen felt a lot like drawing in the air with an app like Tilt Brush, but I was also able to write tiny letters quite easily; without a specific task comparison, or objective means of measurement between controller and stylus though, it’s tough to assess the precision of the pen by just playing with it, other than to say that it feels at least as precise as Touch and Vive controllers.

SEE ALSO
Oculus Research Devises High-accuracy Low-cost Stylus for Writing & Drawing in VR

Since the ‘action’ of writing in real life is initiated ‘automatically’ when your writing implement touches the writing medium, it felt a little awkward to have to press a button (especially on my other hand) in order to initiate strokes. Of course, the Massless Pen itself could have a button on it (so at least it might feel a little more natural since the stroke initiation would happen in the same hand as the writing action), but the company says they’ve steered away from that because the action of pressing a button on the pen itself would cause it to move slightly, working against the precision they are attempting to maintain.

Photo by Road to VR

If you've ever used one of the million trigger-activated laser-pointer interfaces in VR, you'll know that this is actually a fair point: pointing with a laser and then using the controller's trigger to initiate an action causes the laser to move significantly (especially as the movement is amplified by leverage). It felt weird using my other hand to initiate strokes at first, but I feel fairly confident that this would begin to feel natural over time, especially considering that many professional digital artists use drawing tablets where they draw on one surface (the tablet) and see the result appear on another (the monitor).

Inside the demo I could see the white outline of a frustum projected from a virtual representation of the Rift sensor in front of me. The outline was a visual representation of the trackable area of the Massless Pen's own sensor, and it was relatively narrow compared to the Rift's own tracking. If I moved the stylus outside the edge of the outline, it would stop tracking until I brought it back into view. As Massless continues to refine their product, I hope the company prioritizes growing the trackable area to be more comparable to that of the headset and controllers it's being used with.
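For the curious, the trackable-volume behaviour described above boils down to a simple frustum containment test: the stylus tip tracks only while it sits inside the pyramid of space the sensor can see. Here is a minimal illustrative sketch of such a test; the field-of-view and range numbers are invented for the example and are not Massless' real specs:

```python
import math

def in_frustum(point, h_fov_deg=60.0, v_fov_deg=45.0, near=0.2, far=1.5):
    """Return True if a camera-space point (x, y, z) lies inside a
    symmetric viewing frustum. z is distance along the sensor's forward
    axis, in metres. FOV and near/far values are illustrative only."""
    x, y, z = point
    if not (near <= z <= far):
        return False  # too close or too far for the sensor to track
    # The visible horizontal/vertical extents grow linearly with depth.
    half_w = z * math.tan(math.radians(h_fov_deg / 2))
    half_h = z * math.tan(math.radians(v_fov_deg / 2))
    return abs(x) <= half_w and abs(y) <= half_h

# A stylus tip straight ahead at 1 m is trackable...
print(in_frustum((0.0, 0.0, 1.0)))  # → True
# ...but the same 0.5 m lateral offset that fits at 1 m depth
# falls outside the narrower cross-section up close.
print(in_frustum((0.5, 0.0, 1.0)), in_frustum((0.5, 0.0, 0.3)))
```

This is also why a narrow-FOV sensor feels so restrictive in practice: the usable cross-section near the user is only centimetres wide, even if the volume opens up at distance.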

While the Massless Pen prototype I used has full positional tracking, it lacks rotational tracking at the moment, meaning it can only create strokes from a singular point, and can’t yet support strokes that would benefit from tilt information, though the company plans to support rotation eventually.

Photo by Road to VR

More so than drawing in the air, I'm interested in VR stylus input because of what it could mean for text input handwritten on an actual surface (rather than arbitrary strokes in the air); history bred the stylus for this use-case, and it could become a key tool for productivity in VR. Drawing broad strokes in the air is nice, but writing benefits greatly from using the writing surface as an anchor for your hand, allowing your dexterous fingers to do the precision work; for anything but coarse annotations, if you're planning to write in VR, it should be done against a real surface.

To see what that might be like with the Massless Pen, I tried my hand at writing ‘on’ the surface of the table I was sitting at. After sketching a few lines (as if trying to color in a shape) I leaned down to see how consistently the lines aligned with the flat surface of the table. I was surprised at the flatness of the overall sketched area (which suggests fairly precise, well calibrated tracking), but did note that the shape of the individual lines showed regular bits of tiny jumpiness (suggesting local jitter). Granted, this is to be expected—Massless says they haven’t yet added ‘surface sensing’ to the pen (though they plan to), which could reasonably be used to eliminate jitter during real surface writing entirely, since they could have a binary understanding of whether or not the pen is touching a real surface, and use that information to ‘lock’ the stroke to one plane.
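That 'lock the stroke to one plane' idea could, in principle, be as simple as projecting each sampled tip position onto the surface plane whenever the contact sensor reports touch, which removes the perpendicular jitter component entirely. A hypothetical sketch follows; the function, its signature, and the contact-sensor input are all my own invention and not Massless' implementation:

```python
import numpy as np

def lock_to_plane(samples, plane_point, plane_normal, touching):
    """Project noisy 3D tip samples onto a known surface plane wherever
    the pen reports surface contact; pass raw samples through otherwise.

    samples:      (N, 3) array of tracked tip positions
    plane_point:  any point on the surface plane
    plane_normal: normal vector of the surface
    touching:     length-N boolean array from a hypothetical contact sensor
    """
    samples = np.asarray(samples, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    # Signed distance of each sample from the plane...
    d = (samples - np.asarray(plane_point, dtype=float)) @ n
    # ...subtracted off only where the pen is known to be touching,
    # which flattens the stroke onto the surface exactly.
    out = samples.copy()
    out[touching] -= np.outer(d[touching], n)
    return out

# Jittery tabletop strokes: the table is the plane z = 0.
raw = np.array([[0.10, 0.20, 0.004],
                [0.11, 0.21, -0.003]])
locked = lock_to_plane(raw, plane_point=[0, 0, 0],
                       plane_normal=[0, 0, 1],
                       touching=np.array([True, True]))
print(locked[:, 2])  # → [0. 0.]
```

The in-plane (x, y) coordinates are untouched, so the shape of the handwriting survives; only the perpendicular noise the author observed in the sketched lines would be eliminated.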

The Massless Pen is interesting for in-air input, but since the stylus was born for writing on real surfaces, I hope the company increases its focus in that area, and allows 3D drawing and data manipulation to evolve as a natural, secondary extension of handwritten VR input.

The post Hands-on: Massless Wants to Bring High-precision Stylus Input to VR appeared first on Road to VR.

Valve Psychologist to Explore Brain-Computer Interface Research at GDC


At GDC 2019 later this month, Valve's Principal Experimental Psychologist, Mike Ambinder, will present the latest research pertaining to brain-computer interfaces—using signals from the brain as computer input. Ambinder says that BCI is still "speculative technology," but could play an important role in the way players interact with the games of the future.

As time moves forward, the means by which users interact with computers have become increasingly natural. First came the punch card, then the command line, then the mouse… and now we've got touchscreens, voice assistants, and VR/AR headsets which read the precise position of our head and hands for natural interactions with the virtual world.

More natural computer interfaces make it easier for us to communicate our intent to a computer, making computers more accessible and useful with less time spent learning the abstract input systems.

Perhaps the final frontier of computer input is the brain-computer interface (BCI). Like the virtual reality system envisioned in The Matrix (1999), the ultimate form of BCI would be some sort of direct neural input/output interface where the brain can directly ‘talk’ to a computer and the computer can directly ‘talk’ back, with no abstract I/O needed.

While we’re far, far away from anything like direct brain I/O, there has been some headway made in recent years at least on the input side—’brain reading’, if you will. And while early, there’s exciting potential for the technology to transform the way we interact with computers, and how computers interact (and react) to us.

At GDC 2019 later this month in San Francisco, Valve’s Principal Experimental Psychologist, Mike Ambinder, will present an overview of recent BCI research with an eye toward its applicability to gaming. The session, titled Brain-Computer Interfaces: One Possible Future for How We Play, will take place on Friday, March 22nd. The official description reads:

While a speculative technology at the present time, advances in Brain-Computer Interface (BCI) research are beginning to shed light on how players may interact with games in the future. While current interaction patterns are restricted to interpretations of mouse, keyboard, gamepad, and gestural controls, future generations of interfaces may include the ability to interpret neurological signals in ways that promise quicker and more sensitive actions, much wider arrays of possible inputs, real-time adaptation of game state to a player’s internal state, and qualitatively different kinds of gameplay experiences. This talk covers both the near-term and long-term outlook of BCI research for the game industry but with an emphasis on how technologies stemming from this research can benefit developers in the present day.

Ambinder holds a B.A. in Computer Science and Psychology from Yale, and a PhD in Psychology from the University of Illinois; according to his LinkedIn profile, he’s been working at Valve for nearly 11 years.

The session details say that the presentation’s goal is to equip developers with an “understanding of the pros and cons of various lines of BCI research as well as an appreciation of the potential ways this work could change the way players interact with games in the future.”

SEE ALSO
Facebook is Researching Brain-Computer Interfaces, "Just the Kind of Interface AR Needs"

While the description of the upcoming GDC presentation doesn’t specifically mention AR/VR, the implications of combining BCI and AR/VR are clear: by better understanding the user, the virtual world can be made even more immersive. Like eye-tracking technology, BCI signals could be used, to some extent, to read the state and intent of the user, and use that information as useful input for an application or game. Considering Valve’s work in the VR space, we’d be surprised if Ambinder doesn’t touch on the intersection of VR and BCI during the presentation.

Road to VR will be at GDC later this month to bring you the most important news. Showing something awesome in AR or VR? Get in touch!

The post Valve Psychologist to Explore Brain-Computer Interface Research at GDC appeared first on Road to VR.
