Chrome 79 Beta is out, and with it comes public support for the WebXR Device API, which has been under development for around two years. Until now, it was only accessible via Origin Trials. The API provides access to virtual reality (VR) and augmented reality (AR) devices, including sensors and head-mounted displays, on the Web. For now, Chrome only supports immersive virtual reality, with augmented reality as the next step. The Beta version installs in parallel without affecting your present Chrome installation. It is scheduled to become the stable version on Dec 10, 2019.
To add to the confusion, the Origin Trial of WebXR for Chrome versions 76 to 78 is scheduled to run until Dec 4, 2019. So before Dec 4, the VR examples linked in this post might still work with the then-stable version of Chrome, though they will look a bit different. And after Dec 10, you won't need the Beta version anyway.
One usable virtual reality device is in your pocket, lying next to you on the desk, or perhaps you are even reading this blog post with it: your mobile phone. With a viewer like Google Cardboard, or a similar one like the very nice and ultraportable Homido mini, you can view any of the VR examples mentioned here.
The Immersive Web Working Group has published a lot of information on GitHub, including some VR examples together with all the source code, published under the MIT license. So let’s play around a bit and see what can be done.
The first example requires only a small modification of the 360 Stereo Photos sample: change the link to the sample photo to point to any photo of your own in the same format, and you can view whatever you like. The 360° photo used here was taken at a very famous spot in the Berchtesgaden area of the Bavarian Alps. With this VR sample, you can see a draggable preview in every browser, but only in Chrome Mobile can you start the immersive VR session, which looks like this:
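In the sample's source, the change amounts to pointing the skybox at a different image. A minimal sketch, assuming the `SkyboxNode` API of the WebXR samples framework (the helper function and file path below are my own placeholders, not part of the sample):

```javascript
// Build the options object passed to the sample's SkyboxNode.
// 'stereoTopBottom' is for photos that stack the left-eye and right-eye
// images vertically; a plain 360° photo uses 'mono'.
function skyboxOptions(photoUrl, stereo) {
  return {
    url: photoUrl,
    displayMode: stereo ? 'stereoTopBottom' : 'mono',
  };
}

// In the sample page, something along these lines:
//   scene.addNode(new SkyboxNode(skyboxOptions('media/textures/my-photo.jpg', false)));
```

The photo itself should be an equirectangular projection, which is what typical 360° camera apps produce.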
Just click on the picture to see the preview of the VR session in a new tab.
OK, that's not very exciting. Let's try something more sophisticated that isn't present in the published samples: a dynamic WebGL scene. The sample source code provides a glTF importer, and – inspired by this blog post – I used the glTF exporter of THREE.js to transfer the scene with the rotating globe. This combination works, but unfortunately, many of the 3D models in glTF format found on the Web won't display out of the box. One of them, downloadable at poly.google.com, did work, though, and you can see it in the VR session when you look behind your back. See below for a solution to that problem.
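The export step can be sketched like this. `GLTFExporter` ships with the THREE.js examples; the small helper below merely serializes the JSON result for the sample's glTF importer (the binary `.glb` case is not handled, and the scene variable names are assumptions):

```javascript
// GLTFExporter's callback receives either a plain JS object (.gltf JSON)
// or an ArrayBuffer (binary .glb). This helper covers the JSON case only.
function gltfResultToJson(gltf) {
  return JSON.stringify(gltf, null, 2);
}

// In the page, assuming THREE and GLTFExporter are loaded:
//   const exporter = new THREE.GLTFExporter();
//   exporter.parse(globeScene, (gltf) => {
//     const json = gltfResultToJson(gltf);
//     // feed `json` to the WebXR sample's glTF importer, or offer it
//     // for download via a Blob URL
//   });
```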
Just like above, click on the picture to see the preview of the VR session in a new tab:
Digging into the problem of many glTF models not being displayed, it turned out that the sample code does not enable the WebGL extension OES_element_index_uint, which is necessary to support the gl.UNSIGNED_INT index type used by many models. To use those models, you have to explicitly enable that extension after creating the WebGL context:
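A minimal sketch of that call, wrapped in a small helper so the result can be checked; `gl` stands for the WebGL context created for the XR session:

```javascript
// Enable 32-bit element indices on a WebGL 1 context. In WebGL 1,
// getExtension() both queries and activates the extension; without it,
// drawElements() with gl.UNSIGNED_INT indices fails.
function enableUintIndices(gl) {
  const ext = gl.getExtension('OES_element_index_uint');
  if (!ext) {
    console.warn('OES_element_index_uint unsupported; models with 32-bit indices will not render.');
  }
  return ext !== null;
}
```

Call it right after creating the context, e.g. after `canvas.getContext('webgl', { xrCompatible: true })`. WebGL 2 contexts support gl.UNSIGNED_INT indices natively, so no extension is needed there.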
Due to the asynchronous loading of model data, more complex 3D models may take some time to appear in the scene, especially on slow internet connections.
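One way to handle this, sketched under the assumption that the model loader exposes a promise (the function and parameter names are my own): keep the node invisible until loading finishes.

```javascript
// Hide a scene node until its asynchronously loaded model data is ready.
// `node` and `loadPromise` stand for whatever the framework provides,
// e.g. a glTF scene node and its loading promise.
function showWhenLoaded(node, loadPromise) {
  node.visible = false;
  return loadPromise.then(() => {
    node.visible = true;
    return node;
  });
}
```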
As a bonus, I stumbled upon a very nice Visual Studio Code extension named glTF Tools, which lets the editor preview glTF models in different ways and offers a lot of context information about individual elements in the file.
Where to go from here
As stated in the Chromium blog, many experiences can be enhanced with immersive functionality: games, home buying, viewing products in your home before buying them, and more. Even more so when this functionality can be included in any web page without the need to create (and download) an additional app for every purpose.