The VR web is here

Native applications

The HTC Vive includes base stations for tracking a player within the room

As support for VR web content grows across desktop and mobile devices, native applications on different platforms have embraced the API in unique ways as well. AltspaceVR, an application for desktop and GearVR, developed an in-app web browser with a custom WebGL renderer to make the 3D objects from WebVR sites appear within the VR world as holographic mixed-reality content. Developers can use A-Frame or three.js to enable web experiences to be viewed in 3D.

This is the minimum required code for creating an A-Frame site that renders 3D content in-world within the Altspace in-VR browser:

<head>
 <script src="https://aframe.io/releases/0.3.0/aframe.min.js"></script>
 <script src="https://cdn.rawgit.com/AltspaceVR/aframe-altspace-component/v0.3.0/dist/aframe-altspace-component.min.js"></script>
</head>

<body>
 <a-scene altspace>
  <!-- Scene elements -->
  <a-box></a-box>
 </a-scene>
</body>

Samsung’s GearVR browser allows additional metadata to be injected into existing web apps to specify and set environmental values, enabling a hybrid experience that blends traditional and immersive sites. Developers looking to add VR elements to an existing website can use Samsung’s experimental JavaScript or JSON APIs in their sites to customise settings like skyboxes, or enable access to the WebVR API for fully interactive sites.

For example, setting a 360-degree photo sphere background for an immersive website in Samsung’s mobile browser would look like this:

 window.SamsungChangeSky({sphere: 'http://your.url/skysphere.png'});

3D graphics libraries

The Samsung Internet browser for GearVR displays web content in 2D ‘tabs’ that surround the user

Any 3D experience rendered in a browser will make heavy use of WebGL, a JavaScript API based on the OpenGL ES specification that serves as the foundation for 3D graphics rendering in the browser. Writing graphics applications in pure WebGL is generally a tedious and verbose task, but a number of JavaScript libraries have been developed to sit on top of the WebGL layer and facilitate the development of rich 3D environments within a WebGL canvas.
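To give a sense of that verbosity, here is roughly what it takes just to compile a single vertex shader in raw WebGL. This is a browser-only sketch; the canvas id is an assumption:

```js
// Raw WebGL: compiling one vertex shader already takes several calls.
// Browser-only sketch; assumes a <canvas id="scene"> element exists.
var canvas = document.getElementById('scene');
var gl = canvas.getContext('webgl');

var vertexSource =
  'attribute vec3 position;' +
  'void main() { gl_Position = vec4(position, 1.0); }';

var vertexShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertexShader, vertexSource);
gl.compileShader(vertexShader);

if (!gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS)) {
  console.error(gl.getShaderInfoLog(vertexShader));
}

// A fragment shader, a program, buffers and attribute bindings
// still remain before a single triangle appears on screen.
```

Libraries such as three.js wrap all of this plumbing behind a scene-graph API.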

Gone are the days of specific graphics plugins: WebGL has emerged as the standard for these types of experiences and is supported by all major browsers. Many of the popular 3D JavaScript frameworks, including three.js and BabylonJS, have added direct support for WebVR to further streamline the process of integrating VR functionality into interactive experiences.

Creating scenes with a library will vary syntactically between frameworks, but will generally include a representation of each of the following components:

  • A scene that wraps and defines all the objects in your 3D environment
  • A camera that projects a specific view of the scene to the user
  • 3D objects that make up the virtual representation of the 3D world
  • Various lighting and physics effects
  • Audio elements
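In three.js, for instance, those components map onto concrete objects. The following is a minimal browser-only sketch; sizes, colours and camera values are arbitrary:

```js
// A minimal three.js scene: scene, camera, one object, one light.
// Browser-only sketch using the three.js API of the era.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 3;

// A 3D object: geometry plus material, added to the scene
var box = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial({ color: 0x3355ff }));
scene.add(box);

// A simple light so the material is visible
scene.add(new THREE.DirectionalLight(0xffffff, 1));

var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

function animate() {
  requestAnimationFrame(animate);
  box.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();
```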

VR artist and developer Erica Layton creates intricate virtual worlds within the browser using A-Frame, including this scene in which an anthropomorphised island watches the user as they move around

Developers can utilise 3D frameworks to generate objects, materials, textures, lighting and application components for their experiences. The frameworks handle the translation of these concepts so they can be rendered directly to WebGL canvases and displayed in browsers. When a user visits a page with a WebVR-enabled 3D experience, the scene is rendered through a single monoscopic camera until the browser is told to enter VR mode.
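With the WebVR API of the time, that hand-off into VR mode looked roughly like this. It is a hedged sketch: `renderer` is assumed to be an existing three.js WebGL renderer, and the `#enter-vr` button is an assumption:

```js
// Ask the browser for connected VR displays, then present the
// WebGL canvas to the headset on a user gesture (WebVR 1.0 API).
navigator.getVRDisplays().then(function (displays) {
  if (!displays.length) { return; }
  var vrDisplay = displays[0];

  document.querySelector('#enter-vr').addEventListener('click', function () {
    vrDisplay.requestPresent([{ source: renderer.domElement }])
      .then(function () {
        // The page now renders to the headset; the scene must be
        // drawn once per eye on every frame from here on.
      });
  });
});
```

Note that `requestPresent` must be called from a user gesture, which is why the call is wrapped in a click handler.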

WebVR boilerplate

Many of these frameworks rely heavily on the WebVR Boilerplate framework developed by Boris Smus, which surfaces the capabilities of the WebVR API to 3D frameworks via VRManager, VREffect and VRControls functionality. The VRManager handles the browser’s transition between non-VR and VR modes; the VREffect calculates the specific distortion based on the VR device properties; and the VRControls provide the transformational data to the virtual camera within the scene once the browser is in VR mode.
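Wired together in a three.js application, the three pieces look roughly like this. It is a sketch that assumes `scene`, `camera` and `renderer` already exist:

```js
// WebVR Boilerplate pattern: the effect distorts per-eye output,
// the controls feed headset pose into the camera, and the manager
// switches between mono and VR rendering.
var effect = new THREE.VREffect(renderer);
effect.setSize(window.innerWidth, window.innerHeight);

var controls = new THREE.VRControls(camera);

var manager = new WebVRManager(renderer, effect, {
  hideButton: false  // show the built-in enter-VR button
});

function animate(timestamp) {
  controls.update();                          // pose -> camera transform
  manager.render(scene, camera, timestamp);   // VR or mono render path
  requestAnimationFrame(animate);
}
animate();
```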

ReactVR

Following on from the success of React and React Native, Facebook announced the launch of ReactVR in October 2016. Although most of the details around ReactVR are still to be announced, we know it will extend the declarative framework with a VR UI layer.

Individuals looking to work with ReactVR can expect the framework to build off many of the existing principles of the React libraries, with an additional OVRGUI layer to handle the UI elements within a scene, likely optimised for the Rift headset. Additional components may include a framework for audio and video, both spatial and 2D, within the given VR application.

Beyond JavaScript

When rendering content in a VR display, the scene will require a distortion effect to counteract the curved lenses and left-right eye offset

A number of different tools have begun to evolve around the WebVR framework, giving developers more flexibility when choosing an approach for creating VR experiences for the browser. These may be in the form of a markup-styled wrapper around JavaScript, such as Mozilla’s A-Frame, or a complete visual editor such as the one developed by Vizor.

Unity game engine

A scene from a development project in Unity after being exported to HTML5. The blue button with the headset handles switching the game into VR mode

The Unity game engine is one of the most widely used tools for creating virtual and augmented reality content. It has been quick to support mobile and desktop devices for immersive technologies by offering developers the ability to export 3D applications to a number of different platforms.

With its HTML5 export functionality, games built with the Unity editor can be exported as web apps, and developers can utilise a plugin from Jump Gaming to support WebVR. The flexibility provided by the engine around physics, graphics, asset management, multiple scripting languages and large scenes makes it a popular choice.

The final site generated by Unity with WebVR support is an HTML5 page with a WebGL canvas hosting the game’s content; during export, the underlying .NET code is converted to C++ via IL2CPP and then compiled to asm.js to produce a browser-friendly version of the application.

Conclusion

There is still much to be seen in how the web ecosystem will grow around immersive content. Interaction patterns will vary across different devices, and experiences will need to adapt to the variety of input opportunities that may be surfaced through the VR browsers in different devices. Link traversal and navigation-related security considerations also require a reimagining of how global information is conveyed across the web.

As we redefine what the browser is capable of, the ability to shape new worlds on the web will prove influential in both the web and virtual reality development communities, which are increasingly starting to overlap. Now, more than ever, is the web’s time to shine, and through virtual reality it will continue to enable us to connect, learn, communicate, and experience the world in unprecedented ways.

This article was originally published in net magazine issue 288.
