Magical Reflections, a WebXR experience powered by NanoGL

To support the Johan Erdmann Hummel exhibit at the Alte Nationalgalerie, we created a virtual art experience that leads into one of the artist's emblematic paintings, The Granite Dish in the Berlin Lustgarten. Inside the painting, users can move freely, discover the art piece from within and take pictures to share with the community.


There is one twist: the project has to be accessible in VR through the browser, using the Oculus Quest 2 and WebXR.

As with all our real-time virtual experiences, we used our homemade WebGL microframework nanogl and pushed it to the next level with WebXR capabilities. The frontend is handled with Vue.js and all app states with xstate. Photos are stored and presented in a gallery powered by the Serverless Framework, deployed on AWS.


The museum

Our vision for the museum was to create large, dreamy rooms with beautiful light and color. The museum relies heavily on lightmaps, baked using Houdini. Houdini's non-destructive workflow generates lightmap UVs on the fly with good texel density scattering, which is great for quick iterations. And since it is node based, it enables other things, like cleaning up the scene, adding and placing interactive frames, generating the navmesh used in XR mode, coloring parts of the scene and more... all at once.

Gallery

Some elements, such as the frames and the vases, are rendered with full PBR and image-based lighting using nanogl-pbr.

To contribute to the dreamy feel, we added special effects and post-processing, like lens flares, chromatic aberration and real-time reflections on the floor.

Since not every mobile device or computer can afford all these effects, we implemented a dynamic quality adjustment based on runtime performance. The average FPS is sampled over a period of time and, if the frame rate is too low, the quality is lowered by removing one effect. This repeats until the frame rate is considered stable and acceptable.
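
A rough sketch of the idea looks like this (the effect list, the thresholds and the disable() call are illustrative, not our actual implementation):

// Hypothetical sketch of the dynamic quality loop
const TARGET_FPS = 50;          // illustrative threshold
const SAMPLE_DURATION = 3000;   // sampling window in ms

let frames = 0;
let windowStart = performance.now();

function onFrameRendered(effects) {
  frames++;
  const now = performance.now();
  if (now - windowStart < SAMPLE_DURATION) return;

  const averageFps = (frames * 1000) / (now - windowStart);
  if (averageFps < TARGET_FPS && effects.length > 0) {
    // frame rate too low: drop the most expensive remaining effect,
    // then start a new sampling window to re-evaluate
    effects.pop().disable();
  }
  frames = 0;
  windowStart = now;
}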

Painting

The painting, as well as elements not visible in it such as the museum or the trees, has been entirely reproduced in 3D by our 3D artists. It involved placing volumes, hand painting all the content, recreating missing parts from historical documents, modeling all the near characters and generating impostors for the far ones.

There are 16 characters in the scene (including the dog 🐶). Performance-wise, we couldn't afford two textures (normal, albedo) per character and complex shading. To reduce the texture and draw call count we used Simplygon to aggregate characters. Among other features, it can merge selected meshes, generate a new UV set and create new textures from the source ones, remapped to that new UV set.

By using this, we reduced the character and texture count to 4. From there, we baked the scene's global lighting in Blender and hand painted over each character again to enhance the visuals.

Special FX

The core piece of the painting is the granite bowl reflection. Since it is a flat surface on top of a pillar, using a cubemap alone resulted in a floating bowl and unconvincing reflections. We decided to take a hybrid approach: real-time reflections on the bottom, cubemap on the sides and top. Within a single shader, the vertex color is used to blend those two reflections according to the red channel.

// Granite reflection sample shader
vec3 viewDir = normalize( uCameraPosition - vWorldPosition );
vec3 worldReflect = normalize( reflect( -viewDir, vWorldNormal ) );

// clamped dot product between the view direction and the normal
float NoV = sdot( viewDir, vWorldNormal );

// Base cubemap reflection
vec3 specular = textureCube(tProbeCubemap, worldReflect).rgb; 

// Sample the reflection texture
vec2 rcoord = gl_FragCoord.xy / uViewport;
vec3 rcolor = texture2D(tReflectTex, rcoord).rgb;

// blend the planar reflection over the cubemap using the vertex color red channel
specular = mix(specular, rcolor, vertexColor().r);

// Apply specular smoothness
specular *= F_Schlick( NoV, vec3(uSpecular), uSmoothness );

FragColor.rgb = baseColor() * uDarken + specular;

The real-time reflection is done by mirroring the camera matrix across an arbitrary plane and rendering the scene from that point of view. To gain some fill-rate performance, the bottom of the bowl is used as a stencil mask so that only the pixels that matter are rendered.
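
As a rough sketch of the mirroring step, assuming gl-matrix and a horizontal reflection plane (the production code handles the plane more generally):

// Mirror the camera across the plane y = planeY; the resulting view matrix
// is then used to render the scene into tReflectTex.
const { mat4 } = require('gl-matrix');

function getReflectedView(viewMatrix, planeY) {
  // reflection across y = planeY: (x, y, z) -> (x, 2 * planeY - y, z)
  const reflection = mat4.fromValues(
    1,  0, 0, 0,
    0, -1, 0, 0,
    0,  0, 1, 0,
    0,  2 * planeY, 0, 1
  );
  const reflectedView = mat4.create();
  // points are reflected first, then transformed by the original view
  mat4.multiply(reflectedView, viewMatrix, reflection);
  // the reflection flips triangle winding, so the cull face has to be
  // inverted while rendering this pass
  return reflectedView;
}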

We wanted to make the scene feel more alive, so we animated the fountain with a flow map generated in Substance Painter: the base color texture is scrolled and blended according to this flow map.

// vec2 uFlowMapConfig
//   x : time
//   y : speed 

// sample and unpack the flow direction from [0,1] to [-1,1]
vec3 flowDir = texture2D(uFlowMap, baseColor_texCoord()).rgb;
flowDir = (flowDir * vec3(2.0)) - vec3(1.0);
flowDir *= uFlowMapConfig.y;

// two scroll phases, offset by half a period
float phase0 = fract(uFlowMapConfig.x * 0.2 + 0.5);
float phase1 = fract(uFlowMapConfig.x * 0.2 + 1.0);

// sample the base color twice, displaced along the flow direction
vec4 tex0 = texture2D(tex_basecolor, baseColor_texCoord() + flowDir.xy * phase0);
vec4 tex1 = texture2D(tex_basecolor, baseColor_texCoord() + flowDir.xy * phase1);

// cross-fade between the two phases to hide the reset of each scroll
float flowLerp = abs((0.5 - phase0) / 0.5);
vec4 FragColor = mix(tex0, tex1, flowLerp);

We added wind by rotating the sky dome and making the trees move. Since the trees are impostors, and therefore simple planes, we can displace them locally along the tangent of the face to make them sway.

The trunk needs to stay anchored while the top moves. To handle this and add variety between trees, we added a subdivision on the trunk and used vertex color to modulate the displacement factor. The Z position of each tree's face is used to offset the displacement phase.

// vertex color red channel stores how much this vertex can move (0 at the trunk)
float height = aColor0.r;
// the face tangent gives the sway direction for the impostor plane
vec3 tangent = cross(vec3(0.0, 1.0, 0.0), aNormal);
// the Z position offsets the phase so the trees do not sway in sync
vertex.position += sin(uTime + vertex.position.z * 0.1) * tangent * height * 0.4;

WebXR

WebXR is the new standard for handling virtual reality within a browser (it was previously called WebVR, and became WebXR to cover augmented reality and other realities as well). If you are not familiar with the concept: to present a VR frame to the viewer, the scene has to be rendered twice, once for each eye. The WebXR API returns the rendering information for each eye (transform, projection, viewport) and, if available, the controller/hand positions.
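
For reference, a stripped-down WebXR frame callback looks like this (xrRefSpace and scene.render stand in for our own reference space and renderer):

function onXRFrame(time, frame) {
  const session = frame.session;
  session.requestAnimationFrame(onXRFrame);

  // the viewer pose contains one view per eye in VR
  const pose = frame.getViewerPose(xrRefSpace);
  if (!pose) return;

  const glLayer = session.renderState.baseLayer;
  gl.bindFramebuffer(gl.FRAMEBUFFER, glLayer.framebuffer);

  for (const view of pose.views) {
    // each view provides its own viewport, transform and projection
    const vp = glLayer.getViewport(view);
    gl.viewport(vp.x, vp.y, vp.width, vp.height);
    scene.render(view.transform.inverse.matrix, view.projectionMatrix);
  }
}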

Inside an immersive session there is no HTML/CSS presentation: everything happens inside the canvas. The experience had to work on mobile, desktop and in XR at any time, so we rebuilt all the UI with a custom WebGL toolset, including text rendering.

We use Unity a lot in our workflow, and text rendering relies on SDF textures generated with TextMeshPro, along with a JSON file containing the typography specifications. Unity makes it very easy to develop custom editor tools, so we wrote an exporter that generates the JSON and exports the texture to the desired folder. For instance, here is one of the fonts used on the site, Graphik 400.

And below, a sample of the JSON for the letter "A".

{
  "characters": [
    …
    {
      "unicode": 65,
      "glyphIndex": 55,
      "scale": 1.0,
      "glyph": {
        "index": 55,
        "metrics": {
          "width": 65.625,
          "height": 77.9375,
          "horizontalBearingX": 3.921875,
          "horizontalBearingY": 77.9375,
          "horizontalAdvance": 73.46875
        },
        "glyphRect": {
          "x": 702,
          "y": 113,
          "width": 67,
          "height": 78
        },
        "scale": 1.0
      }
    }
    …
  ],
  "faceInfo": {
    "lineHeight": 119.9000015258789,
    "scale": 1.0,
    "pointSize": 109,
    "atlasPadding": 4,
    "textureWidth": 1024,
    "textureHeight": 1024
  }
}
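
To give an idea of how this data is consumed, here is a hypothetical sketch, not our actual text toolset, that builds one glyph quad from these values: positions come from the metrics (bearings and advance) and UVs from glyphRect divided by the atlas size.

function buildGlyphQuad(character, faceInfo, penX, penY) {
  const metrics = character.glyph.metrics;
  const rect = character.glyph.glyphRect;

  // quad corners around the baseline pen position
  const x0 = penX + metrics.horizontalBearingX;
  const y1 = penY + metrics.horizontalBearingY;
  const x1 = x0 + metrics.width;
  const y0 = y1 - metrics.height;

  // atlas UVs (the V axis may need flipping depending on the texture origin)
  const u0 = rect.x / faceInfo.textureWidth;
  const v0 = rect.y / faceInfo.textureHeight;
  const u1 = (rect.x + rect.width) / faceInfo.textureWidth;
  const v1 = (rect.y + rect.height) / faceInfo.textureHeight;

  return {
    positions: [x0, y0,  x1, y0,  x1, y1,  x0, y1],
    uvs:       [u0, v0,  u1, v0,  u1, v1,  u0, v1],
    advance: metrics.horizontalAdvance, // pen offset for the next glyph
  };
}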

Every renderable object in the project, not only the UI, follows the same design pattern so it can be rendered easily from any camera at any time, which makes the graphics components reusable.
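
In practice this boils down to a very small contract, something along these lines (illustrative, not the actual interface):

// anything exposing render(camera) can be drawn by the left eye, the right
// eye, the reflection pass or the regular mobile/desktop camera
class Button {
  render(camera) {
    // compute the MVP from the camera and this object's world matrix,
    // bind the program and draw
  }
}

function renderAll(camera, renderables) {
  for (const renderable of renderables) renderable.render(camera);
}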

Controllers are handled with a separate standard, the Gamepad API. It is a bare-bones API, and we needed to access controller information from anywhere within the app, so we developed a middleware to easily access controller events and positions, and adapted the controller models supplied by Oculus.
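
As a rough sketch of such a middleware (the event names and structure are illustrative):

// Wraps the XR input sources and their Gamepad API data so the rest of the
// app can listen to events instead of polling buttons every frame.
class XRControllers {
  constructor(session, refSpace) {
    this.session = session;
    this.refSpace = refSpace;
    this.listeners = {};
    this.pressed = new Map();
  }

  on(event, callback) {
    (this.listeners[event] || (this.listeners[event] = [])).push(callback);
  }

  // called once per XR frame
  update(frame) {
    for (const source of this.session.inputSources) {
      if (!source.gamepad || !source.gripSpace) continue;

      const pose = frame.getPose(source.gripSpace, this.refSpace);
      const triggerDown = source.gamepad.buttons[0].pressed;
      const wasDown = this.pressed.get(source.handedness) || false;

      if (triggerDown && !wasDown) {
        for (const cb of this.listeners.trigger || []) cb(source.handedness, pose);
      }
      this.pressed.set(source.handedness, triggerDown);
    }
  }
}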

WebXR Optimisations

Rendering the scene twice or even four times (once for each eye, plus the real-time reflection) can be heavy. It is important to minimize WebGL state changes between renders and, when possible, to update only the MVP uniform between the two eye renders. This is where nanogl-state shines: the module stacks GL configurations and applies only the changes needed between renders.

Here is a sample of how nanogl-state is used:

var GLState = require( 'nanogl-state' );
 
// create the state machine from a gl context
// glState will be kept in sync with the actual gl state
var glState = new GLState( gl );
 
 
// create a config to change glCullFace for both opaques and blended
var cfgG = GLState.config()
  .cullFace( gl.FRONT );
 
 
// create a config to enable DEPTH_TEST and CULL_FACE
var cfgA = GLState.config()
  .enableDepthTest()
  .enableCullface();
 
// create another config for alpha blending and depth test
 
var cfgB = GLState.config()
  .enableBlend()
  .blendFunc( gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA )
  .enableDepthTest()
  .enableCullface()
  .depthMask( false );
 
 
 
// somewhere in your render loop
function render(){
  // context in default state
 
  // add the cullFace cfg to the stack
  glState.push( cfgG );
 
  // add cfgA to the stack, apply the complete stack to the context, then pop cfgA
  glState.now( cfgA );
  renderOpaqueStuffs();
 
 
  // apply cfgB; enabling DEPTH_TEST and CULL_FACE is skipped since they are already enabled
  glState.now( cfgB );
  renderBlendableStuffs();
 
  // pop the cullFace config (cfgG)
  glState.pop();
 
  // sync the glState and the actual gl context
  glState.apply();
 
  // here we are back to the default state
}

To optimize fill-rate performance, it can be handy to use the optional WebXR feature fixed foveated rendering. It lowers the resolution of the frame in the peripheral vision, which reduces the pixel count without a noticeable downside, since the viewer is generally looking straight ahead.

Finally, maintaining a solid 90fps in WebGL and in XR is a hard task, and motion sickness generally comes from an unstable frame rate. Oculus supports setting the target frame rate at runtime, so we set it to 72.

navigator.xr.requestSession('immersive-vr', {
  requiredFeatures: ['local-floor'],
  optionalFeatures: [
    'low-fixed-foveation-level',
  ]
})
  .then(this.sessionStarted);
 
//...
// SESSION STARTED
if(session.updateTargetFrameRate)
  session.updateTargetFrameRate(72);

This project has been really fun to work on, from both a creative and a technical point of view. We hope you enjoyed the read and, if you haven't seen the website yet, check it out here.