
Project Description: Ellis Island Virtual Reality Experience


From September 2014 to June 2016 I was the Project Lead and Lead Programmer on a Virtual Reality project at the Bergen County Academies (my high school) in collaboration with the National Park Service’s Ellis Island / Statue of Liberty National Monument. The idea of the project is to digitally recreate Ellis Island as it was between 1918 and 1924, to be viewed in a Virtual Reality headset. At the time of writing, the project supports the Oculus Rift, Oculus DK2, and HTC Vive in both seated and roomscale paradigms. The project is still ongoing, and I passed it on to current students when I graduated in 2016.

In this writeup I detail some of the systems that I developed for the project. Please note that the work described here is still in development (beta) and is subject to change.

Immigrant Interaction System

One of the central aims of the project is to teach important historical concepts in an engaging and immersive way. To achieve this, I designed a fully-scriptable immigrant interaction system. The system is designed around a custom XML schema that allows writers to construct complex conversation trees with the immigrants. This way, the player can talk to immigrants of the period and hear their unique story. In the video below, you can see one example of this system.


On the top half of the immigrant interaction panel, we added a historically-accurate immigrant inspection card, detailing the name and background of this particular immigrant. On the bottom half, the player may interact with the immigrant to hear their story.
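Behind this panel, each conversation is defined in an XML file. The project’s actual schema isn’t reproduced in this writeup, but the sketch below shows roughly how a branching exchange could be laid out and loaded in Unity; the element names, attribute names, and the ConversationLoader class are illustrative placeholders, not the real schema.

```csharp
// Hypothetical sketch of loading a branching conversation from XML.
// The element and attribute names here are illustrative, not the
// project's actual schema.
using System.Xml;
using UnityEngine;

public class ConversationLoader : MonoBehaviour
{
    // Example of what a conversation file might look like.
    const string exampleXml = @"
<conversation immigrant='Maria Rossi'>
  <node id='greeting' text='Hello, I arrived from Naples last week.'>
    <choice text='Why did you leave Italy?' next='reason'/>
    <choice text='Goodbye.' next='end'/>
  </node>
  <node id='reason' text='My family lost the farm after the war.'/>
  <node id='end' text='Safe travels.'/>
</conversation>";

    void Start()
    {
        var doc = new XmlDocument();
        doc.LoadXml(exampleXml);

        // Walk every conversation node and log its text and choices.
        foreach (XmlNode node in doc.SelectNodes("/conversation/node"))
        {
            Debug.Log($"[{node.Attributes["id"].Value}] {node.Attributes["text"].Value}");
            foreach (XmlNode choice in node.SelectNodes("choice"))
                Debug.Log($"  -> {choice.Attributes["text"].Value} (next: {choice.Attributes["next"].Value})");
        }
    }
}
```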

Dynamic Immigrant Customization and Headmesh Morphing

The team that I worked with was quite small, so one of the challenges we ran into was large-scale asset creation. Put simply, we had neither the time nor the resources to realize our vision of dozens of individually modelled and rigged immigrant models. With this in mind, I set out to find ways to reduce the art team’s workload. The system we settled on was a dynamic immigrant headmesh editor, which lets us create a single high-quality head basemesh that can be edited directly in the Unity editor, as seen below.
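Under the hood, an editor like this maps naturally onto Unity’s blend-shape API: each facial feature is a morph target on the basemesh, and the editor just drives the weights. A minimal sketch of that idea follows; the morph names and the HeadmeshMorpher class are hypothetical examples rather than the project’s actual components.

```csharp
// Minimal sketch of driving facial morphs through Unity's blend-shape API.
// The morph names ("NoseWidth", "JawHeight") are hypothetical examples.
using UnityEngine;

public class HeadmeshMorpher : MonoBehaviour
{
    [SerializeField] SkinnedMeshRenderer headRenderer;

    // Set a single morph by name; Unity expects weights in the 0-100 range.
    public void SetMorph(string morphName, float weight)
    {
        int index = headRenderer.sharedMesh.GetBlendShapeIndex(morphName);
        if (index < 0)
        {
            Debug.LogWarning($"No blend shape named {morphName} on {headRenderer.name}");
            return;
        }
        headRenderer.SetBlendShapeWeight(index, Mathf.Clamp(weight, 0f, 100f));
    }
}
```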


After the artists decide on the physical features of a given immigrant, they can save the headmesh data to a JSON file. In addition to facial features as shown above, we can also assign peripherals to each immigrant (such as hats or scarves):
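As a rough illustration of what that per-immigrant data could look like, here is a minimal sketch of a serializable appearance description loaded with Unity’s JsonUtility. The field names (morphs, peripherals) and classes are hypothetical, not the project’s actual file format.

```csharp
// Hypothetical shape of the per-immigrant JSON data; the project's actual
// file format may differ.
// Example JSON:
// {"morphs":[{"name":"NoseWidth","weight":60}],"peripherals":["FlatCap"]}
using System;
using UnityEngine;

[Serializable]
public class MorphSetting
{
    public string name;   // e.g. "NoseWidth"
    public float weight;  // 0-100, matching blend-shape weights
}

[Serializable]
public class ImmigrantAppearance
{
    public MorphSetting[] morphs;
    public string[] peripherals; // e.g. "FlatCap", "WoolScarf"
}

public static class ImmigrantAppearanceIO
{
    // Parse an appearance description from JSON text (read from disk or a
    // TextAsset) using Unity's built-in JsonUtility.
    public static ImmigrantAppearance FromJson(string json)
    {
        return JsonUtility.FromJson<ImmigrantAppearance>(json);
    }
}
```

Each peripheral name could then map to a prefab that gets attached to the head at load time, which is what makes adding new accessories a matter of editing the JSON.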


The system has proven to be quite versatile and powerful. It is also scalable: new facial morphs can easily be added in Maya or Blender, and loading extra peripherals is a matter of editing a JSON file.

Intelligent Pathfinding and Obstacle Avoidance

I also added a dynamic obstacle avoidance system to the immigrants’ movement. This way, immigrants will try not to walk into each other in a way that breaks immersion. For static obstacle avoidance, we use Unity’s standard baked navigation meshes. The system is illustrated below (notice how the immigrants stray off the path to their target in order to avoid each other).
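For reference, here is a minimal sketch of how such an agent can be set up on top of Unity’s baked navigation meshes using the engine’s built-in local avoidance. The project layers its own avoidance logic on top of this; the component and the values below are illustrative only.

```csharp
// Minimal sketch of moving an immigrant along a baked NavMesh with
// Unity's built-in local avoidance enabled. Values are illustrative.
using UnityEngine;
using UnityEngine.AI;

[RequireComponent(typeof(NavMeshAgent))]
public class ImmigrantWalker : MonoBehaviour
{
    [SerializeField] Transform target;  // where this immigrant is headed
    NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        agent.speed = 1.2f;              // a calm walking pace
        agent.radius = 0.4f;             // personal space used for avoidance
        agent.obstacleAvoidanceType = ObstacleAvoidanceType.HighQualityObstacleAvoidance;
        agent.avoidancePriority = Random.Range(30, 70); // stagger priorities so agents yield to each other
        agent.SetDestination(target.position);
    }
}
```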

Custom Control Schemes

We wanted the Ellis Island experience to be accessible to as many people as possible. We found in our closed beta tests that many players (especially older users who may not have used a game controller before) had trouble using the Xbox controller bundled with the Oculus Rift. Instead, we chose to use a Dualshock 4 controller. The Dualshock 4 has two features that make it particularly accessible: first, it has a large button (the touchpad) in the middle of the controller, which is very easy for the user to find while in VR (that is, when they can’t see where the buttons on the controller are). Second, the Dualshock 4 has a built-in gyroscope, which we can leverage in our control scheme.

The control scheme we settled on for seated VR combines these two elements. To walk forward (in the direction the player is facing), the player simply presses down on the DS4 touchpad. To modulate their speed, the user tilts the controller up and down. To achieve this, I developed a custom communication layer between the Dualshock and Unity (it turns out that the touchpad and gyroscope are not accessible through Unity’s default input system). This input system is demonstrated below.
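As a rough sketch of how the touchpad press and gyroscope tilt could drive movement, assuming the custom communication layer already reports a touchpad state and a pitch angle: the Ds4State struct and SeatedLocomotion class below are hypothetical stand-ins, not the project’s actual code.

```csharp
// Sketch of how touchpad press + controller tilt could drive locomotion.
// Ds4State is a hypothetical stand-in for whatever the custom DS4
// communication layer reports; it is not Unity's built-in input API.
using UnityEngine;

public struct Ds4State
{
    public bool touchpadPressed;  // is the big center button held down?
    public float pitchDegrees;    // controller tilt up/down from level
}

public class SeatedLocomotion : MonoBehaviour
{
    [SerializeField] Transform hmd;          // the headset camera transform
    [SerializeField] float minSpeed = 0.5f;  // m/s when the controller is level
    [SerializeField] float maxSpeed = 2.0f;  // m/s when tilted fully up
    [SerializeField] float maxTilt = 45f;    // tilt (degrees) that maps to full speed

    public void Step(Ds4State pad, float deltaTime)
    {
        if (!pad.touchpadPressed) return;

        // Walk in the direction the player is facing, ignoring vertical look.
        Vector3 forward = Vector3.ProjectOnPlane(hmd.forward, Vector3.up).normalized;

        // Tilting the controller up scales speed from minSpeed to maxSpeed.
        float t = Mathf.Clamp01(pad.pitchDegrees / maxTilt);
        float speed = Mathf.Lerp(minSpeed, maxSpeed, t);

        transform.position += forward * speed * deltaTime;
    }
}
```

Calling Step each frame from whatever script polls the controller would then give smooth, tilt-controlled walking.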

In roomscale VR (such as with the HTC Vive), we chose to use a locomotion and interaction solution similar to the one seen in Valve’s The Lab. Roomscale VR is much more natural and intuitive than seated VR (the user can simply walk around as they would in real life), so we did not need an alternative control scheme for broader audiences. I have documented and open-sourced my implementation of this locomotion mechanic: you can find more information in this writeup. You can see an example of the mechanic in the video below.

Tutorial Scene

For seated VR (with the Dualshock-based control scheme described above), I additionally developed a tutorial sequence to introduce the user to our locomotion system. The tutorial starts outside of VR, on the computer monitor. This way, the player can familiarize themselves with the controller before moving into VR (where they can’t see the controller). Halfway through the tutorial, the player is prompted to put on the VR headset. The tutorial sequence is shown in part below.
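A rough sketch of how such a two-phase tutorial could be sequenced with a Unity coroutine follows; the step text, the HeadsetIsOn check, and the helper methods are hypothetical placeholders rather than the project’s actual tutorial code.

```csharp
// Rough sketch of a tutorial that starts on the monitor and moves into VR
// partway through. The helpers below stand in for the project's actual UI
// and headset-presence checks.
using System.Collections;
using UnityEngine;

public class TutorialSequence : MonoBehaviour
{
    IEnumerator Start()
    {
        // Phase 1: on the monitor, outside VR, so the player can look at
        // the controller in their hands while learning the buttons.
        yield return ShowStepOnMonitor("Hold the controller with both hands.");
        yield return ShowStepOnMonitor("Press the large center touchpad to walk.");

        // Phase 2: prompt the player to put the headset on, then wait.
        yield return ShowStepOnMonitor("Now put on the VR headset.");
        yield return new WaitUntil(HeadsetIsOn);

        // Phase 3: continue the remaining steps inside VR.
        yield return ShowStepInVr("Tilt the controller up to walk faster.");
    }

    IEnumerator ShowStepOnMonitor(string text) { Debug.Log(text); yield return new WaitForSeconds(3f); }
    IEnumerator ShowStepInVr(string text)      { Debug.Log(text); yield return new WaitForSeconds(3f); }
    bool HeadsetIsOn() { /* placeholder: e.g. query the headset's proximity sensor */ return true; }
}
```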

Localization

We also added the ability to localize the tutorial sequence into multiple languages. This has been quite useful when presenting the project to visitors from other countries at my high school. Below is an example of this localization, in Japanese.
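A minimal sketch of the string-table approach such localization could use is below; the keys, the TutorialStrings class, and the example Japanese line are illustrative only.

```csharp
// Minimal sketch of string-table localization for the tutorial text.
// The keys and the Japanese string here are illustrative examples only.
using System.Collections.Generic;

public static class TutorialStrings
{
    // language code -> (string key -> localized text)
    static readonly Dictionary<string, Dictionary<string, string>> tables =
        new Dictionary<string, Dictionary<string, string>>
        {
            { "en", new Dictionary<string, string> {
                { "walk", "Press the touchpad to walk forward." } } },
            { "ja", new Dictionary<string, string> {
                { "walk", "タッチパッドを押すと前に歩きます。" } } }, // "Press the touchpad to walk forward."
        };

    // Look up a string, falling back to English if the language or key is missing.
    public static string Get(string language, string key)
    {
        Dictionary<string, string> table;
        string text;
        if (tables.TryGetValue(language, out table) && table.TryGetValue(key, out text))
            return text;
        return tables["en"][key];
    }
}
```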

