Many Head Mounted Displays (HMDs) currently used for research are tethered to a PC. These HMDs rely on the computer's processing and graphical power, and they allow the researcher to view what the participant is seeing on the computer display. Additionally, any research data is stored directly on the PC for easy access. For research done in a relatively stationary setting (e.g. a single lab space), these devices are a good solution. They also require the fewest steps for the participant, because the researcher uses the PC to start and display whatever the participant is supposed to experience. 

However, since these devices are not portable, they cannot easily be taken to other locations, and they require a dedicated setup space. Additionally, due to the relatively high cost of the PC they are tethered to, research and education with multiple concurrent participants is not always feasible. 

One solution to these problems is to use standalone HMDs. These devices do not require a PC to run experiments or educational material, and are therefore more portable and generally more affordable. Instead of relying on a powerful PC, they do all processing on an onboard chip. This limits their graphical fidelity, but that should not be a major roadblock for most research. 

Possible HMD Devices

One of the most popular devices in this category is the Meta Quest 2 (formerly the Oculus Quest 2). It is affordable (€900 for the business edition, which is required for research use) and powerful enough for most research and educational purposes. Hand positions can be tracked both with and without the controllers. 

Another strong device is the Pico Neo 3 (€600), which is also available with Tobii eye tracking (Pico Neo Eye 3, €750). The Eye version also supports foveated rendering, which can enhance the perceived graphical fidelity of the device. The Pico Neo does not offer hand tracking, and its controller tracking is somewhat less precise than that of the Quest 2. 


Out of the box, these standalone devices do not allow the researcher to see what the participant is seeing. One simple solution is to cast the HMD's view to a mobile device or a Google Chromecast, which is easy to do with the Quest. With casting, however, you can only watch what the participant is seeing; you cannot influence the experience in any way. 

Another option is to build a networked version of the program you wish to run, and to use a mobile device such as a tablet to log into the same session that the HMD is in. This allows you not only to see what the participant is seeing, but also to view the participant from other points of view, or to influence the VR environment live. The rest of this article describes some of the steps needed to do this. 
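For example, the tablet build could mark itself as an observer when it joins the session, so the HMD clients know not to spawn an avatar for it. A minimal sketch, assuming a custom player property named "observer" (the property name and the script itself are illustrative, not part of the Photon Tutorial):

```csharp
using ExitGames.Client.Photon;
using Photon.Pun;
using UnityEngine;

// Illustrative sketch: the tablet client tags itself as an observer via a
// custom player property, so other clients can skip spawning an avatar for it.
public class ObserverTag : MonoBehaviourPunCallbacks
{
    [SerializeField] private bool isObserver; // enable this in the tablet build

    public override void OnJoinedRoom()
    {
        var props = new Hashtable { { "observer", isObserver } };
        PhotonNetwork.LocalPlayer.SetCustomProperties(props);
    }
}
```

Other clients can then check this property on each `Player` before instantiating a networked avatar for them.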

Viewing Device

In this example we’re going to assume you will use a modern Android-based tablet as your viewing device, such as a Samsung Galaxy Tab A7. These devices have a good battery life, a large screen, and are very affordable. 


See Multiplayer for additional information, and the very accessible Photon Tutorial for a step-by-step introduction. 

Matching specific participants and viewers

As an example: you run a large-scale study in which two participants interact with each other while a researcher observes the experiment using a tablet, and multiple groups of people should be able to participate in your study at the same time.

In this case you can place each group in a room with a specified name using:

PhotonNetwork.JoinOrCreateRoom(roomName, roomOptions, typedLobby, expectedUsers);

Use this call instead of the two calls given in the Photon Tutorial, i.e. instead of:



public override void OnJoinRandomFailed(short returnCode, string message)
{
    Debug.Log("PUN Basics Tutorial/Launcher:OnJoinRandomFailed() was called by PUN. No random room available, so we create one.\nCalling: PhotonNetwork.CreateRoom");

    // #Critical: we failed to join a random room, maybe none exists or they are all full. No worries, we create a new room.
    PhotonNetwork.CreateRoom(null, new RoomOptions());
}

With JoinOrCreateRoom, if no room with the given name exists yet, it will be created; if a room with that name already exists, the player will attempt to join it. 

The variable ‘roomName’ should be set through some UI on both the tablet and each of the HMDs. One good way of doing this is to use a 4-digit PIN, which you generate externally and provide to each member of the group.
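As a sketch of how this could look (the class and field names here are illustrative assumptions, not taken from the Photon Tutorial), a launcher script might read the PIN from a UI input field and use it as the room name:

```csharp
using Photon.Pun;
using Photon.Realtime;
using UnityEngine;
using UnityEngine.UI;

// Illustrative sketch: joins (or creates) a room whose name is the 4-digit
// PIN entered in a UI field, so HMDs and tablet end up in the same session.
public class PinLauncher : MonoBehaviourPunCallbacks
{
    [SerializeField] private InputField pinInput; // UI field where the PIN is typed

    public void Connect()
    {
        PhotonNetwork.ConnectUsingSettings();
    }

    public override void OnConnectedToMaster()
    {
        string roomName = pinInput.text; // e.g. "4821", generated externally
        RoomOptions options = new RoomOptions { MaxPlayers = 3 }; // 2 HMDs + 1 tablet
        PhotonNetwork.JoinOrCreateRoom(roomName, options, TypedLobby.Default);
    }
}
```

All devices of one group enter the same PIN and therefore land in the same room, while other groups with different PINs remain isolated.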



Photon works with a concept of ‘ownership’. Networked items (usually) need a PhotonView component attached, and should be owned by the player who controls the object (e.g. its position and rotation). 

Some items are not owned by a specific player, but are considered ‘room objects’. These are either already present in the Scene during development, or are spawned using the call: 

PhotonNetwork.InstantiateRoomObject(name, position, rotation); 

Difficulty arises when a room object should be manipulated by one of the players, for instance when the room object is also an XR Interactable (see this YouTube tutorial, and this documentation for explanations). Once a player grabs the object, ownership should be transferred to them before the new position and rotation can be sent over the network.
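One common pattern, sketched here under the assumption that the PhotonView's "Ownership Transfer" option is set to Takeover (the component and method names from PUN and the XR Interaction Toolkit are real; the wiring is an illustrative assumption), is to request ownership the moment the interactable is grabbed:

```csharp
using Photon.Pun;
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Illustrative sketch: take over ownership of a room object when a player
// grabs it, so their position/rotation updates are accepted by the network.
// Requires the PhotonView's "Ownership Transfer" option to be Takeover.
[RequireComponent(typeof(PhotonView), typeof(XRGrabInteractable))]
public class GrabOwnership : MonoBehaviour
{
    private PhotonView view;

    private void Awake()
    {
        view = GetComponent<PhotonView>();
        GetComponent<XRGrabInteractable>().selectEntered.AddListener(OnGrab);
    }

    private void OnGrab(SelectEnterEventArgs args)
    {
        if (!view.IsMine)
            view.RequestOwnership(); // with Takeover, this client becomes the owner
    }
}
```

After the transfer, a PhotonTransformView (or similar synchronization script) on the same object will send the grabbing player's updates to the other clients.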


Getting data from HMDs

