Migrations

Aspect 1 – Demonstrating the impossible is possible – Guided – Verifying abilities

Our user tests established the validity of checking and exploring a participant's visual abilities in the following order:

  • Monocular abilities
  • Biocular abilities
  • Binocular abilities (fusion without depth perception)
  • Binocular abilities (fusion with depth perception)
  • Eye straightening

Now, let's explore these a little more deeply.

Monocular abilities

Without the ability to see with each eye individually, the participant will not gain from further experiences, nor can they hope to gain stereo vision. Indeed, if they have not already done so, they ought to seek professional guidance to be sure they are not experiencing a more worrying underlying condition.

In establishing a person's monocular abilities, there are several things we can check for, given only a basic VR setup.

  • Is the eye able to recognise an active light source (and at what luminance does this become apparent)?
  • Does the eye have the ability to see all colours (i.e. is it free of colour blindness)?
  • What is the minimum object size the eye is able to resolve?
  • What is the minimum contrast at which the eye can still resolve that minimum object size?
  • Without eye tracking hardware integrated into the headset, we cannot do reliable motility tests (checking the ability of the eye to follow the movement of an object)1.

These steps build up sequentially and depend on one another. We have not yet implemented any monocular calibration scenes, so these checks will need at least partial implementation in iteration 2.
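
To illustrate the kind of sequential, threshold-seeking measurement involved, here is a minimal sketch (in Python, with invented names – not our actual implementation) of a simple staircase procedure for estimating the luminance at which a light source first becomes apparent to one eye:

```python
def staircase_threshold(respond, start=1.0, floor=0.0, step=0.5, reversals=6):
    """Estimate a detection threshold with a simple up/down staircase.

    `respond(level)` is a hypothetical callback returning True when the
    participant reports seeing the stimulus presented at `level`
    (here: luminance, normalised to [0, 1])."""
    level, direction, turns = start, -1, 0
    while turns < reversals:
        seen = respond(level)
        new_direction = -1 if seen else +1   # dim if seen, brighten if not
        if new_direction != direction:       # a reversal: halve the step size
            turns += 1
            step /= 2
        direction = new_direction
        level = max(floor, level + direction * step)
    return level  # approximate detection threshold

# Simulated participant whose true threshold is a luminance of 0.2:
print(staircase_threshold(lambda lvl: lvl >= 0.2))  # converges near 0.2
```

The same pattern generalises to the size and contrast checks: each converges on a boundary value which then feeds into the next step.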

Biocular abilities

In this case we are attempting either to verify that the participant can see with both eyes simultaneously, or to find a way for them to do so. We have found the most promising way to reach this baseline is to show each eye a different object, with no perceived spatial conflict, no visual confusion in the background, and with control over the relative luminance delivered to each eye.

As such, we are attempting to minimise the impact of “dominance cells” in the mind (particularly by changing the relative “volume” of the luminance received by the brain from each eye) whilst avoiding the triggering of “conflict cells” at various levels of abstraction: we stimulate different areas of the retina by changing relative position, avoid overlapping stimulation via background visual noise, and encourage the brain to classify what each eye is seeing as fundamentally different objects (in shape).

This is effectively identical to our existing Binocular Suppression calibration scene.
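
To make the luminance idea concrete, here is a rough sketch (hypothetical names, simulated participant – not the code from that scene) of how one might search for the luminance ratio at which suppression releases and both objects become visible:

```python
def balance_luminance(sees_both, ratio=1.0, step=0.4, min_step=0.01):
    """Search for the luminance boost (weak eye relative to strong eye)
    at which the participant first reports seeing both objects at once.

    `sees_both(ratio)` is a hypothetical callback: True when neither
    eye's image is suppressed at this ratio."""
    while step > min_step:
        if sees_both(ratio):
            ratio -= step   # both visible: try easing the boost
        else:
            ratio += step   # one image suppressed: boost the weak eye more
        step *= 0.5
    return ratio

# Simulated participant who needs the weak eye boosted roughly 1.6x:
print(balance_luminance(lambda r: r >= 1.6))  # ~1.59
```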

The next step for Aspect 1, however, is for the person guiding the experience to be able to alter the experience and ask more questions – at the very least, what happens when we place the two test images at the same virtual location? Does this visual conflict then change the luminance ratio necessary to see both, or even make binocular vision impossible?

Binocular abilities (fusion without vergence/depth perception)

In this case we are attempting to discover, or build, the ability of the participant to see “the same” object with both eyes simultaneously. In particular, we focus on the simplest case, in which the fixation object (the object to look at) is projected onto the same location of the participant’s retinas while their eyes are as near to rest as possible.

To help the participant, we minimise conflict potential in their visual environment, provide a high level of contrast, and carry over the luminance levels established during the biocular stage.

In particular, to allow the fixation object to appear in the same position, we need to give the participant control over finding the merge point: the point at which the individual (identical) objects appearing in each eye appear to be in the same location (which also compensates for Anomalous Retinal Correspondence).

We have found that giving the objects different colours (colours which do not cause problems with colour blindness) helps both in differentiating the objects enough to understand what is happening, and in identifying overlap (the overlapping image will appear to shimmer with both colours – which will not happen if suppression is triggered at the moment the images appear to be in “conflict”).

This is similar to our existing Misalignment Calibration, which we will modify to begin in biocular starting configurations. We will also ask whether the participant has an eye which falls “inwards” or “outwards” – this is something they generally know – and modify the misalignment/fixing algorithm to alter the fixation eye depending on whether the head is moving left or right (i.e. doing away with the need to explicitly detect which eye is strabismic).
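
As an illustration of what the merge point amounts to in data terms – a hedged sketch with invented names, not the EyeSkills code – consider:

```python
from dataclasses import dataclass

@dataclass
class MergePoint:
    """Per-eye angular offsets (degrees) at which identical fixation
    objects appear, to the participant, to occupy the same location."""
    yaw: float = 0.0    # horizontal offset applied to the strabismic eye
    pitch: float = 0.0  # vertical offset

def nudge(mp: MergePoint, direction: str, step: float = 0.25) -> MergePoint:
    """Apply one participant-controlled adjustment step."""
    if direction == "left":
        mp.yaw -= step
    elif direction == "right":
        mp.yaw += step
    elif direction == "up":
        mp.pitch += step
    elif direction == "down":
        mp.pitch -= step
    return mp

# The participant nudges until the two coloured objects shimmer as one:
mp = MergePoint()
for d in ["right", "right", "up"]:
    nudge(mp, d)
print(mp)  # MergePoint(yaw=0.5, pitch=0.25)
```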

Binocular abilities (with vergence/depth perception)

There are several cognitive tricks humans (and other animals) use to perceive depth ordering (such as object occlusion, reflections, shadows, relative size…), but in terms of raw depth perception there appear to be only two mechanisms: accommodation and vergence.

Accommodation is the re-shaping of the eye’s lens to increase or decrease its focal power – which is irrelevant in a VR environment, where the display being perceived is at a fixed distance. The second technique, useful for objects within about 6m of the participant, is vergence. Vergence occurs when each eye tracks a common object and, in doing so, the eyes are no longer parallel to one another. The brain senses the tension in the eye muscles, deduces the relative angle of each eye towards the object, and thus the distance of the object (through a form of “learned triangulation”).

Only vergence is relevant to VR experiences. By altering the relative position of a common object in the view field of each individual eye, it is possible to create the illusion of different distances from the participant.
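
To make the geometry concrete: for an interpupillary distance ipd and an object straight ahead at distance d, each eye rotates inward by atan((ipd/2)/d). A small illustrative calculation (example values only):

```python
import math

def vergence_angle_deg(ipd_m: float, distance_m: float) -> float:
    """Total vergence angle (degrees) for an object straight ahead:
    each eye turns inward by atan((ipd/2) / distance)."""
    return 2 * math.degrees(math.atan((ipd_m / 2) / distance_m))

# With a typical 0.063 m IPD, the angle shrinks rapidly with distance:
for d in (0.3, 1.0, 6.0):
    print(f"{d:>4} m -> {vergence_angle_deg(0.063, d):.2f} degrees")
# ~12 degrees at 30 cm, ~3.6 at 1 m, ~0.6 at 6 m – which is why vergence
# carries little depth information beyond roughly 6 m.
```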

If, by this point, the participant has mastered the ability to see the same object with each eye simultaneously (particularly as the system is compensating for suppression and eye misalignment), then we may also be able to give the participant (and their brain) the experience of perceiving depth through vergence (relative to their strabismic eye positions).

This is similar to the depth perception calibration scene we have already built.

Strabismic eye straightening

Finally, can we get the participant, over time, to learn to pull their eye inward? If they have built up the ability to use both eyes simultaneously, and to fuse that input into a depth-aware mental image, then their brain will learn that suppression is not necessary. Unfortunately, once the VR headset comes off we are no longer able to alter the world to line up with each retina, so suppression will reoccur unless we can help them align and parallelise their eyes.

We may be able to build the participant’s ability, once they have reached the previous level of binocular ability, by gradually moving their perceived world towards a more central position, encouraging their eye to follow the fixation point to keep the object fused. In this manner, their brain can learn that “it is ok” to use both eyes simultaneously, and in a parallel alignment.

In a nutshell, therefore:

  • Exercise 1: Get the participant to line up fixation objects, taking into account all previous information about their abilities. Begin to parallelise the world views, and ask the participant to report when they lose fusion. Measure. Repeat with subtle variations (see the sketch after this list).
  • Exercise 2: Get the fixation objects to move around, quite randomly, and at increasing speeds. Ask them to report when they lose fixation.
  • Exercise 3: Combine the above exercises.
  • Output: track improvements over time.
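
A minimal sketch of Exercise 1’s core loop (invented names; `fusion_holds` stands in for the participant’s live feedback – this is illustrative, not our implementation):

```python
def parallelise(initial_offset_deg, fusion_holds, shrink=0.9,
                min_offset=0.05, max_trials=40):
    """Shrink the compensating offset step by step, taking smaller
    steps whenever the participant reports losing fusion.

    `fusion_holds(offset)` is a hypothetical callback: True while the
    participant still fuses at this offset. Returns the per-trial log."""
    offset, log = initial_offset_deg, []
    for _ in range(max_trials):
        if offset <= min_offset:
            break                        # world views are near parallel
        trial = offset * shrink          # move the world a little more central
        if fusion_holds(trial):
            offset = trial               # fusion held: keep the gain
        else:
            shrink = (shrink + 1) / 2    # fusion lost: gentler steps next time
        log.append(offset)
    return log

# Simulated participant who can currently hold fusion down to ~2 degrees:
log = parallelise(8.0, lambda o: o > 2.0)
print(f"{len(log)} trials, plateau at {log[-1]:.2f} degrees")
```

The logged plateau is exactly the “Measure” step: the offset at which improvement stalls in a given session, to be compared across sessions.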

NOTE: If the participant has reached this stage but shows no signs of improvement, there may be some theoretical risk of inducing “horror diplopia” (their brain might stop suppressing the strabismic eye, which would mean that outside the VR headset they constantly see double), so we would advise the participant to stop using the system – or, at the very least, to watch for such symptoms and then stop immediately.

Aspect 2 – Demonstrating the impossible is possible – Autodidactic – Sensitisation

Monocular abilities

In the spirit of falsification (see motivational aspect 2), we would likely present all the previously listed features to the participant simultaneously. The participant would move their head and/or give additional input to deactivate or tune out features, exploring their abilities.

Biocular abilities

We are interested in finding the point at which a combination of factors suddenly “switches off” the participant’s ability to see with both eyes simultaneously. Our current thinking is to let the participant explore multiple dimensions of ability (in fact, to explore “irritation factors”/Störfaktoren) simultaneously, by mapping them to the axes of head movement.

To initially falsify the experience, we might display a real-world scene containing a low-contrast object which appears subtly different in each eye. If the difference is not visible (only one eye perceives the object), we then allow the participant to progressively remove the complex background, and to reposition the two objects to avoid conflict, to get a feeling for how these features interact in their visual system.
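
One way to picture that head-movement mapping – a sketch with invented parameter names and axis assignments, not a description of our actual design:

```python
def head_to_factors(yaw_deg, pitch_deg, span_deg=30.0):
    """Map two axes of head movement onto two 'irritation factors'
    (Störfaktoren), each normalised to [0, 1]. As an example only:
    yaw scales background noise, pitch the inter-eye luminance ratio."""
    def normalise(angle):
        return max(0.0, min(1.0, (angle + span_deg) / (2 * span_deg)))
    return {"background_noise": normalise(yaw_deg),
            "luminance_ratio": normalise(pitch_deg)}

# Head turned 15 degrees right and tilted 10 degrees down:
print(head_to_factors(15.0, -10.0))
# {'background_noise': 0.75, 'luminance_ratio': 0.333...}
```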

Binocular abilities (fusion without vergence/depth perception)

Allow the participant to “lock” or “unlock” the eye-misalignment calibration with the headset. We incrementally add new objects at equal “absolute” positions (which will appear as double images), which then migrate to equal “virtual” positions (on the retina). We can vary this with more or fewer objects, altered object complexity, or objects which continually move, while we ask the participant for feedback (e.g. nodding/shaking for yes/no answers with the headset, whilst the misalignment is locked).
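
The migration itself could be as simple as an interpolation between the two positions – a purely illustrative sketch:

```python
def migrate(absolute_pos, virtual_pos, t: float):
    """Linearly interpolate an object from its equal-'absolute' position
    (seen as a double image) towards its equal-'virtual' position on the
    retina (fusable), with t running from 0 to 1 over the migration."""
    return tuple(a + (v - a) * t for a, v in zip(absolute_pos, virtual_pos))

# An object migrating over five steps:
for step in range(6):
    print(migrate((0.0, 0.0, 2.0), (0.5, 0.1, 2.0), step / 5))
```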

Binocular abilities (with vergence/depth perception)

It makes sense to allow the participant to explore, under their own control, how altering the distance of objects might look to them. In many cases, we suspect this alone will not lead the participant to perceiving depth.

We would like to allow them to add other supporting depth cues (e.g. shadow, or positional movement to create parallax) and remove them over time or per session (i.e. we guide their brain with ever fewer and more subtle cognitive cues, encouraging it to gradually build up a different set of depth perception skills based around vergence).
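
As a sketch of how such cue fading might be scheduled (cue names and decay rate are invented for illustration):

```python
def cue_schedule(sessions, cues=("shadow", "parallax"), decay=0.7):
    """Fade the strength of supporting depth cues session by session,
    so vergence gradually has to carry the depth percept alone.
    Weights run from 1.0 (full strength) towards 0."""
    weight = 1.0
    for session in range(1, sessions + 1):
        yield session, {cue: round(weight, 3) for cue in cues}
        weight *= decay

for session, weights in cue_schedule(4):
    print(session, weights)
# 1 {'shadow': 1.0, 'parallax': 1.0} ... fading towards 0 in later sessions
```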

Aspect 3 – Falsifying and Quantifying

Here we look ahead to an idea for how we might gain more precise insights into the exact abilities of participants.

For example, imagine moving the participant through binocular environments in which we progressively navigate the limits of their individual envelope, within a paradigm such as “locate object X”. We might present the object within increasingly “noisy” environments, or as a low-contrast object among high-contrast objects, etc. The only limit is the developer’s/researcher’s imagination. This would be a very interesting approach both to validating and to “pushing” the shape of that participant’s visual envelope over time.

1 We might be able to approach motility tests in a simplistic way – asking the user to track a minimally sized object and indicate when its appearance changes in a subtle manner – but there would be many variables (reaction time, saccades, etc.) confounding any results.

 
