
The first iteration focused on technical feasibility and laying out an initial foundation we could build on.

We split the project into three core sections:

  • Framework
  • Calibrations
  • Experiences

The Framework is the most fundamental part of the whole project. The motivation is to allow developers to concentrate on producing useful VR environments for strabismus and amblyopia (whether for problem detection, vision training, research, or experimentation) without having to constantly “reinvent the wheel”.

In this first iteration we created “prefabs” (prefabricated objects) which a developer can simply drag and drop into any project, such as:

  • Simple per-user contexts and server-side data storage
  • A powerful camera rig capable of supporting, for each eye individually:
    • alignment
    • image intensity
    • pose tracking/restriction
  • Example calibration scenes which auto-populate a calibration object
    • The calibration object is integrated into the Unity Editor, allowing developers to apply any desired calibration settings to the camera rig automatically.
  • A log manager, integrated into the native Unity logger, which (coupled with some server-side code) allows per-user log messages to be stored on a server for later analysis (a simplified sketch of the idea follows this list).
  • A series of UI elements such as:
    • A customisable menu for the VR app, with audio navigation, which can also be controlled server-side.
    • Support for Bluetooth control
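
To give a flavour of the log manager idea, here is a minimal sketch of how such a component can hook into Unity’s native logger. The class name, server URL, and form fields below are assumptions made purely for the example – they are not the framework’s actual API.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Minimal sketch: forward Unity log messages to a server for later analysis.
// All names here (LogUploader, logEndpoint, userId) are illustrative assumptions.
public class LogUploader : MonoBehaviour
{
    [SerializeField] private string logEndpoint = "https://example.org/logs"; // hypothetical endpoint
    [SerializeField] private string userId = "anonymous";                     // hypothetical per-user id

    void OnEnable()  { Application.logMessageReceived += HandleLog; }
    void OnDisable() { Application.logMessageReceived -= HandleLog; }

    // Unity calls this for every Debug.Log / LogWarning / LogError on the main thread.
    private void HandleLog(string message, string stackTrace, LogType type)
    {
        StartCoroutine(Upload(message, type));
    }

    private IEnumerator Upload(string message, LogType type)
    {
        WWWForm form = new WWWForm();
        form.AddField("user", userId);
        form.AddField("level", type.ToString());
        form.AddField("message", message);

        using (UnityWebRequest request = UnityWebRequest.Post(logEndpoint, form))
        {
            yield return request.SendWebRequest();
            // A real implementation would queue failed uploads and retry.
        }
    }
}
```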

To validate that the framework is useful and ergonomic from a developer perspective, we then used it to create a series of calibrations and a user “experience”.

We were assuming, at this point, that calibrations would simply tell us what we needed to know about the participant’s vision, whilst the following experiences would provide the challenging game environments which would act to train the participant’s vision.

We built calibrations:

  • to establish whether a user could see with both eyes simultaneously;
  • to help us gauge which of their eyes was strabismic;
  • to measure the angle of eye misalignment – so that we can manipulate the positioning of the virtual world to center it in each eye despite the misalignment (a simplified sketch of this idea follows the list); and
  • to test whether the participant was able to perceive depth.
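
To give a feel for how a measured misalignment angle can be used, the sketch below applies a fixed rotational offset to the pivot of one eye’s camera, so that the virtual world appears centred to that eye. It is a simplified illustration only – the component and field names are invented, and the real camera rig handles considerably more (image intensities, pose restriction, and so on).

```csharp
using UnityEngine;

// Simplified illustration: rotate one eye's camera pivot by the misalignment
// angles found during calibration. All names here are invented for the example.
public class EyeAlignmentOffset : MonoBehaviour
{
    [SerializeField] private Transform leftEyePivot;   // parent of the left-eye camera
    [SerializeField] private Transform rightEyePivot;  // parent of the right-eye camera

    // Misalignment of the strabismic eye, in degrees, as measured in calibration.
    public float horizontalOffset = 0f;
    public float verticalOffset = 0f;
    public bool rightEyeIsStrabismic = true;

    void LateUpdate()
    {
        Transform pivot = rightEyeIsStrabismic ? rightEyePivot : leftEyePivot;
        // Offsetting the pivot re-centres the scene for the misaligned eye
        // without fighting the head tracking applied to the cameras themselves.
        pivot.localRotation = Quaternion.Euler(verticalOffset, horizontalOffset, 0f);
    }
}
```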

We then also created an example game, using similar framework elements, to test that classical game development approaches would be easily compatible with our approach.

Although it proved more difficult to organise than expected, we waited to gather user feedback before continuing with the next technical iteration – and were extremely glad that we did so.

Fundamentally, we discovered visual abilities in our three testers which classical testing said simply weren’t there, and which (according to many experts) are theoretically impossible. These three testers were in their thirties to fifties – and none were able to use both eyes simultaneously in everyday life, nor whilst being tested using classical techniques.

In all three cases, the circumstances in the VR environments we built enabled them not only to see with both eyes simultaneously, but also to fuse (albeit without depth perception in these user trials). During this process, we realised that the calibrations were actually exploration and learning environments in and of themselves, and ought to be treated as such. We also learnt that the role they played was very different for each participant, as each participant had subtly but fundamentally different types of visual ability. For example, one participant could use both eyes in a biocular fashion, but we then discovered that they unexpectedly had panoramic vision (a rarity we had not considered).

Two participants also had alternating amblyopia – so what use are eye misalignment measurements when the fixing eye can constantly switch? Furthermore, the nature of this alternating amblyopia varied. The first tester could consciously control which eye was fixing, whilst the second could not (the fixing eye was determined by the side of the face on which the fixation object was present)!

In further casual testing with two very mildly strabismic participants, it became quite obvious that – where symptoms are only very mild indeed – it is actually far more difficult to establish progress, as the participant is often capable of seeing perfectly well with effort. The problem becomes reducing the effort required to achieve good visual results, but this is a very difficult measure to quantify or even “make visible”.

We have not, and cannot, answer all the questions which were raised at this point – but we can increase our understanding of the underlying complexities and continue to validate that the way we are building the framework will provide the flexibility needed to answer many of these questions at a later point.

Hardware limitations

We also confirmed some real difficulties and limitations with the current approach, which we suspected would come to the fore as long as we lack some form of eye tracking.

Without the ability to objectively observe the position of each eye, we are unable to adjust to the dynamic misalignment of alternating strabismics (as their strabismic eye changes almost instantaneously from side to side). We cannot accurately measure the degree to which we can influence eye straightening. We cannot even confirm which eye is definitely the strabismic eye without human intervention! We also lack the ability to measure saccades or perform motility testing.

Whilst eye tracking devices are inordinately expensive, they need not be. The cause of their cost is the very high frame rate and extreme accuracy with which they must operate. This leaves the system with very little time to take a picture, detect the pupils, and detect the direction of gaze. In our case, where we need real-time measurements, an occasional point measurement (i.e. based on a momentary picture) would suffice. Where we are interested in detecting changes at high speed (e.g. stuttering of the eye during saccades) we do not need the results in real time – we can record video and post-process it to recover the information we need. In this way we can minimise costs.
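
As a very rough sketch of the “momentary picture” idea – assuming, purely for illustration, that the eye camera were exposed to Unity as a standard webcam device (which is not necessarily how the hardware would end up being connected):

```csharp
using UnityEngine;

// Rough sketch of an occasional "point measurement": grab a single frame on demand
// rather than streaming and processing at a high frame rate. Illustrative only.
public class EyeSnapshot : MonoBehaviour
{
    private WebCamTexture eyeCamera;

    void Start()
    {
        eyeCamera = new WebCamTexture();
        eyeCamera.Play();
    }

    // Capture one momentary picture, e.g. for later (offline) pupil detection.
    public Color32[] TakeSnapshot()
    {
        return eyeCamera.GetPixels32();
    }
}
```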

One consequence of implementing low-cost eye tracking using off-the-shelf standardised webcams would (most likely) be connecting the devices over Bluetooth to the mobile device connected to the headset (although WiFi connectivity would also be possible, it would increase complexity and cost). Unfortunately, Bluetooth generally only supports a single connected device at a time – which means that we can no longer rely on using a Bluetooth-connected controller for user input.

The consequence of not being able to rely on Bluetooth controller input is that we must revert to earlier plans for a completely headset-controlled input mechanism (which we have already partly developed and use in some of the Iteration 1 calibration scenes). We may need, or want, to extend this input mechanism to support additional input such as “knock” recognition (the participant “knocks” on the headset once for “yes” and twice for “no” – a rough sketch of the idea follows below) or speech recognition (not ideal for use in regions with low internet connectivity).
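
A rough sketch of how knock recognition might work, using the phone’s accelerometer as exposed by Unity. The threshold and timing values here are placeholders rather than tested numbers, and the class name is invented for the example.

```csharp
using System;
using UnityEngine;

// Illustrative sketch: interpret sharp accelerometer spikes as "knocks" on the headset,
// one knock meaning "yes" and two meaning "no". Values and names are placeholders.
public class KnockDetector : MonoBehaviour
{
    public float knockThreshold = 2.0f;  // acceleration magnitude (in g) treated as a knock
    public float answerWindow = 1.0f;    // seconds to wait for a possible second knock
    public float debounce = 0.2f;        // ignore spikes closer together than this

    public event Action<bool> OnAnswer;  // true = "yes" (one knock), false = "no" (two knocks)

    private int knockCount;
    private float lastKnockTime;

    void Update()
    {
        // A sharp spike in acceleration is treated as a knock on the headset shell.
        if (Input.acceleration.magnitude > knockThreshold && Time.time - lastKnockTime > debounce)
        {
            knockCount++;
            lastKnockTime = Time.time;
        }

        // Once the answer window has passed, interpret the number of knocks.
        if (knockCount > 0 && Time.time - lastKnockTime > answerWindow)
        {
            OnAnswer?.Invoke(knockCount == 1);
            knockCount = 0;
        }
    }
}
```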

Falsification

It became apparent that participants often didn’t know themselves whether they were seeing, or were able to see, with both eyes – or which eye they were currently using. We have realised that it is incredibly important to start with situations “as near to real” as possible, in which amblyopic and strabismic effects are obvious, such that we can “peel back” layers of complexity until the participant (hopefully) gains the ability to see biocularly/binocularly, or to fuse. In this way we are able to validate that the effect depends on the circumstances we present, to understand more about when the ability comes into play, and to confirm that this ability is not present under normal circumstances.

Conclusion

The next iteration should bring some fundamental and useful new ideas to the framework, and also bring us closer to a generally distributable application.

Would you like to beta test EyeSkills* or just follow what we are doing?