EyeSkills Prototype for Lazy Eye – Iteration 2 – Practitioner View

A first, quick look at how the second iteration of the open-source EyeSkills prototype works. This prototype is designed to test the visual abilities of a person with lazy eye, and to evaluate the effectiveness of a few techniques which may help a participant re-establish binocular vision.

Continue reading “EyeSkills Prototype for Lazy Eye – Iteration 2 – Practitioner View”

Two more binocular breakthroughs

This is a quick note about some more user tests we ran, this time with two women in their forties and fifties.

We had our vision therapist with us, who ran both through a series of standard tests. Neither could use both eyes simultaneously!

As soon as they were in the VR environment, suppression was broken. We believe the cause was the “low conflict” nature of the environment they were looking at (mostly black backgrounds). I apologise for not having had the time to write up the full tests, but, more importantly, we have implemented the ability to bring more of our calibrations/tests into and out of conflict in the recently finished second iteration of our prototype.

In the next phase of testing, we will revisit this phenomenon in more detail!

The development of Amblyopia and Strabism

For many animals with multiple eyes, the brain combines the electrical signals from each eye into a single “master” (cyclopean) image, which gives them stronger environmental awareness. Sometimes physical problems seem to prevent the emergence of binocular vision, and sometimes it simply never emerges due to poorly understood neurological issues.

Continue reading “The development of Amblyopia and Strabism”

Free code! TTS and Device Gestures.

I’m going to throw a few snippets of code in here that are coming out of the current sprint, because they are generally useful. I’ll also spend a few minutes commenting on what’s happening, so that non-Unity/C# developers can start to get a feel for it…

The snippets show how to easily generate offline text-to-speech and how to extend gestures on an input device.

By the way: all the code we are writing will, in the end, be free as in speech 😉 Thanks to prototypefund.de!

Continue reading “Free code! TTS and Device Gestures.”

Iteration 2 – Concepts


In Iteration 1 we had a participant flow which went through a series of calibration scenes, each covering a different aspect of vision (e.g. Is monocular vision present? Biocular vision? Depth perception? …). These scenes were originally focused on building up a calibration object describing the participant’s visual abilities, which would then calibrate the “main” part of the app: games developed by third parties.
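To make the idea of a “calibration object” concrete, here is a minimal sketch of what such an object might look like. The actual prototype is written in Unity/C#, and every field and method name below is an illustrative assumption on my part, not the real EyeSkills data model:

```python
from dataclasses import dataclass

# Hypothetical sketch of a calibration object built up across scenes.
# Field names are illustrative assumptions, not the actual EyeSkills model.
@dataclass
class CalibrationResult:
    monocular_vision_left: bool = False   # left eye sees on its own
    monocular_vision_right: bool = False  # right eye sees on its own
    biocular_vision: bool = False         # both eyes perceive simultaneously
    depth_perception: bool = False        # stereopsis present

    def suitable_scenes(self) -> list[str]:
        """Each calibration scene fills in part of this object; third-party
        games could then query it to decide which experiences to offer."""
        scenes = []
        if self.monocular_vision_left and self.monocular_vision_right:
            scenes.append("suppression-breaking")
        if self.biocular_vision:
            scenes.append("alignment")
        if self.depth_perception:
            scenes.append("stereo-games")
        return scenes
```

A game could then call something like `result.suitable_scenes()` after calibration to adapt what it presents to the participant, rather than each game re-testing vision itself.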

Continue reading “Iteration 2 – Concepts”