What a wonderful test. Mr R has alternating strabismus and very strong suppression. The Binocular Suppression scene is now designed well enough that it not only clearly demonstrates the suppression switching on (by introducing conflict), but also lets us find the breakthrough point where, despite the suppression, Mr R can see…
EyeSkills Prototype for Lazy Eye – Iteration 2 – Practitioner View
A first quick look at how the second iteration of the open-source EyeSkills prototype works. This prototype is designed to test the visual abilities of a person with Lazy Eye, and to evaluate the effectiveness of a few techniques that may help a participant re-establish binocular vision.
Two more binocular breakthroughs
This is a quick note about some more user tests we ran, this time with two women in their forties and fifties. Our vision therapist was with us and ran both through a series of standard tests. Neither could use both eyes simultaneously! As soon as they were in the VR…
The development of Amblyopia and Strabismus
For many animals with multiple eyes, the brain combines the electrical signals from each eye into a “master” (cyclopean) image, which gives them stronger environmental awareness. Sometimes physical problems seem to prevent the emergence of binocular vision, and sometimes it simply never emerges, due to poorly understood neurological…
Our Philosophy and Approach
What is EyeSkills building? We are building tools to give people with amblyopia and strabismus (lazy eye) the chance to learn more about their condition and their visual system, and even to open up opportunities to overcome, and perhaps correct, it.
Free code! TTS and Device Gestures.
I’m going to throw a few snippets of code in here that are coming out of the current sprint, because they are generally useful. I’ll also spend a few minutes commenting on what’s happening so that non-Unity/C# developers can start to get a feel for it… The snippets are code for easily generating offline Text-To-Speech and…
Iteration 1 – review
The first iteration focused on technical feasibility and laying out an initial foundation we could build on. We split the project into three core sections: Framework, Calibrations, and Experiences.
Iteration 2 – Migrations
Aspect 1 – Demonstrating the impossible is possible – Guided – Verifying abilities. Our user tests established the validity of checking and exploring a participant’s visual abilities in the following order:
Iteration 2 – Motivation
We have identified four underlying motivations for the project. Whilst these build atop one another, we try to isolate them conceptually to help us manage our limited development resources.
Iteration 2 – Concepts
In Iteration 1 we had a participant flow which went through a series of calibration scenes covering different aspects of vision (e.g. Is monocular vision present? Biocular vision? Depth perception?…). These scenes were originally focused on building up a calibration object that could describe the participant’s visual abilities, which would then calibrate the “main” part of the…