Here is the first part of our current user testing script, which helps us establish what is happening with a new tester's vision before we introduce them to the EyeSkills app.
We are building the EyeSkills framework and app in cooperation with other local sufferers, getting feedback and inspiration as early as possible in each development cycle to improve not just what we build, but how we understand what it is we are building.
The goal of EyeSkills isn't to replace vision therapists, nor is it to produce expensive equipment – our goal is to provide people with the best initial self-directed training environment we can, at the lowest cost possible. It won't be perfect; we're aiming for adequate – especially at the beginning.
To do this we need to provide basic facilities for detecting what is happening with a user’s eyes. This should reach or surpass the kind of insight an initial inspection by an eye professional might manage.
For the first user tests over our early iterations, our goal is to validate the most basic aspects of the system, so we begin the testing regime with some simple questions and physical tests:
Do you know how old you were when you, or somebody else, first noticed there was a problem with your eyes?
How old were you before you were taken to an eye professional?
Do you know what diagnosis you were given?
Did they attempt any form of therapy or treatment?
Have you had laser eye surgery or any other form of eye surgery?
Do you wear glasses or contact lenses? If so, do you know what your prescription is?
[In the case of strabismus] Do you have a permanent lazy eye, or does it only occur occasionally?
Does the eye which is lazy alternate between left and right?
Can you make the lazy eye alternate by willing which eye you use to focus on objects?
These questions help us establish up front whether the person is suitable for the types of training techniques we are attempting to develop at this stage.
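As a sketch of how the questionnaire answers above might be captured alongside later in-app results, here is a hypothetical intake record. This is not part of the EyeSkills framework itself – the class and field names are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical intake record mirroring the questionnaire above --
# one possible way to keep answers comparable across testers.
@dataclass
class IntakeRecord:
    age_problem_noticed: Optional[int] = None     # age when a problem was first noticed
    age_first_professional: Optional[int] = None  # age at first visit to an eye professional
    diagnosis: Optional[str] = None               # diagnosis given, if known
    prior_therapy: Optional[str] = None           # any therapy or treatment attempted
    eye_surgery: bool = False                     # laser or other eye surgery
    prescription: Optional[str] = None            # glasses/contact lens prescription
    strabismus_permanent: Optional[bool] = None   # permanent vs. occasional lazy eye
    strabismus_alternates: Optional[bool] = None  # does the lazy eye switch sides?
    can_switch_voluntarily: Optional[bool] = None # can they will which eye fixates?

record = IntakeRecord(age_problem_noticed=4,
                      diagnosis="alternating strabismus",
                      strabismus_alternates=True)
print(record.strabismus_alternates)  # True
```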
Now we move on to some basic tests.
Please look directly at my nose.
This is a first test to see if one eye looks straight ahead, and one to the side (which would be the strabismic eye at this moment).
Perform individual eye cover test.
If you detected a strabismic eye in the previous stage, cover the “good” eye with a vertical motion and watch whether the strabismic eye then focuses on the fixation point you have provided. If it does, we have strabismus and have also validated that the muscle is not atrophied, but is instead not being controlled as it ought to be by the brain. Repeat for both eyes. This helps us detect a tropia.
Perform alternating eye cover test.
In this case we are trying to detect or confirm a phoria – an intermittent lazy eye. Hold the card over one eye for a few seconds. If that eye is intermittently lazy, it will relax into a resting position while it is “unused” (the brain is not straining to position it). When you move the card to the other eye, watch closely for signs that the newly uncovered eye suddenly jumps back into a central position now that it is required for sight. Repeat for both eyes, and make sure you can just see the covered eye peeking over the top of the card.
Ask the person to focus on a fixation object, and move it up/down, then left/right, then along both diagonals, and finally in clockwise and anti-clockwise circles around the extremity of their vision. Watch closely for failure of one or both eyes to track the object. This may indicate muscle paralysis or other underlying muscular problems.
Depth perception and vergence test.
We use a Brock string for this (a long string with three beads spaced equidistantly along it). We ask the person to focus on the middle bead and hold that focus. We then ask them how many strings they see leading into that bead, and out of it – using their peripheral vision. If they see two strings, they are using both eyes. If they see one string, they are only seeing with one eye. Next we ask them to repeatedly shift focus between the far and near beads to check their eyes' ability to converge (to pull the eyes together to focus on near objects).
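The demand the near/far bead jumps place on the eyes follows from simple geometry: assuming a typical interpupillary distance (IPD) of around 6.2 cm – an assumed value, not something we measure in this test – the total convergence angle needed to fixate a bead at distance d is 2 · atan(IPD / 2d). A small sketch:

```python
import math

def vergence_angle_deg(bead_distance_cm: float, ipd_cm: float = 6.2) -> float:
    """Total convergence angle (in degrees) needed to fixate a bead
    at the given distance, assuming a typical interpupillary distance."""
    return math.degrees(2 * math.atan(ipd_cm / (2 * bead_distance_cm)))

# Nearer beads demand much larger convergence than far ones:
for d in (30, 60, 120):  # example near / middle / far bead distances in cm
    print(f"bead at {d:3d} cm -> {vergence_angle_deg(d):.1f} degrees")
```

The steep rise at near distances is why the near-bead fixation is the hardest part of the exercise for someone whose vergence is weak.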
We can then move on to testing the calibrations and experiences in the first iteration of the app and comparing the results!
Where is the conflict detection taking place? When is suppression activated – when objects appear at the same position on the retina, or further back in the brain's object recognition networks? Perhaps both. The binocular suppression scene might imply that it happens at the retinal level (although we'd need to present identical objects which are almost overlapping to check that it isn't happening at an object recognition level), and/or it may happen at the object recognition level (for example, when Mr C's brain adds in information from the strabismic eye which isn't overlapping with what the fixating eye is providing – how does the brain detect that intersection?).
If you are a neuroscientist interested in vision, perhaps we can help you answer these questions by helping you use our open source framework to run quantifiable experiments amongst a potentially huge community of sufferers to find out!