A prototype is, well, a prototype. It breathes life into an idea, it takes an interactive form, and this generates new ideas and insights. Alongside functional ideas and insights (hey! wouldn’t it be cool if the user could do *that*!) come engineering ideas and insights (hey! wouldn’t it be cool if we could make it do *that* more quickly, flexibly, or reliably!).
Now that most of the functionality I wanted is in place, I’m literally losing sleep over those engineering insights…
Thanks to a great suggestion from Rafal in Poland, I have a first working version in which a participant can watch video in a way that takes into account both their eye misalignment and the binocular suppression ratio needed to get both eyes working. When they click “ok”, the video screen on the strabismic eye starts to straighten up, very slowly and gradually.
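The core idea above can be sketched in a few lines. This is a hypothetical illustration, not the EyeSkills codebase: the function names, the linear easing, and the luminance-based suppression compensation are all assumptions for the sake of the example.

```python
# Hypothetical sketch of the "gradual straightening" idea: once the
# participant confirms a comfortable fusion point, the strabismic eye's
# image offset is eased from its calibrated value back toward zero over
# a long interval. Names and the linear schedule are illustrative only.

def straightening_offset(initial_offset_deg: float,
                         elapsed_s: float,
                         duration_s: float = 600.0) -> float:
    """Linearly reduce the angular offset from its calibrated value to 0."""
    progress = min(max(elapsed_s / duration_s, 0.0), 1.0)
    return initial_offset_deg * (1.0 - progress)


def suppression_luminance(dominant_level: float,
                          ratio: float) -> tuple[float, float]:
    """Dim the dominant eye's image by a measured suppression ratio
    (in (0, 1]) so the suppressed eye can still contribute.
    Returns (dominant eye level, strabismic eye level)."""
    return dominant_level * ratio, dominant_level
```

For example, with a calibrated offset of 10 degrees and a ten-minute schedule, the offset would be halved after five minutes and reach zero at the end.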
Strabismus and Amblyopia are like a lock that evolution has put in place to enforce monocular vision once a problem with binocular vision has been detected. We are learning how to pick that lock, how to open the visual system back up to a point where a person has a second chance at learning how to use both eyes again.
Each user test is an exciting event. Each user test throws up at least half a dozen “aha” or “why didn’t I think of that” moments, spurring development on and inspiring further work. In our most recent user test (yesterday evening with Mr R) there were some obvious but useful minor improvements we could make to help practitioners, and a couple of major issues were also raised about determining what the participant is really perceiving.
When building software, one is creating a simplified model of reality, capturing those parts which are relevant to achieving the system goals. This model is not generally built to be passive; it should then interact with reality to alter the nature of reality. It’s an interesting feedback loop called “active modelling”. If we don’t iteratively test as we design and build, we will inevitably design systems that fail to capture reality and fail to then interact with it as desired. This is particularly true of systems which interact with people.
What a wonderful test. Mr R has alternating strabismus and a very strong suppression. The Binocular Suppression scene is now designed well enough that it not only demonstrates the suppression switching on (by introducing conflict), but also allows us to find that breakthrough point where, despite the suppression and the conflict, Mr R can see both images. The eye misalignment calibration worked well. Mr R couldn’t see any depth in the depth test, which was precisely what I expected from him at this stage. Unfortunately, there was ambiguity in the alternating fusion scene and the eye straightening scene, because it wasn’t clear enough whether both eyes were active or not.
I will make a series of improvements which allow us to interactively introduce and remove conflict in these later scenes, and provide visual cues so that it is clear, beyond doubt, what the participant is actually experiencing from their descriptions of what they are perceiving.
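One way to picture “interactively introducing/removing conflict” is as a scene flag that adds or removes content shown to both eyes at once. The sketch below is a hypothetical illustration under that assumption; the class, layer names, and structure are inventions for the example, not the actual EyeSkills implementation.

```python
# Hypothetical sketch: with conflict OFF, each eye sees only its own
# stimulus (a "low conflict" scene); with conflict ON, a shared backdrop
# is rendered to both eyes, which can provoke rivalry and trigger
# suppression. All names here are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class EyeScene:
    left_layers: list = field(default_factory=list)
    right_layers: list = field(default_factory=list)


def build_scene(conflict: bool) -> EyeScene:
    """Build per-eye render layers, optionally adding a shared backdrop."""
    scene = EyeScene(left_layers=["left_marker"],
                     right_layers=["right_marker"])
    if conflict:
        # The same backdrop visible to both eyes creates binocular conflict.
        scene.left_layers.insert(0, "shared_backdrop")
        scene.right_layers.insert(0, "shared_backdrop")
    return scene
```

Toggling the flag during a session would let a practitioner observe exactly when the participant’s suppression switches on or off.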
A first quick look at how the second iteration of the open-source EyeSkills prototype works. This prototype is designed to test the visual abilities of a person with Lazy Eye, and evaluate the effectiveness of a few techniques which may be useful in allowing a participant to re-establish binocular vision.
This is a quick note about some more user tests we ran, this time with two ladies in their forties and fifties.
We had our vision therapist with us, who ran both through a series of standard tests. In both cases, neither could use both eyes simultaneously!
As soon as they were in the VR environment, suppression was broken. We believe the cause was the “low conflict” nature of the environment they were looking at (mostly black backgrounds). I apologise for not having the time to write up the full test, but we have, more importantly, implemented the ability to place more of our calibrations/tests into and out of conflict in the recently finished second iteration of our prototype.
In the next phase of testing, we will revisit this phenomenon in more detail!