When building software, one is creating a simplified model of reality, capturing those parts which are relevant to achieving the system goals. This model is generally not built to be passive: it should interact with reality and, in turn, alter it. It’s an interesting feedback loop called “active modelling”. If we don’t iteratively test as we design and build, we will inevitably design systems that fail to capture reality, and then fail to interact with it as desired. This is particularly true of systems which interact with people.

In our case, we have a very hard time indeed – we are building a system which displays colours on a screen. Ordinarily, as a developer, you can be reasonably sure that whatever you display – words, colours, shapes and so on – will be correctly interpreted by the user of the system. In the case of people with amblyopia and strabismus, however, all these bets are off.

Eye misalignment, suppression, and semi-mature or missing neurological structures for vision processing all interfere with how that person perceives what you are showing them. We accept this, and attempt to build a system which gradually discovers how a particular person’s visual system is behaving, what it can do, and how we might help them progress back towards fuller functioning.

Now add to the mix an external person (probably an eye doctor) who needs to interpret what that person is perceiving!

We approach this as scientists approach the unknown. We play with the unknown, we prod it and poke it, we read what other people have discovered. Gradually, we identify self-contained behaviours we might consider “parts”, and then uncover some basic, rough dependencies between these parts. It appears there may be order in there – that B follows A under certain circumstances. It might be that a force F is responsible for this transition – or is it? Let’s see if we can isolate just A, and then apply F, !F (not F), and combinations of the two, to see what happens.

The EyeSkills calibration scenes are ordered to uncover visual abilities in the order in which these abilities build on one another. First we start with individual eyes, then we consider whether both can operate simultaneously, and under what circumstances. We then consider how well this simultaneous operation is coordinated… and so on.
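
To make that dependency ordering concrete, here is a minimal sketch in Python (purely illustrative – the stage names and the measurement function are hypothetical, not the actual EyeSkills implementation) of how calibration stages might gate one another:

```python
from dataclasses import dataclass, field

# Hypothetical calibration stages, ordered so that each stage only runs
# once the abilities it builds on have been demonstrated.
@dataclass
class Stage:
    name: str
    depends_on: list = field(default_factory=list)

STAGES = [
    Stage("left_eye_acuity"),
    Stage("right_eye_acuity"),
    Stage("simultaneous_perception",
          depends_on=["left_eye_acuity", "right_eye_acuity"]),
    Stage("binocular_coordination",
          depends_on=["simultaneous_perception"]),
]

def run_calibration(measure):
    """Walk the stages in order, skipping any whose prerequisites failed.

    `measure` stands in for the real per-scene measurement: it takes a
    stage name and returns True if the ability was demonstrated.
    """
    results = {}
    for stage in STAGES:
        if all(results.get(dep) for dep in stage.depends_on):
            results[stage.name] = measure(stage.name)
        else:
            results[stage.name] = None  # prerequisites not met; untestable
    return results

# Example: a person whose eyes each work alone, but not together.
print(run_calibration(lambda name: name != "simultaneous_perception"))
```

The point is not the code but the shape: each ability is only probed once the abilities it rests on have been observed.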

One thing our user testing has thrown up is how unique each individual with strabismus and amblyopia is. There is a complex interplay of factors, which produces a high degree of variety. As an expert exploring what is happening to a particular person, you need the power to experiment. What we increasingly have to consider is how the expert can falsify their own assumptions. This means they must be able to change the context of a transition over multiple attempts, to see which factors are really significant.
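
As a sketch of what “changing the context over multiple attempts” might look like in practice (again illustrative – these factor names are invented for the example), the tool could sweep the combinations of candidate factors and record whether the transition occurred under each:

```python
from itertools import product

# Hypothetical context factors an expert might toggle between attempts.
FACTORS = {
    "background_luminance": ["low", "high"],
    "stimulus_moving": [False, True],
}

def explore(transition_occurred):
    """Try every combination of factor settings and log the outcome.

    `transition_occurred` stands in for an actual observation: given a
    context, did behaviour B follow behaviour A?
    """
    log = []
    for values in product(*FACTORS.values()):
        context = dict(zip(FACTORS.keys(), values))
        log.append((context, transition_occurred(context)))
    return log

# Example: suppose only luminance matters. Seeing that the outcome is
# independent of `stimulus_moving` falsifies the assumption that motion
# drives the transition.
for context, occurred in explore(lambda c: c["background_luminance"] == "high"):
    print(context, "->", occurred)
```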

We find ourselves trying many of the same steps, with results leading to predictable next steps. Perhaps, in the medium term, we will be able to map these pathways into an expert system – a tool which provides the basics of expert support in areas where there is simply no provision for vision therapy or access to expert eye doctors.
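
Such a mapping could start as something very simple – a rule table keyed on observed results. The rules below are hypothetical placeholders to show the shape of the idea, not validated clinical guidance:

```python
# A minimal, hypothetical rule table mapping an observed calibration
# outcome to a suggested next step. Real rules would be distilled from
# accumulated expert pathways, not invented as these are.
RULES = {
    ("simultaneous_perception", False): "explore suppression in more depth",
    ("simultaneous_perception", True): "proceed to coordination scenes",
    ("binocular_coordination", False): "vary the context factors and retry",
}

def suggest_next_step(stage, outcome):
    """Return the suggested next step for an observed result, if any."""
    return RULES.get((stage, outcome), "no rule yet – refer to an expert")

print(suggest_next_step("simultaneous_perception", False))
```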

Would you like to beta test EyeSkills* or just follow what we are doing?

