Iteration 2 – Concepts


In Iteration 1 we had a participant flow which went through a series of calibration scenes covering different aspects of vision (e.g. Monocular vision present? Biocular vision? Depth perception?…). These scenes were originally focused on building up a calibration object which could describe the participant’s visual abilities, and which would then calibrate the “main” part of the app: games developed by third parties.
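To make that idea concrete, here is a minimal sketch of what such a calibration object might look like. Every field name is an illustrative assumption, not taken from the actual EyeSkills codebase:

```csharp
using System;

// Hypothetical shape for the calibration object described above.
// All names here are assumptions for illustration only.
[Serializable]
public class CalibrationProfile
{
    public bool leftEyeSees;                // monocular vision present in the left eye?
    public bool rightEyeSees;               // monocular vision present in the right eye?
    public bool bothEyesActive;             // biocular vision: both eyes perceiving at once?
    public bool depthPerception;            // any stereopsis detected?
    public float misalignmentYawDegrees;    // measured horizontal eye misalignment
    public float misalignmentPitchDegrees;  // measured vertical eye misalignment
}
```

A third-party game could then read such a profile to decide, for example, which eye to present a stimulus to, and how far to rotate the view for the misaligned eye.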


Free code! TTS and Device Gestures.

I’m going to throw a few snippets of code in here that are coming out of the current sprint, because they are generally useful. I’ll also spend a few minutes commenting on what’s happening, so that non-Unity/C# developers can start to get a feel for it.

The snippets show how to easily generate offline Text-To-Speech, and how to extend the gestures available on an Input Device.
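As a taster, here is a minimal sketch of one way to do offline TTS on Android from Unity, driving the platform’s built-in android.speech.tts.TextToSpeech engine over the JNI bridge. The Android class and method names are standard API; the wrapper class itself is illustrative, not the EyeSkills code:

```csharp
using UnityEngine;

// Sketch: offline speech via Android's built-in TTS engine.
public class OfflineTextToSpeech : MonoBehaviour
{
    private AndroidJavaObject tts;

    void Start()
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        var activity = new AndroidJavaClass("com.unity3d.player.UnityPlayer")
            .GetStatic<AndroidJavaObject>("currentActivity");
        // A production version would pass an OnInitListener (via AndroidJavaProxy)
        // and wait for initialisation before speaking; null keeps the sketch short.
        tts = new AndroidJavaObject("android.speech.tts.TextToSpeech", activity, null);
#endif
    }

    public void Speak(string text)
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        // 0 == TextToSpeech.QUEUE_FLUSH: interrupt whatever is currently being spoken.
        tts.Call<int>("speak", text, 0, null, "eyeskills_utterance");
#endif
    }

    void OnDestroy()
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        tts?.Call("shutdown");
#endif
    }
}
```

For the gesture side, one lightweight way to “extend” an Input Device is a C# extension method over Unity’s existing input classes. IsNodding and its threshold are invented for this example:

```csharp
using UnityEngine;

public static class DeviceGestureExtensions
{
    // True while the phone is pitching forward faster than the threshold
    // (radians per second), i.e. the wearer is nodding.
    public static bool IsNodding(this Gyroscope gyro, float threshold = 2.0f)
    {
        return gyro.enabled && gyro.rotationRateUnbiased.x > threshold;
    }
}
```

Usage is then simply `if (Input.gyro.IsNodding()) { ... }` in any Update loop, after enabling the gyroscope with `Input.gyro.enabled = true;`.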

By the way: all the code we are writing is going to be free, as in speech, in the end 😉 Thanks to prototypefund.de!


EyeSkills Prototype for Lazy Eye – Iteration 2 – Practitioner View

A first quick look at how the second iteration of the open-source EyeSkills prototype works. This prototype is designed to test the visual abilities of a person with Lazy Eye, and evaluate the effectiveness of a few techniques which may be useful in allowing a participant to re-establish binocular vision.


First test of second iteration with Mr. R

What a wonderful test. Mr R has alternating strabismus and very strong suppression. The Binocular Suppression scene is now designed well enough that it not only demonstrates the suppression switching on (by introducing conflict), but also lets us find the breakthrough point where, despite the suppression and the conflict, Mr R can see with both eyes at once. The eye misalignment scene worked well. Mr R couldn’t see any depth in the depth test, which was precisely what I expected from him at this stage. Unfortunately, there was ambiguity in the alternating fusion scene and the eye straightening scene, because it wasn’t clear enough whether both eyes were active or not.

I will make a series of improvements which allow us to interactively introduce and remove conflict in these later scenes, and provide visual cues so that it is clear, beyond doubt, what the participant is actually experiencing from their descriptions of what they perceive.
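One simple way such an interactive conflict toggle could work in Unity is to move the conflict stimulus between layers that the per-eye cameras selectively render. The layer and field names below are assumptions for illustration, not the actual implementation:

```csharp
using UnityEngine;

// Sketch: show/hide a conflict stimulus for one eye by switching its layer.
// Assumes the left-eye camera's culling mask includes "LeftEyeOnly" but not
// "Hidden"; both layer names are invented for this example.
public class ConflictToggle : MonoBehaviour
{
    public GameObject conflictStimulus;

    public void SetConflict(bool on)
    {
        conflictStimulus.layer = LayerMask.NameToLayer(on ? "LeftEyeOnly" : "Hidden");
    }
}
```

A practitioner control could then call SetConflict(true/false) mid-session, while a small per-eye marker serves as the visual cue for what each eye is currently being shown.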


Why is designing EyeSkills difficult? – a quick note

When building software, one is creating a simplified model of reality, capturing those parts which are relevant to achieving the system’s goals. This model is generally not built to be passive: it should interact with reality in order to alter it. It’s an interesting feedback loop called “active modelling”. If we don’t iteratively test as we design and build, we will inevitably design systems that fail to capture reality, and then fail to interact with it as desired. This is particularly true of systems which interact with people.

Dissecting a user test – how we improve!

Each user test is an exciting event. Each one throws up at least half a dozen “aha” or “why didn’t I think of that?” moments, driving and further inspiring development. In our most recent user test (yesterday evening, with Mr R) there were some obvious but useful minor improvements we could make to help practitioners, and a couple of major issues were also raised about determining what the participant is really perceiving.
