User Test 2 – Mrs. S – Alternating esotropic strabismus

Today’s second user test was, again, extremely interesting and rewarding. Our second tester – Mrs. S – has a form of strabismus similar to Mr. C’s: alternating strabismus. Although we are still unsure how the app should handle the more complex form of alternating strabismus (where the eye which falls to the side changes almost spontaneously), we decided to proceed after our first success.


Two more binocular breakthroughs

This is a quick note about some more user tests we ran, this time with two ladies in their forties and fifties.

Our vision therapist was with us and ran both through a series of standard tests. Neither of them could use both eyes simultaneously!

As soon as they were in the VR environment, suppression was broken. We believe the cause was the “low conflict” nature of the environment they were looking at (mostly black backgrounds). I apologise for not having time to write up the full test, but – more importantly – we have implemented the ability to place more of our calibrations/tests into and out of conflict in the recently finished second iteration of our prototype.
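
For anyone curious how “conflict” can be switched on and off in a Unity scene, here is a minimal, hypothetical sketch – the class, layer and field names are illustrative, not our actual EyeSkills code. Each eye gets its own camera and layer, and conflict simply means presenting overlapping stimuli to both eyes at once.

using UnityEngine;

// Hypothetical sketch: present a stimulus to one eye only by putting it on a
// layer that only that eye's camera renders. "Conflict" then means showing
// different, overlapping stimuli to each eye at the same time.
public class ConflictToggle : MonoBehaviour
{
    public Camera leftEyeCamera;      // assumed separate per-eye cameras
    public Camera rightEyeCamera;
    public GameObject leftStimulus;   // e.g. a shape only the left eye should see
    public GameObject rightStimulus;  // e.g. a shape only the right eye should see

    void Start()
    {
        // Assumes two user-defined layers named "LeftEyeOnly" and "RightEyeOnly".
        int left = LayerMask.NameToLayer("LeftEyeOnly");
        int right = LayerMask.NameToLayer("RightEyeOnly");

        leftStimulus.layer = left;
        rightStimulus.layer = right;

        // Each eye camera culls away the other eye's layer.
        leftEyeCamera.cullingMask &= ~(1 << right);
        rightEyeCamera.cullingMask &= ~(1 << left);
    }

    // Low conflict: the stimuli sit in different visual locations.
    // High conflict: both stimuli overlap in the same visual location.
    // (Assumes both stimuli share the same parent transform.)
    public void SetConflict(bool inConflict)
    {
        rightStimulus.transform.localPosition =
            inConflict ? leftStimulus.transform.localPosition
                       : leftStimulus.transform.localPosition + Vector3.right * 0.5f;
    }
}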

In the next phase of testing, we will revisit this phenomenon in more detail!

First test of second iteration with Mr. R

What a wonderful test. Mr. R has alternating strabismus and very strong suppression. The Binocular Suppression scene is now designed well enough that it not only demonstrates the suppression switching on very clearly (by introducing conflict), but also allows us to find the breakthrough point where – despite the suppression and the conflict – Mr. R can see with both eyes. The eye misalignment measurement worked well. Mr. R couldn’t see any depth in the depth test – which was precisely what I expected from him at this stage. Unfortunately, there was ambiguity in the alternating fusion scene and the eye straightening scene, because it wasn’t clear enough whether both eyes were active or not.

I will make a series of improvements that allow us to interactively introduce/remove conflict in these later scenes, and provide visual cues so that it is clear, without a doubt, what the participant is actually experiencing from their descriptions of what they are perceiving.
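
As a rough idea of what those visual cues might look like – purely a sketch with illustrative names, not our actual implementation – a marker visible only to the left eye and another visible only to the right eye lets the participant’s own report tell us which eyes are currently active:

using UnityEngine;

// Hypothetical sketch of the "visual cue" idea: one small marker that only the
// left eye can see and one that only the right eye can see. If the participant
// reports seeing both markers, both eyes are contributing; if only one, the
// other eye is suppressed. Layer/camera setup is assumed to match the per-eye
// rendering used elsewhere in the prototype.
public class EyeActivityIndicators : MonoBehaviour
{
    public GameObject leftEyeMarker;   // on a layer only the left-eye camera renders
    public GameObject rightEyeMarker;  // on a layer only the right-eye camera renders

    void Update()
    {
        // Bluetooth keyboard keys let the experimenter flash the markers on demand.
        if (Input.GetKeyDown(KeyCode.L))
            leftEyeMarker.SetActive(!leftEyeMarker.activeSelf);

        if (Input.GetKeyDown(KeyCode.R))
            rightEyeMarker.SetActive(!rightEyeMarker.activeSelf);
    }
}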


Future enhancements to eye straightening scenes

What we won’t manage in this iteration are further extensions to the eye straightening environments.

*) We then need to start pushing the visual system to improve its coordination of the eyes, so we begin a random continual displacement of the fused scene, to force the eyes to maintain fusion whilst simultaneously tracking the scene in view (see the sketch after this list).
*) After this, we start altering the perceived distance of the scene (scaling)…
*) and introduce instabilities in each eye (almost imperceptible losses of signal which increase in length and frequency) to get the mind used to dealing with instability.
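
As a taste of the first of these, here is a minimal sketch of a random continual displacement – assuming the fused content sits under a single parent transform; the class and parameter names are illustrative rather than part of the existing codebase:

using UnityEngine;

// Hypothetical sketch of the first extension: a slow, continuous, random drift
// of the fused scene, so the eyes must keep tracking it to maintain fusion.
// Assumes all fused content is parented under this object's transform.
public class FusedSceneDrift : MonoBehaviour
{
    public float amplitude = 0.05f;  // metres of drift around the start position
    public float speed = 0.2f;       // how quickly the drift wanders

    Vector3 startPosition;

    void Start()
    {
        startPosition = transform.localPosition;
    }

    void Update()
    {
        // Perlin noise gives a smooth, unpredictable path (no sudden jumps).
        float t = Time.time * speed;
        float x = (Mathf.PerlinNoise(t, 0f) - 0.5f) * 2f * amplitude;
        float y = (Mathf.PerlinNoise(0f, t) - 0.5f) * 2f * amplitude;
        transform.localPosition = startPosition + new Vector3(x, y, 0f);
    }
}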

I look forward to implementing all of these!

EyeSkills at Bionection 2018!

For the next two days I’ll be in Dresden, promoting EyeSkills at Bionection.

Our presentation will be in Panel 4 (Smart Medical Devices) alongside some other very interesting speakers (https://www.bionection.com/en/program).

 

EyeSkills will be presenting at the 35th CCC

This year, the famous Chaos Communication Congress will be holding its 35th meeting in Leipzig, Germany.

Here is a description of the lecture I’ll be giving:

We mostly see with the mind, and the mind is flexible. For the four hundred million people with Lazy Eye, their brain encountered an installation error when linking both eyes as babies. As a Plan B, their brain switched one eye off. I’ll talk a bit about how the visual system works, and how our open-source virtual reality software (backed by the social impact lab Leipzig and the prototypefund.de) can hack through that suppression and provide a chance to “re-install” full sight with two eyes.

By providing an open set of tools for creating comparable experiments, our goal is not just to provide a tool – and a set of tools for building more tools – but to provide the basis for one of the world’s largest open-science experiments.

Nobody claims to have predictive scientific models of how the visual system works in its entirety, which means there is still so much more to discover. In the case of Lazy Eye, some aspects of the visual system are de-activated and/or dormant. What we can do is comparatively explore which techniques and approaches have which effects on opening up visual perception, and thereby drive our understanding of the system forward on both a theoretical and a practical level.

If you’d like to know more, check out www.eyeskills.org and come along to this talk 🙂

Building EyeSkills in Unity

These are some coarse instructions to help you get started. Although they are for OSX, it ought to be a similar process on Windows.

git clone --recurse-submodules https://gitlab.eyeskills.org/community/EyeSkillsCommunityApp.git

cd EyeSkillsCommunityApp/Assets/EyeSkills

#The following will work on OSX/Linux. You can also just grab the .zip from https://cloud.eyeskills.org/s/GziZbNNWQAoyxXr and do the following steps manually

wget -O out.zip https://cloud.eyeskills.org/s/GziZbNNWQAoyxXr/download

unzip out.zip

mv FrameworkDependencies-master FrameworkDependencies

rm out.zip

Next, open the EyeSkillsCommunityApp directory in Unity as a new project.

Open File->Build Settings… and choose “Android” as your Platform, then click “Switch Platform”. Then you can “Build and Run”.

You should be good to go 🙂
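
If you prefer building from a script rather than the Build Settings dialog, something along these lines should also work. This is an illustrative sketch rather than part of the repository – the menu name and output path are made up, and the file needs to live in an Editor folder:

using UnityEditor;
using System.Linq;

// Hypothetical editor script: same result as File -> Build Settings -> Build,
// but callable from a custom Unity menu entry.
public static class AndroidBuild
{
    [MenuItem("EyeSkills/Build Android APK")]
    public static void Build()
    {
        // Use whatever scenes are enabled in Build Settings.
        string[] scenes = EditorBuildSettings.scenes
            .Where(s => s.enabled)
            .Select(s => s.path)
            .ToArray();

        BuildPipeline.BuildPlayer(scenes, "Builds/EyeSkills.apk",
                                  BuildTarget.Android, BuildOptions.None);
    }
}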

Here’s a cheatsheet for some of the bluetooth keyboard presses inside the different scenes : https://www.eyeskills.org/keypress-cheatsheet-for-using-eyeskills/