
This is a quick overview of what the EyeSkills framework contains.

Firstly, the framework is – perhaps obviously – focused on Lazy Eye. We are trying to provide the pieces needed to build diagnostic and therapeutic environments for Lazy Eye in virtual reality.

For background, I noticed some time ago that there are many very talented software developers out there with Lazy Eye, but it seemed to me that everybody (including myself) was focused on cooking their own soup. There was little commonality in the languages being chosen, and little re-usability in the components being developed, because we all had different ideas about what exactly it was that we wanted to build.

There was (and still is) a focus on simple anaglyphic approaches, using red/cyan filters to separate content for each eye, because they are easy to implement – but the anaglyphic approach is not great. Colour separation is far from perfect, the background behind the device is a constant source of optical conflict, and the fields of view are very small, amongst many other issues. This is also where I started off, experimenting with myself and my son.

It was also pretty obvious that we were collectively suffering from the curse of the Pareto principle. It’s quite simple to knock out a super-basic prototype for individual use, but dramatically more time-consuming to handle all the “less-interesting” infrastructure around it.

When I realised that virtual reality headsets could be *really* useful for Lazy Eye, not because of “virtual reality” so much as the ability of VR headsets to address each eye individually, with high fidelity and controllability, I figured this might be the place to lay down some communal foundations. Virtual reality development seems difficult, but what if we could simplify it for people building Lazy Eye tools? What if we provided the infrastructure, automated everything possible to take away the pain, and offered pre-fabricated elements to speed up development, so that people could focus on experimentation?

Wouldn’t it be better if we combined our efforts, even if, or precisely because, we are all interested in different aspects of the same topic? Lazy Eye is a ridiculously complex and far-reaching condition, but there is a lot of common ground nevertheless.

To open it up as widely as possible, we chose to use Unity. It is the world’s most popular game development platform, it is cross-platform (covering both mobile devices and desktop operating systems), and it is free to use for our purposes.

The EyeSkills framework is still taking shape, but it can already do quite a lot. We hope that, whether you are a professional researcher who can code a bit, a contractor, or a dev interested in exploring your own condition, you will find you can do more, more quickly, and more precisely, with EyeSkills than alone – and that you will pay for your benefits in kind by contributing back to the framework.

The worst possible way to start building this framework would be to play architecture astronaut – designing some monstrously abstract framework which would ultimately have little to do with reality! After all, who has the experience to say how VR should be used in relation to Lazy Eye? Nobody.

Our focus has been on experimentation, seeing what we can actually do with the technology and what actually seems to work, through many iterations of user testing. This means the framework has been massively in flux. For every line of code you see now, we’ve thrown away three times as much along the way – although things are settling down now.

We are not fighting the Unity development paradigm, but trying to fit into it naturally. This is where I would like to give a shout-out to Michi, a very experienced Unity game dev who has been helping keep the architecture “Unity Friendly” (it’s hard to overstate how helpful Michi’s contribution has been!). In practice that means being tightly integrated with the Unity IDE, making use of Prefabs where possible, being written in C#, and using an event-polling coding style. I’m sorry if those last two points make you feel a bit ill; I sometimes get a queasy stomach myself, but it’ll be fine. I promise.

So, let’s fly over the basic structure we have now.

If you choose to make a completely new project, the first thing you’ll want to put in your list of scenes to build is the Bootstrap scene. This takes care of a lot of plumbing, making sure you’ve got all sorts of crucial infrastructure at your fingertips, and then it just gets out of the way.

Some highlights:

We’ve added out-of-the-box support for handling the loading of, and exit from, scenes (where a scene is a collection of scripts and 3D or 2D elements which make up something like an environment, level or mini-game), for saving and reloading user configurations, and for remotely controlling what a participant experiences in a scene. The remote control runs over a real-time streaming web-socket server connected to a remote web page, which automatically builds its interface on the basis of the control elements you register.
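To make the scene-handling part concrete, here is a minimal sketch using plain Unity APIs rather than the actual EyeSkills classes (the class name, scene name and timing are placeholders): an exercise scene is loaded additively on top of the Bootstrap scene and unloaded again afterwards.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Illustrative sketch only – not the EyeSkills implementation.
// Load a scene additively, let it run, then unload it again so the
// next experiment starts from a clean slate.
public class SceneFlowExample : MonoBehaviour
{
    public IEnumerator RunScene(string sceneName)
    {
        // Load the requested scene on top of the Bootstrap scene.
        yield return SceneManager.LoadSceneAsync(sceneName, LoadSceneMode.Additive);

        // ...the scene runs until it signals that it is finished...
        yield return new WaitForSeconds(5f); // placeholder for the real exit condition

        // Tidy up again.
        yield return SceneManager.UnloadSceneAsync(sceneName);
    }
}
```

(The scene being loaded still has to appear in the build settings, which is why the list of scenes to build matters in the first place.)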

We’ve also created a custom input manager, because we quickly realised that the richness of control some scenes need just overwhelms the usability of standard Bluetooth VR controllers. We added two extra dimensions of control to all input options – differentiating between short, long and hold states. This turned out to be non-trivial to do in a Unity-friendly way!
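As an illustration of the idea (not the framework’s actual input API), here is a self-contained sketch that turns a single button into three logical events: short press, long press and hold. The key code, event names and time thresholds are assumptions made for this example.

```csharp
using System;
using UnityEngine;

// Sketch: classify one physical button into short / long / hold events.
public class PressClassifier : MonoBehaviour
{
    public KeyCode button = KeyCode.Space;  // stands in for a Bluetooth controller button
    public float longPressAfter = 0.6f;     // seconds before a release counts as "long"
    public float holdAfter = 1.5f;          // seconds before "hold" fires while still down

    public event Action OnShortPress;
    public event Action OnLongPress;
    public event Action OnHold;

    float pressedAt = -1f;
    bool holdFired;

    void Update()
    {
        if (Input.GetKeyDown(button)) { pressedAt = Time.time; holdFired = false; }

        // "Hold" fires while the button is still down, once the threshold passes.
        if (pressedAt >= 0f && !holdFired && Input.GetKey(button)
            && Time.time - pressedAt >= holdAfter)
        {
            holdFired = true;
            OnHold?.Invoke();
        }

        if (Input.GetKeyUp(button) && pressedAt >= 0f)
        {
            float duration = Time.time - pressedAt;
            if (!holdFired)
            {
                if (duration < longPressAfter) OnShortPress?.Invoke();
                else OnLongPress?.Invoke();
            }
            pressedAt = -1f;
        }
    }
}
```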

Pretty much every application for Lazy Eye will need some kind of menu, some notion of users (and possibly practitioners or supervisors), and support for mundane but annoying things like switching into and out of VR mode whilst waiting for the user to insert their phone into a VR headset.
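For the VR-mode switching specifically, a minimal sketch using Unity’s (legacy) XR settings looks something like the following; the device name and class name are assumptions, and the real EyeSkills code handles more edge cases.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.XR;

// Sketch: switch a phone into and out of VR mode.
public class VrModeSwitcher : MonoBehaviour
{
    public IEnumerator EnterVr(string deviceName = "cardboard")
    {
        XRSettings.LoadDeviceByName(deviceName); // e.g. the Cardboard device
        yield return null;                       // the device loads on the next frame
        XRSettings.enabled = true;
    }

    public IEnumerator ExitVr()
    {
        XRSettings.LoadDeviceByName("");         // empty string = the plain, non-VR device
        yield return null;
        XRSettings.enabled = false;

        // After leaving VR the main camera often keeps its stereo settings; reset them.
        if (Camera.main != null)
        {
            Camera.main.ResetAspect();
            Camera.main.ResetFieldOfView();
        }
    }
}
```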

We have provided a set of simple scenes which you can customise directly in the IDE, without any programming knowledge. You can see, for example, how we define the menu for the EyeSkills app this way.
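As a sketch of how that kind of IDE-editable configuration typically works in Unity (the actual EyeSkills menu scripts may be organised differently), the menu entries can simply be serialised fields that show up as an editable list in the Inspector:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Illustrative pattern: a menu that can be edited entirely in the Inspector.
public class ConfigurableMenu : MonoBehaviour
{
    [Serializable]
    public class MenuEntry
    {
        public string label;      // text shown to the participant
        public string sceneName;  // scene to load when this entry is chosen
    }

    // Appears in the Unity IDE as a reorderable, editable list.
    public List<MenuEntry> entries = new List<MenuEntry>();
}
```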

You don’t have to re-invent the wheel, so you have more time to concentrate on your own experiments.

Over time it’s also become clear that the EyeSkills CameraRig is really at the heart of everything you will build. Standard VR cameras assume that both eyes are straight and essentially identical – that they are both able to look at a virtual object, and that both eyes will perceive that object as having the same colour and brightness. We cannot make this kind of assumption with Lazy Eye.
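To make the difference concrete, here is a deliberately over-simplified sketch of the core idea: each eye gets its own camera, and one eye’s view can be rotated away from straight ahead. All names here are illustrative, and the real CameraRig does considerably more (tracking, suppression handling, and so on).

```csharp
using UnityEngine;

// Sketch: a per-eye camera whose view can be deliberately misaligned.
public class MisalignedEyeSketch : MonoBehaviour
{
    public Transform rightEyePivot;  // parent of the right-eye camera
    public Camera rightEyeCamera;

    [Range(-30f, 30f)]
    public float horizontalMisalignmentDegrees = 0f;

    void Start()
    {
        // Only render this camera into the headset's right eye.
        rightEyeCamera.stereoTargetEye = StereoTargetEyeMask.Right;
    }

    void Update()
    {
        // Rotating the pivot (rather than the tracked camera itself) offsets the
        // right eye's view without fighting the headset's head tracking.
        rightEyePivot.localRotation =
            Quaternion.Euler(0f, horizontalMisalignmentDegrees, 0f);
    }
}
```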

The CameraRig deserves a whole video just for itself, but at its core it can handle misaligned eyes and breaking suppression, and it is gaining an increasing amount of functionality relevant to amblyopia and strabismus through the addition of micro-controllers. We have found it very helpful to be able to access these micro-controllers directly within arbitrary scenes, to help us localise why participants are perceiving things the way they are. They can be used both in diagnostic scenes to understand a person’s visual abilities, and to stretch and exercise those abilities. At any point, a person’s abilities can be stored in a configuration object and selectively loaded into a CameraRig in another scene later on.
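The configuration idea itself is straightforward to sketch with plain Unity serialisation; the fields below are placeholders, and the real configuration object stores much more than this.

```csharp
using System.IO;
using UnityEngine;

// Sketch: store a participant's measured abilities and reload them later.
[System.Serializable]
public class ParticipantConfig
{
    public float horizontalMisalignmentDegrees;  // e.g. measured eye misalignment
    public float suppressionRatio;               // e.g. relative per-eye luminance

    static string PathFor(string participantId)
    {
        return Path.Combine(Application.persistentDataPath, participantId + ".json");
    }

    public void Save(string participantId)
    {
        File.WriteAllText(PathFor(participantId), JsonUtility.ToJson(this));
    }

    public static ParticipantConfig Load(string participantId)
    {
        return JsonUtility.FromJson<ParticipantConfig>(File.ReadAllText(PathFor(participantId)));
    }
}
```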

As we’ve been building and experimenting with different diagnostic scenes, we’ve begun to uncover principles which affect what designs we use for shapes shown to participants – in order, for example, to promote or remove conflict between the eyes, or definitively establish whether the participant is currently experiencing monocular or binocular vision. These shapes are available as prefabs which can be dropped into the scenes you build, so you immediately have something to start working with. This library of parts will continue to expand as time goes by.
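As a sketch of how such a prefab might be dropped into a scene and shown to one eye only – for instance, to check for monocular versus binocular perception – the usual Unity approach is a per-eye layer and culling mask. The prefab, the “LeftEyeOnly” layer and the camera reference are assumptions made for this example, not part of the framework.

```csharp
using UnityEngine;

// Sketch: instantiate a shape prefab and hide it from the right-eye camera.
public class ShowToLeftEyeOnly : MonoBehaviour
{
    public GameObject shapePrefab;  // e.g. a suppression-breaking shape prefab
    public Camera rightEyeCamera;   // the camera rendering the right eye

    void Start()
    {
        int leftOnlyLayer = LayerMask.NameToLayer("LeftEyeOnly");
        if (leftOnlyLayer < 0) return; // layer not defined in this project

        GameObject shape = Instantiate(shapePrefab, Vector3.forward * 2f, Quaternion.identity);
        shape.layer = leftOnlyLayer;

        // Stop the right-eye camera from rendering that layer, so only the
        // left eye sees the shape.
        rightEyeCamera.cullingMask &= ~(1 << leftOnlyLayer);
    }
}
```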

Finally, we have found that it is useful for scenes to be able to give audio feedback. Precisely because we *cannot* rely on robust visual perception, audio becomes our most reliable feedback channel. We also chose not to rely on the availability of always-on, fast internet. For this reason, we provide support for generating language-sensitive, offline audio interfaces from simple text files (although you’ll need OSX to do this at the moment).
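Here is a minimal sketch of the playback side, assuming clips have already been generated and placed under a Resources/Audio/&lt;language&gt;/ folder inside the project; that folder layout and the key names are assumptions made for this example.

```csharp
using UnityEngine;

// Sketch: play pre-generated, language-specific audio clips by key, fully offline.
[RequireComponent(typeof(AudioSource))]
public class OfflineAudioFeedback : MonoBehaviour
{
    public string language = "en";  // e.g. "en", "de"

    AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    public void Say(string key)
    {
        // e.g. key = "both_eyes_open" -> Resources/Audio/en/both_eyes_open
        AudioClip clip = Resources.Load<AudioClip>($"Audio/{language}/{key}");
        if (clip != null) source.PlayOneShot(clip);
        else Debug.LogWarning($"No audio clip for '{key}' in language '{language}'");
    }
}
```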

So, I hope that was an interesting fly-through. Finally, I’d like to say thank you very much to the PrototypeFund.de. Without their financial support, it simply wouldn’t have been possible to put this much effort into developing the framework. Keep an eye out for updates on www.eyeskills.org, where we’ll be adding more tutorials, documentation and downloads as time goes by.

Would you like to beta test EyeSkills, or just follow what we are doing?