Our goal is to get objective feedback about eye position and behaviour, which requires some sort of eye tracking. A decent eye-tracking headset costs anywhere between $400 and $10,000… which is just too much for the majority of the world, who earn less than $10 a day. So, let's make it affordable!
This is an example of how you can build dynamic assets in the EyeSkills Framework.
The code walk-through:
If you have alternating strabismus and want to play with this, take the “development” branch from the eyeskills git repository and report/describe your findings as carefully as you can.
I’ve added an .apk for self-install here.
The last couple of weeks have been really busy (as always).
We’re in love
It started off positively with an official Letter of Recommendation from the international Open Knowledge Foundation arriving by snail mail. They are supporting EyeSkills both morally and in helping open doors.
…and we’re getting creative
Another great piece of news is that we were officially accepted into the first Berlin HACKademy, which starts at the beginning of March. They are assembling volunteer engineers and design students to help us work on …
…building an ultra-low-cost eye observing VR headset
It’s going to be crucial to have some form of camera objectively observing what the eye is doing while training. There is nothing out there which can do this at a price most people can afford. The price span for existing solutions is roughly $400 to $10,000+.
We’ve spent months (and things are hotting up now) working on a patent-free design which can deliver the simple kind of eye observations we need for well under $100, including assembly. I think we’ve figured out a really excellent way to go ultra-low-cost: off-the-shelf endoscopic camera components, larger VR lenses, and the participant’s phone for power and vision processing.
At the moment, with the help of Moritz, Lukas and Fabian, we’re making and testing prototypes and trying out different cameras, all the while Rene is stubbornly and heroically hammering away at making Android recognise and play with USB UVC OTG cameras – something Google has failed miserably to implement in a sensible way.
Here are some pictures of an early prototype:
…while we start to look for more money to keep this thing going!
I applied for the FutureSAX “Innovation in der Gesundheitswirtschaft” event in Dresden and was accepted to give a short pitch next week.
Community research should stay free and creative, but packaging, advertising, delivering and supporting the results for millions of people requires a financially self-supporting organisation. If you are interested in helping build or fund that organisation, get in touch.
Some video tutorials
Alternating strabismus is a really interesting category of Lazy Eye which needs more exploration. First, I put together a quick video explaining how to add new assets to better explore what happens within the “conflict zone” of somebody with alternating strabismus.
After feedback from Fabian (join the discussion here: https://chat.eyeskills.org/channel/alternatingstrabismusapp), I realised that we need to go further: to redesign the asset and add dynamic head tracking to control the position of something like a luminance panel. I’m just working on implementing this in code. Here’s a video explaining the way I wish to proceed:
Randy (from BioCity) has joined us in the #legal-medical channel on RocketChat. Hi Randy! It’ll be good to have you around 🙂
…some interesting meetings have taken place…
Thanks to the Vision Therapy center in Gohlis (Leipzig). We sat together for a few hours last week; I took them through EyeSkills, discussed some of their edge-cases, and generally swapped ideas and insights. They are excited about the project, particularly home-training, and will support us where they can (particularly in providing external and independent validation/measurement of the progress/non-progress made by participants).
…some interesting meetings coming up…
I’m particularly looking forward to travelling up to Berlin with Moritz next week to meet Craig Garner. He’s a neuroscientist with very interesting ideas and connections, whose son has a lazy eye. It should be a stimulating discussion.
…whilst some things needed clearing up…
The hardware development was starting to become bogged down in misunderstanding and miscommunication, so we settled on an initial “universe of discourse” for describing all the different parts of a headset. You can take a look here:
…and finally a call for help and advice…
Firstly, it’s getting urgent that we refactor the code to use a command pattern. This would allow us to efficiently record what exactly the user did, and when, so that we can (for instance) record an audio-overlay of their perceptual experiences and reconstruct it by playing back the command stream for the given build of the system they were using. Recording video from screen capture is just impossible on lower-end devices, and requires far too much bandwidth to retrieve over poor connections and our limited server capacity. Is anybody up for getting involved in this refactoring? Get in touch 🙂
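To make the idea concrete, here’s a minimal sketch of the command-pattern recording/replay approach. This is illustrative Python, not EyeSkills code (the real framework is a Unity app), and all class and method names here are hypothetical:

```python
import time


class Command:
    """One timestamped user action, e.g. a keypress in a scene."""

    def __init__(self, name, payload=None):
        self.name = name
        self.payload = payload
        self.timestamp = time.time()


class CommandRecorder:
    """Records each command as it executes, so the session can be replayed."""

    def __init__(self):
        self.log = []

    def execute(self, command, handler):
        # Every action flows through here, so the log is complete by construction.
        self.log.append(command)
        handler(command)

    def replay(self, handler):
        # Replaying the stream against the same build reconstructs what the
        # participant did, with no need for screen-capture video at all.
        for command in self.log:
            handler(command)


# Usage sketch:
recorder = CommandRecorder()
seen = []
recorder.execute(Command("select_eye", "left"), lambda c: seen.append(c.name))
recorder.execute(Command("swap_asset", "conflict_zone"), lambda c: seen.append(c.name))

replayed = []
recorder.replay(lambda c: replayed.append(c.name))
assert replayed == seen
```

Because only the tiny command log (plus timestamps) needs to leave the device, this sidesteps the bandwidth problem entirely, and an audio overlay could be synchronised against the timestamps on playback.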
Secondly, is there a different way we can fund the “delivery” side of the idea? What’s the point of making something amazing, if it doesn’t reach anybody, or they can’t afford to use it?
Crowd-funding is alright for making some initial production-ready prototypes (and we may do this), and it can solve the chicken-and-egg problem, but it’s not a long-term solution to the “discovery” problem.
Going the institutional charity route is another option – but I fear that makes the entire setup extremely vulnerable to the whims of politics.
I know we can raise enough capital from classical VC, but I worry about being strong-armed into delivering the system to the wrong people and at the wrong price to really make a difference.
My models suggest we’ll need at least six months to a year of runway (for at least four people) to get the production, advertising, delivery and support infrastructure for the headsets and app optimised to a point that such an organisation can support itself. That’s a need for capital with a solid five zeros on the end.
One solution might be to raise capital the regular way, but with a group of investors (with a board level representative) who themselves have lazy-eye. I feel this would help block any board-level tendencies to short-termism and dishonest business development. Does this resonate with you?
With around five independent investors each putting in 20-30k€, this is actually interesting to many of the investors I have spoken to; more than that becomes unmanageable.
Any other ideas? Why not join the discussion!
…onwards and upwards
Of course, this is only a selection of what’s happened in the last couple of weeks – so thanks to everybody who’s contributed their time and energy… let’s see what happens in the next couple of weeks!
Finally, just in case you are feeling generous: you can support us by donating a few quid at https://www.eyeskills.org/donate/. We’re buying quite a lot of bits and pieces for experimenting, which it would be good to have some help with (not to mention I’m also working full-time without pay for as long as I can afford to 😉).
I’m very happy that EyeSkills has been selected as one of the four projects which will be represented in Berlin at the world’s first HACKademy! A team of volunteer specialists will be working to develop our ideas for a VR open-hardware prototype for EyeSkills this March. Here’s the flyer!
If your external camera is attached and you haven’t got the right kind of hub, you’ll need to debug what’s happening from an Android perspective, wirelessly. How does this work?
```shell
# With the USB cable still attached, switch adb into TCP mode and connect:
adb tcpip 5555
adb connect <phone-ip>:5555

# You can now disconnect the cable and attach your camera.
# The adb connection is maintained wirelessly.
adb logcat -c && adb logcat
```
Here is a PDF cheatsheet containing some of the keypresses you need for controlling the EyeSkills environments!
I just did an improved job of integrating the new conflict backgrounds (alternative assets) for alternating strabismus.
To access them, go into binocular suppression, hold “up” (w) for about four seconds to get into the micro-controller menu, scroll (w) until you hit “asset swap” and select (e). Then press “up” (w) until you get this asset, then select (e). Proceed to describe your experiences 🙂
Some quick notes. Aniseikonia is basically when one eye sees the world at a different scale to the other.
You can start the scene in EyeSkills, but you need a bluetooth keyboard attached.
The keys “a”, “w”, “d”, “x” are mapped to left, up, right, and down respectively.
Press and hold either left or right for 3 seconds, then release. This will select which eye is strabismic and/or for which you want to try repositioning the world view.
If you then do a short press on left/right it will change what you look at.
Up/Down will move the object for the released eye further away or closer (making it larger or smaller).
See if you can make the two images fuse, and if so, let us know your experiences here: https://chat.eyeskills.org/channel/aniseikonia
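The hold-vs-tap control scheme above can be sketched roughly as follows. This is an illustrative Python model only (the actual implementation lives in the Unity scene, and the function and action names here are hypothetical); the key mapping and the three-second hold threshold come from the description above:

```python
# "a"=left, "w"=up, "d"=right, "x"=down, as described above.
KEY_MAP = {"a": "left", "w": "up", "d": "right", "x": "down"}
HOLD_SECONDS = 3.0  # a long hold on left/right selects the strabismic eye


def interpret(key, held_seconds):
    """Classify a released keypress as an eye selection, a target change,
    or a rescale of the object for the released eye."""
    direction = KEY_MAP[key]
    if direction in ("left", "right"):
        if held_seconds >= HOLD_SECONDS:
            return ("select_eye", direction)   # press-and-hold, then release
        return ("change_target", direction)    # short press changes what you look at
    # Up/down move the object further away or closer (larger or smaller).
    return ("rescale", "larger" if direction == "up" else "smaller")
```

For example, under this sketch `interpret("a", 3.5)` would select the left eye, while `interpret("d", 0.2)` would change the viewing target.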
To avoid confusion, we’re trying to settle on a useful set of terms to describe what we’re talking about when it comes to designing our VR headset.
Here’s the current diagram – it will evolve I’m sure:
A quick look at how to add new conflict backgrounds, and how and where data elements like the suppression ratio are stored. This is particularly relevant for people exploring alternating strabismus.
The original .svg mentioned in the video is here: https://cloud.eyeskills.org/s/eJXb2p7ky3ffmrs