Rather than implementing a new VR Headset for EyeSkills, could we push the price-point down even further, by hacking the standard Google Cardboard V2 design? That’s one of the questions we explored last weekend at Careable’s first Hackademy!
The standard design has a capacitive button which taps the screen to register user input when the user depresses an “origami” lever on the upper right side:
In the manufacturer’s schematics it becomes clear that this is a separate unit… so after pulling it apart…
…we figured, why not replace this with a new core capable of supporting on-device eye tracking? Here’s the first mockup. The tabs at the bottom represent the cameras, the fuzzy tubes are cables, the yellow foam blocks are the USB connectors, and the purple block is the USB hub (all at the correct scale).
This could be folded back into the existing headset to give us just what we need!
I’m now moving towards programming a parametric model of this interior component which we can prototype using the Hacker Space’s laser cutter. Very much looking forward to going back there this weekend!!!
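As a first step towards that parametric model, something like the following Python sketch captures the idea: express the insert’s outline as a function of a few named dimensions, so a single change regenerates the whole cut path. All the dimensions and the shape here are hypothetical placeholders, not real Cardboard v2 measurements.

```python
from dataclasses import dataclass

# Hypothetical dimensions in millimetres -- placeholders only,
# not real Google Cardboard v2 measurements.
@dataclass
class CoreParams:
    width: float = 120.0       # overall width of the insert
    height: float = 60.0       # overall height
    camera_tab: float = 15.0   # width of a camera tab along the bottom
    hub_gap: float = 25.0      # clearance notch for the USB hub

def outline(p: CoreParams) -> list[tuple[float, float]]:
    """Return the 2D outline of the insert as (x, y) vertices,
    which could then be exported as a cut path for the laser cutter."""
    return [
        (0.0, 0.0),
        (p.width, 0.0),
        (p.width, p.height - p.hub_gap),
        (p.width - p.camera_tab, p.height - p.hub_gap),
        (p.width - p.camera_tab, p.height),
        (0.0, p.height),
    ]
```

The appeal of the parametric approach is exactly this: when we measure a real headset, or discover the cameras need more clearance, we change one field and re-cut, rather than redrawing the shape by hand.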
At the same time, other members of the group have been exploring how to modify a standard v2 Cardboard to both improve the ergonomics and reduce stray light entering the viewing area (after all, people’s foreheads and noses vary more than you might realise until you start looking closely!).
Here we are in Potsdam’s wonderful little Maker Space…
And here are some of the wonderful team explaining what we’re doing to members of other teams at the Hackademy…
It was an inspiring weekend, but 7am starts and midnight finishes over three solid days of work have taken their toll a little!
Now that we’re increasingly certain that alternating strabismus is a condition in its own right, with different perceptual side-effects from “regular” fixed strabismus, we’re starting to think about how we could better understand, categorise, and explore those perceptions in a way that can be reliably reported…

Here are some initial ideas on the very simplest first steps we could take:
After a stressful Christmas preparing this talk, it finally happened.
Here’s an introduction to EyeSkills, what we’ve achieved, where we’re heading, and what we hope to achieve!
Perhaps the most surprising thing about the talk was the resonance it created. We’ve had over thirty people sign up to help as volunteers, over four thousand views of the uploaded video, and for a couple of days we were at #7 on Hacker News. Some people are just too kind 😀
The most interesting and important thing of all was listening to the many people with some form of Lazy Eye share their experiences after the talk. It confirmed a suspicion that has been growing for some time: the term “Lazy Eye” is itself intellectually far too lazy. Individuals experience a wide variety of symptoms, but these cluster so strongly that we may be looking at a range of distinct disorders, each with its own specific differences, under the single umbrella of “Lazy Eye”. Understanding these groupings is becoming a priority, so that we can develop more effectively for each specific scenario.
A prototype is, well, a prototype. It breathes life into an idea, gives it an interactive form, and that generates new ideas and insights. Alongside functional insights (hey! wouldn’t it be cool if the user could do *that*!) come engineering insights (hey! wouldn’t it be cool if we could make it do *that* more quickly, flexibly, or reliably!).
Now that most of the functionality I wanted is in place, I’m literally losing sleep over those engineering insights…
Strabismus and Amblyopia are like a lock, which evolution has put in place to enforce monocular vision once a problem with binocular vision has been detected. We are learning how to pick that lock, how to open the visual system back up to a point where a person has a second chance at learning how to use both eyes again.