The second Hackademy weekend.

The energy, the passion, the determination of this team never cease to amaze me. More amazing progress.

Let’s start talking about hardware:

My focus was on continuing the work to produce a new central component for Google Cardboard (v2) capable of supporting eye tracking.

This is an intermediate design – laser cut from card, with one of the first 3D printed custom camera holders mounted and containing the internals of an endoscopic camera.

Laser cutters are a *lot* of fun!

Here’s a closer look at the holder –

A later iteration has mounting slots and a redesign of the cardboard so that (with the help of a 3D printed tool) the mounts are always installed in the right direction and in the right location.  The channel/slot running lengthways up the middle is there to allow heat from the metal backplate of the camera and PCB to escape.

Designing these parts in OpenSCAD (they are fully parameterised so we can cope with different camera parts or headset designs) was a bit of a headscratcher at times.  It took several iterations and alternative designs until I was happy.

…but I’m actually really pleased with the OpenSCAD description. I built it so that it shows both the flattened cut and, by applying all the correct transformations, what it would look like folded – very helpful when judging what works best!

At the same time, Moritz was busy working out how to remove the IR filters from the camera. He ended up making a custom tool to unscrew the camera lens assembly, and had to break out the IR filters using brute force:

He also custom-shortened and re-soldered the camera cables so we could mock up the prototype, and added an IR light source to replace the LED rings on the original cameras – resulting in our first IR shot of an eye!

Of course, we only managed to get that result because of the hard work put in by Johan, Andre and Rene (left to right) on interfacing Android with those cameras. The two main sticking points left are how to address both cameras simultaneously and how to integrate OpenCV with Unity.

At the same time, what use is all this when a Google Cardboard is so uncomfortable and poorly fitting, letting in light and detracting from what needs to be a concentrated experience?

Asieh’s incredibly intuitive and empathic design sense, coupled with Cong’s ability to synthesise many (often divergent) requirements, resulted in what I think is a truly inspiring solution:

This is an add-on to a GCV2 which is amazingly comfortable, fits all heads, and blocks all light!

I also liked this “Star Wars” style prototype:

So, with hardware and ergonomics, we come to the other essential component which makes for a good experience… the User Experience!  Flo (sadly, no picture?!?!) has an inspirational mind in a super relaxed soul.

After many boards full of analysis:

he drew this little cartoon (“Eh?!? Where are you buggering off to?!?”) which connected with old ideas I’d had from a storytelling perspective many years ago. Together this is going to form the foundation for how we tell the EyeSkills story and build the experience!

It’s been fourteen days straight with a level of intensity that is barely sustainable, but it’s been worth it. I’m looking forward to the third and final event this coming weekend, and hope that we can all get our tasks finished in time this week!

 

A quick note on Unit Testing in Unity

The Unity/VR learning curve hadn’t left me space to tackle unit testing until now. Well, it had, but my initial encounter was so bad that I decided to leave it until I had a little more time to look again.

Well, at the moment I’m building out the basic structures for handling a more fluid and complete user experience, so a reliable structure and a repeatable experience are essential – it’s time for Test Driven Development.

I may extend this post as time goes by with tips and tricks as I encounter them, but first off – the unit testing won’t work unless you explicitly create assembly definitions for the relevant parts of a project. In this case:

I needed to create an assembly “Hackademy” for the spike (pre-prototype code) I’m developing, which references the “EyeSkills” assembly (so it can find the relevant EyeSkills framework classes). The “Tests” assembly then references the “Hackademy” assembly (so the tests can find my scripts to test). It was also necessary to reference the “EyeSkills” assembly explicitly from the “Tests” assembly so I could create mocks against interfaces within the Framework.
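
To make the wiring concrete, here is a minimal sketch of what a test in the “Tests” assembly can then look like. The interface and class names below are hypothetical stand-ins, not the actual EyeSkills API:

using NUnit.Framework;   // Unity's Test Runner is built on NUnit

// Hypothetical interface, assumed to live in the "EyeSkills" assembly.
public interface IEyeTracker { float HorizontalAngle(); }

// Hypothetical script under test, assumed to live in the "Hackademy" assembly.
public class CalibrationStep
{
    private readonly IEyeTracker tracker;
    public CalibrationStep(IEyeTracker tracker) { this.tracker = tracker; }
    public bool IsCentred() { return System.Math.Abs(tracker.HorizontalAngle()) < 1f; }
}

// Lives in the "Tests" assembly, which must reference both "Hackademy"
// (for CalibrationStep) and "EyeSkills" (for IEyeTracker).
public class CalibrationStepTests
{
    private class FakeTracker : IEyeTracker
    {
        public float Angle;
        public float HorizontalAngle() { return Angle; }
    }

    [Test]
    public void IsCentred_IsTrue_ForSmallAngles()
    {
        var step = new CalibrationStep(new FakeTracker { Angle = 0.5f });
        Assert.IsTrue(step.IsCentred());
    }
}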

It’s also worth pointing out that, despite running from within the context of a unit test and within the EyeSkills namespace, any classes doing dynamic loading from within the Framework will fail to find classes that exist only in the testing assembly. You need to move those classes into the same assembly that will be looking for them. A bit weak, really.

Annoying. Clunky. Poorly documented.

As usual, Unity’s IDE also failed to keep track of alterations to the assembly files (I needed to delete and recreate the Framework folder) causing a terrible mess which was only fixed after semi-randomly deleting .meta files and several restarts of Unity.  The IDE has now reached a level of software quality where it is almost inevitably a buy-out target for Microsoft.

For all my occasionally deep dissatisfaction, however, when Unity works it works well, handles every situation imaginable, and does get the job done.  It’s not perfect, but then, perfect is the enemy of the good!

Newsletter – Happy Hacking – 04.03.2019

It’s been a few weeks since the last newsletter, but that doesn’t mean nothing has been happening.

Progress on eye tracking

First, I did some 3D printing with Moritz to retrofit an existing headset with mounts to hold our endoscopic cameras, which worked well…

https://www.eyeskills.org/another-iteration-of-our-eye-observation-open-hardware/

In parallel, Rene has managed to get the cameras working quite reliably in various versions of Android – quite a relief!

At the Hackademy it became clear to me, however, that we should and could start at an even lower and more accessible price point – by hacking the standard Google Cardboard V2 design to include a new “component” at its heart! https://www.eyeskills.org/hackademy-hacking-google-cardboard-v2/

Progress on UX

Our ultimate goal is to make EyeSkills usable and affordable for the world. It should educate and inform just as much as it provides experience and training. We made considerable progress identifying where to start… and, I hope, we will be able to start getting bi-weekly prototypes out to those of you who have registered for beta-testing, to get your feedback in these critical early stages.

Development

Even if it was only a single line, it is a milestone. The first code merge has happened! Thanks for your input, Rene!

Funding

At a personal level, I’m still unsure how best to move EyeSkills towards an economically self-sustaining global existence without sacrificing its principles to short-term thinking. My last round of contacts with traditional funders has convinced me that we will have to do things very differently indeed.

As the community continues to show an interest, and I look at the skills we have, I begin to think that we really could do this on our own.

At this point, I would like to give a heartfelt shout-out and a thank you to Bradley Lane, who donated 50 EUR towards our MoneyPool:

https://www.paypal.com/pools/c/8byPUuuQ1D

I am very grateful for your gesture of support.

Interesting meetings and new people

We have a big-data specialist waiting for our go-ahead to get his hands dirty as a volunteer! He has decades of experience at the coal-face of an internationally renowned cloud company, and I think his input will be invaluable. More on this in the coming months…! 🙂

We also met with a leading neuroscientist from the Charité in Berlin – which led to some very interesting discussions on how to handle issues such as detached retinas.

… welcome to other new members of the community, I hope I’ll be introducing you soon 😉

Have a lovely day!

 

Hackademy – hacking Google Cardboard V2

Rather than implementing a new VR Headset for EyeSkills, could we push the price-point down even further, by hacking the standard Google Cardboard V2 design?  That’s one of the questions we explored last weekend at Careable’s first Hackademy!

The standard design has a capacitive button which touches the screen to provide user input when the user presses an “origami” lever on the upper right side:

In the manufacturer’s schematics it becomes clear that this is a separate unit…  so after pulling it apart…

…we figured, why not replace this with a new core capable of supporting on-device eye tracking? Here’s the first mockup. The tabs at the bottom represent the cameras, the fuzzy tubes are cables, the yellow foam blocks are the USB connectors, and the purple block is the USB hub (all to the correct size).

This could be folded back into the existing headset to give us just what we need!

I’m now moving towards programming a parametric model of this interior component which we can prototype using the Hacker Space’s laser cutter.  Very much looking forward to going back there this weekend!!!

At the same time, other members of the group have been exploring how to modify a standard v2 cardboard to both improve the ergonomics and reduce stray light entering the viewing area (after all, people’s foreheads and noses vary more than you might realise until you start looking closely!!!).

Here we are in Potsdam’s wonderful little Maker Space…

And here are some of the wonderful team explaining what we’re doing to members of other teams at the Hackademy….

It was an inspiring weekend, but 7am starts and getting to bed at midnight after three solid days of work in a row have taken their toll a little!

Are we Eyetracktive enough?

Super busy day! It started with the kick-off of the second round of the Prototype Fund:

Followed by a journey down to Potsdam for the kick-off of the Hackademy! Go team Eyetracktive!

Another iteration of our “eye observation” open-hardware

Our goal is to get objective feedback about eye position and behaviour, which requires some sort of eye tracking. A decent eye-tracking headset costs anywhere between $400 and $10,000… which is just too much for the majority of the world, who earn less than $10 a day. So, let’s make it affordable!


EyeSkills Newsletter/Update 06.02.2019

Hi there!

The last couple of weeks have been really busy (as always).

We’re in love

It started off positively with an official Letter of Recommendation from the international Open Knowledge Foundation arriving by snail mail. They are supporting EyeSkills both morally and by helping to open doors.

…and we’re getting creative

Another great piece of news is that we were officially accepted into the first Berlin HACKademy, which starts at the beginning of March.  They are assembling volunteer engineers and design students to help us work on …

…building an ultra-low-cost eye observing VR headset

It’s going to be crucial to have some form of camera objectively observing what the eye is doing while training. There is nothing out there which can do this at a price most people can afford. The price span for existing solutions is roughly $400 to over $10,000.

We’ve spent months (and things are hotting up now) working on a patent-free design which can deliver the simple kind of eye observations we need, for well under $100, including assembly. I think we’ve figured out a really excellent way to go ultra-low-cost: off-the-shelf endoscopic camera components, larger VR lenses, and the participant’s phone for power and vision processing.

At the moment, with the help of Moritz, Lukas and Fabian, we’re making and testing prototypes and trying out different cameras, all the while Rene is stubbornly and heroically hammering away at making Android recognise and play with USB UVC OTG cameras – something Google has failed miserably to implement in a sensible way.

Here are some pictures of an early prototype:

…while we start to look for more money to keep this thing going!

I applied for the FutureSAX “Innovation in der Gesundheitswirtschaft” event in Dresden and was accepted to give a short pitch next week.

Community research should stay free and creative, but packaging, advertising, delivering and supporting the results for millions of people requires a financially self-supporting organisation.   If you are interested in helping build or fund that organisation, get in touch.

Some video tutorials

Alternating strabismus is a really interesting category of Lazy Eye which needs more exploration. First, I put together a quick video explaining how to add new assets to better explore what happens within the “conflict zone” of somebody with alternating strabismus.

After feedback from Fabian (join the discussion here: https://chat.eyeskills.org/channel/alternatingstrabismusapp), I realised that we need to go further: to redesign the asset and add dynamic head tracking to control the position of something like a luminance panel… I’m just working on implementing this in code. Here’s a video explaining the way I wish to proceed:
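
As a rough illustration of what I mean by head tracking driving the panel, here is a minimal sketch, assuming the usual Unity VR setup where the main camera follows head rotation. The class and field names are hypothetical, not the actual EyeSkills code:

using UnityEngine;

// Keeps a "luminance panel" floating in front of the participant,
// steered by the current head orientation.
public class HeadTrackedPanel : MonoBehaviour
{
    public Transform headCamera;      // the VR camera that follows head rotation
    public Transform luminancePanel;  // the panel we want to position
    public float distance = 2f;       // metres in front of the eyes

    void Update()
    {
        // Project the head's forward direction out to a fixed distance
        // and keep the panel facing the viewer.
        Vector3 forward = headCamera.rotation * Vector3.forward;
        luminancePanel.position = headCamera.position + forward * distance;
        luminancePanel.rotation = Quaternion.LookRotation(forward, Vector3.up);
    }
}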

…welcome to…

Randy (from BioCity) has joined us in the #legal-medical channel on RocketChat.   Hi Randy! It’ll be good to have you around 🙂

…some interesting meetings have taken place…

Thanks to the Vision Therapy center in Gohlis (Leipzig). We sat together for a few hours last week and I took them through EyeSkills, discussed some of their edge-cases, and generally swapped ideas and insights. They are excited about the project, particularly home-training, and will support us where they can (particularly in the realm of providing external and independent validation/measurement of progress/non-progress made by participants).

…some interesting meetings coming up…

I’m particularly looking forward to travelling up to Berlin with Moritz next week to meet Craig Garner. He’s a neuroscientist with very interesting ideas and connections, whose son has a lazy eye.  It should be a stimulating discussion.

…whilst some things needed clearing up…

The hardware development was starting to become bogged down in misunderstanding and miscommunication, so we settled on an initial “universe of discourse” for describing all the different parts of a headset. You can take a look here:

Universe of Discourse – VR Headset

…and finally a call for help and advice…

Firstly, it’s getting urgent that we refactor the code to use a command pattern.  This would allow us to efficiently record what exactly the user did, and when, so that we can (for instance) record an audio-overlay of their perceptual experiences and reconstruct it by playing back the command stream for the given build of the system they were using.  Recording video from screen capture is just impossible on lower-end devices, and requires far too much bandwidth to retrieve over poor connections and our limited server capacity.  Is anybody up for getting involved in this refactoring? Get in touch 🙂
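
To sketch the kind of command pattern I have in mind (an illustrative outline only – the interface and class names are hypothetical, not existing EyeSkills code): every user action becomes a small, timestamped command object, so a whole session can be stored as a compact command stream and replayed later underneath a recorded audio overlay.

using System;
using System.Collections.Generic;

// Every user action is captured as a command that can be executed now
// and replayed later from the recorded stream.
public interface IUserCommand
{
    void Execute();
}

// One entry in the recorded session: when it happened and what was done.
public class RecordedCommand
{
    public float Time;
    public IUserCommand Command;
}

public class CommandRecorder
{
    private readonly List<RecordedCommand> stream = new List<RecordedCommand>();

    public void Record(float sessionTime, IUserCommand command)
    {
        stream.Add(new RecordedCommand { Time = sessionTime, Command = command });
        command.Execute();
    }

    // Replaying simply re-executes the commands in order; the timestamps
    // let us line playback up with a recorded audio track.
    public void Replay()
    {
        foreach (var entry in stream) { entry.Command.Execute(); }
    }
}

// Example command: the participant adjusted the simulated misalignment angle.
public class SetMisalignmentAngle : IUserCommand
{
    public float Angle;
    public void Execute()
    {
        // In the real app this would drive the eye/camera rig; here it is a stub.
        Console.WriteLine("Misalignment set to " + Angle + " degrees");
    }
}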

Secondly, is there a different way we can fund the “delivery” side of the idea?  What’s the point of making something amazing, if it doesn’t reach anybody, or they can’t afford to use it?

Crowd-funding is alright for making some initial production-ready prototypes (and we may do this), and it can solve the chicken-and-egg problem, but it’s not a long-term solution to the “discovery” problem.

Going the institutional charity route is another option – but I fear that makes the entire setup extremely vulnerable to the whims of politics.

I know we can raise enough capital from classical VC, but I worry about being strong-armed into delivering the system to the wrong people and at the wrong price to really make a difference.

My models suggest we’ll need at least six months to a year of runway (for at least four people) to get the production, advertising, delivery and support infrastructure for the headsets and app optimised to a point that such an organisation can support itself. That’s a need for capital with a solid five 0s on the end.

One solution might be to raise capital the regular way, but with a group of investors (with a board level representative) who themselves have lazy-eye.  I feel this would help block any board-level tendencies to short-termism and dishonest business development.  Does this resonate with you?

With around five independent investors putting in 20–30k€ each, this is actually interesting to many of the investors I have spoken to. More than that becomes unmanageable.

Any other ideas? Why not join the discussion!

…onwards and upwards

Of course, this is only a selection of what’s happened in the last couple of weeks – so thanks to everybody who’s contributed their time and energy… let’s see what happens in the next couple of weeks!

Finally, just in case you are feeling generous: you can support us by donating a few quid here: https://www.eyeskills.org/donate/. We’re buying quite a lot of bits and pieces for experimenting, which it would be good to have some help with (not to mention I’m also working full-time without pay for as long as I can continue to afford to 😉 ).

Bye bye!

EyeSkills at the HACKademy

I’m very happy that EyeSkills has been selected as one of the four projects which will be represented in Berlin at the world’s first HACKademy! A team of volunteer specialists will be working to develop our ideas for a VR open-hardware prototype for EyeSkills this March. Here’s the flyer!

Quick note on debugging UVC cameras

If your external camera is attached and you haven’t got the right kind of hub, you’ll need to debug what’s happening from an Android perspective wirelessly. How does this work?

With the USB cable attached:

adb tcpip 5555
adb connect phoneip:5555

(replace phoneip with your phone’s IP address)

You can now disconnect the cable and attach your camera… the connection is maintained wirelessly:

adb logcat -c && adb logcat

Nice!