Are we Eyetracktive enough?

Super busy day! It started with the kick-off of the Prototype Fund's second round:

Followed by a journey down to Potsdam for the kick-off of the Hackademy!  Go team Eyetracktive!

EyeSkills Newsletter – Prototyping Madness 20.03.2019

Hiya!

So much has happened in the last two weeks that I barely know where to begin… but I must begin with a wave of gratitude.

First of all, I would like to express my heartfelt gratitude to Holger Hahn and Andreas Freund, who have both donated to the project (https://www.paypal.com/pools/c/8byPUuuQ1D).  I’m utterly blown away.

Thanks to these donations I have built up a cheap Ender 3 Pro (cheap, but with quite astounding print quality) which has already been massively helpful in speeding up prototyping.

The emerging hardware team has also been spending a fair amount privately buying and testing different endoscopic cameras and nano/micro USB hubs (more on that in a bit), so this support will help us cover those costs (and upcoming ones). Again, thank you.  It’s so inspiring to have energy coming back into the project.

Secondly, I would like to express my deepest respect and thanks to (left to right in the picture below) Johann, Rene (Iana, who sadly became ill on the first weekend), Flo, Andre, Asieh and Cong… and to the main organisers Cong, Isabelle and Daniel, for their incredible efforts over the last three weekends of the Berlin Hackademy.  When I got back on Sunday from the final weekend, I didn’t get back out of bed until Tuesday – there was just nothing left in the tank. It’s been intense, but worth every Joule.  The team feels like family, and what we’ve built in such a short space of time is really something to be proud of.

The headset is something that, in and of itself, deserves a crowdfunding campaign.  The world needs an ultra-low-cost eye-tracking solution which isn’t just on paper, but which is actually being used and developed.  I hope we can do this from within the EyeSkills project, as eye tracking is critical to enabling us to operate safely and effectively, whilst generating the quantitative evidence we need to modernise medical approaches to Lazy Eye.

Here are some more notes on the second weekend if you’re interested 🙂

Here is also a quick look at a video we put together covering the output of the project, and a website (which I’m still trying to complete as I find an hour here or there) with the open-source open-hardware designs available for you to download – https://eyetracktive.org.

Thirdly, I would like to welcome four new volunteers to https://chat.eyeskills.org.  Handling data in a way that balances respect for privacy and security with benefits for the whole community is at the core of the project.  Our new volunteer Martin lives and breathes these concerns.  I’m very happy to welcome his voice to the community.

Making EyeSkills really usable (moving it away from an experimental platform which is hard to understand at first) is my main focus for the coming few months, and that requires input on the User Experience and User Interaction side of the system.  Flo (from the Hackademy) has offered to keep an eye on the process, while Guneet from India and Ant from the UK are both getting more actively involved.  I’m very grateful for their more expert input.  Rework is so time-consuming; I hope we can make fewer mistakes and get to a really good experience more quickly than would otherwise be possible.

On the privacy front – as some of you may know, I made a big effort at the start to host almost everything we use ourselves – from RocketChat, GitLab and the website to Sendy and so on.  When the CCC talk suddenly generated the resonance it did, however, I needed to respond quickly to set up some kind of volunteering form and a project-specific email address (I just wasn’t prepared at all!).  I did this quickly with a Google Form and set up a Google email address. These are both quite secure – from everybody except Google – and the question arises: how much do you trust Google?

Hosting our own mail server for the core team (i.e. email addresses ending in eyeskills.org), for instance, is a non-trivial thing to do.  I have had offers from within the community to do this, but I worry about maintenance and all the associated potential problems with blacklisting/spam etc.

As I see it, we have three choices: keep using Google, self-host, or use a secure email provider.  If we use a secure email provider, then it needs to be paid for each month.  I think this is a question which I would like *you* to answer.  Please indicate what you would prefer for now:

https://app.tomvote.com/answer/41a930ba1552a7f6fda6ffa65c8dbd15

You will be asked whether you would prefer us to use a paid email service, switch to self-hosting, or keep using Google email. Remember, this is about what the core team will use to communicate with you, not about what you have to use personally.

Right now I’m busy refactoring the Framework for a better experience (thanks again to the amazing prototypefund.de for their support!), although I’m *very* sorry that I don’t have a build ready to show yet.  I ran into a few technical blockers (like this one) and over-estimated how much time I’d have with the Hackademy running in parallel 🙁  Nevertheless, there will be something soon; progress is being made one step at a time.  Before you know it 🙂 we’ll need input from the eye-tracking cameras in the headset… but there is still a lot to do there.

Our amazing electronics expert Moritz (who, it turns out, has super powers in soldering things so small you can barely see them with the naked eye):

…is taking charge of harassing Chinese camera manufacturers for more detailed camera specifications and quotes for parts, because the amazing Rene, Andre and Johann have discovered, in their deep dives into the Android USB layer, that the image formats supported by the different camera chips are critical to whether or not we can get two cameras working simultaneously.

I also want to give a special shoutout to Rene for putting aside three weekends back-to-back, away from his family, on top of an incredibly stressful managerial day job.  He’s seriously determined.

When we get far enough to have the first working prototypes, I will call out to you for TEN alpha-testers.  We will ask you to cover the raw costs per headset (around 100 EUR, as they are based on “samples” with high shipping and unit costs), and if you would like to offer us something for our time, that would also be appreciated.  HOWEVER, if you do sign up for this, it will be on the condition that you take it really seriously – that you use the system every day for at least a month, and give the most detailed and considered feedback you can about the performance of the system, where it is weak, and what you think we could improve.  We want *real* testers 🙂

We still have a way to go, but I’ll circle back around to this when the time is right.

I’m sure there are other things I’ve forgotten,  but I cannot resist the urge to get coding any longer, so ciao for now… and again, thank you for being here!

Ben

EyeSkills Newsletter – Design. Code. Refactor. Rinse. Repeat. 09.04.2019

Thanks!

Thank you to Gregory Taschuk for his 20EUR donation to the EyeSkills PayPal Pool!  The pool has already paid for a simple 3D printer to help prototype parts for the eyetracktive.org headset.

If we can get another 150 EUR into the pool in the next six days (to bring it to 500 EUR), I would get this dirt-cheap (but robust) laser diode cutter, which is on special offer until the 15th of April.  It is only 2.5 W, but that’s plenty for cutting card – which is what we need to work with to prototype improved designs for the eyetracktive.org core that folds up to hold the eye-tracking cameras and USB hub.  It would be a major motivator and speed-up to have that here “in-house”.


Unity style/VrActivityTheme not found in AndroidManifest.xml bug?!?

If you run into the obscure and painful problem of Unity telling you “style/VrActivityTheme not found in AndroidManifest.xml”, then fear not.  You do not need to spend half a day trying to track down the problem.  At some point you probably (temporarily) added Google’s Daydream as a supported VR device in the Unity Player Settings, then removed it. Unfortunately, Unity didn’t fully clean up all the dependencies on Daydream, so it assumes that it still needs this VrActivityTheme (https://developers.google.com/vr/reference/vr-manifest).  It doesn’t.  The simplest solution? Just add support for Daydream back into your Player Settings (that’s not a bad thing); Unity will correctly resolve the dependencies, and it will build and run once more. Sigh.

Please “like” EyeTracktive and help us Win!

We need your support!

Please “like” us – https://hackaday.io/project/164944-eyetracktive Points means prizes!

The excellent hackaday.io has organised a global:

“hardware design contest focused on product development. DesignLab connects you to engineers, expert mentors, and other powerful resources to take your product from concept to DFM.”

If everybody on this list takes five minutes to support us, we’ll be at position 5 on the leaderboard and far more visible to the world (thus attracting more likes). At the moment we’re at position 28 with 14 likes.

This might be what we need to help us take EyeTracktive to a finished product.  Eyetracktive is the ultra-low cost open-hardware eye tracking headset we’re working on to complement EyeSkills (so we can see objectively what the eyes are actually doing, enabling “at home” training with precision and safety).

If you register at hackaday.io you can “like” the project (https://hackaday.io/project/164944-eyetracktive) and help propel it up the leaderboard (https://prize.supplyframe.com/).

Thanks to an anonymous donation we’ve got enough together that I could buy a discounted (super, super cheap) A3 laser cutter, which will allow us to continue prototyping and even producing the inner core of the EyeTracktive headset – but when it arrives I’ll still need to build an enclosure and set up a ventilation system.  If nothing else, every “like” wins us $3, which will help cover those materials! 🙂

Thanks!

Ben


Exploring an API for generating custom VR headset designs

Another target I’d like to accomplish is to provide an API for generating custom, parameterised EyeTracktive pupil-tracking headsets.

OpenSCAD can theoretically produce .stl or .png output via the command line, but it requires a lot of cruft to get this to work (an X server or xvfb) – so wouldn’t it be nice if there were a pre-configured Docker container?

After a little look around I found this:

https://hub.docker.com/r/wtnb75/openscad

Theoretically, the simplest way to handle this would be to exec/run openscad from inside the container, outputting a file into a directory mapped onto the host and accessible from the API which is part of (e.g.) EyeTracktive.org.

It looks, however, like I’ll first have to see if I can cook up a Dockerfile based on https://github.com/wtnb75/dockerfiles/tree/master/openscad-fedora.
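Just so I don’t lose the shape of the idea, here’s a rough sketch (in Python) of what the API side could look like: shell out to the pre-built container and drop the generated STL into a shared directory. The image name is the one from the Docker Hub link above, but the parameter name (ipd_mm), the headset.scad file and the paths are purely hypothetical placeholders – nothing here has been tested yet.

```python
import subprocess
from pathlib import Path

# Directory on the host that both the API process and the container can see (hypothetical path).
OUTPUT_DIR = Path("/srv/eyetracktive/generated")


def generate_headset_stl(ipd_mm: float, job_id: str) -> Path:
    """Render a parameterised headset core to STL via OpenSCAD in Docker.

    Assumes a headset.scad with an 'ipd' customiser variable sits in
    OUTPUT_DIR, and that the wtnb75/openscad image exposes the openscad
    binary - both of which still need to be confirmed.
    """
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    out_file = f"{job_id}.stl"

    cmd = [
        "docker", "run", "--rm",
        "-v", f"{OUTPUT_DIR}:/work",      # map the host directory into the container
        "wtnb75/openscad",
        "openscad",
        "-o", f"/work/{out_file}",        # write the STL into the mapped directory
        "-D", f"ipd={ipd_mm}",            # override the (hypothetical) IPD parameter
        "/work/headset.scad",
    ]
    subprocess.run(cmd, check=True)       # raises CalledProcessError if the render fails
    return OUTPUT_DIR / out_file
```

The nice thing about this shape is that all the OpenSCAD/X server cruft stays inside the container, and the API on eyetracktive.org only ever has to deal with plain files in that shared directory.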


Notes on Unity Animation

This wasn’t a bad starting point : https://www.youtube.com/watch?v=vPgS6RsLIjk

It’s important to remember, when you’ve created a sprite, that you need to add a SpriteSkin. Sometimes it fails to automatically detect the bones in your sprite, but so far that’s been simple to solve by making a few minor changes to the sprite and reapplying, after which the “CreateBones” button in the SpriteSkin works successfully.  If you have an existing animation, you can drag and drop it onto the sprite. Next step – animation transitions.

In the Animation pane you can create a new animation from the drop-down, but to create links between those animations you’ll need to make sure the Animator window is visible (Window->Animation->Animator). There you can make links between the various states (https://www.youtube.com/watch?v=HVCsg_62xYw).  How can we have those state transitions occur without scripting? It turns out that the transitions already happen, but you need to “Play” a scene containing the model.

Where the ordering of limbs is incorrect, go into the Sprite Editor > Skinning Editor and set the individual bone depth by selecting the relevant bones.

The next issue will be transitioning sprite parts (rather than just animating their position).  My best guess is that we’ll end up animating enable/disable/active on alternative game objects inside the Animator (I hope).  Yep. That was quite intuitive.  Place the object you want somewhere inside the bone hierarchy of the sprite (inside the scene editor) and then, in the Animation pane, add an “enabled” property for that object and animate it.

I suspect that, to enable the pupil to move freely around, I’ll have to add a mask around the “white” of the eye.

This is quite exciting.  A lot of opportunities for clearer communication and more interesting and interactive scenes have just opened up 🙂

Ultimately, I’d like to create a 3D representation (mesh) of the mascot, with a toon shader to go with it, which would be the most flexible approach – but for now I’ll create the basic poses I need as .SVG files, then export them to sprites and animate.

It seems that one can create too many bones.  The issue I’ve run into is that slicing the sprite prevents the Unity editor from allowing me to create bones which span the different sprite parts (surprise, it’s still buggy).  However, using autogeometry to split up the sprite makes it almost impossible to control how the bones overlay each other (e.g. around the eye), and control over things like mouth expression is currently beyond me using the inbuilt approach.

I suspect the way to do this is to create a completely separate multi-sprite for the eye and another for the mouth (with multiple expressions in the multi-sprite), and then to place these inside the bone object hierarchy.

A potential problem with this approach is that alterations to the bone structures seem to invalidate the sprite skin / bone editor in the scene – requiring it to be destroyed and recreated, which will lose all my setup 🙁

So, that worked well (I think).

There are eight sprites along the top, and only the collection of body parts below is skinned.  On the left, in the scene hierarchy, you can see the other parts are placed under the body bone – with each game object having a “Sprite Renderer” added. Is there a better way?  The different parts of the multi-sprite are always visible in the object panel beneath the scene hierarchy.

An afternoon with Marco Schätzing

Hello there,
Lukas here. I am a new member of EyeSkills and today I decided to put out my first Facebook post!
Ben and I had the great chance to meet Marco Schätzing in his natural habitat. He is a trained optometrist and works as a visual trainer. Today, Marco showed us the “Maddox Test” again. The “Maddox Test” is a procedure used to measure strabismus at near and at distance. The “Maddox Rod” – actually a red lens made of parallel plano-convex cylinders – is placed in front of one eye, whilst the other eye looks at a numbered horizontal scale with zero at its centre. A light shines from the centre of the scale and is refracted by the “Maddox Rod”, so that with the eye behind the rod Ben could only see a vertical streak of red light. Ben could therefore see where on the scale the streak seen by the one eye appeared, relative to where the other eye was looking. If he had seen it really close to the centre, that would be what you would expect from a person without strabismus. At a distance of 6 m, the misalignment was only 1.8 degrees, which is not crazy, whilst up close (40 cm) the divergence was 8 degrees.
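(A quick aside to make the geometry explicit – this is just the basic trigonometry behind those numbers, not something Marco spelled out: if the red streak appears at an offset x from the centre of the scale, and the scale sits at a viewing distance d, then the deviation angle is θ = arctan(x / d). Working backwards from the reported angles, 1.8° at 6 m corresponds to an offset of roughly 19 cm, and 8° at 40 cm to roughly 5.6 cm.)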

This was an interesting insight into how misalignment can vary with the distance of the fixated object. We were interested in that because we think we can recreate a test like this in the app – but in a more interesting way.

But this is not all we learned during our time with Marco:

We discussed the recent version of the app, which Marco got to lay his eyes on today. That led to an interesting discussion about the many different shapes and forms of strabismus. We ended up talking more about the plight of people who have acquired their strabismus through a stroke. We know that this is a huge phenomenon that we should consider much more in our work.
And we will!