Are we Eyetracktive enough?

Super busy day! It started with the kick-off of the Prototype Fund's second round:

Followed by a journey down to Potsdam for the kick-off of the Hackademy!  Go team Eyetracktive!

EyeSkills Newsletter – Prototyping Madness 20.03.2019

Hiya!

So much has happened in the last two weeks that I barely know where to begin…. but I must begin with a wave of gratitude.

First of all, I would like to express my heartfelt gratitude to Holger Hahn and Andreas Freund, who have both donated to the project (https://www.paypal.com/pools/c/8byPUuuQ1D).  I’m utterly blown away.

Thanks to these donations I have built up an Ender 3 Pro 3D printer (cheap, but with quite astounding print quality), which has already been massively helpful in speeding up prototyping.

The emerging hardware team has also been spending a fair amount of their own money buying and testing different endoscopic cameras and nano/micro USB hubs (more on that in a bit), so this support will help us cover those costs (and upcoming ones). Again, thank you.  It’s so inspiring to receive energy coming back into the project.

Secondly, I would like to express my deepest respect and thanks to (left to right in the picture below) Johann, Rene, (Iana, who sadly became ill on the first weekend), Flo, Andre, Asieh and Cong… and the main organisers Cong, Isabelle and Daniel, for their incredible efforts over the last three weekends of the Berlin Hackademy.  When I got back on Sunday from the final weekend, I didn’t get back out of bed until Tuesday – there was just nothing left in the tank. It’s been intense, but worth every Joule.  The team feels like family, and what we’ve built in such a short space of time is really something to be proud of.

The headset is something, in and of itself, that deserves a crowdfunding campaign.  The world needs an ultra-low-cost eye tracking solution which isn’t just on paper, but which is actually being used and developed.  I hope we can do this from within the EyeSkills project, as eye tracking is critical to enabling us to operate safely and effectively, whilst generating the quantitative evidence we need to modernise medical approaches to Lazy Eye.

Here are some more notes on the second weekend if you’re interested 🙂

Here is also a quick look at a video we put together covering the output of the project, and a website (which I’m still trying to complete as I find an hour here or there) with the open-source open-hardware designs available for you to download – https://eyetracktive.org.

Thirdly, I would like to welcome four new volunteers to https://chat.eyeskills.org.  Handling data in a way that balances respect for privacy and security with benefits for the whole community is at the core of the project.  Our new volunteer Martin lives and breathes these concerns.  I’m very happy to welcome his voice to the community.

Making EyeSkills really usable (moving it away from an experimental platform which is hard to understand at first) is my main focus for the coming few months, and that requires input on the User Experience and User Interaction side of the system.  Flo (from the Hackademy) has offered to keep an eye on the process, while Guneet from India and Ant from the UK are both getting more actively involved.  I’m very grateful for their more expert input.  Rework is so time-consuming that I hope we can make fewer mistakes and get to a really good experience more quickly than would otherwise be possible.

On the privacy front – as some of you may know, I made a big effort at the start to host almost everything we use ourselves – from RocketChat, GitLab and the website to Sendy and so on.  When the CCC talk suddenly generated the resonance it did, however, I needed to respond quickly to set up some kind of volunteering form and a project-specific email address (I just wasn’t prepared at all!).  I did this quickly with a Google Form and set up a Google email address. These are both quite secure, from everybody except Google – and the question arises, how much do you trust Google?

Hosting our own mail server for the core team (i.e. email addresses which end in eyeskills.org), for instance, is a non-trivial thing to do.  I have had offers from within the community to do this, but I worry about maintenance and all the associated potential problems with blacklisting/spam etc.

As far as I see it, we have three choices: keep using Google, self-host, or use a secure email provider.  If we use a secure email provider then it needs to be paid for each month.  I think this is a question which I would like *you* to answer.  Please indicate what you would prefer for now:

https://app.tomvote.com/answer/41a930ba1552a7f6fda6ffa65c8dbd15

You will be asked whether you would prefer us to use a paid email service, switch to self-hosting, or keep using Google email. Remember, this is about what the core team will use to communicate with you, not about what you have to use personally.

Right now I’m busy refactoring the Framework for a better experience (thanks again to the amazing prototypefund.de for their support!), although I’m *very* sorry that I don’t have a build ready to show yet.  I ran into a few technical blockers (like this one) and overestimated how much time I’d have with the Hackademy running in parallel 🙁  Nevertheless, there will be something soon; progress is being made one step at a time.  Before you know it 🙂 we’ll need input from the eye tracking cameras in the headset… but there is still a lot to do there.

Our amazing electronics expert Moritz (who, it turns out, has superpowers when it comes to soldering things so small you can barely see them with the naked eye):

…is taking charge of harassing Chinese camera manufacturers for more detailed camera specifications and quotes for parts, because the amazing Rene, Andre and Johann have discovered, in their deep dives into the Android USB layer, that the supported image formats of different chips are critical to whether or not we can get two cameras working simultaneously.

I also want to give a special shoutout to Rene for putting aside three weekends back-to-back, away from his family, on top of an incredibly stressful managerial day job.  He’s seriously determined.

When we get far enough to have the first working prototypes, I will call out to you for TEN alpha-testers.  We will ask you to cover the raw costs per headset (around 100EUR, as they are based on “samples” with high shipping and unit cost) and if you would like to offer us something for our time that would also be appreciated.  HOWEVER, if you do sign up for this, it will be on condition that you take it really seriously – that you use the system every day for at least a month, and give the most detailed and considered feedback you can about the performance of the system, where it is weak, and what you think we could improve.  We want *real* testers 🙂

We still have a way to go, but I’ll circle back around to this when the time is right.

I’m sure there are other things I’ve forgotten,  but I cannot resist the urge to get coding any longer, so ciao for now… and again, thank you for being here!

Ben

EyeSkills Newsletter – Design. Code. Refactor. Rinse. Repeat. 09.04.2019

Thanks!

Thank you to Gregory Taschuk for his 20EUR donation to the EyeSkills PayPal Pool!  The pool has already paid for a simple 3D printer to help prototype parts for the eyetracktive.org headset.

If we can get another 150 EUR into the pool in the next six days (to bring it to 500 EUR), I would buy this dirt-cheap (but robust) laser diode cutter, which is on special offer until the 15th of April.  It is only 2.5W, but that’s plenty for cutting card – which is what we need to work with to prototype improved designs for the eyetracktive.org core that folds up to hold the eye tracking cameras and USB hub.  It would be a major motivator and speed-up to have that here “in-house”.


Unity style/VrActivityTheme not found in AndroidManifest.xml bug?!?

If you run into the obscure and painful problem of Unity telling you “style/VrActivityTheme not found in AndroidManifest.xml”, then fear not.  You do not need to spend half a day trying to track down the problem.  At some point you probably (temporarily) added Google’s Daydream as a supported VR device in the Unity Player Settings, then removed it. Unfortunately, Unity didn’t fully clean up all the dependencies on Daydream, so it assumes that it still needs this VrActivityTheme (https://developers.google.com/vr/reference/vr-manifest).  It doesn’t.  The simplest solution? Just add support for Daydream back into your Player Settings (that’s not a bad thing), Unity will correctly resolve the dependencies, and it will build and run once more. Sigh.

Please “like” EyeTracktive and help us Win!

We need your support!

Please “like” us – https://hackaday.io/project/164944-eyetracktive – points mean prizes!

The excellent hackaday.io has organised a global:

“hardware design contest focused on product development. DesignLab connects you to engineers, expert mentors, and other powerful resources to take your product from concept to DFM.”

If everybody on this list takes five minutes to support us, we’ll be at position 5 on the leaderboard and far more visible to the world (thus attracting more likes). At the moment we’re at position 28 with 14 likes.

This might be what we need to help us take EyeTracktive to a finished product.  Eyetracktive is the ultra-low cost open-hardware eye tracking headset we’re working on to complement EyeSkills (so we can see objectively what the eyes are actually doing, enabling “at home” training with precision and safety).

If you register at hackaday.io you can “like” the project (https://hackaday.io/project/164944-eyetracktive) and help propel it up the leaderboard (https://prize.supplyframe.com/).

Thanks to an anonymous donation we’ve got enough together that I could buy a discounted (super, super cheap) A3 laser cutter which will allow us to continue prototyping and even producing the inner core of the EyeTracktive headset, but when it arrives I’ll still need to build an enclosure and set up a ventilation system.  If nothing else, every “like” wins us $3, which will help cover those materials! 🙂

Thanks!

Ben


Exploring an API for generating custom VR headset designs

Another target I’d like to accomplish is to provide an API for generating custom, parameterised EyeTracktive pupil tracking headsets.

OpenSCAD can theoretically produce .stl or .png output via the command line, but it requires a lot of cruft to get this to work (an X server or xvfb) – so wouldn’t it be nice if there were a pre-configured Docker container?

After a little look around I found this:

https://hub.docker.com/r/wtnb75/openscad

Theoretically, the simplest way to handle this would be to exec/run openscad inside the container and have it output a file into a directory mapped from the host, accessible from the API which is part of (e.g.) eyetracktive.org.
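As a rough sketch of that idea (and only a sketch – the image name comes from the Docker Hub link above, while the file names, paths and mapping are assumptions rather than settled choices):

import subprocess

def render_headset(scad_file, out_file, host_dir):
    # Run openscad inside the pre-built container, writing its output into a
    # host directory which the eyetracktive.org API could then serve.
    subprocess.run([
        "docker", "run", "--rm",
        "-v", f"{host_dir}:/work",   # map a host directory into the container
        "wtnb75/openscad",
        "openscad", "-o", f"/work/{out_file}", f"/work/{scad_file}",
    ], check=True)

# e.g. render_headset("core.scad", "core.stl", "/srv/eyetracktive/output")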

It looks, however, like I’ll have to see if I can cook up a Dockerfile based on https://github.com/wtnb75/dockerfiles/tree/master/openscad-fedora first.

 

Notes on Unity Animation

This wasn’t a bad starting point: https://www.youtube.com/watch?v=vPgS6RsLIjk

It’s important to remember, when you’ve created a sprite, that you need to add a SpriteSkin. Sometimes it fails to automatically detect the bones in your sprite, but so far that’s been simple to solve by making a few minor changes to the sprite and reapplying, after which the “CreateBones” button in the SpriteSkin works.  If you have an existing animation, you can drag and drop it onto the sprite. Next step – animation transitions.

In the Animation pane you can create a new animation from the drop-down, but to create links between those elements you’ll need to make sure the Animator window is visible (Window->Animation->Animator). There you can make links between the various states (https://www.youtube.com/watch?v=HVCsg_62xYw).  How can we have those state transitions occur without scripting? It turns out that the transitions already happen, but you need to “Play” a scene containing the model.

Where the ordering of limbs is incorrect, go into the SpriteEditor>SkinningEditor and set the individual bone depth by selecting the relevant bones.

The next issue will be transitioning sprite parts (rather than just animating their position).  My best guess is that we’ll end up animating enable/disable/active on alternative game objects inside the Animator (I hope).  Yep. That was quite intuitive.  Place the object you want somewhere inside the bone hierarchy of the sprite (inside the scene editor) and then, in the Animation pane, add a property for that object for “enabled” and animate it.

I suspect that, to enable the pupil to move freely around, I’ll have to add a mask around the “white” of the eye.

This is quite exciting.  A lot of opportunities for clearer communication and more interesting and interactive scenes have just opened up 🙂

Ultimately, I’d like to create a 3D representation (mesh) of the mascot, and a toon shader to go with it, which would be the most flexible approach – but for now I’ll create the basic poses I need as .SVG, then export to sprites and animate.

It seems that one can create too many bones.  The issue I’ve run into is that slicing the sprite prevents the Unity editor from letting me create bones which span the different sprite parts (surprise, it’s still buggy).  However, using autogeometry to split up the sprite makes it almost impossible to control how the bones overlay each other (e.g. around the eye), and control over things like mouth expression is currently beyond me using the inbuilt approach.

I suspect the way to do this is to create a completely separate multi-sprite for the eye and another for the mouth (with multiple expressions in the multi-sprite), and then to place these inside the bone object hierarchy.

A potential problem with this approach is that alterations to the bone structures seem to invalidate the sprite skin / bone editor in the scene – requiring it to be destroyed and recreated, which will lose all my setup 🙁

So, that worked well (I think).

There are eight sprites along the top, and only the collection of body parts below is skinned.  On the left in the scene hierarchy, you can see the other parts are placed under the body bone – with each game object having a “Sprite Renderer” added. Is there a better way?  The different parts of the multi-sprite are always visible in the object panel beneath the scene hierarchy.

An afternoon with Marco Schätzing

Hello there,
Lukas here. I am a new member of EyeSkills and today I decided to put out my first Facebook post!
Ben and I had the great chance to meet Marco Schätzing in his natural habitat. He is a trained optometrist and works as a visual trainer. Today, Marco showed us the “Maddox Test” again. The “Maddox Test” is a procedure used to measure strabismus at near and at distance. The “Maddox Rod”, which is actually a red lens made of parallel plano-convex cylinders, is placed in front of one eye, whilst the other eye looks at a numbered horizontal scale with zero at its centre. A light shines from the direction of the centre and is refracted by the “Maddox Rod”, so with the eye behind the rod Ben could only see a vertical streak of red light. That meant Ben could see where on the scale the streak from one eye appeared, in relation to where the other eye was looking. If he had seen it really close to the centre, it would have been what you would expect from a person without strabismus. At a distance of 6m the misalignment was only 1.8 degrees, which is not crazy, whilst up close (40cm) the divergence was 8 degrees.
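Just to put those numbers into perspective, here is a rough back-of-the-envelope calculation (plain trigonometry on the figures above, not anything Marco gave us) of how far apart the fixated point and the deviated line of sight end up at each distance:

import math

def lateral_offset(distance_m, deviation_deg):
    # Distance (in metres) between the fixation target and where the
    # deviating eye points, for a given angle at a given viewing distance.
    return distance_m * math.tan(math.radians(deviation_deg))

print(lateral_offset(6.0, 1.8))   # ~0.19 m at 6 m
print(lateral_offset(0.4, 8.0))   # ~0.06 m at 40 cm

So although the angle is much larger up close, the absolute offset is actually larger at distance – one reason a test like this has to take viewing distance into account explicitly.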

This was an interesting insight into how misalignment can vary with the distance of the fixated object. We were interested in that because we think we can recreate a test like this in the app – but in a more interesting way.

But this is not all we learned within our time with Marco:

We discussed the current version of the app, which Marco got to lay his eyes on today. It led to an interesting discussion about the many different shapes and forms of strabismus. We ended up talking more about the plight of people who have acquired their strabismus through a stroke. We know that this is a widespread phenomenon we should consider far more in our work.
And we will!

Let’s laser cut some headsets!

Over at eyetracktive.org you can see the results of some early experiments in creating the world’s most affordable eye tracking headset. The idea is to make this compatible with off-the-shelf Google Cardboard headsets.

One constantly underestimated problem, however, is that people have quite differently shaped heads and eye positions. I find it obnoxious when we’re all forced to use a one-size-fits-all solution.

What I’ve been tinkering with for a while is an approach which uses OpenSCAD to “mathematically” define the headset parts (only the core which contains the eye tracking hardware so far, but soon also the surrounding Google Cardboard design). The advantage of doing this in OpenSCAD is that all the critical dimensions can be defined as variables, with functions relating them in sensible ways to produce things like lines to cut! The even greater advantage of using OpenSCAD is that it can be called from the command line.
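To give a flavour of what “called from the command line” means in practice, here is a minimal sketch of how a web service might drive OpenSCAD, overriding model variables with -D (the -o and -D options are standard OpenSCAD; the variable and file names below are invented for illustration):

import subprocess

def render_core_svg(out_path, **dimensions):
    # Builds and runs a call such as:
    #   openscad -o core.svg -D ipd_mm=63 -D face_width_mm=140 headset_core.scad
    cmd = ["openscad", "-o", out_path]
    for name, value in dimensions.items():
        cmd += ["-D", f"{name}={value}"]    # override a variable defined in the model
    cmd.append("headset_core.scad")         # hypothetical model file
    subprocess.run(cmd, check=True)

render_core_svg("core.svg", ipd_mm=63, face_width_mm=140)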

The idea that’s been waiting patiently for attention, for some time, is to set up a web service which takes customisation requests, passes them into the OpenSCAD model, and thereby produces an .svg as output. An SVG snippet might look something like this:

Some SVG defining a document and the beginnings of a very long line!

This scalable vector graphics (SVG) file isn’t anything useful on its own – to have it turned into a laser-cut piece of cardboard, we need to turn it into GCode. GCode is a simple language which tells motors where to move, and tells things like laser beams to turn on or off at a given power. Here’s a simple snippet of GCode with some comments as an example:

G28                //Move the head to home position
G1 Z0.0            //Move in a straight line to depth 0.0
M05                //Turn off the spindle (laser)
G4 P0.2            //Pause for a little moment, doing nothing
G0 X43.1 Y74.4     //Move rapidly to X/Y position
M03                //Turn on spindle (laser)
G0 X43.1 Y81.4     //Move rapidly to X/Y position
...and so on for hundreds and hundreds of lines

So, we want to get from an OpenSCAD description, to SVG, to GCode, and eventually, send that to a printer.

How hard can that be?!? Let’s knock up a prototype!

In practice, I have no idea which machine I will ultimately connect to my super cheap Eleksmaker A3 Pro laser cutter – but I know that it’ll be one of the several Linux or OSX machines I have knocking around – so let’s pick an approach which will work just as well on any of them. One approach which will do for us is called Docker.

Docker basically packages everything an application needs to run into what they call a container. From the point of view of the application, it feels and looks like it is running in an operating system on a computer dedicated to nothing but keeping it happy and running perfectly. In actual fact Docker is just using smoke and mirrors to make it look this way – but it’s a trick which Docker has perfected on pretty much every computing platform, so it really is containerise once, run anywhere 😉

First we knock up a Dockerfile (thanks to @mbt for getting the ball rolling!) containing what we want in our environment. The most important parts are openscad and py-svg2gcode (which does the .svg to .gcode conversion). We then start the container:

docker run -i -t -v [host path]:/tmp/[something] openscad bash

This starts an interactive container, dumping us into bash, with [host path] mapped from the host machine to /tmp/[something] inside the docker container.

When we try to run py-svg2gcode, the first thing we notice is a bunch of errors:

Traceback (most recent call last):
  File "", line 1, in
  File "svg2gcode.py", line 78, in generate_gcode
    scale_x = bed_max_x / float(width)
ValueError: invalid literal for float(): 210mm

Yippee. Nothing ever works first time. Actually, this isn’t so bad. “ValueError: invalid literal for float(): 210mm” is perhaps a little cryptic at first sight, but it’s probably indicating that it is expecting a floating-point number where it is receiving a string containing “mm”. Lo and behold, if you look at the snippet of .svg above, you’ll see this is precisely what is happening.

Before we run svg2gcode, let’s always replace any occurrences of “mm” in incoming .svg files! Perhaps we’ll call svg2gcode from a bash script which preprocesses with:

sed -i 's/mm//g' $1

This takes the input argument to the script ($1) as the filename to process, then uses the Unix command sed to find all instances of mm (the g makes it replace every occurrence, not just the first on each line) and replace them in the same file (-i) with, well, nothing!
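If we later fold this into the web service, the same preprocessing might look like this in Python (just a sketch; it only strips “mm” where it directly follows a digit, which is slightly less blunt than the sed one-liner):

import re

def strip_mm_units(svg_path):
    # svg2gcode expects bare floats for width/height, not strings like "210mm".
    with open(svg_path) as f:
        svg = f.read()
    svg = re.sub(r"(\d)mm", r"\1", svg)   # drop 'mm' only when it follows a digit
    with open(svg_path, "w") as f:
        f.write(svg)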

Great. Our next inevitable problem is that we get a load of these statements in the output:

pt: (-17.952574468085107, 97.97540425531915)
        --POINT NOT PRINTED (150,150)

Perhaps the point -17/97 is somehow outside the bounds of the printer? It turns out that svg2gcode uses a config.py to define constants such as the area of the printer. Indeed, bed_max_x and bed_max_y are both set to 150. We’ll have to change that, and do it in a way that Docker remembers between restarts. We’ll also have to worry about why we’re getting negative values in just a moment. Is the problem that the point is negative, or that the cumulative y position to date exceeds 150?

First of all, in our Dockerfile we can tell it to take a file from our local file system and add it to the image:

COPY config.py /py-svg2gcode/config.py
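For reference, the config.py we copy in doesn’t need to contain much – something along these lines (bed_max_x/bed_max_y are the names from the error output; 400×300 matches the working area you’ll see in the output below and roughly fits the Eleksmaker A3):

# config.py – overridden constants for py-svg2gcode
bed_max_x = 400   # usable width in mm (was 150 in the stock config)
bed_max_y = 300   # usable height in mm (was 150 in the stock config)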

Now we have:

pt: (-28.0079, 217.75119999999998)
        --POINT NOT PRINTED (400,300)

So more points got printed, but the negative numbers are clearly a problem. This may mean we need to be careful in generating our coordinate space, or we cheat, setting the origin of the laser cutter to the middle of its area and defining the available space as -200 to +200 and so on…

Looking in the OpenSCAD file I am trying to convert – there we are… the headset is centred around the origin.

For now I shall apply a transform to shift it off the origin before we attempt the GCode generation:

Transformed model

When we inspect this in Inkscape, it also looks good:

..and in the raw .svg we see that the width and height are within the bounds of our machine:

Raw .svg output from the transformed model

None of the points in the line description exceed either width or height. You’d think this would be fine for svg2gcode, right?

Sigh. Pages and pages of “point not printed”:

After then looking into svg2gcode.py, I’ve realised it is quite incomplete and makes many strange assumptions about scaling etc. which don’t fit our use case. Time to try a different approach…

OpenSCAD supports DXF export, and there appears to be a more mature dxf2gcode library out there – so let’s go with the flow and try that approach instead!.. Only, after updating my container to install it, it turns out that this isn’t a library; it’s a program that requires a window manager… and so it goes on. This is the reality of prototyping, as you cast around for tools to do the job in the vain hope that you’re not going to end up having to implement too much yourself. :-/

It feels like options are running out – should I take a look at https://sourceforge.net/projects/codeg/? – a project that stopped doing anything back in 2006. All in all this is terribly sad for such a basic and common (?) need. Perhaps first it’s time to look more deeply at svg2gcode.py and see if we can just strip out its weird scaling code.

First off, it actually looks like there are many branches of the original code – for example – https://github.com/SebKuzminsky/svg2gcode is more up to date than most. Let’s update our Dockerfile in a way you never would in production to try things out quickly:

FROM debian:unstable
RUN apt-get update && apt-get install -y openscad && apt-get install -y python
RUN apt-get install -y git && git clone https://github.com/SebKuzminsky/svg2gcode.git
RUN cd /svg2gcode && git submodule init && git submodule update && apt-get install -y python python-svgwrite python-numpy python-jsonschema asciidoc docbook-xml docbook-xsl xsltproc
RUN apt-get install -y vim
COPY config.py /svg2gcode/config.new
CMD ["echo","Image created"]

Now we’ll build our new image with:

docker build -t openscad .

Now let’s have an initial nose around. At first glance, this looks way more intense – there’s a lot of code in svg2gcode.py specifically for milling. It’s a different kind of beast from the last Python script! A look at the README.md (why don’t I ever think to start there?) says that it can handle engraving. How can we specify that type of operation?

“python svg2gcode” actually gives us some sensible feedback/potential instructions. I’m already liking this – although I see no options for “engraving/offset/pocket/mill” and so on. Let’s take another look in the .py file.

So – I’m not a Python guy *at all* – but:

this looks promising

…but what are these “op” operations it speaks of? It looks like there is some sort of JSON-formatted job description. I’m not sure I like the way *that* is going. Also, there are some interesting leads in https://github.com/SebKuzminsky/svg2gcode/blob/master/TODO.adoc.

Yep. A quick “grep -R engrave *” reveals a bunch of unit tests (it’s back in my good books again) which show a JSON job format, xxx.s2g, that looks like:

…sorry, I have no idea why copy/paste has stopped working from my docker terminal :-/ Not a rabbit hole I’m going down this instant.

So, it looks like we need a job description next to our SVG – and then we can see what comes out of it!

Here’s the output:

Hmmm. “not even close”.

So, it doesn’t like the paths in the .svg. Why? That’s the next thing to explore. I’ll output some simple primitive shapes in OpenSCAD, then some using operations like “difference”, and see when/where the conversion process breaks… or is this a deeper problem? Do I need to create many jobs composed of much lower-level “closed” lines?

Yep. Looking at some of the more advanced examples with multiple shapes in the .svg, the job description defines how each individual path needs to be handled:

{
    "jobs": [
        {
            "paths": [ 6, 7, 8, 9, 10, 11, 12 ],
            "operations": [
                {
                    "drill": { }
                }
            ]
        },
        {
            "paths": [ 1, 2, 3, 4, 5 ],
            "operations": [
                {
                    "offset": {
                        "distance": 4,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                },
                {
                    "offset": {
                        "distance": 3.175,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                }
            ]
        },
        {
            "paths": [ 0 ],
            "operations": [
                {
                    "offset": {
                        "distance": -4,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                },
                {
                    "offset": {
                        "distance": -3.175,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                }
            ]
        }
    ]
}

This is well beyond what we want or need. It is a general artefact of abstraction that the more generic a tool becomes, the harder it is to get it to do anything specific. A theory of the universe just wraps the universe in a plastic bag and you’re no closer to understanding any of it.

My gut feeling says: go back to the cruder and simpler original svg2gcode.py and modify it. Our needs are simple.
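To make that concrete, here’s roughly what a stripped-down converter for our cutting-only case might boil down to – a sketch, not working project code, using the svgpathtools library to do the path sampling:

from svgpathtools import svg2paths   # pip install svgpathtools

def svg_to_laser_gcode(svg_file, samples_per_segment=20):
    # Sample every SVG path into short straight moves and emit laser on/off
    # GCode, loosely mirroring the hand-written snippet earlier in this post.
    # No scaling, no arcs, no bounds checking – just the simple cutting case.
    paths, _ = svg2paths(svg_file)
    lines = ["G28"]                                    # home the head
    for path in paths:
        start = path.point(0)
        lines += ["M05",                               # laser off while travelling
                  f"G0 X{start.real:.2f} Y{start.imag:.2f}",
                  "M03"]                               # laser on
        for segment in path:
            for i in range(1, samples_per_segment + 1):
                p = segment.point(i / samples_per_segment)
                lines.append(f"G1 X{p.real:.2f} Y{p.imag:.2f}")
    lines.append("M05")                                # laser off at the end
    return "\n".join(lines)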

Well, that’s another 40 minutes of an evening gone. Back at it at the next opportunity!