Data Provenance

It really matters that we know exactly what system setup generated any data we collect – the idea being that experiments can be re-run (and thus independently verified) in precisely the same technical context as the originals.

To achieve this we store the following information underneath anything else captured by a researcher:

Basic information capture

You might notice the buildVersion variable. This is actually a build id taken from git, so that a researcher can re-create precisely the same environment as a prior experiment (e.g. one run by a different researcher). For now it assumes that all builds are made from the same repository.

How does this buildVersion actually get there? In /[projectPath]/Assets/Plugins/Android/mainTemplate.gradle (around line 82) you’ll find a little hack that creates a custom class called Version during the Unity build process:

buildTypes {
    debug {
        …
    }
    release {
        // We want to provide information to the app about the .git version that the app was built from.
        // For repeatable open science, and to handle changing schemas, we need to be able to pin an InfoBase item to a specific build.
        def stdout = new ByteArrayOutputStream()
        exec {
            // Ask git for the short hash of the current commit
            commandLine 'git', 'log', '--pretty=format:"%h"', '-n', '1'
            standardOutput = stdout
        }
        // Write it out to a class (Version) which our InfoBase can then instantiate to extract the "version"
        new File("DIR_UNITYPROJECT/Assets/Scripts/Version.cs").text =
            "namespace EyeSkills { public class Version { public string version = $stdout; }}"
    }
}

That’s why and how we can reference Version.cs to get the buildVersion. However, be warned that refactoring can cause Version.cs to move from its expected location. You will then get uncomfortable Gradle build failure errors from Unity. Check the path to Version.cs if you see problems with Gradle… it might just be the cause!

Plotter friendly design

OpenSCAD (and my use of the wrong primitives for the dashed lines) didn’t do a great job of converting the lines and text in the original EyeTracktive Core design into a format that works well for a plotter pen marking out the headset.

I just tried to manually create a better overlay in Inkscape (as a stop-gap) and ran into a few problems.

  1. Text doesn’t naturally convert to single-stroke paths (obviously, if you think about it). In Inkscape, Extensions->Render->Hershey Text… comes to the rescue. This creates plotter-friendly text.
    1. However, it dumps this text somewhere fairly random outside the page border (on my machine at least), so you need to hunt it down.
    2. It then turns out that Inkscape uses the SVG transform attribute (a matrix transform) to position the text you’ve just created as you drag and drop it into place. This is incompatible with Cricut’s crappy DesignSpace software, which doesn’t understand transformation matrices, so it just ignores them.
    3. Solution: ungroup each text element individually, as Inkscape seems to only wrap groups in transform matrices.

Diamond Depth Perception – a user reports

Hi everyone,

I’ll start with a bit of background. I’m Ben’s younger brother, Nick, and I was very keen to see how his software works. Other than mild short-sightedness, I do not think there is anything wrong with my eyes. Sitting in Ben’s office, I noticed a 3D magic eye picture on the wall, and commented that I have never been able to see the 3D image, even though I had a book full of pictures as a child. I tried it again; no success.

We started with the triangles [Ben : the standard conflict scene] and they were overlaid as expected. When we got to the cyan and yellow semicircles within white circles, I noted the horizontal offsets between the semicircles, but everything was flat. There was no depth, and the outer white diamond that encloses the entire image was square.


I noticed that I could consciously fade either the cyan or yellow semicircles to being invisible (and control the brightness at levels in between) but I really didn’t understand the instructions, which were to select the circle that was ‘closest’. Everything was still flat and I ended up choosing the semicircles that matched closest. This was not right and I kept selecting the wrong circle!

I then tried moving my focal distance (the point at which both eyes converged) closer and then further away, and noticed that I could move the semicircles sideways to get them to join into a full circle. By doing this several times for each pair, I could figure out which circle was ‘closest’ entirely by feeling my eye muscles. Did this circle ‘feel’ closer because my eyes were more cross-eyed? But this wasn’t particularly sensitive, and sometimes circles were so similar that I had to adjust my eyes several times on each circle to feel a difference. Still flat though!

Then I looked into the black space in the middle of the screen, relaxed my eyes (I was probably staring into the distance) and BAM, I could see the four circles were distorted and had depth. Also, the outer diamond was bowed toward me. Keeping my eyes on this black space, it was easy to judge the relative distances of the circles. After a few rounds, I was able to look around the image and maintain the 3-dimensional illusion at all times.

I guess the purpose of this post is that I want to share my discovery that seeing the image in 3D is not necessarily automatic. It took me a few stages for it to happen: manually varying my distance of focus and figuring out this distance based on my eye muscles, then staring into the middle of the screen before the 3D became apparent. Perhaps I have been unable to separate the two processes of focussing (which is done with the lens in each eye) and depth perception (which uses the two eyes together). Certainly, when trying the 3D magic eye pictures, my eyes have been staring into the distance (correct!) but my focus was in the distance too (incorrect!), so the image was just a blur. Time to find that old book! I hope this helps someone. Best of luck.

Nick

When Unity UI Goes bad – The perils of DontDestroyOnLoad

This is a quick note about a really obscure problem which can bite you quite hard in Unity. If you switch between scenes, you may need to use DontDestroyOnLoad to keep certain objects hanging around during and after the transition. If, however, you have stored collections of scripts under empty GameObjects, and there is an overlap between the set of scripts in a GameObject in Scene1 and in Scene2, then DontDestroyOnLoad will prevent the GameObject containing the clashing script in Scene2 from being loaded AT ALL! This will break things in the most unexpected and non-obvious way. For instance, RayCasters may suddenly disappear because they were never loaded (but ONLY when Scene2 is loaded via Scene1), causing user interface elements to no longer pick up user input, etc.

Be warned.

Let’s laser cut some headsets!

Over at eyetracktive.org you can see the results of some early experiments in creating the world’s most affordable eye tracking headset. The idea is to make it compatible with off-the-shelf Google Cardboard headsets.

One constantly underestimated problem, however, is that people have quite differently shaped heads and eye positions. I find it obnoxious when we’re all forced to use a one-size-fits-all solution.

What I’ve been tinkering with for a while is an approach which uses OpenSCAD to “mathematically” define the headset parts (only the core which contains the eye tracking hardware so far, but soon also the surrounding Google Cardboard design). The advantage of doing this in OpenSCAD is that all the critical dimensions can be defined as variables, with functions relating them in sensible ways to produce things like lines to cut! The even greater advantage of using OpenSCAD is that it can be called from the command line.
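
To give a feel for how thin that command-line glue can be, here is a minimal Python sketch. The headset_core.scad filename and the ipd / face_width variables are hypothetical placeholders rather than the real EyeTracktive parameter names:

# Sketch: drive OpenSCAD from Python, overriding model variables via -D.
# "headset_core.scad", "ipd" and "face_width" are placeholder names.
import subprocess

def render_headset_svg(scad_file, output_svg, overrides):
    # Builds e.g.: openscad -o out.svg -D ipd=63 -D face_width=140 headset_core.scad
    cmd = ["openscad", "-o", output_svg]
    for name, value in overrides.items():
        cmd += ["-D", "{}={}".format(name, value)]
    cmd.append(scad_file)
    subprocess.check_call(cmd)  # raises if OpenSCAD exits with an error

render_headset_svg("headset_core.scad", "headset_core.svg", {"ipd": 63, "face_width": 140})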

The idea that’s been waiting patiently for attention, for some time, is to set up a web service which takes customisation requests, passes them into the OpenSCAD model, and thereby produces an .svg as output. An SVG snippet might look something like this:

Some SVG defining a document and the beginnings of a very long line!

This scalable vector graphics (SVG) file isn’t anything useful on its own – to have it turned into a laser-cut piece of cardboard, we need to turn it into GCode. GCode is a simple language which tells motors where to move, and tells things like laser beams to turn on or off at a given power. Here’s a simple snippet of GCode with some comments as an example:

G28                //Move the head to home position
G1 Z0.0            //Move in a straight line to depth 0.0
M05                //Turn off the spindle (laser)
G4 P0.2            //Pause for a little moment doing nothing
G0 X43.1 Y74.4     //Move rapidly to X/Y position
M03                //Turn on spindle (laser)
G0 X43.1 Y81.4     //Move rapidly to X/Y position
...and so on for hundreds and hundreds of lines

So, we want to get from an OpenSCAD description, to SVG, to GCode, and eventually, send that to a printer.

How hard can that be?!? Let’s knock up a prototype!

In practice, I have no idea what machine I will ultimately connect to my super cheap Eleksmaker A3 Pro laser cutter, but I know that it’ll be one of the several Linux or macOS machines I have knocking around, so let’s pick an approach which will work just as well on any of them. One approach which will do for us is called Docker.

Docker basically packages everything an application needs to run into what they call a container. From the point of view of the application, it feels and looks like it is running in an operating system on a computer dedicated to nothing but keeping it happy and running perfectly. In actual fact, Docker is just using smoke and mirrors to make it look this way – but it’s a trick which Docker has perfected on pretty much every computing platform, so it’s really containerise once, run anywhere 😉

First we knock up a Dockerfile (thanks to @mbt for getting the ball rolling!) containing what we want in our environment. The most important parts are openscad and py-svg2gcode (which does the .svg to .gcode conversion). We then start the container:

docker run -i -t -v [host path]:/tmp/[something] openscad bash

This starts an interactive container, dumping us into bash, with [host path] mapped from the host machine to /tmp/[something] inside the docker container.

When we try to run py-svg2gcode, the first thing we notice is a bunch of errors:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "svg2gcode.py", line 78, in generate_gcode
    scale_x = bed_max_x / float(width)
ValueError: invalid literal for float(): 210mm

Yippee. Nothing ever works first time. Actually, this isn’t so bad. “ValueError: invalid literal for float(): 210mm” is perhaps a little cryptic at first sight, but it’s probably indicating that it is expecting a floating point number where it is receiving a string containing “mm”. Lo and behold, if you look at the snippet of .svg above, you’ll see this is precisely what is happening.
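
As a sanity check, you can reproduce the complaint in a Python 2 shell in one line:

float("210")     # 210.0 - the kind of value svg2gcode expects
float("210mm")   # ValueError: invalid literal for float(): 210mm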

Before we run svg2gcode, let’s always replace any occurrences of “mm” in incoming .svg files! Perhaps we’ll call svg2gcode from a bash script which preprocesses with:

sed -i 's/mm//g' $1

This takes the input argument to the script ($1) as the filename to process, then uses the Unix command sed to find all instances of mm (the trailing g makes the substitution global) and replace them with, well, nothing, editing the file in place (-i)!
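
If more pre-processing turns out to be needed later, the same crude “strip every mm” step could just as well live in a small Python helper. A sketch, doing exactly what the sed one-liner does:

# strip_mm.py - remove every occurrence of "mm" from an .svg file, in place.
# Equivalent to: sed -i 's/mm//g' file.svg
import sys

def strip_mm(path):
    with open(path) as f:
        text = f.read()
    with open(path, "w") as f:
        f.write(text.replace("mm", ""))

if __name__ == "__main__":
    strip_mm(sys.argv[1])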

Great. Our next inevitable problem is that we get a load of these statements in the output:

pt: (-17.952574468085107, 97.97540425531915)
        --POINT NOT PRINTED (150,150)

Perhaps the point (-17, 97) is somehow outside the bounds of the printer? It turns out that svg2gcode uses a config.py to define constants such as the area of the printer. Indeed, bed_max_x and bed_max_y are both set to 150. We’ll have to change that, and do it in a way that Docker remembers between restarts. We’ll also have to worry about why we’re getting negative values in just a moment. Is the problem that the point is negative, or that the cumulative y position to date exceeds 150?

First of all, in our Dockerfile we can tell it to take a file from our local file system and add it to the image:

COPY config.py /py-svg2gcode/config.py
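
The only constants we actually touch in that copied-in config.py are the bed dimensions. The relevant excerpt probably looks something like this (the 400 x 300 values are chosen to roughly match the A3-sized cutter; the rest of the upstream config is left alone):

# config.py (excerpt) - bed size constants read by py-svg2gcode
bed_max_x = 400   # usable width in mm
bed_max_y = 300   # usable height in mm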

Now we have:

pt: (-28.0079, 217.75119999999998)
        --POINT NOT PRINTED (400,300)

So more points got printed, but the negative numbers are clearly a problem. This may mean we need to be careful in generating our coordinate space, or we cheat, setting the origin of the laser cutter to the middle of its area and defining the available space as -200 to +200 and so on…

Looking in the OpenSCAD file I am trying to convert, there we are… the headset is centered around the origin.

For now I shall apply a transform to shift it off origin before we attempt the gcode generation:

Transformed model

When we inspect this in Inkscape, it also looks good:

..and in the raw .svg we see that the width and height are within the bounds of our machine:

Raw .svg output from the transformed model

None of the points in the line description exceed either width or height. You’d think this would be fine for svg2gcode, right?

Sigh. Pages and pages of “point not printed”:

After looking more closely into svg2gcode.py, I’ve realised it’s quite incomplete, and makes many strange assumptions about scaling which don’t fit our use case. Time to try a different approach…

OpenSCAD supports DXF export, and there appears to be a more mature dxf2gcode library out there – so let’s go with the flow and try that approach instead!.. only, after updating my container to install it, it turns out that this isn’t a library at all: it’s a program that requires a window manager… and so it goes on. This is the reality of prototyping, as you cast around for tools to do the job in the vain hope that you’re not going to have to end up implementing too much yourself. :-/

It feels like options are running out – should I take a look at https://sourceforge.net/projects/codeg/, a project that stopped doing anything back in 2006? All in all this is terribly sad for such a basic and common (?) need. Perhaps first it’s time to look more deeply at svg2gcode.py and see if we can just strip out its weird scaling code.

First off, it actually looks like there are many forks of the original code – for example, https://github.com/SebKuzminsky/svg2gcode is more up to date than most. Let’s update our Dockerfile in a way you never would in production, to try things out quickly:

FROM debian:unstable
RUN apt-get update && apt-get install -y openscad && apt-get install -y python
RUN apt-get install -y git && git clone https://github.com/SebKuzminsky/svg2gcode.git
RUN cd /svg2gcode && git submodule init && git submodule update && apt-get install -y python python-svgwrite python-numpy python-jsonschema asciidoc docbook-xml docbook-xsl xsltproc
RUN apt-get install -y vim
COPY config.py /svg2gcode/config.new
CMD ["echo","Image created"]

Now we’ll build our new image with:

docker build -t openscad .

Now let’s have an initial nose around. At first glance, this looks way more intense – there’s a lot of code in svg2gcode.py specifically for milling. It’s a different kind of beast to the last Python script! Taking a look in the README.md (why don’t I ever think to start there?) reveals that it can handle engraving. How can we specify that type of operation?

Running “python svg2gcode” actually gives us some sensible feedback/potential instructions. I’m already liking this – although I see no options for “engraving/offset/pocket/mill” and so on. Let’s take another look in the .py file.

So – I’m not a Python guy *at all*, but:

this looks promising

…but what are these op operations it speaks of? It looks like there is some sort of JSON-formatted job description. Not sure I like the way *that* is going. Also, there are some interesting leads in https://github.com/SebKuzminsky/svg2gcode/blob/master/TODO.adoc.

Yep. A quick “grep -R engrave *” reveals a bunch of unit tests (it’s back in my good books again) which show a JSON job format, xxx.s2g, that looks like:

…sorry, I have no idea why copy/paste has stopped working from my docker terminal :-/ Not a rabbit hole I’m going down this instant.

So, it looks like we need a job description next to our SVG – and then we can see what comes out of it!

Here’s the output:

Hmmm. “not even close”.

So, it doesn’t like the paths in the .svg. Why? That’s the next thing to explore. I’ll output some simple primitive shapes in OpenSCAD, then some using operations like “difference”, and see when/where the conversion process breaks… or is this a deeper problem? Do I need to create many jobs composed of much lower-level “closed” lines?

Yep. Looking at some of the more advanced examples with multiple shapes in the .svg, the job description defines how each individual path needs to be handled:

{
    "jobs": [
        {
            "paths": [ 6, 7, 8, 9, 10, 11, 12 ],
            "operations": [
                { "drill": { } }
            ]
        },
        {
            "paths": [ 1, 2, 3, 4, 5 ],
            "operations": [
                {
                    "offset": {
                        "distance": 4,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                },
                {
                    "offset": {
                        "distance": 3.175,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                }
            ]
        },
        {
            "paths": [ 0 ],
            "operations": [
                {
                    "offset": {
                        "distance": -4,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                },
                {
                    "offset": {
                        "distance": -3.175,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                }
            ]
        }
    ]
}

This is well beyond what we want or need. It is a general artefact of abstraction that the more generic a tool becomes, the harder it is to get it to do anything specific. A theory of the universe just wraps the universe in a plastic bag and you’re no closer to understanding any of it.

My gut feeling tells me, go back to the cruder and simpler original svg2gcode.py and modify. Our needs are simple.

Well, that’s another 40 minutes of an evening gone. Back at it at the next opportunity!

An afternoon with Marco Schätzing

Hello there,
Lukas here. I am a new member of EyeSkills, and today I decided to put out my first Facebook post!
Ben and I had the great chance to meet Marco Schätzing in his natural habitat. He is a trained optometrist and works as a visual trainer. Today, Marco showed us the “Maddox Test” again. The “Maddox Test” is a procedure used to measure strabismus at near and far distances. The “Maddox Rod”, which is actually a red parallel plano-convex cylinder lens, is placed in front of one eye, whilst the other eye looks at a numbered horizontal axis with the zero at its centre. A light shines from the direction of the centre and is refracted by the “Maddox Rod”, so with the eye behind the rod Ben could only see a vertical streak of red light. Ben could therefore see where on the axis the streak from one eye appeared, in relation to where the other eye was looking. If he had seen it really close to the centre, it would have been what you would expect from a person without strabismus. At a 6m distance, the misalignment was only 1.8 degrees, which is not crazy, whilst up close (40cm) the divergence was 8 degrees.

This was an interesting insight into how misalignment can vary with the distance of the fixated object. We were interested in this because we think we can recreate a test like it in the app – but in a more interesting way.

But this is not all we learned within our time with Marco:

We discussed the recent version of the app, which Marco could lay his eyes on today. It led to an interesting discussion about the many different shapes and forms of strabismus. We ended up talking more about the plight of people who have acquired their strabismus through a stroke. We now know that this is a huge phenomenon we should consider far more in our work.
And we will!

Blender Outlining [SOLVED] – Cannot find Backface Culling option under Viewport Shading.

Helping people with Lazy Eye learn to use both eyes again requires a long-term commitment on their part.  That shouldn’t be a drag, so I am hoping to inject a little humour through two animated characters which accompany the participant on their journey, helping to motivate people to continue.

A part of the look and feel that I am striving for is to use a Toon Shader / Cel Shader to create an outline image similar to that shown in this short tutorial:

Unfortunately, the current release of Blender 2.8 just doesn’t work the same way.  You cannot select Backface Culling from the shader menu!

It does exist in the “solid” view as shown below, but that isn’t very useful:

In the rendering modes, the option just isn’t there anymore:

Now, you need to look inside the material you chose to be your outline, and under settings you’ll find the “Backface Culling” option.

 I hope this helps somebody!

Notes on Using Blender 2.8

One thing I discovered when testing EyeSkills at the beginning is that people who have a lazy eye often have a particular psychological relationship to the condition.  They would like to understand and perhaps change it, but they have generally come to accept it as “inevitable” and found ways to compensate in everyday life.  It is regarded as an unknowable mystery.

The consequence of this is that initial enthusiasm does not continue into daily practice, where there are too many hurdles, made worse by the mountains of technical jargon associated with amblyopia and strabismus!  To make EyeSkills accessible, I believe it needs a simple metaphor, expressed visually and humorously. This needs to run on a human platform, after all!

I have taught myself 2D animation over the last week, but it is a slow and inefficient process. I can see that I need to move to 3D.  To do that I’m now teaching myself Blender as quickly as possible, so that I can go from character modelling, to rigging (with humanoid bones and correct standard names) and skinning, to shape keys, to UV mapping and drawing textures, to animation, to importing into Unity and finally creating a cel shader to give the animations the cartoon-like character I’m seeking.

As I run into issues with Blender, I’ll make some notes here.

Metaballs

I’m using Metaballs to model Domi/Supti before creating their meshes.  It turns out that naming is very important.  If the metaballs stop interacting, be sure to read this: https://docs.blender.org/manual/en/latest/modeling/metas/editing.html#object-families

Unfortunately, having got to the kind of base model I wanted :

I couldn’t find a way to convert it to a mesh without it suddenly becoming “overweight”!

All in all, after speaking with a friend, metaballs may not be the way to start the character modelling. I’ll try again building with meshes from the ground up.

This involves much more direct manipulation of vertices and edge/face creation:

You will probably want to work on half of your object with the mirror modifier enabled, but only apply the mirror modifier when finished. It’s also extremely likely that you’ll want to be able to select vertices behind those visible:

https://blender.stackexchange.com/questions/119472/blender-2-8-how-to-select-edges-behind-other-edges-when-in-numpad-1-mode

Copying modifiers

Shift-select the items you want to receive the modifier, and finally the item with the modifier you want to copy.  Then use Ctrl-L to link the modifiers, BUT you MUST do this with the mouse cursor over the modelling viewport – otherwise nothing will happen!!!

A problem with this approach is that, if armature or mirror modifiers are already set up, you’ll end up breaking your animations and causing new limb parts to appear everywhere!!!  To make use of this approach, try to model the mesh first (only on one side), then the materials, then things like the outline, mirror modifiers, and rigging last.

Notes on Unity Animation

This wasn’t a bad starting point : https://www.youtube.com/watch?v=vPgS6RsLIjk

It’s important to remember, when you’ve created a sprite, that you need to add a SpriteSkin. Sometimes it fails to automatically detect the bones in your sprite but, so far, that’s been simple to solve by making a few minor changes to the sprite, reapplying, and then the “CreateBones” button in the SpriteSkin works successfully.  If you have an existing animation, you can drag and drop it onto the sprite. Next step – animation transitions.

In the Animation pane you can create a new animation from the drop-down, but to create links between those elements you’ll need to make sure the Animator window is visible (Window->Animation->Animator). There you can make links between the various states (https://www.youtube.com/watch?v=HVCsg_62xYw).  How can we have those state transitions occur without scripting? It turns out that the transitions already happen, but you need to “Play” a scene containing the model.

Where the ordering of limbs is incorrect, go into the SpriteEditor > SkinningEditor and set the individual bone depth by selecting the relevant bones.

The next issue will be transitioning sprite parts (rather than just animating their position).  My best guess is that we’ll end up animating enable/disable/active on alternative game objects inside the Animator (I hope).  Yep. That was quite intuitive.  Place the object you want somewhere inside the bone hierarchy of the sprite (inside the scene editor) and then, in the Animation pane, add a property for that object’s “enabled” flag and animate it.

I suspect that, to enable the pupil to move freely around, I’ll have to add a mask around the “white” of the eye.

This is quite exciting.  A lot of opportunities for clearer communication and more interesting and interactive scenes have just opened up 🙂

Ultimately, I’d like to create a 3D representation (mesh) of the mascot, and a toon shader to go with it, which would be the most flexible approach but for now I’ll create the basic poses I need to start with as .SVG, then export to sprites and animate.

It seems that one can create too many bones.  The issue I’ve run into is that slicing the sprite prevents the Unity editor from allowing me to create bones which span the different sprite parts (surprise, it’s still buggy).  However, using autogeometry to split up the sprite makes it almost impossible to control how the bones overlay each other (e.g. around the eye), and control over things like mouth expression is currently beyond me using the inbuilt approach.

I suspect the way to do this is to create a completely separate multi-sprite for the eye and another for the mouth (with multiple expressions in the multi-sprite), and then to place these inside the bone object hierarchy.

A potential problem with this approach is that alterations to the bone structures seem to invalidate the sprite skin / bone editor in the scene – requiring it to be destroyed and recreated, which will lose all my setup 🙁

So, that worked well (I think).

There are eight sprites along the top, and only the collection of body parts below them is skinned.  On the left in the scene hierarchy, you can see the other parts are placed under the body bone – with each game object having a “Sprite Renderer” added. Is there a better way?  The different parts of the multi-sprite are always visible in the object panel beneath the scene hierarchy.