Diamond Depth Perception – a user reports

Hi everyone,

I’ll start with a bit of background. I’m Ben’s younger brother, Nick, and I was very keen to see how his software works. Other than mild short-sightedness, I do not think there is anything wrong with my eyes. Sitting in Ben’s office, I noticed a 3D magic eye picture on the wall, and commented that I have never been able to see the 3D image, even though I had a book full of pictures as a child. I tried it again; no success.

We started with the triangles [Ben : the standard conflict scene] and they were overlaid as expected. When we got to the cyan and yellow semicircles within white circles, I noted the horizontal offsets between the semicircles, but everything was flat. There was no depth, and the outer white diamond that encloses the entire image was square.

I noticed that I could consciously fade either the cyan or yellow semicircles until they were invisible (and control the brightness at levels in between), but I really didn’t understand the instructions, which were to select the circle that was ‘closest’. Everything was still flat, and I ended up choosing the semicircles that matched most closely. This was not right, and I kept selecting the wrong circle!

I then tried moving my focal distance (the point at which both eyes converged) closer and then further away, and noticed that I could move the semicircles sideways to get them to join into a full circle. By doing this several times for each pair, I could figure out which circle was ‘closest’ entirely by feeling my eye muscles. Did this circle ‘feel’ closer because my eyes were more cross-eyed?… But this wasn’t particularly sensitive; sometimes circles were very similar, and I had to adjust my eyes several times on each circle to feel a difference. Still flat though!

Then I looked into the black space in the middle of the screen, relaxed my eyes (I was probably staring into the distance) and BAM, I could see the four circles were distorted and had depth. Also, the outer diamond was bowed toward me. Keeping my eyes on this black space, it was easy to judge the relative distances of the circles. After a few rounds, I was able to look around the image and maintain the 3-dimensional illusion at all times.

I guess the purpose of this post is that I want to share my discovery that seeing the image in 3D is not necessarily automatic. It took a few stages for it to happen: manually varying my distance of focus and figuring out this distance based on my eye muscles, then staring into the middle of the screen before the 3D became apparent. Perhaps I have been unable to separate the two processes of focussing (which is done with the lens in each eye) and depth perception (which uses the two eyes together). Certainly, when trying the 3D magic eye pictures my eyes have been staring into the distance (correct!) but my focussing was set to the distance too (incorrect!), so the image was just a blur. Time to find that old book! I hope this helps someone. Best of luck.

Nick

When Unity UI Goes bad – The perils of DontDestroyOnLoad

This is a quick note about a really obscure problem which can bite you quite hard in Unity. If you switch between scenes, you may need to use DontDestroyOnLoad to keep certain objects hanging around during and after the transition. If, however, you have stored collections of scripts under empty GameObjects, and there is an overlap between the set of scripts in a GameObject in Scene1 and in Scene2, then DontDestroyOnLoad will prevent the GameObject containing the clashing script in Scene2 from being loaded – AT ALL! This will break things in the most unexpected and non-obvious ways. For instance, RayCasters may suddenly disappear because they were never loaded (but ONLY when Scene2 is loaded via Scene1), causing User Interface elements to no longer pick up user input.

Be warned.

Let’s laser cut some headsets!

Over at eyetracktive.org you can see the results of some early experiments in creating the world’s most affordable eye tracking headset. The idea is to make this compatible with off-the-shelf Google Cardboard headsets.

One constantly underestimated problem, however, is that people have quite differently shaped heads and eye positions. I find it obnoxious when we’re all forced to use a one-size-fits-all solution.

What I’ve been tinkering with for a while is an approach which uses OpenSCAD to “mathematically” define the headset parts (only the core which contains the eye tracking hardware so far, but soon also the surrounding Google Cardboard design). The advantage of doing this in OpenSCAD is that all the critical dimensions can be defined as variables, with functions relating them in sensible ways to produce things like lines to cut! The even greater advantage of using OpenSCAD is that it can be called from the command line.
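As a taste of what that enables, a single command can override the model’s variables and export the 2D outline straight to SVG (the parameter names here are invented for illustration – the real ones live in the .scad file):

# hypothetical call – "ipd_mm" and "face_width_mm" stand in for whatever the model actually defines
openscad -o headset_core.svg -D 'ipd_mm=64' -D 'face_width_mm=140' headset_core.scad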

The idea that’s been waiting patiently for attention, for some time, is to set up a web service which takes customisation requests, passes them into the OpenSCAD model, and thereby produces an .svg as output. An SVG snippet might look something like this:

Some SVG defining a document and the beginnings of a very long line!
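That snippet only survives as a screenshot, so here is an illustrative stand-in with the same overall shape (the real file declared its width as “210mm”, which becomes important in a moment; the path coordinates below are invented):

<svg xmlns="http://www.w3.org/2000/svg" version="1.1"
     width="210mm" height="297mm" viewBox="0 0 210 297">
  <path fill="none" stroke="#000000" stroke-width="0.1"
        d="M 96.5,12.5 L 96.4,12.9 L 96.2,13.4 L 96.0,13.8 ..."/>
</svg>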

This Scalable Vector Graphics (SVG) file isn’t anything useful on its own – to have it turned into a laser cut piece of cardboard, we need to turn it into GCode. GCode is a simple language which tells motors where to move, and things like laser beams to turn on or off at a given power. Here’s a simple snippet of GCode with some comments as an example:

G28                ;Move the head to the home position
G1 Z0.0            ;Move in a straight line to depth 0.0
M05                ;Turn off the spindle (laser)
G4 P0.2            ;Pause for a little moment, doing nothing
G0 X43.1 Y74.4     ;Move rapidly to an X/Y position
M03                ;Turn on the spindle (laser)
G0 X43.1 Y81.4     ;Move rapidly to the next X/Y position
...and so on for hundreds and hundreds of lines

So, we want to get from an OpenSCAD description, to SVG, to GCode, and eventually, send that to a printer.

How hard can that be?!? Let’s knock up a prototype!

In practice, I have no idea which machine I will ultimately connect to my super cheap Eleksmaker A3 Pro laser cutter, but I know that it’ll be one of the several Linux or OSX machines I have knocking around, so let’s pick an approach which will work just as well on any of them. One approach which will do for us is Docker.

Docker basically packages everything an application needs to run into what they call a container. From the point of view of the application, it feels and looks like it is running in an operating system on a computer dedicated to nothing but keeping it happy and running perfectly. In actual fact Docker is just using smoke and mirrors to make it look this way – but it’s a trick which Docker has perfected on pretty much every computing platform, so it’s really containerise once, run anywhere 😉

First we knock up a Dockerfile (thanks to @mbt for getting the ball rolling!) containing what we want in our environment. The most important parts are openscad and py-svg2gcode (which does the .svg to .gcode conversion). We then start the container :

docker run -i -t -v [host path]:/tmp/[something] openscad bash

This starts an interactive container, dumping us into bash, with [host path] mapped from the host machine to /tmp/[something] inside the docker container.

When we try to run py-svg2gcode, the first thing we notice is a bunch of errors :

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "svg2gcode.py", line 78, in generate_gcode
    scale_x = bed_max_x / float(width)
ValueError: invalid literal for float(): 210mm

Yippee. Nothing ever works the first time. Actually, this isn’t so bad. “ValueError: invalid literal for float(): 210mm” is perhaps a little cryptic at first sight, but it’s telling us that the code expected a floating point number and instead received a string containing “mm”. Lo and behold, if you look at the snippet of .svg above, you’ll see this is precisely what is happening.

Before we run svg2gcode, let’s always strip any occurrences of “mm” from incoming .svg files! Perhaps we’ll call svg2gcode from a bash script which preprocesses with :

sed -i 's/mm//g' "$1"

This takes the first argument to the script ($1) as the filename to process, then uses the Unix command sed to find every instance of mm (the g makes the substitution global) and replace it with, well, nothing, editing the file in place (-i)!
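Putting that together, the whole wrapper might be as small as this (a sketch – the final line is a placeholder for however svg2gcode actually gets invoked):

#!/bin/bash
# strip-and-convert.sh <file.svg> : remove "mm" unit suffixes, then convert to GCode
set -e
sed -i 's/mm//g' "$1"                    # edit the .svg in place, deleting every "mm"
python /py-svg2gcode/svg2gcode.py "$1"   # placeholder invocation of the converter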

Great. Our next inevitable problem is that we get a load of these statements in the output :

pt: (-17.952574468085107, 97.97540425531915)
        --POINT NOT PRINTED (150,150)

Perhaps the point -17/97 is somehow outside the bounds of the printer? It turns out that svg2gcode uses a config.py to define constants such as the area of the printer. Indeed, bed_max_x and bed_max_y are both set to 150. We’ll have to change that, and do it in a way that Docker remembers between restarts. We’ll also have to worry about why we’re getting negative values in just a moment. Is the problem that the point is negative, or that the cumulative y position to date exceeds 150?

First of all, in our Dockerfile we can tell it to take a file from our local file system and add it to the image:

COPY config.py /py-svg2gcode/config.py
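The replacement config.py itself only needs a couple of lines changed – something like this (a sketch; the 400 × 300 values match the A3’s working area, and every other setting stays as it was):

# config.py – just the values that matter here
bed_max_x = 400   # usable X travel of the laser cutter, in mm
bed_max_y = 300   # usable Y travel, in mm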

Now we have :

pt: (-28.0079, 217.75119999999998)
--POINT NOT PRINTED (400,300)

So more points got printed, but the negative numbers are clearly a problem. This may mean we need to be careful in generating our coordinate space, or we cheat, setting the origin of the laser cutter to the middle of its area and defining the available space as -200 to +200 and so on…

Looking in the OpenSCAD file I am trying to convert, there we are… the headset is centered around the origin.

For now I shall apply a transform to shift it off the origin before we attempt the GCode generation:

Transformed model

When we inspect this in Inkscape, it also looks good:

…and in the raw .svg we see that the width and height are within the bounds of our machine:

Raw .svg output from the transformed model

None of the points in the line description exceed either width or height. You’d think this would be fine for svg2gcode, right?

Sigh. Pages and pages of “point not printed” :

After looking into svg2gcode.py, I’ve realised it looks quite incomplete, and makes many strange assumptions about scaling etc. which don’t fit our use case. Time to try a different approach…

OpenSCAD supports DXF export, and there appears to be a more mature dxf2gcode library out there – so let’s go with the flow and try that approach instead!… Only after updating my container to install it does it turn out that this isn’t a library at all; it’s a program that requires a window manager… and so it goes on. This is the reality of prototyping, as you cast around for tools to do the job in the vain hope that you’re not going to have to end up implementing too much yourself. :-/

It feels like options are running out – should I take a look at https://sourceforge.net/projects/codeg/? – a project that stopped doing anything back in 2006. All in all this is terribly sad for such a basic and common (?) need. Perhaps first it’s time to look more deeply at svg2gcode.py and see if we can just strip out its weird scaling code.

First off, it actually looks like there are many forks of the original code – https://github.com/SebKuzminsky/svg2gcode, for example, is more up to date than most. Let’s update our Dockerfile in a way you never would in production, to try things out quickly:

FROM debian:unstable
RUN apt-get update && apt-get install -y openscad && apt-get install -y python
RUN apt-get install -y git && git clone https://github.com/SebKuzminsky/svg2gcode.git
RUN cd /svg2gcode && git submodule init && git submodule update && apt-get install -y python python-svgwrite python-numpy python-jsonschema asciidoc docbook-xml docbook-xsl xsltproc
RUN apt-get install -y vim
COPY config.py /svg2gcode/config.new
CMD ["echo","Image created"]

Now we’ll build our new image with :

docker build -t openscad .

Now let’s have an initial nose around. At first glance, this looks way more intense – there’s a lot of code in svg2gcode.py specifically for milling. It’s a different kind of beast to the last Python script! A look in the README.md (why don’t I ever think to start there?) says that it can handle engraving. How can we specify that type of operation?

Running “python svg2gcode” actually gives us some sensible feedback and potential instructions. I’m already liking this – although I see no options for “engraving/offset/pocket/mill” and so on. Let’s take another look in the .py file.

So – I’m not a python guy *at all* but :

this looks promising

…but what are these op operations it speaks of? It looks like there is some sort of JSON-formatted job description. Not sure I like the way *that* is going. Also, there are some interesting leads in https://github.com/SebKuzminsky/svg2gcode/blob/master/TODO.adoc.

Yep. A quick “grep -R engrave *” reveals a bunch of unit tests (it’s back in my good books again) which show a job format (xxx.s2g, in JSON) that looks like :

…sorry, I have no idea why copy/paste has stopped working from my docker terminal :-/ Not a rabbit hole I’m going down this instant.
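Roughly, though – reconstructing from the bigger example further down – a minimal engraving job seems to have this shape (treat it as a sketch rather than gospel):

{
    "jobs": [
        {
            "paths": [ 0 ],
            "operations": [
                { "engrave": { } }
            ]
        }
    ]
}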

So, it looks like we need a job description next to our svg – and then we can see what comes out of it!

Here’s the output:

Hmmm. “not even close”.

So, it doesn’t like the paths in the .svg. Why? That’s the next thing to explore. I’ll output some simple primitive shapes in OpenSCAD, then some using operations like “difference”, and see when/where the conversion process breaks… or is this a deeper problem? Do I need to create many jobs composed of much lower level “closed” lines?

Yep. Looking at some of the more advanced examples with multiple shapes in the .svg, the job description defines how each individual path needs to be handled :

{
    "jobs": [
        {
            "paths": [ 6, 7, 8, 9, 10, 11, 12 ],
            "operations": [
                {
                    "drill": { }
                }
            ]
        },
        {
            "paths": [ 1, 2, 3, 4, 5 ],
            "operations": [
                {
                    "offset": {
                        "distance": 4,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                },
                {
                    "offset": {
                        "distance": 3.175,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                }
            ]
        },
        {
            "paths": [ 0 ],
            "operations": [
                {
                    "offset": {
                        "distance": -4,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                },
                {
                    "offset": {
                        "distance": -3.175,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                }
            ]
        }
    ]
}

This is well beyond what we want or need. It is a general artefact of abstraction that the more generic a tool becomes, the harder it is to get it to do anything specific. A theory of the universe just wraps the universe in a plastic bag and you’re no closer to understanding any of it.

My gut feeling tells me to go back to the cruder and simpler original svg2gcode.py and modify it. Our needs are simple.

Well, that’s another 40 minutes of an evening gone. Back at it at the next opportunity!

Blender Outlining [SOLVED] – Cannot find Backface Culling option under Viewport Shading.

Helping people with Lazy Eye learn to use both eyes again requires a long term commitment on their part.  That shouldn’t be a drag, so I am hoping to inject a little humour through two animated characters which accompany the participant on their journey, helping to motivate people to continue.

A part of the look and feel that I am striving for is to use a Toon Shader / Cel Shader to create an outline image similar to that shown in this short tutorial :

Unfortunately, the current release of Blender 2.8 just doesn’t work the same way.  You cannot select Backface Culling from the shader menu!

It does exist in the “solid” view as shown below, but that isn’t very useful:

In the rendering modes, the option just isn’t there anymore:

Now, you need to look inside the material you chose to be your outline, and under settings you’ll find the “Backface Culling” option.

 I hope this helps somebody!

Notes on Using Blender 2.8

One thing I discovered when testing EyeSkills at the beginning is that people who have a lazy eye often have a particular psychological relationship to the condition.  They would like to understand and perhaps change it, but they have generally come to accept it as “inevitable” and have found ways to compensate in everyday life.  It is regarded as an unknowable mystery.

The consequence of this is that initial enthusiasm does not continue into daily practice, where there are too many hurdles – made worse by the mountains of technical jargon associated with amblyopia and strabismus!  To make EyeSkills accessible, I believe it needs a simple metaphor, expressed visually and humorously. This needs to run on a human platform, after all!

I have taught myself 2D animation over the last week, but it is a slow and inefficient process. I can see that I need to move to 3D.  To do that I’m now teaching myself Blender as quickly as possible, so that I can go from character modelling, to rigging (with humanoid bones and correct standard names) and skinning, to shape keys, to UV mapping and drawing textures, to animation, to importing into Unity, and finally creating a cel shader to give the animations the cartoon-like character I’m seeking.

As I run into issues with Blender, I’ll make some notes here.

Metaballs

I’m using Metaballs to model Domi/Supti before creating their meshes.  It turns out that naming is very important.  If the metaballs stop interacting, be sure to read this: https://docs.blender.org/manual/en/latest/modeling/metas/editing.html#object-families

Unfortunately, having got to the kind of base model I wanted :

I couldn’t find a way to convert it to a mesh without it suddenly becoming “overweight”!

All in all, after speaking with a friend, metaballs may not be the way to start the character modelling. I’ll try again building with meshes from the ground up.

This involves much more direct manipulation of vertices and edge/face creation:

You will probably want to halve your object, enabling the mirror modifier, but only apply the mirror modifier when finished. It’s also extremely likely that you’ll want to be able to select vertices behind those visible:

https://blender.stackexchange.com/questions/119472/blender-2-8-how-to-select-edges-behind-other-edges-when-in-numpad-1-mode

Copying modifiers

Shift-select the items you want to receive the modifier, and finally the item with the modifier you want to copy.  You then use Ctrl-L to link the modifiers, BUT you MUST do this with the mouse cursor over the modelling viewport – otherwise nothing will happen!!!

A problem with this approach is that, if armature modifiers or mirror modifiers are already set up, you’ll end up breaking your animations and causing new limb parts to appear everywhere!!!  To make use of this approach, try to model the mesh first (only on one side), then the materials, then things like the outline, mirror modifiers, and rigging last.

Notes on Unity Animation

This wasn’t a bad starting point : https://www.youtube.com/watch?v=vPgS6RsLIjk

It’s important to remember, when you’ve created a sprite, that you need to add a SpriteSkin. Sometimes it fails to automatically detect the bones in your sprite, but so far that’s been simple to solve: make a few minor changes to the sprite, reapply, and then the “CreateBones” button in the SpriteSkin works.  If you have an existing animation, you can drag and drop it onto the sprite. Next step – animation transitions.

In the Animation pane you can create a new animation from the drop-down, but to create links between those elements you’ll need to make sure the Animator window is visible (Window->Animation->Animator). There you can make links between the various states (https://www.youtube.com/watch?v=HVCsg_62xYw).  How can we have those state transitions occur without scripting? It turns out that the transitions already happen, but you need to “Play” a scene containing the model.

Where the ordering of limbs is incorrect, go into the SpriteEditor>SkinningEditor and set the individual bone depth, by selecting the relevant bones.

The next issue will be transitioning sprite parts (rather than just animating their position).  My best guess is that we’ll end up animating enable/disable/active on alternative game objects inside the Animator (I hope).  Yep. That was quite intuitive.  Place the object you want somewhere inside the bone hierarchy of the sprite (inside the scene editor) and then, in the Animation pane, add a property for that object for “enabled” and animate it.

I suspect that, to enable the pupil to move freely around, I’ll have to add a mask around the “white” of the eye.

This is quite exciting.  A lot of opportunities for clearer communication and more interesting and interactive scenes have just opened up 🙂

Ultimately, I’d like to create a 3D representation (mesh) of the mascot, and a toon shader to go with it, which would be the most flexible approach, but for now I’ll create the basic poses I need as .SVG, then export to sprites and animate.

It seems that one can create too many bones.  The issue I’ve run into is that slicing the sprite prevents the Unity editor from allowing me to create bones which span the different sprite parts (surprise, it’s still buggy).  However, using autogeometry to split up the sprite makes it almost impossible to control when the bones overlay each other (e.g. around the eye), and control over things like mouth expression is currently beyond me using the inbuilt approach.

I suspect the way to do this is to create a completely separate multi-sprite for the eye and another for the mouth (with multiple expressions in the multi-sprite), and then to place these inside the bone object hierarchy.

A potential problem with this approach is that alterations to the bone structures seem to invalidate the sprite skin / bone editor in the scene – requiring it to be destroyed and recreated, which will lose all my setup 🙁

So, that worked well (I think).

There are eight sprites along the top, and only the collection of body parts below is skinned.  On the left in the scene hierarchy, you can see the other parts are placed under the body bone – with each game object having a “Sprite Renderer” added. Is there a better way?  The different parts of the multi-sprite are always visible in the object panel beneath the scene hierarchy.

A general post on Unity Pain Points and Setup

Error

After switching the XR device to “cardboard”, any active TrackedPoseDriver script will have lost its connection (changes to its state make no noticeable difference to the scene).

Solution

Destroy the TPD before the device switch, then re-create and re-initialise it afterwards.

Error

[EGL] Unable to acquire context: EGL_BAD_ALLOC: EGL failed to allocate resources for the requested operation.

Solution

Disable multi-threaded rendering in player settings.

(from https://gist.github.com/unitycoder/6b6410800727548afaf9b0043121a164)

 

Eleksmaker A3 Laser Cutter

So, I’m still trying to get the Eleksmaker A3 laser cutter to work as I need.

So far, I’ve come to the conclusion that the original version of GRBL installed on the cheap Arduino Nano that the “Mana” board comes with is just no good.  I’ve flashed it to v1.1 :

brew install avrdude

avrdude -c arduino -b 57600 -P /dev/cu.wchusbserial1420 -p atmega328p -vv -U flash:w:grbl_v1.1f.20170801.hex

Then I found LaserWeb for OSX, configured it, and had the print head slamming against the side of the case.  It took me a while to work out that all my axes were inverted. After swapping the X-axis carriage around, I was still left with an inverted Y-axis, which I solved by sending $3=2 as GCode to the board via the console in LaserWeb ($3=2 means invert the Y-axis, whereas $3=1 would mean inverting the X-axis).

The next problem was that the “cuts” were three times larger than they ought to be.

The next secret is hopefully to send this set of GRBL settings to the board to configure the number of steps/mm correctly…

$0=10 ;Step pulse, microseconds
$1=100 ;Step idle delay, milliseconds
$3=2 ;Y axis direction inverted
$10=0 ;send work coordinates in statusReport
$30=255 ;max. S-value for Laser-PWM (is referenced to the LaserWeb PWM MAX S VALUE)
$31=0 ;min. S-value
$32=1 ;Laser Mode on
$100=80 ;steps/mm in X, depending on your pulleys and microsteps
$101=80 ;steps/mm in Y, depending on your pulleys and microsteps
$102=80 ;steps/mm in Z, depending on your pulleys and microsteps
$110=5000 ;max. rate mm/min in X, depending on your system
$111=5000 ;max. rate mm/min in Y, depending on your system
$112=2000 ;max. rate mm/min in Z, depending on your system
$120=400 ;acceleration mm/s^2 in X, depending on your system
$121=400 ;acceleration mm/s^2 in Y, depending on your system
$122=400 ;acceleration mm/s^2 in Z, depending on your system
$130=390 ;max. travel mm in X, depending on your system
$131=297 ;max. travel mm in Y, depending on your system
$132=200 ;max. travel mm in Z, depending on your system
$$ ;to check the actual settings
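These can be pasted one at a time into the LaserWeb console, though any serial terminal will do just as well (a sketch – the device name is whatever the board shows up as, and GRBL 1.1 defaults to 115200 baud):

screen /dev/cu.wchusbserial1420 115200   # open a terminal session to the board
# ...then paste the $-settings above one line at a time, finishing with $$ to verify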

If I type $$ I see that $100 is currently set to 250, and 250 ÷ 80 ≈ 3.1, which is certainly close to the 3X scaling I’m seeing… So, let’s see how far this takes us…

Yes. Lovely. $100 and $101 (the steps/mm settings) are the pivotal instructions.

http://itink.it/wiki/doku.php?id=en:tinkering:laser:eleksmakera3pro

Sadly… the next problem seems to be that 2.5W just isn’t enough.  I can’t get through 1mm card at 100% power, even cutting at only 250mm per minute!

I’ll have to organise some test files to see which speeds/power ratios/repetitions work best.

Exploring an API for generating custom VR headset designs

Another target I’d like to accomplish is to provide an API for generating custom parameterised EyeTracktive pupil tracking headsets.

OpenSCAD can theoretically produce .stl files or .png images via the command line, but it requires a lot of cruft to get this to work (an X server or xvfb) – so wouldn’t it be nice if there were a pre-configured Docker container?

After a little look around I found this :

https://hub.docker.com/r/wtnb75/openscad

Theoretically, the simplest way to handle this would be to exec/run openscad inside the container, writing its output to a file in a host-mapped directory accessible from the API which is part of (e.g.) EyeTracktive.org.
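Something along these lines is what I have in mind – though this is only a sketch: I haven’t yet checked how the wtnb75/openscad image is set up, so the mount paths, the made-up 'ipd_mm' parameter, and whether 'openscad' must be named explicitly are all assumptions:

# render a parameterised model into a host directory the API can serve
docker run --rm -v /srv/eyetracktive/output:/work wtnb75/openscad \
    openscad -o /work/headset.stl -D 'ipd_mm=64' /work/headset.scad
# (if the image already sets openscad as its ENTRYPOINT, drop the leading "openscad")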

It looks, however, like I’ll have to see if I can cook up a Dockerfile based on https://github.com/wtnb75/dockerfiles/tree/master/openscad-fedora first.