Ho Ho Ho. Oh.

I had a plan. I wanted to get the original EyeSkills Community edition back up and running for Christmas, integrating the changes I’d been making over the last few months. Working on the software really isn’t a priority at the moment (getting the eye tracking hardware ready is more important – without it we’re “flying blind”) but nevertheless…

Obstacles

Nothing has been working out the way I planned. It’s been a litany of blockers. The people we’ve been working with to get our custom eye tracking hardware built have delayed the project by a couple of months… and then my MacBook died a few weeks ago, forcing me to get a new machine. I decided it was time for a breath of fresh air, so I did something I have repeatedly been burnt doing before: I bought a regular laptop to run Linux on. As it happens, after less pain and suffering than anticipated, everything is working better than expected on a stock Ubuntu (chosen for Unity compatibility). Then it was time to install Unity.

Well, let the pain commence. I’ll keep updating this as I go, for others in the same boat.

Installing Unity 2019.2.17f1 on Ubuntu 19.10

Immediately, we have blank compilation errors in Unity. Looking in the Editor.log hidden away in ~/.config/unity3d/, there’s a whiff of a clue that this is caused by:

-----CompilerOutput:-stdout--exitcode: 134--compilationhadfailure: True--outfile: Temp/UnityEngine.TestRunner.dll
-----CompilerOutput:-stderr----------
No usable version of the libssl was found

…which implies I either have to downgrade ssl from 1.1.1 to 1.0.0 or find some way to get two simultaneous installations happily co-existing (and Unity knowing which to use). This could take time.
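For anyone chasing similar ghosts on their own machine, keeping an eye on that log while Unity imports is the quickest way to catch them – nothing fancy, just the usual tools:

tail -f ~/.config/unity3d/Editor.log        # watch the editor log live while Unity works
grep -i error ~/.config/unity3d/Editor.log  # or fish the interesting lines out afterwards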

UPDATE: Installing a version of libssl1.0.0 seems to be working out OK – the package exists, but isn’t used by the system by default. I’m guessing this is what was behind the first set of blank compiler errors. To install it:

wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl1.0/libssl1.0.0_1.0.2n-1ubuntu6_amd64.deb
sudo dpkg -i libssl1.0.0_1.0.2n-1ubuntu6_amd64.deb
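To double-check that both libssl generations are now present side by side (the new one for the system at large, the old one for Unity), a quick query does the trick:

dpkg -l | grep libssl    # should list both libssl1.0.0 and libssl1.1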

After that, the next issue awaits: “Unspecified error during import of AudioClip”… I’ve tried

sudo apt install lib32stdc++6 -y

but that’s only going to be the beginning of the battle…

UPDATE: After a restart, that problem went away. I suspect installing lib32stdc++6 did the job, but Unity just needed the restart to recognise that it existed.

Now I have straightforward “missing namespace” issues – which probably require me to specify some C# assembly references. We’ll see…

Well, I shall keep plodding on while I focus on making sure that when this project really gets going again, it’ll be set up to generate the cycle of rapid feedback and evidence that will speed the creation of approaches that actually work.

If you are also in the world of pain that is Unity + Ubuntu 19.10+, I’d love it if you got in touch 🙂

Merry Christmas!

FOLLOW UP:

So, the next step is probably to get Visual Studio Code (VSC) working. Why not Atom/Sublime/OtherFlavourOfTheMonthEditor? Getting anything working reliably in Unity is hard enough without wandering far from the tree. The ability to debug running apps on the phone was exceptionally useful when I was working on macOS with VSC, so let’s try to set it up here.

The most hopeful-looking instructions I’ve found so far are here: https://stackoverflow.com/questions/52807397/how-do-i-use-visual-studio-code-to-develop-unity3d-projects-in-ubuntu and here: https://medium.com/@sami1592/set-up-visual-studio-code-for-unity-in-linux-69b7f4352e0b

These instructions are a bit out of date so here’s an overview of the steps I took :

In VSC, view the installed extensions (Ctrl+Shift+X) and add the C# and Debugger for Unity extensions. It might be interesting to take a look at the “Unity Game Dev Bundle” as well. You will also want to have VSC create a launch.json from the debug pane, so that you can then start the debugger for Unity (use the green arrow).
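If you prefer the command line, the same extensions can also be installed from a terminal. The extension IDs below are the ones I believe were current at the time of writing – double-check them in the marketplace:

code --install-extension ms-vscode.csharp    # the C# extension
code --install-extension Unity.unity-debug   # Debugger for Unity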

Close VSC and, in Unity’s preferences, set the default external editor to /snap/bin/code.

All of this works nicely for a fresh, small project, but it appears not to work for the EyeSkills community edition, because something is broken in VSC regarding multi-project solutions (i.e. we have some Unity Tests inside ours). Another hour spent getting a step closer to a working setup.

I think the next step is to install a sample project like this https://learn.unity.com/project/tower-defense-template?signup=true and check I can get the whole thing running, before I start to retrofit everything from EyeSkills into a fresh project… and that seems to work well. So far so good. Does deployment to the phone work?

No.

UnityException: Android SDK not found

That should just be a question of:

sudo apt install android-sdk

Just setting up these environments must already have cost 3–4 GB of space. Well, I don’t seem to be able to run the sdkmanager, which I would have imagined came with the android-sdk package. The sdkmanager is important for managing which versions of Android your Unity builds can support – and I really don’t want to install all of Android Studio just to manage a few SDK installations.

Running

apt-file list android-sdk

shows me that the included tools live in /usr/lib/android-sdk/tools. These tools do not include the sdkmanager 🙁 It looks like the Android SDK command line tools are available from https://developer.android.com/studio – specifically https://dl.google.com/android/repository/sdk-tools-linux-4333796.zip.

We can get it with

wget https://dl.google.com/android/repository/sdk-tools-linux-4333796.zip

and then unzip it. I’m going to dump the contents of the resulting tools directory into /usr/lib/android-sdk/tools with an “rsync -rv ./tools/ /usr/lib/android-sdk/tools/” and hope for the best that this doesn’t cause additional problems. Ah, no, that was a mistake. These are my own user tools, and should live in my own home directory with myself as owner. Undo. Repeat. Oh no. It looks like it’s a Java version problem. Could it be that the sdkmanager won’t run on the version of Java that’s installed by default (11), but requires the old OpenJDK version 8?!?

This is where I start to feel really queasy.

sudo apt install openjdk-8-jdk

What exactly is that going to do? How well will it manage two versions of Java co-existing? How will programs select between them? Well, it turns out it doesn’t manage that at all. I really, really want to run Unity in some sort of containerised fashion, at arm’s length behind a cordon sanitaire – but it’s one of the most resource-intensive applications I’ll be running, so I want it close to the metal. It’s like DLL hell: do I sacrifice the modern install of Java just to get Unity working?

It turns out there is a little command called

update-java-alternatives --list

You can select the version you want to use at a particular moment, so before we run the sdkmanager let’s first try “sudo update-java-alternatives -s java-1.8.0-openjdk-amd64”. This breaks a few things in Mozilla Firefox, but the sdkmanager then works. Now I need to figure out where to put the unpacked tools, and how to hook them into the environment properly… Plus, just to rub salt into the wound, my installation of android-sdk seems to have been broken by something I’ve done so far (symlinks in /usr/bin to tools like adb are broken, as the platform directory is now missing from /usr/lib/android-sdk/tools). Honestly. It’s time to remove the android-sdk apt package and start again from scratch, perhaps taking a different path.

After removing and purging the apt package for the android-sdk by way of napalm I discovered that there are two sub-boxes in the Unity Hub’s module installer:

Seriously, how is anybody supposed to know?!? The Android Build Support box was ticked already. You’d think that would mean it would install all child options – otherwise it might show a “-” rather than a tick. I could cry sometimes. Before Unity has even managed to download the SDK/NDK/JDK I can already see problems ahead: https://issuetracker.unity3d.com/issues/ubuntu-unable-to-use-andriod-ndk-and-sdk-tool-installed-via-hub

Yep. What a surprise. Everything is now installed correctly according to Unity:

…unfortunately, trying to build their tutorial project results in :

Why is it always like this with Unity?! I literally needed the last six months to let the frustration of using Unity for this project over the course of 2018 wash out of my bones, and now it’s back again. It’s like trauma. It’s giving me PTSD and a desire to throttle somebody. Calm. Breathe. Just another, another, another obstacle to climb over.

I have tried :

chmod +x Editor/Data/PlaybackEngines/AndroidPlayer/OpenJDK/jre/bin/*
chmod +x Editor/Data/PlaybackEngines/AndroidPlayer/OpenJDK/bin/*

That may or may not have changed something, but nothing that I notice.

In ~/.config/unity3d/Editor.log I can see a bit more detail:

Android PostProcess task “Detect Java Development Kit (JDK)” took 64.3625 ms
UnityException: Android SDK not found
Unable to locate Android SDK.

At least we know it’s finding the JDK. So, why not the SDK? As I currently understand it, Unity has installed everything in a self-contained manner. The SDK is in ~/Applications/UnityEditors/2019.2.17f1/Editor/Data/PlaybackEngines/AndroidPlayer/SDK. I don’t think that’s bad, and would approve – if it worked.

The aforementioned issue on the Unity tracker points out that the “tools” directory is missing from …Editor/Data/PlaybackEngines/AndroidPlayer/SDK, and indeed it is. Let’s try once more to unzip a copy of the SDK tools and plonk them in there. Wow! It looks like the SDK is now discovered by Unity; it’s just complaining about an inadequate API level for VR (it needs to be a minimum of 19 – check in Player Settings in the Unity editor).
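“Plonking them in there” amounted to something like this – the editor path is where the Unity Hub put my 2019.2.17f1 install, and I’m assuming the zip from earlier is still sitting in ~/Downloads, so adjust to taste:

cd ~/Applications/UnityEditors/2019.2.17f1/Editor/Data/PlaybackEngines/AndroidPlayer/SDK
unzip ~/Downloads/sdk-tools-linux-4333796.zip    # unpacks the missing tools/ directory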

It’s a big compilation job… and my shiny new XPS 15 7590 is struggling. The CPU is stepping down continually as it gets hot enough to boil an egg… that’ll be another area for improvement: limiting the CPU voltage.

lscpu | grep MHz | awk '{print }' – a nice way to monitor your CPU speed
My CPU is on fire!

Every 1,0s: lscpu | grep MHz | awk '{print }' ben-XPS-15-7590: Sat Dec 28 20:24:19 2019

CPU MHz: 3252.126
CPU max MHz: 4500,0000
CPU min MHz: 800,0000
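The “Every 1,0s” header in that output is just watch at work – something along these lines, with the comma courtesy of my locale’s decimal separator:

watch -n 1 "lscpu | grep MHz"    # refresh the CPU frequency readout every second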

IT BUILT! Now let’s try “Build and Run”! Oh, I’m looking forward to this so much. It’ll finally work and deploy straight to my phone! 😀

By this point, are you surprised? I’m regarding this as an inevitability. It looks, at first glance, like Unity has suddenly decided that I’m running Windows rather than Linux.

Every. Single. Step. Forward. Must. Be. Paid. For. In. Pain.

For starters, everything in the Unity-installed SDK/platform-tools directory has permissions that prevent execution. Really?!? Let’s try “chmod +x *”. That seems to get us another step further forward. I then needed to set up developer mode on my new phone, whereupon I got… dadada….


Doh!

Will this ever end? Let’s hope this is going to be simple and do what it suggests. I guess it means this?!?

FINALLY!!! The .apk is deploying to my phone, and is playable.

Now that that’s working, it’ll be time to check that the setup can handle a basic VR game from https://blogs.unity3d.com/2015/12/09/get-started-with-vr-sample-pack-learning-articles/

If THAT works (which at this point really shouldn’t be impossible), the right move will be to take a bit more care setting up a Continuous Integration process and streamline a few things to help development, before pushing on with “retro-fitting” EyeSkills. I think I’ll carry on the next steps in another post on another day.

IR retinal safety

At the moment all software work is paused until we have the hardware ability to track the eyes as we desire. Part of this process is to ensure, in our specification of the eye tracking cameras, that the LEDs we use to both illuminate and help track the position of the eyes do not produce so much infra-red radiation that they pose a health risk.

Given that this is open hardware, let’s keep sharing information! One of the most helpful resources we’ve found is here: https://www.renesas.com/eu/en/doc/application-note/an1737.pdf

Although the IR LEDs we are selecting fall well below the required safety threshold, we still need to be careful in considering the effect of using them in an array (well, three in an L shape) and also the impact of the focusing lens in the headset.

The relevant standard is IEC-62471. A summary of this standard is here: https://smartvisionlights.com/wp-content/uploads/pdf/IEC_62471_summary.pdf

You may notice that the permissible exposure time for an exempt IR source on the eye is 1000 seconds (16.6 minutes – pretty much exactly the length of time to which we want to restrict use of the daily app). A Group 1 device would only allow us 100 seconds, which may still be adequate if used sparingly (pulsed, and restricted to critical moments of measurement), but it would be best to be sure our radiation levels stay under the Group 1 threshold.

Affordable EyeTracking – working towards a new camera PCB

We keep coming back to the basic fact that we need to know what the eyeballs are doing. This requires an eye tracker which does what we want (particularly, with a software API which doesn’t assume both eyes are coordinating in the usual way!) at a price we can afford. The corollary to this is – we’re going to have to do it ourselves.

Tobii-style eye trackers are massively overpowered and overpriced for what we are attempting, while we suspect it would take longer to get a collaboration going with PupilLabs (there are issues with their mobile software stack, API, licensing, and their choice of cameras) than it will take us to roll our own solution.

In previous posts I introduced the PCB we found, which combines two webcams into a single output, and mentioned that we’ll need to have it modified.

First, the cable length between the two cameras:

A suggested rough layout
  1. We move the second camera onto its own PCB (b). That PCB serves two purposes: to hold the IR LEDs, and to provide a mechanical surface that can be held in place reliably.
  2. We make the cable some 12 cm long (make a mockup with Moritz to check the necessary lengths), with any excess hidden in the cavity under the eyepiece “shelf”.

Looking more closely at the PCB:

We need to find a new layout which isn’t radically different from the existing one. If it is too different, the manufacturers will likely charge us a fortune for a redesign, but we also *must* reach a point where we have at least an “L”-shaped positional layout for the IR LEDs (so we can judge pupil movement distances and orientations), plus (if it can be done without introducing another manufacturing step) a way of reliably fixing the unit physically. We also need to be careful to specify a slightly longer piece of cable without shielding, so we can run the cabling up inside the holder effectively.

I consider our first step to be producing a simple physical prototype to which we can temporarily affix some IR LEDs and see how it performs and how it could be fixed to the headset. The next question is: should this prototype extend the existing PCB so we can experiment with the real unit, or just model the desired physical dimensions of the unit? Well, both are probably necessary, but counter-intuitively it may be best to start with a quickly printable harness on which we place our IR LEDs… which we then, in a separate step, split into the new PCB design and the holder design. Let’s have a go at this!

So here’s a first super simple stab at it. The slot allows us to place the Camera PCB into it with a dab of hot glue (or just a snug fit), where we can also really simply modify the “holderFaceAngle” variable to find a good angle that allows the eyes to be monitored. Next we can try fixing IR LEDs to the top/bottom strip in different positions to see what works best.

Where could we go from here? Well, the existing PCB is actually on a flexible base wrapped around a metal block for heat dissipation. I’ve set this up so that the manufacturer can add “ears” at the top and the bottom for the new IR LEDs. The question is whether or not they would extend the metal base (it might be a standardised part). Obviously, it would be best if they were able to extend the metal base, as we’d have the most reliable IR positioning. We’d then only have to increase the size of the slot to accommodate the new PCB. The second advantage would be that we could leave an area of naked metal at the rear of the unit, against which we could press the surface of the mount. This (when gluing, for instance) would allow us to be sure the unit is really positioned flush (at the correct angle) with the holder. It might really come down to something this simple.

We would then extend the base of the holder with feet that pass through pre-cut holes in the camera core (as in the previous instance), so that (with the help of a mounting tool) an assembler can be sure the holder is correctly mounted each and every time.

Fits snugly, excuse the terrible 3d print – the filament I’m using seems to be ageing very badly!

Going back in time

This is a quick mental note: what would an anaglyphic app look like with a “randomised” and continually changing assignment of colour to each eye – or within a VR setting? Would this stop people “cheating” by blinking to assemble a complete image of what’s happening?

Perhaps use Perlin noise to generate a dynamic texture https://www.youtube.com/watch?v=bG0uEXV6aHQ which we then use in a custom shader to assign black to one eye/colour, white to the other, and anything intermediate in ratio to the appropriate eye/colour.

I think this might be worth trying out. What I have, however, realised – is that nothing is really worth exploring anymore without an eye tracking headset.

Another approach might be to use Voronoi-style mappings https://www.youtube.com/watch?v=EDv69onIETk BUT it actually makes me wonder if the simplest and most effective method would be to divide the space into cubes and just randomly assign each cube to one side or the other.

Data Provenance

It really matters that we know exactly what system setup generated any data we collect – the idea being that experiments can be re-run (and thus independently verified) in precisely the same technical context as the originals.

To achieve this we store the following information underneath anything else captured by a researcher:

Basic information capture

You might notice the buildVersion variable. This is actually a build ID from git – so that a researcher can re-create precisely the same environment as a prior experiment (e.g. one run by a different researcher). It assumes for now that all builds are stored in the same repository.
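In practice, reproducing the environment behind a captured dataset then comes down to checking out the recorded commit and rebuilding from that state (the hash below is obviously just an illustration):

git checkout a1b2c3d    # the buildVersion stored alongside the captured data (illustrative hash)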

How does this buildVersion actually get there? In /[projectPath]/Assets/Plugins/Android/mainTemplate.gradle (around line 82) you’ll find a little hack that creates a custom class called Version during the Unity build process:

buildTypes {
    debug {
        …
    }
    release {
        // We want to provide information to the app about the .git version that the app is built from.
        // For repeatable open science, and to handle changing schemas, we need to be able to pin an InfoBase item to a specific build.
        def stdout = new ByteArrayOutputStream()
        exec {
            // Ask git
            commandLine 'git', 'log', '--pretty=format:"%h"', '-n 1'
            standardOutput = stdout
        }
        // Write it out to a class (Version) which our InfoBase can then instantiate to extract the "version"
        new File("DIR_UNITYPROJECT/Assets/Scripts/Version.cs").text = "namespace EyeSkills { public class Version { public string version = $stdout; }}"
    }
}

That’s why and how we can reference Version.cs to get the buildVersion. Be warned, however, that refactoring can cause Version.cs to move location. You will then get uncomfortable gradle build failure errors from Unity. Check the path to Version.cs if you see problems with gradle… it might just be the cause!
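If you want to sanity-check what that exec block actually produces, you can run the same git command from a shell inside the project repository:

git log --pretty=format:'"%h"' -n 1    # prints the short hash wrapped in quotes, which is exactly what makes the generated C# compile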

Plotter friendly design

OpenSCAD (and my use of the wrong primitives for the dashed lines) didn’t do a great job of converting the lines and text in the original EyeTracktive Core design into a format that works well for a plotter pen marking out the headset.

I just tried to manually create a better overlay in Inkscape (as a stop-gap) and ran into a few problems.

  1. Text doesn’t naturally convert to single-stroke paths (obviously, if you think about it). In Inkscape, Extensions->Render->Hershey Text… comes to the rescue. This creates plotter-friendly text.
    1. However, it dumps this text somewhere fairly random outside the page border (on my machine at least), so you need to hunt it down.
    2. It then turns out that Inkscape uses the SVG transform attribute (a matrix transform) to position the text you’ve just created as you drag and drop it. This is incompatible with Cricut’s crappy DesignSpace software, which doesn’t understand transformation matrices, so it just ignores them.
    3. Solution: ungroup each text element individually, as Inkscape seems to only wrap groups in transform matrices.

Diamond Depth Perception – a user reports

Hi everyone,

I’ll start with a bit of background. I’m Ben’s younger brother, Nick, and I was very keen to see how his software works. Other than mild short-sightedness, I do not think there is anything wrong with my eyes. Sitting in Ben’s office, I noticed a 3D magic eye picture on the wall, and commented that I have never been able to see the 3D image, even though I had a book full of pictures as a child. I tried it again; no success.

We started with the triangles [Ben : the standard conflict scene] and they were overlaid as expected. When we got to the cyan and yellow semicircles within white circles, I noted the horizontal offsets between the semicircles, but everything was flat. There was no depth, and the outer white diamond that encloses the entire image was square.


I noticed that I could consciously fade either the cyan or yellow semicircles to being invisible (and control the brightness at levels in between) but I really didn’t understand the instructions, which were to select the circle that was ‘closest’. Everything was still flat and I ended up choosing the semicircles that matched closest. This was not right and I kept selecting the wrong circle!

I then tried moving my focal distance (the point at which both eyes converged) closer then further away and noticed that I could move the semicircles sideways to get them to join into a full circle. By doing this several times for each pair, I could figure out which circle was ‘closest’ entirely by feeling my eye muscles. Did this circle ‘feel’ closer because my eyes were more cross-eyed?… But this wasn’t particularly sensitive and sometimes circles were very similar and I had to adjust my eyes several times on each circle to feel a difference. Still flat though!

Then I looked into the black space in the middle of the screen, relaxed my eyes (I was probably staring into the distance) and BAM, I could see the four circles were distorted and had depth. Also, the outer diamond was bowed toward me. Keeping my eyes on this black space, it was easy to judge the relative distances of the circles. After a few rounds, I was able to look around the image and maintain the 3-dimensional illusion at all times.

I guess the purpose of this post is that I want to share my discovery that seeing the image in 3D is not necessarily automatic. It took me a few stages for it to happen: manually varying my distance of focus and figuring out this distance based on my eye muscles, then staring into the middle of the screen before the 3D became apparent. Perhaps I have been unable to separate the two processes of focussing (which is done with the lens in each eye) and depth perception (which uses the two eyes together). Certainly, when trying the 3D magic eye pictures, my eyes have been staring into the distance (correct!) but my focussing was set to the distance too (incorrect!), so the image was just a blur. Time to find that old book! I hope this helps someone. Best of luck.

Nick

When Unity UI Goes bad – The perils of DontDestroyOnLoad

This is a quick note about a really obscure problem which can bite you quite hard in Unity. If you switch between scenes, you may need to use DontDestroyOnLoad to keep certain objects hanging around during and after the transition. If, however, you have stored collections of scripts under empty GameObjects – and there is an overlap between the set of scripts in a GameObject in Scene1 and in Scene2 – then DontDestroyOnLoad will prevent the GameObject containing the clashing script in Scene2 from being loaded – AT ALL! This will break things in the most unexpected and non-obvious ways. For instance, raycasters may suddenly disappear because they were never loaded (but ONLY when Scene2 is loaded via Scene1), causing user interface elements to no longer pick up user input, etc.

Be warned.

Let’s laser cut some headsets!

Over at eyetracktive.org you can see the results of some early experiments in creating the world’s most affordable eye tracking headset. The idea is to make it compatible with off-the-shelf Google Cardboard headsets.

One constantly underestimated problem, however, is that people have quite differently shaped heads and eye positions. I find it obnoxious when we’re all forced to use a one-size-fits-all solution.

What I’ve been tinkering with for a while is an approach which uses OpenSCAD to “mathematically” define the headset parts (only the core which contains the eye tracking hardware so far, but soon also the surrounding Google Cardboard design). The advantage of doing this in OpenSCAD is that all the critical dimensions can be defined as variables, with functions relating them in sensible ways to produce things like lines to cut! The even greater advantage of using OpenSCAD is that it can be called from the command line.
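As a taste of why that matters: a single shell command can regenerate the 2D outline with whatever dimensions a particular head needs. The file name and variable names below are purely illustrative placeholders – the point is that -D lets you override any of the model’s variables at render time:

# render the 2D design straight to SVG, overriding a couple of (hypothetical) model variables
openscad -o eyetracktive_core.svg -D 'ipd=63' -D 'face_width=140' eyetracktive_core.scad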

The idea that’s been waiting patiently for attention for some time is to set up a web service which takes customisation requests, passes them into the OpenSCAD model, and thereby produces an .svg as output. An SVG snippet might look something like this:

Some SVG defining a document and the beginnings of a very long line!

This scalable vector graphics (SVG) file isn’t anything useful on its own – to have it turned into a laser-cut piece of cardboard, we need to turn it into GCode. GCode is a simple language which tells motors where to move, and things like laser beams to turn on or off at a given power. Here’s a simple snippet of GCode with some comments as an example:

G28                //Move the head to home position
G1 Z0.0            //Move in a straight line to depth 0.0
M05                //Turn off the spindle (laser)
G4 P0.2            //Pause for little moment doing nothing
G0 X43.1 Y74.4     //Move rapidly to X/Y position
M03                //Turn on spindle (laser)
G0 X43.1 Y81.4     //Move rapidly to X/Y position
...and so on for hundreds and hundreds of lines

So, we want to get from an OpenSCAD description, to SVG, to GCode, and eventually, send that to a printer.

How hard can that be?!? Let’s knock up a prototype!

In practice, I have no idea what machine I will ultimately connect to my super cheap Eleksmaker A3 Pro laser cutter – but I know that it’ll be one of the several Linux or macOS machines I have knocking around – so let’s pick an approach which will work just as well on any of them. One approach which will do for us is called Docker.

Docker basically packages everything an application needs to run into what they call a container. From the point of view of the application, it feels and looks like it is running in an operating system on a computer dedicated to nothing but keeping it happy and running perfectly. In actual fact Docker is just using smoke and mirrors to make it look this way – but it’s a trick which Docker has perfected on pretty much every computing platform, so it’s really containerise once, run anywhere 😉

First we knock up a Dockerfile (thanks to @mbt for getting the ball rolling!) containing what we want in our environment. The most important parts are openscad and py-svg2gcode (which does the .svg to .gcode conversion). We then start the container:

docker run -i -t -v [host path]:/tmp/[something] openscad bash

This starts an interactive container, dumping us into bash, with [host path] mapped from the host machine to /tmp/[something] inside the docker container.

When we try to run py-svg2gcode, the first thing we notice is a bunch of errors:

Traceback (most recent call last):
  File "", line 1, in
  File "svg2gcode.py", line 78, in generate_gcode
    scale_x = bed_max_x / float(width)
ValueError: invalid literal for float(): 210mm

Yippee. Nothing ever works first time. Actually, this isn’t so bad. “ValueError: invalid literal for float(): 210mm” is perhaps a little cryptic at first sight, but it’s probably indicating that the script is expecting a floating point number where it is receiving a string containing “mm”. Lo and behold, if you look at the snippet of .svg above, you’ll see this is precisely what is happening.

Before we run svg2gcode, let’s always replace any occurrences of “mm” in incoming .svg files! Perhaps we’ll call svg2gcode from a bash script which preprocesses with:

sed -i 's/mm//g' $1

This takes the input argument to the script ($1) as the filename to process, then uses the unix command sed to find all instances of mm (the g flag) and replace them, in the same file (-i), with, well, nothing!
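As a sketch, the wrapper might end up looking something like this – the way svg2gcode.py is invoked at the end is a guess at this point, and the sed line is the part that actually matters:

#!/bin/bash
# svg2gcode.sh – strip the "mm" units, then hand the file to the converter
set -e
SVG="$1"
sed -i 's/mm//g' "$SVG"
python svg2gcode.py "$SVG" > "${SVG%.svg}.gcode"   # invocation assumed; adjust to however the script is actually called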

Great. Our next inevitable problem is that we get a load of these statements in the output:

pt: (-17.952574468085107, 97.97540425531915)
        --POINT NOT PRINTED (150,150)

Perhaps the point -17/97 is somehow outside the bounds of the printer? It turns out that svg2gcode uses a config.py to define constants such as the area of the printer. Indeed, bed_max_x and bed_max_y are both set to 150. We’ll have to change that, and do it in a way that Docker remembers between restarts. We’ll also have to worry about why we’re getting negative values in just a moment. Is the problem that the point is negative, or that the cumulative y position to date exceeds 150?
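The change itself is tiny – something like the following inside the container would do it (the path and the exact variable layout of config.py are assumptions on my part, and 400×300 matches the A3 working area) – but the cleaner fix, as the next step shows, is to bake a corrected config.py into the image:

sed -i 's/bed_max_x = 150/bed_max_x = 400/' /py-svg2gcode/config.py   # path assumed from the Dockerfile layout
sed -i 's/bed_max_y = 150/bed_max_y = 300/' /py-svg2gcode/config.py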

First of all, in our Dockerfile we can tell Docker to take a file from our local file system and add it to the image:

COPY config.py /py-svg2gcode/config.py

Now we have :

pt: (-28.0079, 217.75119999999998)
        --POINT NOT PRINTED (400,300)

So more points got printed, but the negative numbers are clearly a problem. This may mean we need to be careful in generating our coordinate space, or we cheat, setting the origin of the laser cutter to the middle of its area and defining the available space as -200 to +200 and so on…

Looking in the OpenSCAD file I am trying to convert – and there we are… the headset is centred around the origin.

For now I shall apply a transform to shift it off origin before we attempt the gcode generation:

Transformed model

When we inspect this in Inkscape, it also looks good:

..and in the raw .svg we see that the width and height are within the bounds of our machine:

Raw .svg output from the transformed model

None of the points in the line description exceed either width or height. You’d think this would be fine for svg2gcode, right?

Sigh. Pages and pages of “point not printed”:

After then looking into svg2gcode.py, I’ve realised it’s quite incomplete, and makes many strange assumptions about scaling etc. which don’t fit our use case. Time to try a different approach…

OpenSCAD supports DXF export, and there appears to be a more mature dxf2gcode library out there – so let’s go with the flow and try that approach instead!… only, after updating my container to install it, it turns out that this isn’t a library, it’s a program that requires a window manager… and so it goes on. This is the reality of prototyping, as you cast around for tools to do the job in the vain hope that you’re not going to have to end up implementing too much yourself. :-/

It feels like options are running out – should I take a look at https://sourceforge.net/projects/codeg/? – a project that stopped doing anything back in 2006. All in all this is terribly sad for such a basic and common (?) need. Perhaps first it’s time to look more deeply at svg2gcode.py and see if we can just strip out its weird scaling code.

First off, it actually looks like there are many forks of the original code – for example, https://github.com/SebKuzminsky/svg2gcode is more up to date than most. Let’s update our Dockerfile, in a way you never would in production, to try things out quickly:

FROM debian:unstable
RUN apt-get update && apt-get install -y openscad && apt-get install -y python
RUN apt-get install -y git && git clone https://github.com/SebKuzminsky/svg2gcode.git
RUN cd /svg2gcode && git submodule init && git submodule update && apt-get install -y python python-svgwrite python-numpy python-jsonschema asciidoc docbook-xml docbook-xsl xsltproc
RUN apt-get install -y vim
COPY config.py /svg2gcode/config.new
CMD ["echo","Image created"]

Now we’ll build our new container with:

docker build -t openscad .

Now let’s have an initial nose around. At first glance, this looks way more intense – there’s a lot of code in svg2gcode.py specifically for milling. It’s a different kind of beast to the last Python script! A look in the README.md (why don’t I ever think to start there?) says that it can handle engraving. How can we specify that type of operation?

“python svg2gcode” actually gives us some sensible feedback/potential instructions. I’m already liking this – although I see no options for “engraving/offset/pocket/mill” and so on. Let’s take another look in the .py file.

So – I’m not a Python guy *at all* – but:

this looks promising

…but what are these “op” operations it speaks of? It looks like there is some sort of JSON-formatted job description. Not sure I like the way *that* is going. Also, there are some interesting leads in https://github.com/SebKuzminsky/svg2gcode/blob/master/TODO.adoc.

Yep. A quick “grep -R engrave *” reveals a bunch of unit tests (it’s back in my good books again) which show a job format – xxx.s2g in JSON – that looks like this:

…sorry, I have no idea why copy/paste has stopped working from my docker terminal :-/ Not a rabbit hole I’m going down this instant.

So, it looks like we need a job description next to our .svg – and then we can see what comes out of it!

Here’s the output:

Hmmm. “not even close”.

So, it doesn’t like the paths in the .svg. Why? That’s the next thing to explore. I’ll output some simple primitive shapes in OpenSCAD, then some using operations like “difference”, and see when/where the conversion process breaks… or is this a deeper problem? Do I need to create many jobs composed of much lower-level “closed” lines?

Yep. Looking at some of the more advanced examples with multiple shapes in the .svg, the job description defines how each individual path needs to be handled:

{
    "jobs": [
        {
            "paths": [ 6, 7, 8, 9, 10, 11, 12 ],
            "operations": [
                { "drill": { } }
            ]
        },
        {
            "paths": [ 1, 2, 3, 4, 5 ],
            "operations": [
                {
                    "offset": {
                        "distance": 4,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                },
                {
                    "offset": {
                        "distance": 3.175,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                }
            ]
        },
        {
            "paths": [ 0 ],
            "operations": [
                {
                    "offset": {
                        "distance": -4,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                },
                {
                    "offset": {
                        "distance": -3.175,
                        "ramp-slope": 0.1,
                        "max-depth-of-cut": 2.5
                    }
                }
            ]
        }
    ]
}

This is well beyond what we want or need. It is a general artefact of abstraction that the more generic a tool becomes, the harder it is to get it to do anything specific. A theory of the universe just wraps the universe in a plastic bag and you’re no closer to understanding any of it.

My gut feeling tells me, go back to the cruder and simpler original svg2gcode.py and modify. Our needs are simple.

Well, that’s another 40 minutes of an evening gone. Back at it at the next opportunity!