A working stereoscopic camera?

Before we progress any further with the software, we need to know what’s actually happening to the eyes. This means monitoring each eye with some pupil tracking, exposing something like a set of vectors for eye movement to Unity, so that we can measure the effect of our virtual environments on eye position.
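
To make that concrete, here is a rough sketch (purely illustrative names, not a finalised API) of the kind of per-eye data we would want the tracker to hand to Unity each frame:

// Hypothetical sketch only – field names are placeholders, not a finalised API.
using UnityEngine;

namespace EyeSkills
{
    public struct EyeSample
    {
        public Vector2 pupilCentre;    // pupil position in the camera image, normalised 0..1
        public Vector3 gazeDirection;  // estimated gaze direction as a unit vector
        public float confidence;       // how much we trust this frame, 0..1
    }

    public struct BinocularSample
    {
        public EyeSample left;
        public EyeSample right;
        public double timestamp;       // capture time in seconds
    }
}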

This has been a PITA. For low cost, we need an Android OTG (On The Go) UVC (USB Video Class) compatible camera with a 6cm focal length – and, best of all, one that doesn’t need a hub (which causes all manner of issues when trying to observe both eyes simultaneously) but instead combines the images from two cameras into a single double-width image, which appears to Android/Linux as a single camera.

We thought we had found a supplier in China who could produce the module we needed as a modification of a product they already had – but we’ve been going around in circles for half a year now. Continual misunderstandings and miscommunication? Perhaps, but I suspect we just aren’t offering order sizes they are interested in.

In the meantime, Rene has stumbled across a really wonderful module. It’s almost an order of magnitude more expensive than we’ve been aiming at – but at only 80EUR it’s still massively affordable.

We aren’t using it as intended, of course (to create a stereoscopic image); instead, we’re going to use it simply to monitor each eye simultaneously. It’s very convenient that, as a stereoscopic camera, it has the correct (on average) horizontal distance between its cameras.

Next, Rene will start looking at how to handle the vision processing, while I try to drag the software back out of its coffin and update it to the most recent version of Unity – while implementing a few new ideas I have.

Unfortunately, it’s going to be *very* part-time for the foreseeable future, but something is happening at least.

Ho Ho Ho. Oh.

I had a plan. I wanted to get the original EyeSkills Community edition back up and running for Christmas. I wanted to integrate the changes I’d been making over the last months. Working on the software really isn’t a priority at the moment (getting the eye tracking hardware ready is more important, without which we’re “flying blind”) but nevertheless…

Obstacles

Nothing has been working out the way I planned. It’s been a litany of blockers. The people we’ve been working with on our custom eye tracking hardware have delayed the project by a couple of months… and then my MacBook died a few weeks ago, forcing me to get a new machine. I decided it was time for a breath of fresh air, so I did something I have repeatedly been burnt doing before, and got a regular laptop to run Linux. As it happens, after less pain and suffering than expected, everything is working better than I hoped with a stock Ubuntu (chosen for Unity compatibility). Then it was time to install Unity.

Well, let the pain commence. I’ll keep updating this as I go, for others in the same boat.

Installing Unity 2019.2.17f1 on Ubuntu 19.10

Immediately, we have blank compilation errors in Unity. Looking in the Editor.log hidden away in ~/.config/unity3d/ there’s a whiff of a clue that this is caused by:

-----CompilerOutput:-stdout--exitcode: 134--compilationhadfailure: True--outfile: Temp/UnityEngine.TestRunner.dll
-----CompilerOutput:-stderr----------
No usable version of the libssl was found

…which implies I either have to downgrade libssl from 1.1.1 to 1.0.0, or find some way to get two simultaneous installations happily co-existing (and Unity knowing which to use). This could take time.

UPDATE: Installing a version of libssl1.0.0 seems to be working out OK – the package exists, but doesn’t seem to be used by the system by default. I’m guessing this was the cause of the first set of blank compiler errors:

wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl1.0/libssl1.0.0_1.0.2n-1ubuntu6_amd64.deb
sudo dpkg -i libssl1.0.0_1.0.2n-1ubuntu6_amd64.deb

After that, the next issue awaits: “Unspecified error during import of AudioClip”… I’ve tried

sudo apt install lib32stdc++6 -y

but that’s only going to be the beginning of the battle…

UPDATE: After a restart, that problem went away. I suspect installing lib32stdc++6 did the job, but Unity just needed the restart to recognise that it existed.

Now I have straightforward “missing namespace” issues – which probably require me to specify some csharp assemblies. We’ll see…

Well, I shall try to keep plodding on, while I focus on how to make sure that when this project really gets going again, it’ll be set up to generate the cycle of rapid feedback and evidence that will speed the creation of approaches that really work.

If you are also in the world of pain that is Unity + Ubuntu 19.10+, I’d love it if you got in touch 🙂

Merry Christmas!

FOLLOW UP:

So, the next step is probably to get Visual Studio Code (VSC) working. Why not Atom/Sublime/OtherFlavourOfTheMonthEditor? Getting anything working reliably in Unity is hard enough without wandering far from the tree. The ability to debug apps running on the phone was exceptionally useful when I was working on OSX with VSC, so let’s try to set it up here.

The most hopeful-looking instructions I’ve found so far are here: https://stackoverflow.com/questions/52807397/how-do-i-use-visual-studio-code-to-develop-unity3d-projects-in-ubuntu and here: https://medium.com/@sami1592/set-up-visual-studio-code-for-unity-in-linux-69b7f4352e0b

These instructions are a bit out of date, so here’s an overview of the steps I took:

In VSC, view the installed extensions (Ctrl+Shift+X) and add the C# and Debugger for Unity extensions. It might be interesting to take a look at the “Unity Game Dev Bundle” as well. You will also want to have VSC create a launch.json from the debug pane, so that you can then start the debugger for Unity (use the green arrow).

Close VSC and, in Unity, set the external script editor in the preferences to /snap/bin/code.

All of this works nicely for a fresh, small project, but it appears it doesn’t work for the EyeSkills community edition, because something is broken in VSC regarding multi-project setups (i.e. we have some Unity Tests inside ours). Another hour spent getting a step closer to a working setup.

I think the next step is to install a sample project like this https://learn.unity.com/project/tower-defense-template?signup=true and check I can get the whole thing running, before I start to retrofit everything from EyeSkills into a fresh project… and that seems to work well. So far so good. Does deployment to the phone work?

No. “UnityException: Android SDK not found”. That should just be a question of: “sudo apt install android-sdk”. Just setting up these environments must already have cost 3-4GB of space. Well, I don’t seem to be able to run the sdkmanager, which I would have imagined was installed with the android-sdk. The sdkmanager is important for managing which versions of Android your Unity builds can support – and I really don’t want to install all of Android Studio just to manage a few SDK installations.

Running “apt-file list android-sdk” shows me that the included tools are in /usr/lib/android-sdk/tools. These tools do not include the sdkmanager 🙁 It looks like the Android SDK command line tools are available here: https://developer.android.com/studio – specifically, https://dl.google.com/android/repository/sdk-tools-linux-4333796.zip.

We can get it with

wget https://dl.google.com/android/repository/sdk-tools-linux-4333796.zip

and then unzip it. I’m going to dump the contents of the resulting tools directory into /usr/lib/android-sdk/tools with an “rsync -rv ./tools/ /usr/lib/android-sdk/tools/” and hope for the best that this doesn’t cause additional problems. Ah, no, that was a mistake. They are my own user tools, and should live in my own home directory with myself as owner. Undo. Repeat. Oh no. It looks like it’s a Java version problem. Could it be that it won’t run on the version of Java that’s installed by default (11) but requires the old OpenJDK version 8?!?

This is where I start to feel really queasy. “sudo apt install openjdk-8-jdk”. What exactly is that going to do? How well will it manage two versions of Java co-existing? How will programs select between them? Well, it turns out that it doesn’t. I really, really want to run Unity in some sort of containerized fashion, at arm’s length behind a cordon sanitaire – but it’s one of the most resource-intensive applications I will be running, so I want it running close to the metal. It’s like DLL hell: do I sacrifice the modern install of Java just to get Unity working?

It turns out there is a little command called “update-java-alternatives --list”. You can select the version you want to use at a particular moment, so before we run the sdkmanager let’s first run “sudo update-java-alternatives -s java-1.8.0-openjdk-amd64”. This breaks a few things in Mozilla’s Firefox, but the sdkmanager then works. Now I need to figure out where to position the unpacked tools, and how to hook them into the environment properly… plus, just to add salt to the wounds, my installation of android-sdk seems to have become broken by something I’ve done so far (the symlinks in /usr/bin to tools like adb are broken, as the platform dir is now missing from /usr/lib/android-sdk/tools). Honestly. It’s time to remove the android-sdk apt package and start again from scratch there, perhaps taking a different path.

After removing and purging the apt package for the android-sdk by way of napalm, I discovered that there are two sub-boxes in the Unity Hub’s module installer:

Seriously, how is anybody supposed to know?!? The Android Build Support box was already ticked. You’d think that would mean it installs all the child options – otherwise it should show a “-” rather than a tick. I could cry sometimes. Before Unity has even managed to download the SDK/NDK/JDK, I can already see problems ahead: https://issuetracker.unity3d.com/issues/ubuntu-unable-to-use-andriod-ndk-and-sdk-tool-installed-via-hub

Yep. What a surprise. Everything is now installed correctly according to Unity:

…unfortunately, trying to build their tutorial project results in:

Why is it always like this with Unity?! I literally needed the last six months to let the frustration of using Unity for this project over the course of 2018 wash out of my bones, and now it’s back again. It’s like trauma. It’s giving me PTSD and a desire to throttle somebody. Calm. Breathe. Just another, another, another obstacle to climb over.

I have tried:

chmod +x Editor/Data/PlaybackEngines/AndroidPlayer/OpenJDK/jre/bin/*
chmod +x Editor/Data/PlaybackEngines/AndroidPlayer/OpenJDK/bin/*

That may or may not have changed something, but nothing that I notice.

In ~/.config/unity3d/Editor.log I can see a bit more detail:

Android PostProcess task “Detect Java Development Kit (JDK)” took 64.3625 ms
UnityException: Android SDK not found
Unable to locate Android SDK.

At least we know it’s finding the JDK. So, why not the SDK? As I currently understand it, Unity has installed everything in a self-contained manner. The SDK is in ~/Applications/UnityEditors/2019.2.17f1/Editor/Data/PlaybackEngines/AndroidPlayer/SDK. I don’t think that’s bad, and would approve – if it worked.

The aforementioned issue on the Unity lists points out that the “tools” directory is missing from …Editor/Data/PlaybackEngines/AndroidPlayer/SDK, and indeed it is. Let’s try once more to unzip a copy of the sdk tools and plonk them in there. Wow! It looks like the SDK is discovered by Unity, now it’s just complaining about an inadequate API level for VR (needs to be a minimum of 19 – check in Player Settings in the Unity IDE).

It’s a big compilation job… and my shiny new XPS 15 7590 is struggling. The CPU is stepping down continually as it gets hot enough to boil an egg… that’ll be another area for improvement: voltage limiting the CPU.

lscpu | grep MHz | awk '{print }' – a nice way to monitor your CPU speed (run it under watch to keep it updating)
My CPU is on fire!

Every 1,0s: lscpu | grep MHz | awk '{print }' ben-XPS-15-7590: Sat Dec 28 20:24:19 2019

CPU MHz: 3252.126
CPU max MHz: 4500,0000
CPU min MHz: 800,0000

IT BUILT! Now let’s try “Build and Run”! Oh, I’m looking forward to this so much. It’ll finally work and deploy straight to my phone! 😀

By this point, are you surprised? I’m regarding this as an inevitability. It looks, at first glance, like Unity has suddenly decided that I’m running Windows rather than Linux.

Every. Single. Step. Forward. Must. Be. Paid. For. In. Pain.

For starters, everything in the Unity-installed SDK/platform-tools directory has permissions that prevent execution. Really?!? Let’s try “chmod +x *”. That seems to get us another step forward. I then needed to set up developer mode on my new phone, whereupon I got… dadada…


Doh!

Will this ever end? Let’s hope this is going to be simple and do what it suggests. I guess it means this?!?

FINALLY!!! The .apk is deploying to my phone, and is playable.

Now that that is working, it’ll be time to check that the setup can handle a basic VR game from https://blogs.unity3d.com/2015/12/09/get-started-with-vr-sample-pack-learning-articles/

If THAT works (which at this point really shouldn’t be impossible), it’ll be the right move to take a bit more care to set up a Continuous Integration process and streamline a few things to help development, before pushing on with “retro-fitting” EyeSkills. I think I’ll carry on the next steps in another post on another day.

IR retinal safety

At the moment all software work is paused until we have the hardware ability to track the eyes as we desire. Part of this process is to ensure, in our specification of the eye tracking cameras, that the infra-red LEDs we use to both illuminate and help track the position of the eyes don’t emit so much radiation that they pose a health risk.

Given that this is open hardware, let’s keep sharing information! One of the most helpful resources we’ve found is this application note: https://www.renesas.com/eu/en/doc/application-note/an1737.pdf

Although the IR LEDs we are selecting fall well below the required safety threshold, we still need to be careful to consider the effect of using them in an array (well, three in an L shape) and also the impact of the focusing lens in the headset.

The relevant standard is IEC-62471. A summary of this standard is here https://smartvisionlights.com/wp-content/uploads/pdf/IEC_62471_summary.pdf

You may notice that the permissible exposure time for an exempt IR source on the eye is 1000 seconds (16.6 minutes – pretty much exactly the length of time to which we want to restrict use of the daily app). A Group 1 device would only allow us 100 seconds, which may still be adequate if used sparingly (pulsed, and restricted to critical moments of measurement), but it would be best to be sure our radiation levels are under the Group 1 threshold.

Affordable EyeTracking – working towards a new camera PCB

We keep coming back to the basic fact that we need to know what the eyeballs are doing. This requires an eye tracker which does what we want (particularly, with a software API which doesn’t assume both eyes are coordinating in the usual way!) at a price we can afford. The corollary to this is – we’re going to have to do it ourselves.

Tobii-style eye trackers are massively overpowered and overpriced for what we are attempting, and we suspect it will take longer to get a collaboration going with PupilLabs (there are issues with their mobile software stack, API, licensing, and their choice of cameras) than it will take to roll our own solution.

In previous posts I introduced the PCB we found, which integrates two webcams into a single output, and mentioned that we’ll need to have it modified.

First, the cable length between the two cameras:

A suggested rough layout
  1. We move the second camera onto its own PCB (b). That PCB serves two purposes: to hold the IR LEDs and to provide a mechanical surface that can be held in place reliably.
  2. We make the cable some 12cm long (make a mockup with Moritz to check the necessary lengths), where any excess can be hidden in the cavity under the eye piece “shelf”.

Looking more closely at the PCB:

We need to find a new layout which isn’t radically different from the existing one. If it is too different, the manufacturer is likely to charge us a fortune for a redesign, but we also *must* reach a point where we have at least an “L”-shaped positional layout for the IR LEDs (so we can judge pupil movement distances and orientations), plus (if it can be done without introducing another manufacturing step) a way of reliably fixing the unit physically. We also need to be careful to specify a slightly longer piece of unshielded cable so we can run the cabling up inside the holder effectively.

I consider our first step to be producing a simple physical prototype where we can temporarily affix some IR LEDs and see how it performs/could be fixed to the headset. The next question is – should this prototype extend the existing actual PCB so we can experiment with the real unit, or just model the desired physical dimensions of the unit? Well, both are probably necessary, but counter-intuitively it may be best to start with a quickly printable harness where we place our IR LEDs… which we then separate out, in a later step, into the new PCB design and the holder design. Let’s have a go at this!

So here’s a first super-simple stab at it. The slot allows us to place the camera PCB into it with a dab of hot glue (or just a snug fit), and we can also very simply modify the “holderFaceAngle” variable to find a good angle that allows the eyes to be monitored. Next we can try fixing IR LEDs to the top/bottom strip in different positions to see what works best.

Where could we go from here? Well, the existing PCB is actually on a flexible base wrapped around a metal block for heat dissipation. I’ve set this up so that the manufacturer can add “ears” at the top and the bottom for the new IR LEDs. The question is whether or not they would extend the metal base (it might be a standardised part). Obviously, it would be best if they were able to extend the metal base, as we’d have the most reliable IR positioning. We’d then only have to increase the size of the slot to accommodate the new PCB. The second advantage would be that we could leave an area of naked metal at the rear of the unit, against which we could press the surface of the mount. This (when gluing, for instance) would allow us to be sure the unit is really positioned flush (at the correct angle) with the holder. It might really come down to something this simple.

We would then extend the base of the holder with some feet that extend through pre-cut holes in the camera core (as in the previous instance) so that (with the help of a mounting tool) a fitter can be sure the holder is correctly mounted each and every time.

Fits snugly, excuse the terrible 3d print – the filament I’m using seems to be ageing very badly!

Going back in time

This is a quick mental note: what would an anaglyphic app look like with a “randomised” and continually changing assignment of colour to each eye – or within a VR setting? Would this stop people “cheating” by blinking to assemble a complete image of what’s happening?

Perhaps use Perlin noise to generate a dynamic texture (https://www.youtube.com/watch?v=bG0uEXV6aHQ) which we then use in a custom shader to assign black to one eye/colour, white to the other, and anything intermediate to each eye/colour in ratio.
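
As a rough sketch of that idea (just the noise-mask generation – the per-eye shader would come afterwards, and the parameter names here are mine):

// Minimal sketch: generate a continually changing Perlin noise texture each frame.
// A custom shader would then map dark texels to one eye/colour and light texels to the other.
// Attach to an object with a Renderer.
using UnityEngine;

public class NoiseMask : MonoBehaviour
{
    public int size = 256;       // texture resolution
    public float scale = 8f;     // noise frequency
    public float speed = 0.5f;   // how fast the pattern drifts

    private Texture2D tex;

    void Start()
    {
        tex = new Texture2D(size, size, TextureFormat.RGBA32, false);
        GetComponent<Renderer>().material.mainTexture = tex;
    }

    void Update()
    {
        float t = Time.time * speed;
        for (int y = 0; y < size; y++)
        {
            for (int x = 0; x < size; x++)
            {
                // 0 = show to one eye/colour, 1 = show to the other, in-between = split in ratio
                float v = Mathf.PerlinNoise(x / (float)size * scale + t, y / (float)size * scale + t);
                tex.SetPixel(x, y, new Color(v, v, v, 1f));
            }
        }
        tex.Apply();
    }
}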

I think this might be worth trying out. What I have, however, realised – is that nothing is really worth exploring anymore without an eye tracking headset.

Another approach might be to use Voronoi-style mappings (https://www.youtube.com/watch?v=EDv69onIETk), BUT it actually makes me wonder if the most simple and effective method would be to divide the space into cubes and just randomly assign each cube to one side or the other.
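
A sketch of the cube idea (the layer names are assumptions – they would need to exist in the project, with a left-eye and right-eye camera whose culling masks each include only one of them):

// Sketch: split the child cubes of this object randomly between two layers, so a
// left-eye camera and a right-eye camera (via culling masks) each see only half of them.
using UnityEngine;

public class RandomEyeAssigner : MonoBehaviour
{
    public float reshuffleInterval = 2f; // seconds between re-randomisations

    void Start()
    {
        InvokeRepeating(nameof(Shuffle), 0f, reshuffleInterval);
    }

    void Shuffle()
    {
        foreach (Transform cube in transform)
        {
            bool left = Random.value < 0.5f;
            cube.gameObject.layer = LayerMask.NameToLayer(left ? "LeftEyeOnly" : "RightEyeOnly");
        }
    }
}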

Data Provenance

It really matters that we know exactly what system setup generated any data we collect – the idea being that experiments can be re-run (and thus independently verified) in precisely the same technical context as the originals.

To achieve this we store the following information underneath anything else captured by a researcher:

Basic information capture

You might notice the buildVersion variable. This is actually a build id from .git – so that a researcher can re-create precisely the same environment from a prior experiment (e.g. from a different researcher). It assumes for now that all builds are stored in the same repository.
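
As a sketch of the kind of record I mean (the field names here are illustrative, pulled from Unity’s standard SystemInfo/Application APIs, and are not necessarily the exact fields we store):

// Illustrative only – not the actual EyeSkills schema.
using UnityEngine;

[System.Serializable]
public class ProvenanceInfo
{
    public string buildVersion;      // git commit hash baked in at build time (see below)
    public string deviceModel;       // the phone the headset is strapped to
    public string operatingSystem;
    public string unityVersion;
    public string capturedAtUtc;

    public static ProvenanceInfo Capture()
    {
        return new ProvenanceInfo
        {
            buildVersion    = new EyeSkills.Version().version,
            deviceModel     = SystemInfo.deviceModel,
            operatingSystem = SystemInfo.operatingSystem,
            unityVersion    = Application.unityVersion,
            capturedAtUtc   = System.DateTime.UtcNow.ToString("o")
        };
    }
}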

How does this buildVersion actually get there? In /[projectPath]/Assets/Plugins/Android/mainTemplate.gradle (around line 82) you’ll find a little hack that creates a custom class called Version during the Unity build process:

buildTypes {
    debug {
        …
    }
    release {
        // We want to provide information to the app about the .git version that the app is built from.
        // For repeatable open science, and to handle changing schemas, we need to be able to pin an InfoBase item to a specific build.
        def stdout = new ByteArrayOutputStream()
        exec {
            // Ask git for the short hash of the current commit
            commandLine 'git', 'log', '--pretty=format:"%h"', '-n 1'
            standardOutput = stdout
        }
        // Write it out to a class (Version) which our InfoBase can then instantiate to extract the "version"
        new File("DIR_UNITYPROJECT/Assets/Scripts/Version.cs").text =
            "namespace EyeSkills { public class Version { public string version = $stdout; }}"
    }
}

That’s why and how we can reference Version.cs to get the buildVersion. However, be warned that refactoring things can cause that Version.cs to move location. You will then get confusing gradle build failure errors from Unity. Check the path to Version.cs if you see problems with gradle… it might just be the cause!
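
For reference, the generated Version.cs ends up looking something like this (the hash is an illustrative placeholder – the surrounding quotes come from the literal quote marks in the --pretty=format:"%h" string, which is what makes the generated C# valid):

namespace EyeSkills { public class Version { public string version = "1a2b3c4"; }}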

Plotter friendly design

OpenSCAD (and my use of the wrong primitives for the dashed lines) didn’t do a great job of converting the lines and text in the original EyeTracktive Core design into a format that works well for a plotter pen marking out the headset.

I just tried to manually create a better overlay in Inkscape (as a stop-gap) and ran into a few problems.

  1. Text doesn’t naturally convert to single-stroke paths (obviously, if you think about it). In Inkscape, Extensions->Render->Hershey Text… comes to the rescue. This creates plotter-friendly text.
    1. However, it dumps this text somewhere fairly random outside the page border (on my machine at least), so you need to hunt it down.
    2. It then turns out that it uses the SVG transform element (a matrix transform) to position the text you’ve just created in the correct place (as you drag and drop). This is incompatible with Cricut’s crappy DesignSpace software, which doesn’t understand transformation matrices, so it just ignores them.
    3. Solution: ungroup each text element individually, as Inkscape seems to only wrap groups in transform matrices.

Diamond Depth Perception – a user reports

Hi everyone,

I’ll start with a bit of background. I’m Ben’s younger brother, Nick, and I was very keen to see how his software works. Other than mild short-sightedness, I do not think there is anything wrong with my eyes. Sitting in Ben’s office, I noticed a 3D magic eye picture on the wall, and commented that I have never been able to see the 3D image, even though I had a book full of pictures as a child. I tried it again; no success.

We started with the triangles [Ben : the standard conflict scene] and they were overlaid as expected. When we got to the cyan and yellow semicircles within white circles, I noted the horizontal offsets between the semicircles, but everything was flat. There was no depth, and the outer white diamond that encloses the entire image was square.


I noticed that I could consciously fade either the cyan or yellow semicircles to being invisible (and control the brightness at levels in between) but I really didn’t understand the instructions, which were to select the circle that was ‘closest’. Everything was still flat and I ended up choosing the semicircles that matched closest. This was not right and I kept selecting the wrong circle!

I then tried moving my focal distance (the point at which both eyes converged) closer then further away and noticed that I could move the semicircles sideways to get them to join into a full circle. By doing this several times for each pair, I could figure out which circle was ‘closest’ entirely by feeling my eye muscles. Did this circle ‘feel’ closer because my eyes were more cross-eyed?… But this wasn’t particularly sensitive and sometimes circles were very similar and I had to adjust my eyes several times on each circle to feel a difference. Still flat though!

Then I looked into the black space in the middle of the screen, relaxed my eyes (I was probably staring into the distance) and BAM, I could see the four circles were distorted and had depth. Also, the outer diamond was bowed toward me. Keeping my eyes on this black space, it was easy to judge the relative distances of the circles. After a few rounds, I was able to look around the image and maintain the 3-dimensional illusion at all times.

I guess the purpose of this post is that I want to share my discovery that seeing the image in 3D is not necessarily automatic. It took me a few stages for it to happen: manually varying my distance of focus and figuring out this distance based on my eye muscles, then staring into the middle of the screen before the 3D became apparent. Perhaps I have been unable to separate the two processes of focussing (which is done with the lens in each eye) and depth perception (which uses the two eyes together). Certainly, when trying the 3D magic eye pictures my eyes have been staring into the distance (correct!) but my focussing was too (incorrect!), so the image was just a blur. Time to find that old book! I hope this helps someone. Best of luck.

Nick

When Unity UI Goes bad – The perils of DontDestroyOnLoad

This is a quick note about a really obscure problem which can bite you quite hard in Unity. If you switch between scenes you may need to use DontDestroyOnLoad to keep certain objects hanging around during and after the transition. If, however, you have stored collections of scripts under empty GameObjects, and there is an overlap between the set of scripts on a GameObject in Scene1 and one in Scene2, then DontDestroyOnLoad will prevent the GameObject containing the clashing script in Scene2 from being loaded – AT ALL! This will break things in the most unexpected and non-obvious ways. For instance, RayCasters may suddenly disappear because they were never loaded (but ONLY when Scene2 is loaded via Scene1), causing user interface elements to no longer pick up user input, etc.
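
One defensive sketch (a general Unity pattern, not something specific to our setup) is to give each persistent manager an explicit singleton-style guard, so there is never a clashing duplicate hanging around in a later scene:

// General Unity pattern: keep exactly one copy of a persistent manager alive
// across scene loads, destroying any duplicates as soon as they wake up.
using UnityEngine;

public class PersistentManager : MonoBehaviour
{
    private static PersistentManager instance;

    void Awake()
    {
        if (instance != null && instance != this)
        {
            // A copy already survived an earlier scene – remove this duplicate immediately.
            Destroy(gameObject);
            return;
        }
        instance = this;
        DontDestroyOnLoad(gameObject);
    }
}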

Be warned.