The headset prototype graveyard

I’m still following the dream of a super-lightweight, non-environmentally-shitty, shippable-in-an-envelope headset design based on Google Cardboard. The other approaches to Google Cardboard that I’ve seen rely on glue or really lame tabs. Neither is stable enough to support a camera without using very thick card, and both make self-assembly irritating.

After many, many dead ends – trying ideas from Katagami, all sorts of ideas with longer tabs and inserts – I’m coming back to an early idea: merging all the parts into two fixation points which are held together, really securely, with just a pair of paper clips.

Normally I’d do something like this mathematically, but the GC design is really odd when it comes to decisions about where the folds and offsets are, so I just decided to go with Inkscape and .svg layout. This makes positioning the slots for the paper clips basically a process of trial and error – iterating to find positions which work well both structurally and during assembly. Getting there!

Inverting screen colours on Ubuntu 20.04 LTS under Wayland

So, I finally gave up trying to use XOrg on my 4K laptop/external monitor setup. The fractional scaling was hopelessly broken. I switched to using Wayland instead – and joy of joys, everything works out of the box!

Unfortunately, this broke the ability I had set up to invert my screen colours. That’s not a “nice to have” – it is essential. Particularly later in the day, a stark white screen gives me unbearable eye strain after just a few minutes. It may be related to my latent strabismus, or not… but what I know is that inverting the screen colours feels like somebody has just poured warm soothing milk over my eyes (that sounds better in my head than it probably would be in real life).

Trying to find out how to enable this in Wayland was a PITA. In the end, after hours of messing about following one wild goose chase after another, I arrived at Gnome Tweaks. Install it: https://linuxconfig.org/how-to-install-tweak-tool-on-ubuntu-20-04-lts-focal-fossa-linux

Next, install the Invert Window Color plugin from GNOME Extensions (https://extensions.gnome.org/extension/1041/invert-window-color/).

Now, in Tweaks, enable extensions – and the Invert Window Color plugin in particular – and off you go! Cmd-I will invert individual windows for you!
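
For the record, the Tweaks install itself should just be a one-liner (assuming the apt package name on 20.04 is still gnome-tweaks):

sudo apt install gnome-tweaks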

Exporting multiple layers in Inkscape – streamlining the headset design

Until now I had a very heavyweight process for specifying the headset for EyeSkills, using OpenSCAD and lots of geometry. Now that I’m gradually working my way back into the project with new ideas and some fresh motivation, I want to streamline this dramatically so I don’t get bogged down in toolchain hell again.

I think the solution is to just work in straight .svg files using Inkscape. The issue that arises, however, is that (for the laser cutter at least) there are several different types of passes one wants to make per headset.

Some passes must be strong enough and slow enough to slice right through the card. Other passes must be weak enough and fast enough (to save time) to simply mark the card with instructions for the assembler/user. Yet other passes must score the card deeply enough to help it fold, but not so deeply that the pass becomes a cut.

The only open source software I can find that works reliably with the laser cutter I have is LaserWeb. In LaserWeb one can superimpose multiple .svg files and assign them different pass properties (speed/strength etc.). Unfortunately, the workflow to create these multiple .svgs with all the image parts perfectly aligned is a pain. From a single .svg it’s a process of manually cutting/pasting parts each time and using Save As… There must be a better way.

The obvious solution would be to use multiple layers in an .svg and export each layer to an individual .svg. That should be built-in functionality, right? Obviously. Obviously it isn’t. Thankfully there is a fork of an export plugin which might just do the job!

Let’s take a look at what it can do.

If you’re using Inkscape 1.0 on Ubuntu 20.04 LTS as I am, you’re probably using a snap package. You’ll want to put the extension files in ~/.config/inkscape/extensions/ with:

git clone https://github.com/dmitry-t/inkscape-export-layers.git

Now restart Inkscape.

Naturally, the plugin doesn’t work – as I report here. Here we go again.

I stumbled across these notes on upgrading extensions for Inkscape 1.0 (which seems to break most of them): https://wiki.inkscape.org/wiki/index.php/Updating_your_Extension_for_1.0

After editing the export_layers.inx file appropriately, it now looks like this:


<?xml version="1.0" encoding="UTF-8"?>
<inkscape-extension xmlns="http://www.inkscape.org/namespace/inkscape/extension">
    <_name>Export layers</_name>

    <id>com.over9000.export-layers-2</id>

    <dependency type="executable" location="inx">export_layers.py</dependency>

    <param name="output-dir" type="string" _gui-text="Directory to export to">~/</param>

    <param name="fit-contents" type="optiongroup" appearance="minimal" _gui-text="Area to export">
        <option value="false" selected="selected">Use document boundaries</option>
        <option value="true">Fit to document contents</option>
    </param>

    <param name="file-type" type="optiongroup" appearance="minimal" _gui-text="Output file type">
        <option value="png" selected="selected">PNG</option>
        <option value="svg">SVG</option>
        <option value="jpeg">JPEG (requires ImageMagick)</option>
    </param>

    <param name="dpi" type="int" min="1" max="1024" appearance="minimal" _gui-text="Export DPI">96</param>

    <param name="enumerate" type="boolean" _gui-text="Add number prefixes to exported filenames (001_, 002_ etc)">true
    </param>

    <param name="help" type="description">Every layer marked with the prefix "[export]" is exported into a separate
        file. All layers marked with the prefix "[fixed]" are additionally exported into every such file.
    </param>

    <effect needs-live-preview="false">
        <object-type>all</object-type>
        <effects-menu>
            <submenu _name="Export"/>
        </effects-menu>
    </effect>

    <script>
        <command location="inx" interpreter="python">export_layers.py</command>
    </script>
</inkscape-extension>

The important changes are the location="inx" attributes on the elements referencing export_layers.py (the dependency and the script command).

The plugin now at least displays. And does it work?

No, of course it doesn’t. It vomits back a heap of warnings:


export_layers.py:28: DeprecationWarning: Effect.OptionParser or `optparse` has been deprecated and replaced with `argparser`.You must change `self.OptionParser.add_option` to `self.arg_parser.add_argument`; the arguments are similar.
  self.OptionParser.add_option('-o', '--output-dir',
export_layers.py:34: DeprecationWarning: Effect.OptionParser or `optparse` has been deprecated and replaced with `argparser`.You must change `self.OptionParser.add_option` to `self.arg_parser.add_argument`; the arguments are similar.
  self.OptionParser.add_option('-f', '--file-type',
export_layers.py:41: DeprecationWarning: Effect.OptionParser or `optparse` has been deprecated and replaced with `argparser`.You must change `self.OptionParser.add_option` to `self.arg_parser.add_argument`; the arguments are similar.
  self.OptionParser.add_option('--fit-contents',
export_layers.py:47: DeprecationWarning: Effect.OptionParser or `optparse` has been deprecated and replaced with `argparser`.You must change `self.OptionParser.add_option` to `self.arg_parser.add_argument`; the arguments are similar.
  self.OptionParser.add_option('--dpi',
export_layers.py:53: DeprecationWarning: Effect.OptionParser or `optparse` has been deprecated and replaced with `argparser`.You must change `self.OptionParser.add_option` to `self.arg_parser.add_argument`; the arguments are similar.
  self.OptionParser.add_option('--enumerate',
export_layers.py:239: DeprecationWarning: Effect.affect is now `Effect.run()`. The `output` argument has changed.
  LayerExport().affect(output=False)
export_layers.py:75: ResourceWarning: unclosed file <_io.BufferedReader name=4>
  if not self.convert_svg_to_svg(svg_file, output_dir):
export_layers.py:75: ResourceWarning: unclosed file <_io.BufferedReader name=6>
  if not self.convert_svg_to_svg(svg_file, output_dir):
export_layers.py:75: ResourceWarning: unclosed file <_io.BufferedReader name=4>
  if not self.convert_svg_to_svg(svg_file, output_dir):
export_layers.py:75: ResourceWarning: unclosed file <_io.BufferedReader name=6>
  if not self.convert_svg_to_svg(svg_file, output_dir):

Python is one of the few languages I never made the effort to learn. I couldn’t stand the idea of semantically important indents… but is it time? 🙁
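
For anyone braver than me: the warnings actually describe a fairly mechanical fix. Here’s a sketch of what the migration looks like for an inkex extension – the option names are lifted from the warnings above, and I haven’t actually applied this to the plugin, so treat it as a sketch:

import inkex  # Inkscape 1.0 extension API


class LayerExport(inkex.Effect):
    def __init__(self):
        super().__init__()
        # Old (optparse) style, which triggers the deprecation warnings:
        #   self.OptionParser.add_option('-o', '--output-dir', action='store',
        #                                type='string', dest='output_dir')
        # New (argparse) style:
        self.arg_parser.add_argument('-o', '--output-dir', dest='output_dir',
                                     default='~/', help='Directory to export to')
        self.arg_parser.add_argument('-f', '--file-type', dest='file_type',
                                     default='png', help='png, svg or jpeg')

    def effect(self):
        pass  # the plugin's real export logic goes here


# ...and Effect.affect() is now Effect.run():
if __name__ == '__main__':
    LayerExport().run()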

Perhaps it’s going to be worth trying https://gitlab.com/su-v/inx-exportobjects next – despite it being as old as the sands of time, relative to the rate at which code seems to rot these days.

UPDATE: I found a bit more time, and the other plugin also can’t do what’s necessary. Along the way I discovered that Inkscape has built-in capabilities to query layers and export them individually from the command line – well, it would, if that functionality weren’t also broken. It’s getting a bit silly, isn’t it?
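
For reference, this is the built-in route I mean – in theory Inkscape 1.0 can list object/layer ids and export one of them to its own file from the command line, roughly like this (treat with suspicion, since this is exactly the functionality I found to be broken; headset.svg, layer2 and cut.svg are placeholders):

inkscape headset.svg --query-all | head
inkscape headset.svg --export-id=layer2 --export-id-only --export-type=svg --export-filename=cut.svg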

The GOOD news is that I found the Filter By Stroke Color feature in LaserWeb – which allows you to specify passes based on the colour value of paths in the .svg. Problem solved: I export a single .svg with the paths for each pass type (e.g. Cut, Mark, or Score) coloured uniquely.

You can find more here: https://laserweb.yurl.ch/documentation/cam-operations/63-creating-operations/4-laser-cutting-an-svg-file

A working stereoscopic camera?

Before we progress any further with the software, we need to know what’s actually happening to the eyes. This means monitoring each eye with some pupil tracking, exposing something like a set of vectors for eye movement to Unity, so that we can measure the effect of our virtual environments on eye position.

This has been a PITA. For low cost, we need an Android OTG (On The Go) UVC (USB Video Class) compatible camera with a 6cm focal length. Best of all would be one that doesn’t need a hub (which causes all manner of issues when trying to observe both eyes simultaneously), but instead combines the images from two cameras into a single double-width image which appears to Android/Linux as a single camera.

We thought we had found a supplier in China who could produce the module we needed as a modification of a product they already had – but we’ve been going around in circles for half a year now… continual misunderstandings and miscommunication? Perhaps. (I suspect we just aren’t offering order sizes they are interested in.)

In the meantime, Rene has stumbled across a really wonderful module. It’s almost an order of magnitude more expensive than we’ve been aiming at – but at only 80 EUR it’s still massively affordable.

We aren’t using it as intended, of course (to create a stereoscopic image); instead, we’re going to use it to simply monitor each eye simultaneously. It’s very convenient that, as a stereoscopic camera, it has the correct (on average) horizontal distance between the two cameras.
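
When it arrives, it should be easy to sanity check on Linux that it really does enumerate as a single UVC device offering a double-width image – something like the following, with v4l2-ctl from the v4l-utils package (/dev/video0 being a guess at the device node):

sudo apt install v4l-utils
v4l2-ctl --list-devices
v4l2-ctl --device=/dev/video0 --list-formats-ext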

Next, Rene will start looking at how to handle the vision processing, while I try to drag the software back out of its coffin and update it to the most recent version of Unity – while implementing a few new ideas I have.

Unfortunately, it’s going to be *very* part-time for the foreseeable future, but something is happening at least.

Ho Ho Ho. Oh.

I had a plan. I wanted to get the original EyeSkills Community edition back up and running for Christmas. I wanted to integrate the changes I’d been making over the last months. Working on the software really isn’t a priority at the moment (getting the eye tracking hardware ready is more important, without which we’re “flying blind”) but nevertheless…

Obstacles

Nothing has been working out the way I planned. It’s been a litany of blockers. The people we’ve been working with on our custom eye tracking hardware have delayed the project by a couple of months… just as my MacBook died, forcing me to get a new machine a few weeks ago. When my MacBook died, I decided it was time for a breath of fresh air, so I did something I have repeatedly been burnt doing before: I got a regular laptop to run Linux. As it happens, after less pain and suffering than expected, everything works better than I’d hoped with stock Ubuntu (chosen for Unity compatibility). Then it was time to install Unity.

Well, let the pain commence. I’ll keep updating this as I go, for others in the same boat.

Installing Unity 2019.2.17f1 on Ubuntu 19.10

Immediately, we have blank compilation errors in Unity. Looking in the Editor.log hidden away in ~/.config/unity3d/, there’s a whiff of a clue that this is caused by:

-----CompilerOutput:-stdout--exitcode: 134--compilationhadfailure: True--outfile: Temp/UnityEngine.TestRunner.dll
-----CompilerOutput:-stderr----------
No usable version of the libssl was found

…which implies I either have to downgrade ssl from 1.1.1 to 1.0.0 or find some way to get two simultaneous installations happily co-existing (and Unity knowing which to use). This could take time.

UPDATE: Installing a version of libssl1.0.0 seems to work out OK – the package exists, it just isn’t used by the system by default. I’m guessing it was the cause of the first set of blank compiler errors:

wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl1.0/libssl1.0.0_1.0.2n-1ubuntu6_amd64.deb
sudo dpkg -i libssl1.0.0_1.0.2n-1ubuntu6_amd64.deb
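
A quick way to check the loader can now actually see the old library alongside the new one:

ldconfig -p | grep libssl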

After that, the next issue awaits: “Unspecified error during import of AudioClip”… I’ve tried:

sudo apt install lib32stdc++6 -y

but that’s only going to be the beginning of the battle…

UPDATE: After a restart, that problem went away. I suspect installing lib32stdc++6 did the job, but Unity just needed the restart to recognise that it existed.

Now I have straightforward “missing namespace” issues – which probably require me to specify some C# assemblies. We’ll see…

Well, I shall try to keep plodding on, while I focus on how to make sure that when this project really gets going again, it’ll be set up to generate the cycle of rapid feedback and evidence that will speed the creation of approaches that really work.

If you are also in the world of pain that is Unity + Ubuntu 19.10+, I’d love it if you got in touch 🙂

Merry Christmas!

FOLLOW UP:

So, the next step is probably to get Visual Studio Code (VSC) working. Why not Atom/Sublime/OtherFlavourOfTheMonthEditor? Getting anything working reliably in Unity is hard enough without wandering far from the tree, and the ability to debug apps running on the phone was exceptionally useful when I was working with VSC on OSX – so let’s try to set it up here.

The most hopeful looking instructions I’ve found so far are here : https://stackoverflow.com/questions/52807397/how-do-i-use-visual-studio-code-to-develop-unity3d-projects-in-ubuntu and here https://medium.com/@sami1592/set-up-visual-studio-code-for-unity-in-linux-69b7f4352e0b

These instructions are a bit out of date, so here’s an overview of the steps I took:

In VSC, view the installed extensions (Ctrl+Shift+X) and add C# and Debugger for Unity. It might be interesting to take a look at the “Unity Game Dev Bundle” as well. You’ll also want VSC to create a launch.json from the debug pane, so that you can then start the Unity debugger (use the green arrow).

Close VSC and, in Unity’s preferences, set the default external editor to /snap/bin/code.

All of this works nicely for a fresh, small project, but it appears it doesn’t work for the EyeSkills community edition, because something is broken in VSC regarding multi-projects (i.e. we have some Unity Tests inside ours). Another hour spent getting a step closer to a working setup.

I think the next step is to install a sample project like this https://learn.unity.com/project/tower-defense-template?signup=true and check I can get the whole thing running, before I start to retrofit everything from EyeSkills into a fresh project… and that seems to work well. So far so good. Does deployment to the phone work?

No:

UnityException: Android SDK not found

That should just be a question of:

sudo apt install android-sdk

Just setting up these environments must already have cost 3-4 GB of space. Well, I don’t seem to be able to run the sdkmanager, which I would have imagined came with the android-sdk. The sdkmanager is important for managing which versions of Android your Unity builds can support – and I really don’t want to install all of Android Studio just to manage a few SDK installations.

Running:

apt-file list android-sdk

shows me that the included tools are in /usr/lib/android-sdk/tools. These tools do not include the sdkmanager 🙁 It looks like the Android SDK command line tools are available from https://developer.android.com/studio – specifically, https://dl.google.com/android/repository/sdk-tools-linux-4333796.zip.

We can get it with:

wget https://dl.google.com/android/repository/sdk-tools-linux-4333796.zip

and then unzip it. I’m going to dump the contents of the resulting tools directory into /usr/lib/android-sdk/tools with an “rsync -rv ./tools/ /usr/lib/android-sdk/tools/” and hope for the best that this doesn’t cause additional problems. Ah, no, that was a mistake. They are my own user tools, and should live in my own home directory with myself as owner. Undo. Repeat. Oh no. It looks like it’s a Java version problem. Could it be that it won’t run on the version of Java that’s installed by default (11), but requires the old OpenJDK version 8?!?

This is where I start to feel really queasy:

sudo apt install openjdk-8-jdk

What exactly is that going to do? How well will the system manage two versions of Java co-existing? How will programs select between them? Well, it turns out that it doesn’t manage this for you. I really, really want to run Unity in some sort of containerized fashion, at arm’s length behind a cordon sanitaire – but it’s one of the most resource intensive applications I’ll be running, so I want it running close to the metal. It’s like DLL hell: do I sacrifice the modern install of Java just to get Unity working?

It turns out there is a little command called:

update-java-alternatives --list

You can select the version you want to use at a particular moment, so before we run the sdkmanager let’s first run “sudo update-java-alternatives -s java-1.8.0-openjdk-amd64”. This breaks a few things in Mozilla’s Firefox, but the sdkmanager then works (see the recap below). Now I need to figure out where to position the unpacked tools, and how to hook them into the environment properly… plus, just to add salt to the wounds, my installation of android-sdk seems to have become broken by something I’ve done thus far (the symlinks in /usr/bin to tools like adb are broken, as the platform dir is now missing from /usr/lib/android-sdk/tools). Honestly. It’s time to remove the android-sdk apt package and start again from scratch there, perhaps taking a different path.
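
For the record, the full switch-and-restore dance looks like this (the exact alternative names come from the --list output; java-1.11.0-openjdk-amd64 is what the default is called on my system – yours may differ):

update-java-alternatives --list
sudo update-java-alternatives -s java-1.8.0-openjdk-amd64
# ...run the sdkmanager...
sudo update-java-alternatives -s java-1.11.0-openjdk-amd64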

After removing and purging the apt package for the android-sdk by way of napalm, I discovered that there are two sub-boxes in the Unity Hub’s module installer:

Seriously, how is anybody supposed to know?!? The Android Build Support box was already ticked. You’d think that would mean it installs all the child options – otherwise it should show a “-” rather than a tick. I could cry sometimes. Before Unity has even managed to download the SDK/NDK/JDK, I can already see problems ahead: https://issuetracker.unity3d.com/issues/ubuntu-unable-to-use-andriod-ndk-and-sdk-tool-installed-via-hub

Yep. What a surprise. Everything is now installed correctly according to Unity:

…unfortunately, trying to build their tutorial project results in:

Why is it always like this with Unity?! I literally needed the last six months to let the frustration of using Unity for this project over the course of 2018 wash out of my bones, and now it’s back again. It’s like trauma. It’s giving me PTSD and a desire to throttle somebody. Calm. Breathe. Just another, another, another obstacle to climb over.

I have tried:

chmod +x Editor/Data/PlaybackEngines/AndroidPlayer/OpenJDK/jre/bin/*
chmod +x Editor/Data/PlaybackEngines/AndroidPlayer/OpenJDK/bin/*

That may or may not have changed something, but nothing that I notice.

In ~/.config/unity3d/Editor.log I can see a bit more detail:

Android PostProcess task “Detect Java Development Kit (JDK)” took 64.3625 ms
UnityException: Android SDK not found
Unable to locate Android SDK.

At least we know it’s finding the JDK. So, why not the SDK? As I currently understand it, Unity has installed everything in a self-contained manner. The SDK is in ~/Applications/UnityEditors/2019.2.17f1/Editor/Data/PlaybackEngines/AndroidPlayer/SDK. I don’t think that’s bad, and would approve – if it worked.

The aforementioned issue on the Unity tracker points out that the “tools” directory is missing from …Editor/Data/PlaybackEngines/AndroidPlayer/SDK – and indeed it is. Let’s try once more to unzip a copy of the SDK tools and plonk them in there. Wow! It looks like the SDK is now discovered by Unity, and it’s just complaining about an inadequate API level for VR (needs to be a minimum of 19 – check in Player Settings in the Unity IDE).
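
Concretely, the fix amounted to something like this (the SDK path is where Unity Hub put things on my machine; the zip is the one downloaded earlier):

cd ~/Applications/UnityEditors/2019.2.17f1/Editor/Data/PlaybackEngines/AndroidPlayer/SDK
unzip ~/Downloads/sdk-tools-linux-4333796.zip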

It’s a big compilation job… and my shiny new XPS 7590 is struggling. The CPU is stepping down continually as it gets hot enough to boil an egg… that’ll be another area for improvement: undervolting the CPU.

watch "lscpu | grep MHz" – a nice way to monitor your CPU speed
My CPU is on fire!

Every 1,0s: lscpu | grep MHz                ben-XPS-15-7590: Sat Dec 28 20:24:19 2019

CPU MHz:             3252.126
CPU max MHz:         4500,0000
CPU min MHz:         800,0000

IT BUILT! Now let’s try “Build and Run”! Oh, I’m looking forward to this so much. It’ll finally work and deploy straight to my phone! 😀

By this point, are you surprised? I’m regarding this as an inevitability. It looks, at first glance, like Unity has suddenly decided that I’m running Windows rather than Linux.

Every. Single. Step. Forward. Must. Be. Paid. For. In. Pain.

For starters, everything in the Unity-installed SDK/platform-tools directory has permissions that prevent execution. Really?!? Let’s try “chmod +x *”. That seems to get us another step forward. I then needed to set up developer mode on my new phone, whereupon I got… dadada….


Doh!

Will this ever end? Let’s hope this is going to be simple and do what it suggests. I guess it means this?!?

FINALLY!!! The .apk is deploying to my phone, and is playable.

Now that that’s working, it’ll be time to check that the setup can handle a basic VR game from https://blogs.unity3d.com/2015/12/09/get-started-with-vr-sample-pack-learning-articles/

If THAT works (which at this point really shouldn’t be impossible), the right move will be to take a bit more care setting up a Continuous Integration process, and to streamline a few things to help development, before pushing on with “retro-fitting” EyeSkills. I think I’ll carry on the next steps in another post on another day.

IR retinal safety

At the moment all software work is paused until we have the hardware ability to track the eyes as we desire. Part of this process is to ensure, in our specification of the eye tracking cameras, that the LEDs we use to both illuminate the eyes and help track their position do not produce so much infra-red radiation that they become a health risk.

Given that this is open-hardware, let’s keep sharing information! One of the most helpful resources we’ve found is here https://www.renesas.com/eu/en/doc/application-note/an1737.pdf

Although the IR LEDs we are selecting fall well below the required safety threshold, we still need to consider the effect of using them in an array (well, three in an L shape), and also the impact of the focusing lens in the headset.

The relevant standard is IEC-62471. A summary of this standard is here https://smartvisionlights.com/wp-content/uploads/pdf/IEC_62471_summary.pdf

You may notice that the permissible exposure time for an exempt IR source on the eye is 1000 seconds (about 16.6 minutes – pretty much exactly the length of time to which we want to restrict use of the daily app). A Group 1 device would only allow us 100 seconds, which might still be adequate if used sparingly (pulsed, and restricted to critical moments of measurement), but it would be best to be sure our radiation levels are under the threshold of a Group 1 device.
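
To make that concrete, here’s the exposure-limit curve as I understand it from the summary above – a sketch only, so verify the constants against the actual standard before relying on them:

def ir_eye_limit_w_m2(t_seconds):
    """Approximate IEC 62471 IR (cornea/lens) irradiance limit, in W/m^2.

    E <= 18000 * t^-0.75 for t <= 1000 s, flat 100 W/m^2 beyond -
    which is where the 1000 second figure comes from.
    Verify these constants against the standard before trusting them!
    """
    return 100.0 if t_seconds > 1000 else 18000.0 * t_seconds ** -0.75

print(ir_eye_limit_w_m2(100))   # ~569 W/m^2 over a 100 s window
print(ir_eye_limit_w_m2(1000))  # ~101 W/m^2, converging on the 100 W/m^2 long-term limit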

Affordable EyeTracking – working towards a new camera PCB

We keep coming back to the basic fact that we need to know what the eyeballs are doing. This requires an eye tracker which does what we want (particularly, one with a software API which doesn’t assume both eyes are coordinating in the usual way!) at a price we can afford. The corollary: we’re going to have to do it ourselves.

Tobii-style eye trackers are massively overpowered and overpriced for what we are attempting, and we suspect it would take longer to get a collaboration going with PupilLabs (there are issues with their mobile software stack, API, licensing, and their choice of cameras) than it will take us to roll our own solution.

In previous posts I introduced the PCB we found, which combines two webcams into a single output, and mentioned that we’ll need to have it modified.

First, the cable length between the two cameras:

A suggested rough layout
  1. We move the second camera onto its own PCB (b). That PCB serves two purposes: to hold the IR LEDs, and to provide a mechanical surface that can be held in place reliably.
  2. We make the cable some 12 cm long (make a mockup with Moritz to check the necessary lengths), with any excess hidden in the cavity under the eyepiece “shelf”.

Looking more closely at the PCB:

We need to find a new layout which isn’t radically different from the existing one. If it is too different, the manufacturer is likely to charge us a fortune for a redesign. But we also *must* reach a point where we have at least an “L”-shaped positional layout for the IR LEDs (so we can judge pupil movement distances and orientations), plus (if it can be done without introducing another manufacturing step) a way of reliably physically fixing the unit. We also need to specify a slightly longer piece of cable without shielding, so we can run the cabling up inside the holder effectively.

I consider our first step to be producing a simple physical prototype to which we can temporarily affix some IR LEDs, to see how it performs and how it could be fixed to the headset. The next question is – should this prototype extend the actual existing PCB so we can experiment with the real unit, or just model the desired physical dimensions of the unit? Both are probably necessary, but, counter-intuitively, it may be best to start with a quickly printable harness where we place our IR LEDs… which we then separate, in a later step, into the new PCB design and the holder design. Let’s have a go at this!

So here’s a first super simple stab at it. The slot allows us to place the camera PCB into it with a dab of hot glue (or just a snug fit), and we can trivially modify the “holderFaceAngle” variable to find a good angle that allows the eyes to be monitored. Next we can try fixing IR LEDs to the top/bottom strip in different positions to see what works best.

Where could we go from here? Well, the existing PCB is actually on a flexible base wrapped around a metal block for heat dissipation. I’ve set this up so that the manufacturer can add “ears” at the top and the bottom for the new IR LEDs. The question is whether or not they can extend the metal base (it might be a standardised part). Obviously, it would be best if they could, as we’d then have the most reliable IR positioning, and we’d only have to increase the size of the slot to accommodate the new PCB. The second advantage would be that we could leave an area of naked metal at the rear of the unit, against which we could press the surface of the mount. This (when gluing, for instance) would let us be sure the unit is positioned really flush (at the correct angle) with the holder. It might really come down to something this simple.

We would then extend the base of the holder with some feet that extend through pre-cut holes in the camera core (as in the previous instance), so that (with the help of a mounting tool) an assembler can be sure the holder is correctly mounted each and every time.

Fits snugly – excuse the terrible 3D print; the filament I’m using seems to be ageing very badly!

Going back in time

This is a quick mental note: what would an anaglyphic app look like with a “randomised” and continually changing assignment of colour to each eye – or within a VR setting? Would this stop people “cheating” by blinking to assemble a complete image of what’s happening?

Perhaps use Perlin noise to generate a dynamic texture (https://www.youtube.com/watch?v=bG0uEXV6aHQ), which we then use in a custom shader to assign black to one eye/colour, white to the other, and anything intermediate to each eye/colour in proportion.

I think this might be worth trying out. What I have realised, however, is that nothing is really worth exploring anymore without an eye tracking headset.

Another approach might be to use Voronoi-style mappings (https://www.youtube.com/watch?v=EDv69onIETk), BUT it actually makes me wonder if the simplest and most effective method would be to divide the space into cubes and just randomly assign each cube to one side or the other.
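
A toy sketch of that last idea, just to pin it down (Python for brevity – the real thing would be a Unity shader or C# script; the cube size and seed are arbitrary choices):

import random

CUBE = 0.2  # cube edge length in world units (arbitrary)

def eye_for_point(x, y, z, seed=0):
    """Deterministically assign the cube containing (x, y, z) to one eye.

    Changing the seed every interval gives the 'continually changing
    assignment' - the same point flips eyes unpredictably over time.
    """
    cube = (int(x // CUBE), int(y // CUBE), int(z // CUBE))
    rng = random.Random(hash((seed,) + cube))  # same cube + seed -> same eye
    return 'left' if rng.random() < 0.5 else 'right'

print(eye_for_point(0.1, 0.5, 1.3, seed=1))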

Data Provenance

It really matters that we know exactly what system setup generated any data we collect – the idea being that experiments can be re-run (and thus independently verified) in precisely the same technical context as the originals.

To achieve this we store the following information underneath anything else captured by a researcher:

Basic information capture

You might notice the buildVersion variable. This is actually a build id from .git, so that a researcher can re-create precisely the same environment as a prior experiment (e.g. from a different researcher). For now, it assumes all builds are stored in the same repository.

How does this buildVersion actually get there? In /[projectPath]/Assets/Plugins/Android/mainTemplate.gradle (around line 82) you’ll find a little hack that creates a custom class called Version during the Unity build process:

buildTypes {
    debug {
        ...
    }
    release {
        // We want to provide information to the app about the .git version it was built from.
        // For repeatable open science, and to handle changing schemas, we need to be able to
        // pin an InfoBase item to a specific build.
        def stdout = new ByteArrayOutputStream()
        exec {
            // Ask git for the short commit hash (the quotes in the format string land in
            // the generated C# source, turning the hash into a string literal)
            commandLine 'git', 'log', '--pretty=format:"%h"', '-n', '1'
            standardOutput = stdout
        }
        // Write it out to a class (Version) which our InfoBase can then instantiate
        // to extract the "version"
        new File("DIR_UNITYPROJECT/Assets/Scripts/Version.cs").text =
            "namespace EyeSkills { public class Version { public string version = $stdout; }}"
    }
}

That’s why and how we can reference Version.cs to get the buildVersion. However, be warned that refactoring can cause Version.cs to move location. You will then get uncomfortable gradle build failure errors from Unity. Check the path to Version.cs if you see problems with gradle… it might just be the cause!