Our goal is to get objective feedback about eye position and behaviour, which requires some sort of eye tracking. A decent eye-tracking headset costs anywhere between $400 and $10,000… which is just too much for the majority of the world, who earn less than $10 a day. So, let's make it affordable!
Rather than implementing a new VR Headset for EyeSkills, could we push the price-point down even further, by hacking the standard Google Cardboard V2 design? That’s one of the questions we explored last weekend at Careable’s first Hackademy!
The standard design has a capacitive button which touches the screen to provide user input when the user depresses an "origami" lever on the upper right side:
In the manufacturer’s schematics it becomes clear that this is a separate unit… so after pulling it apart…
…we figured, why not replace this with a new core capable of supporting on-device eye tracking? Here’s the first mockup. The tabs at the bottom represent the cameras, the fuzzy tubes are cables, the yellow foam blocks are the USB connectors, and the purple block is the USB hub (all to correct size).
This could be folded back into the existing headset to give us just what we need!
I’m now moving towards programming a parametric model of this interior component which we can prototype using the Hacker Space’s laser cutter. Very much looking forward to going back there this weekend!!!
At the same time, other members of the group have been exploring how to modify a standard v2 cardboard to both improve the ergonomics and reduce the stray light entering the viewing area (after all, people's foreheads and noses vary more than you might realise until you start looking closely!).
Here we are in Potsdam’s wonderful little Maker Space…
And here are some of the wonderful team explaining what we’re doing to members of other teams at the Hackademy….
It was an inspiring weekend, but 7am starts and getting to bed at midnight after solid work three days in a row have taken their toll a little!
So, I’m still trying to get the Eleksmaker A3 laser cutter to work as I need.
So far, I've come to the conclusion that the original version of GRBL installed on the cheap Arduino Nano that comes with the "Mana" board is just no good. I've flashed it to v1.1:
```shell
brew install avrdude
avrdude -c arduino -b 57600 -P /dev/cu.wchusbserial1420 -p atmega328p -vv -U flash:w:grbl_v1.1f.20170801.hex
```
Then I found LaserWeb for OSX, configured it, and had the print head slamming against the side of the case. It took me a while to work out that all my axes were inverted. After swapping the X-axis carriage around, I was still left with an inverted Y-axis, which I solved by sending $3=2 to the board via the console in LaserWeb ($3 is GRBL's direction-invert bitmask: $3=2 inverts the Y-axis, while $3=1 would invert the X-axis).
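For reference, $3 is a three-bit mask (bit 0 = X, bit 1 = Y, bit 2 = Z), so the value for any combination of inverted axes can be computed. A tiny sketch (the helper name is my own, not part of any GRBL tooling):

```python
def grbl_dir_invert_mask(invert_x=False, invert_y=False, invert_z=False):
    """Compute the value for GRBL's $3 direction-invert setting.

    $3 is a bitmask: bit 0 inverts X, bit 1 inverts Y, bit 2 inverts Z.
    """
    return (invert_x << 0) | (invert_y << 1) | (invert_z << 2)

print(grbl_dir_invert_mask(invert_y=True))  # prints 2, i.e. $3=2 inverts only Y
```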
The next problem was that the “cuts” were three times larger than they ought to be.
The next step, hopefully, is to send this set of GCODE instructions to the board to configure the number of steps/mm correctly:
```
$0=10     ; step pulse, microseconds
$1=100    ; step idle delay, milliseconds
$3=2      ; Y-axis direction inverted
$10=0     ; send work coordinates in status reports
$30=255   ; max. S-value for laser PWM (is referenced to the LaserWeb PWM MAX S VALUE)
$31=0     ; min. S-value
$32=1     ; laser mode on
$100=80   ; steps/mm in X, depending on your pulleys and microsteps
$101=80   ; steps/mm in Y, depending on your pulleys and microsteps
$102=80   ; steps/mm in Z, depending on your pulleys and microsteps
$110=5000 ; max. rate mm/min in X, depending on your system
$111=5000 ; max. rate mm/min in Y, depending on your system
$112=2000 ; max. rate mm/min in Z, depending on your system
$120=400  ; acceleration mm/s^2 in X, depending on your system
$121=400  ; acceleration mm/s^2 in Y, depending on your system
$122=400  ; acceleration mm/s^2 in Z, depending on your system
$130=390  ; max. travel mm in X, depending on your system
$131=297  ; max. travel mm in Y, depending on your system
$132=200  ; max. travel mm in Z, depending on your system
$$        ; to check the actual settings
```
If I type $$ I see that $100 is currently set to 250 – and 250/80 is roughly 3.1, which matches the 3X scaling I'm seeing… So, let's see how far this takes us…
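The general correction is simple: scale the current steps/mm by the ratio of commanded to measured distance. A quick sketch (the measured value below is illustrative):

```python
def corrected_steps_per_mm(current_steps, commanded_mm, measured_mm):
    """Scale GRBL's steps/mm setting ($100/$101) so that a commanded
    move produces the distance actually measured on the workpiece."""
    return current_steps * commanded_mm / measured_mm

# With $100=250, a commanded 10 mm line comes out roughly 3.125x too
# large, i.e. about 31.25 mm:
print(corrected_steps_per_mm(250, 10, 31.25))  # -> 80.0
```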
Yes. Lovely. $100 and $101 (the steps/mm settings) are the pivotal instructions.
Sadly… the next problem seems to be that 2.5W just isn't enough. I can't get through 1mm card at 100% power, even cutting at only 250mm per minute!
I’ll have to organise some test files to see which speeds/power ratios/repetitions work best.
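A first stab at generating such a test file. This is a hypothetical sketch of my own: it emits a grid of small squares, sweeping feed rate along one axis and laser power (S value, scaled 0–255 to match $30=255) along the other. The exact dialect may need tweaking for GRBL 1.1 laser mode:

```python
def power_speed_test_grid(feeds=(100, 250, 500), powers=(64, 128, 255),
                          size=5, gap=3):
    """Emit G-code for a grid of squares: one feed rate (mm/min) per
    column, one laser power (S value) per row."""
    lines = ["G21 ; mm", "G90 ; absolute coords", "M4 ; dynamic laser mode"]
    for row, s in enumerate(powers):
        for col, f in enumerate(feeds):
            x = col * (size + gap)
            y = row * (size + gap)
            lines += [
                f"G0 X{x} Y{y} S0",                # travel move, laser off
                f"G1 X{x + size} Y{y} F{f} S{s}",  # bottom edge
                f"G1 X{x + size} Y{y + size}",     # right edge
                f"G1 X{x} Y{y + size}",            # top edge
                f"G1 X{x} Y{y}",                   # left edge
            ]
    lines.append("M5 ; laser off")
    return "\n".join(lines)

print(power_speed_test_grid())
```

Burning one grid per material, then checking which square actually cut through, should pin down the workable speed/power combinations quickly.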
We keep coming back to the basic fact that we need to know what the eyeballs are doing. This requires an eye tracker which does what we want (particularly, with a software API which doesn’t assume both eyes are coordinating in the usual way!) at a price we can afford. The corollary to this is – we’re going to have to do it ourselves.
Tobii-style eye trackers are massively overpowered and overpriced for what we are attempting, while we suspect it would take longer to get a collaboration going with PupilLabs (there are issues with their mobile software stack, API, licensing, and their choice of cameras) than to roll our own solution.
In previous posts I introduced the PCB we found, which integrates two webcams into a single output, and mentioned that we'll need to have it modified.
First, the cable length between the two cameras:
- The average adult's PD (pupillary distance) is between 54 and 74mm; kids' are between 43 and 58mm.
- We therefore need to span between 43 and, say, 75mm.
- We also need to be able to route that cable around the nose!
- We need a way of containing loops (for shorter lengths) that don’t damage the cable.
- We move the second camera onto its own PCB (b). That PCB serves two purposes: holding the IR LEDs, and providing a mechanical surface that can be held in place reliably.
- We make the cable some 12cm long (make a mockup to check the necessary lengths with Moritz), where any excess can be hidden in the cavity under the eyepiece "shelf".
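To sanity-check the chosen 12cm against the PD range above, here's a rough sketch. The nose-detour allowance is a made-up placeholder until we've measured the real routing on the mockup:

```python
def cable_slack_mm(cable_mm=120, pd_mm=63, nose_detour_mm=20):
    """Rough slack estimate: cable length minus the camera-to-camera
    span (the PD) and an allowance for routing around the nose.
    The detour allowance is an illustrative assumption."""
    return cable_mm - pd_mm - nose_detour_mm

# Across the PD range we need to accommodate (43-75 mm):
for pd in (43, 63, 75):
    print(pd, cable_slack_mm(pd_mm=pd))
```

Even at the widest PD there is slack left over, which is exactly the excess the containment loops under the "shelf" need to absorb without kinking the cable.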
Looking more closely at the PCB:
We need to find a new layout which isn't radically different from the existing one; if it is too different, the manufacturer is likely to charge us a fortune for a redesign. But we also *must* reach a point where we have at least an "L"-shaped positional layout for the IR LEDs (so we can judge pupil movement distances and orientations), plus (if it can be done without introducing another manufacturing step) a way of reliably fixing the unit physically. We also need to be careful to specify a slightly longer piece of cable without shielding, so we can run the cabling up inside the holder effectively.
I consider our first step to be producing a simple physical prototype where we can temporarily affix some IR LEDs and see how it performs and how it could be fixed to the headset. The next question is: should this prototype extend the actual existing PCB so we can experiment with the real unit, or just model the desired physical dimensions of the unit? Well, both are probably necessary, but, counter-intuitively, it may be best to start with a quickly printable harness where we place our IR LEDs… which we then separate out, in a second step, into the new PCB design and the holder design. Let's have a go at this!
So here’s a first super simple stab at it. The slot allows us to place the Camera PCB into it with a dab of hot glue (or just a snug fit), where we can also really simply modify the “holderFaceAngle” variable to find a good angle that allows the eyes to be monitored. Next we can try fixing IR LEDs to the top/bottom strip in different positions to see what works best.
Where could we go from here? Well, the existing PCB is actually on a flexible base wrapped around a metal block for heat dissipation. I've set this up so that the manufacturer can add "ears" at the top and the bottom for the new IR LEDs. The question is whether or not they would extend the metal base (it might be a standardised part). Obviously, it would be best if they could extend it, as we'd have the most reliable IR positioning; we'd then only have to increase the size of the slot to accommodate the new PCB. The second advantage would be that we could leave an area of naked metal at the rear of the unit, against which we could press the surface of the mount. This (when gluing, for instance) would let us be sure the unit really sits flush (at the correct angle) against the holder. It might really come down to something this simple.
We would then extend the base of the holder with some feet that extend through pre-cut holes in the camera core (as in the previous instance) so that (with the help of a mounting tool) an assembler can be sure that the holder is correctly mounted each and every time.
I’ll just leave the link here for future reference 😉 https://journals.sagepub.com/doi/full/10.1177/2055668318773991
At the moment all software work is paused until we have the hardware ability to track the eyes as we desire. Part of this process is to ensure, in our specification of the eye-tracking cameras, that we do not produce so much infra-red radiation (from the LEDs we use both to illuminate the eyes and to help track their position) that it poses a health risk.
Given that this is open-hardware, let’s keep sharing information! One of the most helpful resources we’ve found is here https://www.renesas.com/eu/en/doc/application-note/an1737.pdf
Although the IR LEDs we are selecting fall well below the required safety threshold, we still need to consider carefully the effect of using them in an array (well, three in an "L" shape), and also the impact of the focusing lens in the headset.
The relevant standard is IEC-62471. A summary of this standard is here https://smartvisionlights.com/wp-content/uploads/pdf/IEC_62471_summary.pdf
You may notice that the permissible exposure time for an Exempt-group IR source on the eye is 1000 seconds (16.7 minutes, pretty much exactly the length of time to which we want to restrict daily use of the app). A Group 1 device would only allow us 100 seconds, which may still be adequate if used sparingly (pulsed, and restricted to critical moments of measurement), but it would be best to be sure our radiation levels stay below the Group 1 threshold, keeping us in the Exempt group.
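As a back-of-the-envelope check (a sketch only, not a substitute for proper IEC 62471 measurement; the LED figures below are made-up placeholders), the corneal IR irradiance from a small LED array can be estimated from each LED's radiant intensity and its distance from the eye:

```python
def corneal_irradiance_w_m2(n_leds, radiant_intensity_mw_sr, distance_m):
    """Point-source estimate: E = n * I / d^2.

    I is given in mW/sr and converted to W/sr. Only valid in the
    far field, i.e. when the distance is large relative to the LED die."""
    return n_leds * (radiant_intensity_mw_sr / 1000.0) / distance_m ** 2

# Three hypothetical 20 mW/sr LEDs at 5 cm from the eye:
e = corneal_irradiance_w_m2(3, 20, 0.05)
print(round(e, 6))  # ~24 W/m^2, comfortably under the 100 W/m^2
                    # Exempt-group corneal limit for t > 1000 s
```

The headset's focusing lens sits between the LEDs and the eye, so the real figure needs to be measured rather than calculated; this only tells us whether we're in the right ballpark before ordering parts.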
Before we progress any further with the software, we need to know what’s actually happening to the eyes. This means monitoring each eye with some pupil tracking, exposing something like a set of vectors for eye movement to Unity, so that we can measure the effect of our virtual environments on eye position.
This has been a PITA. For low cost, we need an Android OTG (On-The-Go), UVC (USB Video Class) compatible camera with a focal distance of about 6cm, and, best of all, one that doesn't need a hub (which causes all manner of issues when trying to observe both eyes simultaneously) but instead combines the images from two cameras into a single double-width image which appears to Android/Linux as a single camera.
We thought we had found a supplier in China who could produce the module we needed as a modification of a product they already had – but we've been going around in circles for half a year now… continual misunderstandings and miscommunication? Perhaps. I suspect we just aren't offering order sizes they are interested in.
In the meantime, Rene has stumbled across a really wonderful module. It’s almost an order of magnitude more expensive than we’ve been aiming at – but at only 80EUR it’s still massively affordable.
We aren’t using it as intended of course (to create a stereoscopic image) but instead, are going to use it to simply monitor each eye simultaneously. It’s very convenient that as a stereoscopic camera it has the correct (on average) horizontal distance between cameras.
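Since the module presents itself as a single camera delivering one side-by-side double-width frame, splitting it into per-eye images is trivial. A minimal sketch with NumPy (the frame dimensions are an assumption; substitute whatever the module actually delivers):

```python
import numpy as np

def split_stereo_frame(frame):
    """Split a side-by-side double-width frame into (left, right)
    halves, one per eye. Assumes an even pixel width."""
    h, w = frame.shape[:2]
    half = w // 2
    return frame[:, :half], frame[:, half:]

# e.g. a fake 480x1280 grayscale frame -> two 480x640 eye images
frame = np.zeros((480, 1280), dtype=np.uint8)
left, right = split_stereo_frame(frame)
print(left.shape, right.shape)  # (480, 640) (480, 640)
```

Each half can then be fed independently into whatever pupil-tracking pipeline Rene settles on, with no hub and no synchronisation headaches.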
Next, Rene will start looking at how to handle the vision processing, while I try to drag the software back out of its coffin and update it to the most recent version of Unity – while implementing a few new ideas I have.
Unfortunately, it’s going to be *very* part-time for the foreseeable future, but something is happening at least.