
Before we progress any further with the software, we need to know what’s actually happening to the eyes. That means monitoring each eye with pupil tracking and exposing something like a set of eye-movement vectors to Unity, so that we can measure the effect of our virtual environments on eye position.
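As a very rough sketch of what “exposing something like a set of vectors” to Unity could mean in practice: the tracking process could stream a small per-frame payload that a Unity script then reads. Everything below – the field names, the port, the JSON format, and the UDP transport itself – is a hypothetical illustration, not a settled design:

```python
import json
import socket
import time

# Hypothetical payload: one 2D gaze-offset vector per eye, in
# normalised camera coordinates, plus a capture timestamp.
# Host, port, field names and UDP itself are illustrative guesses.
UNITY_HOST, UNITY_PORT = "127.0.0.1", 5005

def send_eye_vectors(sock, left_xy, right_xy):
    payload = {
        "t": time.time(),
        "left":  {"dx": left_xy[0],  "dy": left_xy[1]},
        "right": {"dx": right_xy[0], "dy": right_xy[1]},
    }
    sock.sendto(json.dumps(payload).encode("utf-8"),
                (UNITY_HOST, UNITY_PORT))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_eye_vectors(sock, (0.02, -0.01), (0.03, -0.01))  # dummy values
```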

Sourcing the camera hardware for this has been a PITA. To keep costs low, we need an Android OTG (On-The-Go) compatible UVC (USB Video Class) camera that can focus at around 6cm, and best of all, one that doesn’t need a hub (which causes all manner of issues when trying to observe both eyes simultaneously) but instead combines the images from two cameras into a single double-width image that appears to Android/Linux as a single camera.
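To make the “one double-width image” idea concrete: such a module shows up as a single capture device, and each eye is just one half of the frame. Here is a minimal sketch with OpenCV, assuming device index 0 and a side-by-side left|right layout (both of which are assumptions):

```python
import cv2

# The combined module appears to the OS as ONE camera; assuming
# device index 0 and a side-by-side (left|right) frame layout.
cap = cv2.VideoCapture(0)

ok, frame = cap.read()
if ok:
    h, w = frame.shape[:2]
    left_eye  = frame[:, : w // 2]   # left half of the double-width frame
    right_eye = frame[:, w // 2 :]   # right half
    cv2.imwrite("left_eye.png", left_eye)
    cv2.imwrite("right_eye.png", right_eye)

cap.release()
```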

We thought we had found a supplier in China who could produce the module we needed as a modification of a product they already had – but we’ve been going around in circles for half a year now. Continual misunderstandings and miscommunication? Perhaps, though I suspect we just aren’t offering order sizes they are interested in.

In the meantime, Rene has stumbled across a really wonderful module. It’s almost an order of magnitude more expensive than we’d been aiming for – but at only 80 EUR it’s still massively affordable.

We aren’t using it as intended, of course (to create a stereoscopic image); instead, we’re going to use it simply to monitor each eye simultaneously. It’s very convenient that, as a stereoscopic camera, the horizontal spacing between its two cameras matches the average distance between human eyes.

Next, Rene will start looking at how to handle the vision processing, while I try to drag the software back out of its coffin and update it to the most recent version of Unity, implementing a few new ideas I have along the way.
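For the vision-processing side, one plausible first pass (an assumption on my part, not Rene’s settled approach) is classic dark-blob detection: in a close-up eye image the pupil is usually the darkest region, so threshold the image, take the largest dark contour, and use its centroid as the pupil position. Tracked over time, per-eye centroids relative to a calibrated rest position would give exactly the eye-movement vectors mentioned above:

```python
import cv2

def find_pupil_center(eye_img, dark_thresh=40):
    """Estimate the pupil centre as the centroid of the darkest blob.

    A rough heuristic sketch: assumes the pupil is the darkest region
    in the image; the threshold is a guess needing per-setup tuning.
    """
    gray = cv2.cvtColor(eye_img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (7, 7), 0)
    _, mask = cv2.threshold(gray, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)   # largest dark blob
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) centroid
```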

Unfortunately, it’s going to be *very* part-time for the foreseeable future, but something is happening at least.

Would you like to beta test EyeSkills* or just follow what we are doing?