Thanks to a great suggestion from Rafal in Poland, I have a first working version in which a participant can watch video in a way that takes into account both their eye misalignment and the binocular suppression ratio needed to get both eyes working. When they click “ok”, the video screen for the strabismic eye starts to straighten up, very slowly and gradually.
A prototype is, well, a prototype. It breathes life into an idea, it gives it an interactive form, and this generates new ideas and insights. Alongside functional ideas and insights (hey! wouldn’t it be cool if the user could do *that*!) are engineering ideas and insights (hey! wouldn’t it be cool if we could make it do *that* more quickly/flexibly/reliably!).
Now that most of the functionality I wanted is in place, I’m literally losing sleep over those engineering insights…
So, I’m really looking forward to experimenting with these, finally arrived from China. How cheaply could we retrofit a VR headset to include outward-facing stereoscopic cameras? How cheaply could we implement a different kind of inward-facing eye tracker? (I suspect there are far more cost-effective and patent-free approaches than those currently on the market.) No time now, but as soon as possible!
Now that we’re increasingly certain that alternating strabismus is something in its own right, with different perceptual side-effects from “regular” fixed strabismus, we’re starting to think about how we could better understand/categorise/explore what those perceptions are, in a way that can be reliably reported…
Here are some initial ideas on the very simplest first steps we could take:
I just did an improved job of integrating the new conflict backgrounds (alternative assets) for alternating strabismus.
To access them:

1. Go into binocular suppression.
2. Hold “up” (w) for about four seconds to enter the micro-controller menu.
3. Scroll (w) until you hit “asset swap”, then select (e).
4. Press “up” (w) until you get this asset, then select (e).

Proceed to describe your experiences 🙂
If your external camera is attached and you haven’t got the right kind of hub, you’ll need to debug what’s happening from the Android side, wirelessly. How does this work?

1. With the USB cable attached, switch adb into TCP/IP mode: adb tcpip 5555
2. Connect over the network (replacing phoneip with your phone’s IP address): adb connect phoneip:5555
3. You can now disconnect the cable and attach your camera… the connection is maintained wirelessly.
4. Clear the log buffer and start streaming: adb logcat -c && adb logcat
This is an example of how you can build dynamic assets in the EyeSkills Framework.
The code walk-through:
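As a rough illustration of the pattern (the class, folder, and asset names here are hypothetical, not the actual Framework API): an asset such as a conflict background can be loaded by name at runtime with Resources.Load and applied to a material, so new alternatives can be swapped in without touching scene wiring.

```csharp
using UnityEngine;

// Hypothetical sketch of a dynamic asset swapper; names are illustrative only.
public class ConflictBackgroundSwapper : MonoBehaviour
{
    // Assumed layout: textures live under Assets/Resources/ConflictBackgrounds/
    [SerializeField] private string assetFolder = "ConflictBackgrounds";
    private Renderer targetRenderer;

    void Awake()
    {
        targetRenderer = GetComponent<Renderer>();
    }

    // Load a named background texture and apply it to this object's material.
    public bool SwapBackground(string assetName)
    {
        var texture = Resources.Load<Texture2D>(assetFolder + "/" + assetName);
        if (texture == null)
        {
            Debug.LogWarning("Asset not found: " + assetName);
            return false;
        }
        targetRenderer.material.mainTexture = texture;
        return true;
    }
}
```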
If you have alternating strabismus and want to play with this, take the “development” branch from the eyeskills git repository and report/describe your findings as carefully as you can.
I’ve added an .apk for self-install here.
The Unity/VR learning curve hadn’t left me space to tackle unit testing until now. Well, it had, but my initial encounter was so bad that I decided to leave it until I had a little more time to look again.
At the moment I’m building out the basic structures for handling a more fluid and complete user experience, where a reliable structure and a repeatable experience are essential – so it’s time for Test Driven Development.
I may extend this post as time goes by with tips and tricks as I encounter them, but first off – the unit testing won’t work unless you explicitly create assembly definitions (.asmdef files) for the relevant parts of a project. In this case:
I needed to create a “Hackademy” assembly for the spike (pre-prototype code) I’m developing, which references the “EyeSkills” assembly (so it can find the relevant EyeSkills framework classes). The “Tests” assembly then references the “Hackademy” assembly (so the tests can find the scripts under test). It was also necessary for the “Tests” assembly to explicitly reference the “EyeSkills” assembly, so I could create mocks against interfaces within the Framework.
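That dependency chain can be expressed in the Tests assembly definition file roughly like this (the exact fields vary between Unity versions; this is a sketch matching the assembly names above, not the project’s actual file):

```json
{
    "name": "Tests",
    "references": [
        "Hackademy",
        "EyeSkills"
    ],
    "optionalUnityReferences": [
        "TestAssemblies"
    ],
    "includePlatforms": [],
    "excludePlatforms": []
}
```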
It’s also worth pointing out that, despite running from within the context of a unit test and within the EyeSkills namespace, any classes doing dynamic loading from within the Framework will fail to find classes that exist only in the testing assembly. You need to move them into the same assembly that will be looking for them. A bit weak, really.
Annoying. Clunky. Poorly documented.
As usual, Unity’s IDE also failed to keep track of alterations to the assembly files (I needed to delete and recreate the Framework folder), causing a terrible mess which was only fixed after semi-randomly deleting .meta files and several restarts of Unity. The IDE has now reached a level of software quality where it is almost inevitably a buy-out target for Microsoft.
For all my occasionally deep dissatisfaction, however, when Unity works it works well, handles every situation imaginable, and does get the job done. It’s not perfect, but then, perfect is the enemy of the good!
After switching the XR device to “cardboard”, any active TrackedPoseDriver script will have lost its connection (changes to its state make no noticeable difference to the scene).
Destroy the TPD before the device switch, then re-create and re-initialise it afterwards.
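A minimal sketch of that workaround (the component structure and method names here are illustrative, not the actual project code):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SpatialTracking;
using UnityEngine.XR;

public class CardboardSwitcher : MonoBehaviour
{
    // Assumed: the object carrying the TrackedPoseDriver (e.g. the camera rig).
    [SerializeField] private GameObject cameraRig;

    public IEnumerator SwitchToCardboard()
    {
        // 1. Destroy the stale TrackedPoseDriver before switching devices.
        var tpd = cameraRig.GetComponent<TrackedPoseDriver>();
        if (tpd != null) Destroy(tpd);

        // 2. Switch the XR device; the load only takes effect next frame.
        XRSettings.LoadDeviceByName("cardboard");
        yield return null;
        XRSettings.enabled = true;

        // 3. Re-create and re-initialise the driver so it binds to the new device.
        tpd = cameraRig.AddComponent<TrackedPoseDriver>();
        tpd.SetPoseSource(TrackedPoseDriver.DeviceType.GenericXRDevice,
                          TrackedPoseDriver.TrackedPose.Head);
    }
}
```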
[EGL] Unable to acquire context: EGL_BAD_ALLOC: EGL failed to allocate resources for the requested operation.
Disable multi-threaded rendering in player settings.
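The same setting can be flipped from an editor script instead of clicking through Player Settings – a small sketch using Unity’s editor API (the menu path is made up for illustration):

```csharp
using UnityEditor;

public static class AndroidRenderingFix
{
    [MenuItem("EyeSkills/Disable Android Multithreaded Rendering")]
    public static void DisableMTRendering()
    {
        // Equivalent to unticking "Multithreaded Rendering" in Player Settings.
        PlayerSettings.SetMobileMTRendering(BuildTargetGroup.Android, false);
    }
}
```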
I’ll just leave the link here for future reference 😉 https://journals.sagepub.com/doi/full/10.1177/2055668318773991