
Motivation

We have identified four underlying motivations for the project. Whilst these build atop one another, we try to isolate them conceptually to help us manage our limited development resources.

Aspect 1 – Demonstrating the impossible is possible – Guided – Verifying abilities

We are using virtual reality to establish visual environments in which the person might be able to see with both eyes, coordinate them, fuse their images, and perceive depth – and in which we can discover any degree to which they may also be able to consciously straighten their eyes.

At this point, we are not trying to discover exactly which combination of factors might prevent these abilities (although this is critically important to our understanding), nor the precise and minimal set of factors which might enable them – we are only trying to demonstrate that they are at all possible (to falsify the hypothesis that they are not).

At this stage we are comparing the experiences and measurements made within EyeSkills against physical “standard” eye tests performed by a professional. Are we able to stimulate a wider range of visual ability than the person is able to use in everyday life?

This level is particularly relevant for the “hopeless cases” – the amblyopic/strabismic participants whose conditions are severe enough that they only see with one eye at a time. This has been the case for our first three user testers, and all three achieved varying levels of biocular and binocular vision – a demonstrated phenomenon that many experts currently assume impossible.

Aspect 2 – Demonstrating the impossible is possible – Autodidactic – Sensitisation

If there’s one thing humans are good at learning, it’s the assumption that the way they personally experience reality reflects “the normal” reality experienced by everybody.  Once the “norms” have been established, it is rare to find people willing to push beyond them – after all, how should anybody even know what they are supposed to be looking for?  It is no different with vision.

Our participants have generally been intellectually aware that they see the world in a fundamentally different way to most people – but they cannot imagine how that difference might look. Because they have never seen with both eyes simultaneously, fused the images from both eyes into a single image, or perceived depth, they still assume that – although others may be seeing something different – these ideas of “fusion” or “depth” will be similar enough to be recognisable from within their existing “normal” experience.

Our experiences have shown us that the participants’ “missing” abilities are, in fact, simply sitting there waiting to be activated and accessed. When these abilities are suddenly activated, however, there are no trumpets sounding from on high. No alarm bells or status updates flood the brain, screaming “hey! hey! Over here! Something big has happened!”. In fact, it’s more of a feeling of “well, that’s a bit odd”.

In the second aspect we build on the first by providing audio guidance in the app, and by allowing the participant more time to explore their visual environment – to control visual dimensions in such a way that they can find the thresholds at which abilities are suppressed or activated, impossible or possible. We are trying to get participants to see things they have never experienced before – and as such, suggesting what they may be about to experience is precisely the right approach to sensitising them for experiences we can assume are common across the human visual system.

This process of sensitisation is also a process of familiarisation. What we have experienced with our first participants is that they start to become aware of new sensations, which may at first be stressful, but which quickly become recognised as “non-threatening” by the mind. There is a process to learning to notice when both eyes are producing an image simultaneously. There is a process to learning what it really looks like when two images overlap perfectly. There is a process to learning what it feels like to switch the fixing eye (it is a muscular sensation).

In aspect two, the participant must build up a sensitivity to these new sensations, a familiarity which marks them as okay, and a confidence that they are meaningful and can be strengthened with practice.

Aspect 3 – Falsifying and Quantifying

Several of our users have already admitted to regularly faking optical tests, because they have become used to “saying what people want to hear” as the path of least resistance, and because they assume it may harm them if the world were to become aware of their disabilities (think – driving licences!).

In aspect 2 we work with suggestive descriptions to help the participant understand what they are looking for, and allow them to explore freely. This free (and private) exploration may remove some of the incentives and fears behind dishonest impulses, but the very suggestiveness increases the risk of self-deception – of participants reporting what they hope to see simply because it was suggested.

Without a professional guide personally assisting the participant, how can we double-check that the participant is not fooling themselves as much as anybody else? How can we falsify the suggestion that the participant is simply seeing what they want to see rather than what is actually there?

In the third aspect, once participants feel comfortable with their new abilities, we challenge them to use those abilities in a competitive environment. This environment may (initially) be limited to single-person challenges, but it is conceivable to open up multi-person challenges shared by participants with a near-identical set of abilities.

By creating competitive challenges we can legitimately push the participant to demonstrate exactly what they are capable of, right up to the point of failure. In this way, we are free to manipulate the visual dimensions underlying the experience – to help the participant actually map out the current limits of their abilities.

Aspect 4 – Collective research platform

The largest potential in this project lies in the sheer number of people who need it. If participants choose to contribute their experiential data to a collective gathering of knowledge, much becomes possible. Such data may help researchers uncover more about brain plasticity, the processes through which vision becomes consciously perceived, and precisely which pathways lead to the fastest and most lasting reactivation of suppressed visual abilities.

For example, precisely which combinations of abilities are blocked/enabled by which combinations of visual factors (e.g. complexity/overlap/background visual noise/object motion etc.)? The number of possible combinations and inter-relationships is daunting. An exhaustive exploration would require many (hundreds of) thousands of test subjects and compartmentalised test runs of different experimental setups – which is precisely our goal: to create the basic tools enabling one of the (potentially) largest open-science experiments of the decade.
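To make the scale concrete, here is a rough back-of-the-envelope sketch in Python. The factor names and level counts are purely hypothetical assumptions for illustration, not the project’s actual experimental design:

```python
# A rough illustration (hypothetical factors and level counts) of why
# exhaustive exploration of the visual-factor space explodes so quickly.
from itertools import product

# Assumed example factors – the real set and granularity are open questions.
factors = {
    "complexity": ["low", "medium", "high"],
    "overlap": [0.0, 0.25, 0.5, 0.75, 1.0],
    "background_noise": ["none", "static", "dynamic"],
    "object_motion": ["still", "slow", "fast"],
}

configurations = list(product(*factors.values()))
print(f"{len(configurations)} distinct experimental setups")  # 3*5*3*3 = 135

# Even at this coarse granularity, crossing each setup with each combination
# of abilities – and recruiting enough participants per cell for statistical
# power – quickly demands thousands of compartmentalised test runs.
```

Even this toy example, with only four factors at coarse granularity, yields 135 setups before a single ability combination or participant is accounted for – hence the need for very large participant numbers.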

What does this mean for the app development?

Aspect 1 is relevant to eye doctors – perhaps of particular relevance in exploring post-operative abilities. Because doctors’ time is so limited, Aspect 2 is relevant as a “take-home” system for the participant to practise (sensitise/familiarise/gain confidence) in their own time. Aspect 3 can give the eye doctor (personal), and the wider research community (anonymised through aspect 4), access to real data about progress.

The implication for what we have already done is that we made an incorrect and confounding decision in trying to migrate “calibrations” to “abilities”. Having seen the need to give participants the chance to discover their own abilities at their own pace, we discarded the kind of diagnostic calibrations which professionals still need. The “abilities” scenes may build on pre-fab elements from the “calibrations”, but they are a separate branch of the app, just as “challenges” are.

The app must have the ability to switch modes (whether at the doctor’s or at home, for instance). This may have several ramifications – for example, USB control and remote monitoring for aspect 1 (for the professional guiding the process), and headset-only control for aspect 2 (to reduce cost and increase immersion).
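As a minimal sketch of how such a mode switch might be modelled – all names here are hypothetical illustrations, not the actual code base – the distinction could look something like this:

```python
# Hypothetical sketch: making each mode's input and monitoring
# requirements explicit, so they cannot drift apart silently.
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    CLINIC = auto()   # Aspect 1: a professional guides the process
    HOME = auto()     # Aspect 2: autodidactic practice at home

@dataclass(frozen=True)
class ModeConfig:
    usb_control: bool        # external controller for the professional
    remote_monitoring: bool  # live view of what the participant sees
    headset_only: bool       # minimal hardware for cost and immersion

MODE_CONFIGS = {
    Mode.CLINIC: ModeConfig(usb_control=True, remote_monitoring=True, headset_only=False),
    Mode.HOME: ModeConfig(usb_control=False, remote_monitoring=False, headset_only=True),
}

# Example: the home mode drops USB control and monitoring entirely.
cfg = MODE_CONFIGS[Mode.HOME]
assert cfg.headset_only and not cfg.remote_monitoring
```

Keeping the configuration declarative like this would let the rest of the app query the current mode’s capabilities instead of scattering “am I at the doctor’s?” checks through the scenes.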

In the next section we will consider the migrations to the code base necessary to match these motivations…

Would you like to beta test EyeSkills* or just follow what we are doing?