1 min read

I am fascinated by when suppression occurs and what causes it. Our quite simple approaches to triggering conflict suggest that the idea of conflict neurons makes sense, but there are many more ways we could test the edges of when and why conflict occurs, and push those boundaries.

Simplistically, we could use “blinds” to show each eye a different half of a common image (e.g. the training videos or live camera stream), then gradually expose more of the same image to each eye. At which point is conflict stimulated? Those blinds could be interleaved, or composed of a circular patch of the image in one eye and the opposite surround in the other.
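As a rough illustration of the idea, here is a minimal sketch of complementary half-image blinds with a tunable overlap. The function name and the column-wise representation are my own invention, not anything from EyeSkills itself:

```python
def make_blinds(width, overlap):
    """Per-eye visibility masks, one boolean per image column.

    overlap = 0.0 -> each eye sees a disjoint half of the image;
    overlap = 1.0 -> both eyes see every column.
    Gradually raising `overlap` exposes more of the same image to
    both eyes, probing the point at which conflict is triggered.
    """
    half = width // 2
    extra = int(half * overlap)  # columns revealed to both eyes
    left = [x < half + extra for x in range(width)]   # left eye: left half + overlap
    right = [x >= half - extra for x in range(width)]  # right eye: right half + overlap
    return left, right

# With no overlap, every column is visible to exactly one eye.
left, right = make_blinds(8, 0.0)
assert all(l != r for l, r in zip(left, right))
```

An interleaved variant would simply alternate the boolean pattern (e.g. even columns to one eye, odd to the other), and a circle/surround variant would test membership in a disc rather than a half-plane.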

Perhaps even more interestingly, we could present different portions of the colour space to each eye, as different colours actually trigger different retinal neurons. In the RGB colour space we could randomly switch between presenting different channel combinations to each eye, e.g. R in the left eye and GB in the right… or G in the left eye, B in the right eye, and 0.5R in both, etc.

By continually running through such variants during training sessions across hundreds of thousands of people, we could begin to measure which variants generate higher suppression sensitivity and which lower, by quantitatively measuring when “resets” (when the person loses part of the image) are more common.
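The comparison could be as simple as a reset-rate statistic pooled per stimulus variant. The condition names and counts below are invented purely for illustration:

```python
def reset_rate(sessions):
    """sessions: list of (reset_count, minutes) tuples -> resets per minute."""
    resets = sum(r for r, _ in sessions)
    minutes = sum(m for _, m in sessions)
    return resets / minutes

# Pooled sessions per variant (hypothetical data):
conditions = {
    "R-left / GB-right": [(4, 10), (6, 10)],
    "interleaved blinds": [(1, 10), (2, 10)],
}
rates = {name: reset_rate(s) for name, s in conditions.items()}
# A higher rate suggests the variant provokes suppression more readily.
```

In practice one would of course want per-person baselines and proper statistics rather than a raw pooled rate, but the shape of the measurement is this simple.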

Given the almost endless ways in which the framework can be used to generate suppression (for example, by altering the relative scale of each eye's image, similarly to aniseikonia), this could be a phenomenal way to generate truly useful raw data about the basics of human visual perception.

Would you like to beta test EyeSkills* or just follow what we are doing?