Helping people with a lazy eye learn to use both eyes again requires a long-term commitment on their part. That shouldn’t be a drag, so I am hoping to inject a little humour through two animated characters who accompany the participant on their journey, helping to motivate people to continue.
Part of the look and feel I am striving for is a toon shader / cel shader that creates an outline image similar to the one shown in this short tutorial:
Unfortunately, the current release of Blender 2.8 just doesn’t work the same way. You cannot select Backface Culling from the shader menu!
It does exist in the “solid” view as shown below, but that isn’t very useful:
In the rendering modes, the option just isn’t there anymore:
Now, you need to look inside the material you chose for your outline; under its Settings panel you’ll find the “Backface Culling” option.
I hope this helps somebody!
One thing I discovered when testing EyeSkills at the beginning, is that people who have a lazy eye often have a particular psychological relationship to the condition. They would like to understand and perhaps change it, but they have generally come to accept it as “inevitable” and found ways to compensate in everyday life. It is regarded as an unknowable mystery.
The consequence is that initial enthusiasm does not carry over into daily practice, where there are too many hurdles, made worse by the mountains of technical jargon associated with amblyopia and strabismus! To make EyeSkills accessible, I believe it needs a simple metaphor, expressed visually and humorously. This needs to run on a human platform, after all!
I have taught myself 2D animation over the last week, but it is a slow and inefficient process. I can see that I need to move to 3D. To do that I’m now teaching myself Blender as quickly as possible, so that I can go from character modelling, to rigging (with humanoid bones and correct standard names) and skinning, to shape keys, to UV mapping and drawing textures, to animation, to importing into Unity, and finally to creating a cel shader to give the animations the cartoon-like character I’m seeking.
As I run into issues with Blender, I’ll make some notes here.
I’m using Metaballs to model Domi/Supti before creating their meshes. It turns out that naming is very important. If the metaballs stop interacting, be sure to read this: https://docs.blender.org/manual/en/latest/modeling/metas/editing.html#object-families
Unfortunately, having got to the kind of base model I wanted:
I couldn’t find a way to convert it to a mesh without it suddenly becoming “overweight”!
All in all, after speaking with a friend, metaballs may not be the way to start the character modelling. I’ll try again building with meshes from the ground up.
This involves much more direct manipulation of vertices and edge/face creation:
You will probably want to model only half of your object with the Mirror modifier enabled, and only apply the modifier when you are finished. It’s also extremely likely that you’ll want to be able to select vertices hidden behind the visible ones:
Shift-select the items you want to receive the modifier, and finally the item with the modifier you want to copy. Then use Ctrl-L to link the modifiers, BUT you MUST do this with the mouse cursor over the modelling viewport – otherwise nothing will happen!
A problem with this approach is that, if Armature or Mirror modifiers are already set up, you’ll end up breaking your animations and causing new limb parts to appear everywhere! To make use of this approach, model the mesh first (on one side only), then the materials, and leave things like the outline, mirror modifiers, and rigging until last.
This wasn’t a bad starting point : https://www.youtube.com/watch?v=vPgS6RsLIjk
It’s important to remember, when you’ve created a sprite, that you need to add a SpriteSkin. Sometimes it fails to detect the bones in your sprite automatically, but so far that’s been simple to solve: make a few minor changes to the sprite, reapply, and then the “CreateBones” button in the SpriteSkin works. If you have an existing animation, you can drag and drop it onto the sprite. Next step: animation transitions.
In the Animation pane you can create a new animation from the drop-down, but to create links between those elements you’ll need to make sure the Animator window is visible (Window->Animation->Animator). There you can make links between the various states (https://www.youtube.com/watch?v=HVCsg_62xYw). How can we have those state transitions occur without scripting? It turns out that the transitions already happen, but you need to “Play” a scene containing the model.
Where the ordering of limbs is incorrect, go into the SpriteEditor > SkinningEditor and set the individual bone depth by selecting the relevant bones.
The next issue will be transitioning sprite parts (rather than just animating their position). My best guess is that we’ll end up animating enable/disable/active on alternative game objects inside the Animator (I hope). Yep. That was quite intuitive. Place the object you want somewhere inside the bone hierarchy of the sprite (inside the scene editor) and then, in the Animation pane, add a property for that object for “enabled” and animate it.
I suspect that, to enable the pupil to move freely around, I’ll have to add a mask around the “white” of the eye.
This is quite exciting. A lot of opportunities for clearer communication and more interesting and interactive scenes have just opened up 🙂
Ultimately, I’d like to create a 3D representation (mesh) of the mascot, and a toon shader to go with it, which would be the most flexible approach. For now, though, I’ll create the basic poses I need as .SVG, then export to sprites and animate.
It seems that one can create too many bones. The issue I’ve run into is that slicing the sprite prevents the Unity editor from allowing me to create bones which span the different sprite parts (surprise, it’s still buggy). However, using autogeometry to split up the sprite makes it almost impossible to control when the bones overlay each other (e.g. around the eye), and control over things like mouth expression is currently beyond me using the inbuilt approach.
I suspect the way to do this is to create a completely separate multi-sprite for the eye and another for the mouth (with multiple expressions in the multi-sprite), and then to place these inside the bone object hierarchy.
A potential problem with this approach is that alterations to the bone structures seem to invalidate the sprite skin / bone editor in the scene – requiring it to be destroyed and recreated, which will lose all my setup 🙁
So, that worked well (I think).
There are eight sprites along the top, and only the collection of body parts below are skinned. On the left in the scene hierarchy, you can see the other parts are placed under the body bone, with each game object having a “Sprite Renderer” added. Is there a better way? The different parts of the multi-sprite are always visible in the object panel beneath the scene hierarchy.
After switching the XR device to “cardboard”, any active TrackedPoseDriver script will have lost its connection (changes to its state make no noticeable difference to the scene).
Destroy the TPD before the device switch, then re-create and re-initialise it afterwards.
[EGL] Unable to acquire context: EGL_BAD_ALLOC: EGL failed to allocate resources for the requested operation.
Disable multi-threaded rendering in player settings.
So, I’m still trying to get the Eleksmaker A3 laser cutter to work as I need.
So far, I’ve come to the conclusion that the original version of GRBL installed on the cheap Arduino Nano that the “Mana” board comes with is just no good. I’ve flashed it to v1.1:
brew install avrdude
avrdude -c arduino -b 57600 -P /dev/cu.wchusbserial1420 -p atmega328p -vv -U flash:w:grbl_v1.1f.20170801.hex
Then I found LaserWeb for OSX, configured it, and had the print head slamming against the side of the case. It took me a while to work out that all my axes were inverted. After swapping the X-Axis carriage around, I was still left with an inverted y-axis, which I solved by sending $3=2 as GCODE to the board via the console in LaserWeb ($3=2 means invert the y-axis, where $3=1 would mean inverting the x-axis).
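Those $3 values are not arbitrary: GRBL treats $3 as a direction-invert bitmask, with bit 0 for X, bit 1 for Y, and bit 2 for Z. A tiny sketch of how the mask composes:

```python
# GRBL's $3 direction-invert setting is a bitmask: bit 0 = X, bit 1 = Y, bit 2 = Z.
X_BIT, Y_BIT, Z_BIT = 1, 2, 4

print(Y_BIT)          # 2 -> the $3=2 used here (invert Y only)
print(X_BIT | Y_BIT)  # 3 -> $3=3 would invert both X and Y
```

So if both axes had still been inverted after swapping the carriage, a single $3=3 would have fixed them in one go.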
The next problem was that the “cuts” were three times larger than they ought to be.
The next step, hopefully, is to send this set of GCODE instructions to the board to configure the number of steps/mm correctly….
$0=10 ;Step pulse, microseconds
$1=100 ;Step idle delay, milliseconds
$3=2 ;Y axis direction inverted
$10=0 ;send work coordinates in statusReport
$30=255 ;max. S-value for Laser-PWM (is referenced to the LaserWeb PWM MAX S VALUE)
$31=0 ;min. S-value
$32=1 ;Laser Mode on
$100=80 ;steps/mm in X, depending on your pulleys and microsteps
$101=80 ;steps/mm in Y, depending on your pulleys and microsteps
$102=80 ;steps/mm in Z, depending on your pulleys and microsteps
$110=5000 ;max. rate mm/min in X, depending on your system
$111=5000 ;max. rate mm/min in Y, depending on your system
$112=2000 ;max. rate mm/min in Z, depending on your system
$120=400 ;acceleration mm/s^2 in X, depending on your system
$121=400 ;acceleration mm/s^2 in Y, depending on your system
$122=400 ;acceleration mm/s^2 in Z, depending on your system
$130=390 ;max. travel mm in X, depending on your system
$131=297 ;max. travel mm in Y, depending on your system
$132=200 ;max. travel mm in Z, depending on your system
$$ ;to check the actual settings
If I type $$ I see that $100 is currently set to 250, which is certainly close to the 3X I’m seeing… So, let’s see how far this takes us…
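The arithmetic supports that hunch: dividing the old steps/mm value by the one the pulleys actually need gives almost exactly the scale error I was seeing.

```python
# $100 was factory-set to 250 steps/mm, but the settings above call for 80.
old_steps_per_mm = 250
correct_steps_per_mm = 80

scale_error = old_steps_per_mm / correct_steps_per_mm
print(scale_error)  # 3.125, which matches the roughly 3X oversized cuts
```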
Yes. Lovely. $100 and $101 are the pivotal instructions.
Sadly… the next problem seems to be that 2.5 W just isn’t enough. I can’t get through 1 mm card at 100% power, even cutting at only 250 mm per minute!
I’ll have to organise some test files to see which speeds/power ratios/repetitions work best.
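A sketch of what such a test file generator might look like (a hypothetical helper, assuming the laser mode and max S value of 255 configured above): it emits one short cut per power/feed combination, so the best ratio can simply be read off the card afterwards.

```python
def test_grid(powers=(64, 128, 255), feeds=(100, 250, 500),
              line_mm=20, pitch_mm=5):
    """Generate GCODE that cuts one line per power/feed combination."""
    gcode, y = [], 0
    for power in powers:
        for feed in feeds:
            gcode += [
                f"G0 X0 Y{y}",             # rapid to the start of this row
                f"M3 S{power}",            # laser on at this PWM level
                f"G1 X{line_mm} F{feed}",  # cut a line at this feed rate
                "M5",                      # laser off between rows
            ]
            y += pitch_mm
    return "\n".join(gcode)

print(test_grid())
```

Repetitions could be tested the same way, by emitting each G1 line several times before moving to the next row.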
Another goal I’d like to accomplish is to provide an API for generating custom parameterised EyeTracktive pupil-tracking headsets.
OpenSCAD can theoretically produce .stl or .png output via the command line, but it requires a lot of cruft to get this working (an X server or xvfb), so wouldn’t it be nice if there were a pre-configured Docker container?
After a little look around I found this:
Theoretically, the simplest way to handle this would be to exec/run OpenSCAD from inside the container, outputting a file to a directory mapped on the host and accessible from the API which is part of (e.g.) EyeTracktive.org.
It looks, however, like I’ll have to see if I can cook up a Dockerfile based on https://github.com/wtnb75/dockerfiles/tree/master/openscad-fedora first.
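As a rough sketch of how the API might eventually drive such a container (the image name comes from that repo; the file names, volume layout, and the xvfb-run wrapper are all assumptions on my part):

```python
def render_cmd(scad_file, stl_file, host_dir=".",
               image="wtnb75/openscad-fedora"):
    """Build the docker invocation for a headless OpenSCAD render."""
    return (f"docker run --rm -v {host_dir}:/work {image} "
            f"xvfb-run openscad -o /work/{stl_file} /work/{scad_file}")

# Hypothetical example: render a parameterised headset model to STL.
print(render_cmd("headset.scad", "headset.stl"))
```

The API would then serve the resulting .stl straight out of the mapped host directory.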
We need your support!
Please “like” us – https://hackaday.io/project/164944-eyetracktive Points means prizes!
The excellent hackaday.io has organised a global :
“hardware design contest focused on product development. DesignLab connects you to engineers, expert mentors, and other powerful resources to take your product from concept to DFM.”
If everybody on this list takes five minutes to support us, we’ll be at position 5 on the leaderboard and far more visible to the world (thus attracting more likes). At the moment we’re at position 28 with 14 likes.
This might be what we need to help us take EyeTracktive to a finished product. Eyetracktive is the ultra-low cost open-hardware eye tracking headset we’re working on to complement EyeSkills (so we can see objectively what the eyes are actually doing, enabling “at home” training with precision and safety).
Thanks to an anonymous donation we’ve got enough together that I could buy a discounted (super, super cheap) A3 laser cutter, which will allow us to continue prototyping and even producing the inner core of the EyeTracktive headset. When it arrives, though, I’ll still need to build an enclosure and set up a ventilation system. If nothing else, every “like” wins us $3, which will help cover those materials! 🙂
If you run into the obscure and painful problem that Unity tells you “style/VrActivityTheme not found in AndroidManifest.xml”, then fear not. You do not need to spend half a day trying to track down the problem. At some point you probably (temporarily) added Google’s Daydream as a supported VR device in the Unity Player Settings, then removed it. Unfortunately, Unity didn’t fully clean up all the dependencies on Daydream, so it assumes that it still needs this VrActivityTheme (https://developers.google.com/vr/reference/vr-manifest). It doesn’t. The simplest solution? Just add support for Daydream back into your player settings (that’s not a bad thing); Unity will correctly resolve the dependencies, and it will build and run once more. Sigh.
As long as we don’t know where each individual eye is looking, a reticle may appear to be in two places at once, or one view of the reticle may be suppressed.
In some cases this may be acceptable, when the user has the option to close one eye of their choice and position the reticle to make a simple selection. Otherwise, a reticle is only a viable choice of input *after* we have achieved fusion between the eyes.