MobileSheets Forums

Full Version: Face gestures
Open mouth - Automatic - Gesture size 3 works well for me. No false triggers.
(12-03-2023, 08:03 AM)Zubersoft Wrote: [ -> ]Google's library does a poor job of reporting eye coordinates; there is almost no difference between a closed eye and an open one. So I had to rely on one of their features where they provide a probability that each eye is open. A wink, in this case, is when there is a high probability that one eye is open and a low probability that the other is. If this isn't working for you, it means Google's library cannot clearly identify when your eyes are open or closed. I'm not sure there is much I can do about that, I'm afraid.
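The wink heuristic described above can be sketched as a simple comparison of the two per-eye probabilities. The threshold values below are illustrative assumptions, not the values MobileSheets actually uses:

```python
def is_wink(left_open_prob, right_open_prob,
            open_threshold=0.8, closed_threshold=0.2):
    """Detect a wink: one eye confidently open while the other is
    confidently closed. Thresholds are illustrative only."""
    left_winking = (left_open_prob < closed_threshold
                    and right_open_prob > open_threshold)
    right_winking = (right_open_prob < closed_threshold
                     and left_open_prob > open_threshold)
    return left_winking or right_winking
```

If the library reports ambiguous probabilities (e.g. both eyes around 0.5), neither condition holds and no wink is detected, which matches the behavior described above.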

The head tilt detection is just based on a rotation amount. The gesture starts when the head is tilted less than 8 degrees up or down (so mostly centered) and triggers when the head has tilted more than 16 degrees. It then waits until the head is below 8 degrees again before starting the next gesture. If you are triggering this, it means your head is moving up or down more than 8 degrees at a time, which is a lot. The gesture size increases or decreases the amount of rotation required. If you set the gesture size to the largest, MobileSheets will require a 28.8-degree rotation. I'm not sure how you could possibly generate false positives with that... So if you haven't tried increasing the gesture size, start by increasing it one step at a time and testing.
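The arm/fire/re-center behavior described above is a hysteresis pattern, and a minimal sketch of it might look like this (the 8/16-degree defaults mirror the description; in the real app they scale with the gesture size setting):

```python
class HeadTiltGesture:
    """Hysteresis for head-tilt detection: the gesture arms only while
    the head is near center (< start_deg), fires once the rotation
    exceeds trigger_deg, then must return below start_deg before it
    can fire again. Values are the defaults described above."""

    def __init__(self, start_deg=8.0, trigger_deg=16.0):
        self.start_deg = start_deg
        self.trigger_deg = trigger_deg
        self.armed = False

    def update(self, tilt_deg):
        """Feed one frame's absolute head rotation in degrees.
        Returns True on the frame the gesture fires."""
        if abs(tilt_deg) < self.start_deg:
            self.armed = True            # head near center: arm the gesture
        elif self.armed and abs(tilt_deg) > self.trigger_deg:
            self.armed = False           # fire once, then require re-centering
            return True
        return False
```

Because the gesture cannot re-fire until the head passes back through the center zone, holding the head tilted does not produce repeated triggers.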

Open mouth is one of the simpler ones, but determining how far the mouth has to be open to trigger is somewhat complicated. Increasing the gesture size requires the mouth to be open further, but this is not one where false positives should be generated, as the mouth has to be mostly closed to even start the gesture. If you leave your mouth open the entire time while playing, then this is probably not the best gesture to use unless you use a really large gesture size. If the largest gesture size isn't large enough, I can always increase the offset a little more.


Thanks Mike ...  I understand..  I'll try to set to MOUTH motion...   Curious though about the "length of time" setting?  Not clear what that needs to be set to. I'll scan the (updated?) manual to see if I can figure out the best setting for that.
I haven't updated the manual yet I'm afraid. There's just too much work at the moment, so I haven't been able to focus on that. I'm still working to get the face gesture stuff implemented on Windows and iPadOS and I want to get that done before focusing on any manual updates. 

The gesture duration determines how long you have to hold a gesture before it's triggered. This ensures that if you just happened to briefly open your mouth, it doesn't trigger the open mouth gesture (provided the duration is long enough to avoid that). If the gesture duration is shorter, gestures will fire off much quicker, but there could be more false positives if your face is moving around a lot and some gestures are briefly detected.
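The hold-to-trigger behavior is essentially a debounce timer: the trigger only fires once the gesture has been seen continuously for the configured duration, and any frame where it is not seen resets the timer. A rough sketch, with an injectable clock so the logic is testable (this is an assumption about how it works, not MobileSheets' actual code):

```python
import time

class GestureHold:
    """Require a gesture to be detected continuously for `duration`
    seconds before it triggers, filtering out brief accidental
    detections. The clock is injectable for deterministic testing."""

    def __init__(self, duration=0.5, clock=time.monotonic):
        self.duration = duration
        self.clock = clock
        self.started_at = None

    def update(self, detected):
        """Call once per camera frame with whether the gesture is
        currently detected. Returns True once the hold is satisfied."""
        if not detected:
            self.started_at = None       # gesture released: reset the timer
            return False
        if self.started_at is None:
            self.started_at = self.clock()
        return self.clock() - self.started_at >= self.duration
```

A longer duration filters more accidental movement at the cost of slower page turns; a shorter one is snappier but, as noted above, admits more false positives.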

(11-30-2023, 09:36 AM)Zubersoft Wrote: [ -> ]The feature is only available on Android at the moment, but I'm implementing it on Windows and iPadOS next. The iPad version should be much easier as Apple has support built into their framework for all this. The Windows version has been a monumental pain so far. I'm still working through it, but I will have to see how well it works.


Does it look promising that it will eventually work with windows?
I hope so, that's the version I use!

I have a restricted version of MobileSheets on my phone, and the smile feature works well, as does the head turn.
But that screen is too small to use it there.
I want to use this very much. I've got a foot pedal but don't really like it. Now I either wait for it to work in the Windows version or go out and buy an Android tablet.
What do I do?
I do prefer the first option.
We will see - I still have a lot of work to do there. I was able to finish the changes for Apple in a single day and it already works incredibly reliably. Apple makes this insanely easy by comparison to Android (and even more so compared to implementing it on Windows). 

I'm still working on integrating an open source library for doing the image processing on Windows. Once that work is done, then I can try feeding images from the camera to it to see how reliably it detects the facial landmarks. Once I've tested that, I should be able to report on this more.

I also have a Boox Note Air 2 (with a copy of MobileSheets on it), but unfortunately it does not have a camera. It would be nice if it did.

I wonder if Boox, or someone, could do a wireless camera?
I've got a wi-fi video doorbell at my home that lets me know who's at the front of my place.
So, surely it can be done with my tablet!
Just to note. 

I tried enabling Face Gestures on my first gen Duet Chromebook tablet. 

It works on the test screen. Though the Duet is underpowered at the best of times... so, not fast. But good to know, in case Chromebook users are interested.

My use would be to start/stop scrolling and playback together... I'm not sure that's possible. 

But, as a sax/flute player with glasses... I'm not so sure it's useful to me anyway! 

Great job, though!
(12-06-2023, 09:20 AM)Zubersoft Wrote: [ -> ]Once I've tested that, I should be able to report on this more.

Hope so much that you don't give up on face gestures feature for Windows!

I sometimes play with this feature on my tiny android phone and love it very much, it's amazing.

Look forward to being able to use this on my Windows version of MobileSheets, which is my main way of displaying music.
Today's release is a huge improvement on facial recognition, thank you! Now having much success straight out of the box with:

Mouth open
Eyebrows raised
Lips left 
Lips right
Head up
Head down 
Face left
Face right
Glad to hear that Barrie! The new model uses a facial mesh and is much more accurate (and also provides support for far more gestures if more are needed in the future). Google's library is now doing the heavy lifting when it comes to tracking what the face is doing, and just provides estimates for all sorts of different gestures. The best part is that it works nearly identically to Apple's ARKit library (they may even be based on the same backend code), so the code is basically the same on both platforms, making it much easier to manage. There is experimental support for Windows with Google's library, so I'm going to be working on seeing if I can get that working. If so, then I will have a consistent solution across platforms.
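Since the new model reports a per-gesture score rather than raw landmark points, mapping its output to discrete gestures can be as simple as thresholding each score. The gesture names below follow the blendshape naming used by face-mesh models such as MediaPipe's ("jawOpen", "browInnerUp"), but both the names and the threshold here are illustrative assumptions:

```python
def detect_gestures(blendshape_scores, threshold=0.6):
    """Given a mapping of gesture name -> score in [0, 1] (as produced
    by a face-mesh model's blendshapes), return the set of gestures
    whose score meets the threshold. Threshold is illustrative."""
    return {name for name, score in blendshape_scores.items()
            if score >= threshold}
```

In practice the per-gesture thresholds would vary with the gesture size setting, and the result would still be fed through the gesture-duration hold before anything fires.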

Since one of the recent updates, the face gestures are completely broken for me on my Galaxy Tab S3.

They used to work initially, but I did not have the time to properly configure them to my liking. I wanted to do that now, but the camera won't be accessed anymore.

In the meantime the resolution tab has been added to the settings - but as there is now only a black screen when trying to test, I cannot proceed with the setup. I rebooted the device and tested the camera with other apps - no problem there. Something must have broken MSP's access to the camera... Is there anything on my end I could do?

Happy Christmas to you from Germany

Edit: By accident I waited quite a long time in the settings, and after a while the image appears (about 30 to 40 seconds). Face gestures do work then, but this lag is irritating. Also, the mesh that used to be displayed in the test screen is not there anymore. Screen timeout should also be disabled - depending on the global setting, the screen shuts off pretty fast during testing, which is not helpful there...

Edit 2: The long waiting time occurs every time I test, so testing and adapting settings is practically unworkable now...
I cannot find the Settings option as explained here.  Chromebook running 3.8.21.
On my android tablet, the "Face Gesture Settings" is the 7th item down when the MS Settings is displayed.

If you don't see it on your Chromebook, then it might not be available on those devices.

McConner - it sounds like Google's MediaPipe library is not working very well on your device for some reason. I don't know why that would be - it works perfectly fine on my Galaxy Tab S4, and there is no delay like you are describing. I needed to switch libraries in order to provide much more accurate facial landmark detection (many users said the old library barely worked for them). The new library does not track individual points on a person's face, so the blue dots no longer show up. Instead, it just provides percentages for gestures; once a percentage reaches a certain threshold, it's likely the person is making that gesture. It's much simpler from an implementation perspective (in my code) but more complicated in terms of what is happening inside Google's code (as it uses a face mesh instead of just points).

Staggart - update to version 3.8.24. Just be aware that the face detection has a very long initialization time on Chromebooks for some reason. Google does not even support Intel 64-bit processors out of the box with their library - I had to compile it myself to add support for them, and you may see an initialization time upwards of 15-20 seconds even on a higher-end Chromebook. It takes 2-4 seconds on my standard Android tablets. Once the library loads properly, there is no longer any delay. However, be aware that each time the "Test Gestures" dialog is displayed, it has to reinitialize the face detection. I may see if I can prevent that in the future.

So I have some good news - while looking for information, I realized I had set up the face detection to use the GPU for processing. That is what causes the conflict on Chromebook devices. Once I switched it to the CPU, the problem went away and it initializes instantly. This will probably also fix the issue you saw, McConner. I'm going to release another update right now that defaults everything to CPU on Chromebooks and older devices, but users can switch to GPU if desired.
