12-16-2023, 05:03 AM
Glad to hear that, Barrie! The new model uses a facial mesh and is much more accurate (it also supports far more gestures if more are needed in the future). Google's library now does the heavy lifting of tracking what the face is doing and provides estimates for all sorts of different gestures. The best part is that it works nearly identically to Apple's ARKit library (they may even share the same backend code), so the code is essentially the same on both platforms, making it much easier to maintain. Google's library also has experimental Windows support, so I'm going to see if I can get that working. If so, I'll have a consistent solution across all platforms.
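To give a rough idea of what "estimates for gestures" looks like in practice: both Google's face tracker and ARKit report a per-frame score between 0.0 and 1.0 for each facial expression coefficient, and the app decides which gestures are "active" by thresholding those scores. The coefficient names and thresholds below are illustrative assumptions, not the exact values used here:

```python
# Sketch: mapping per-frame expression scores (0.0-1.0, as reported by
# face-tracking libraries like Google's or ARKit's) to discrete gestures.
# The coefficient names and thresholds are illustrative assumptions.

GESTURE_THRESHOLDS = {
    "jawOpen": 0.5,        # mouth open
    "eyeBlinkLeft": 0.6,   # left eye blink
    "eyeBlinkRight": 0.6,  # right eye blink
    "browInnerUp": 0.4,    # brow raise
}

def active_gestures(scores: dict) -> list:
    """Return the names of gestures whose score meets its threshold."""
    return [name for name, threshold in GESTURE_THRESHOLDS.items()
            if scores.get(name, 0.0) >= threshold]

# One frame's worth of scores from the tracker (made-up values):
frame_scores = {"jawOpen": 0.82, "eyeBlinkLeft": 0.1, "browInnerUp": 0.45}
print(active_gestures(frame_scores))  # -> ['jawOpen', 'browInnerUp']
```

Because both platforms report the same kind of scores, this thresholding layer is what can stay identical across iOS, Android, and (hopefully) Windows.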
Mike