Superpowering Wearable Sensors
(Video: DJ PLAYER + MYO ALPHA from Gabor Szanto on Vimeo)
Emerging, novel user input methods hold a lot of promise for better ways of interacting with our environment, especially in the very hot space of wearable sensors.
I was lucky to be selected as one of the alpha developers for the MYO gesture control armband by Thalmic Labs, an A-list Canadian startup.
The MYO senses and processes the electrical activity of your finger, hand, and arm muscles using a bit of tech-magic. It also packs an accelerometer, magnetometer, and gyroscope, letting you wirelessly control computers, smartphones, and all sorts of other devices via gesture control.
MYO can detect some finger poses and even accurately reports your arm’s actual position. My sense is that this is much, much better for the DJ booth than other novel input methods like Leap Motion: lighting, fog, and loud sound have no negative effect on it, whereas Leap Motion works with infrared, making it totally useless in a club with pro lighting.
I wanted to play with MYO and I thought it would be easy to develop novel DJ gestures and implement a few effect controls too.
As usual, I was wrong. :)
Not because of the actual hardware, but because it turns out I actually am a human. The MYO delivers accurate data at 50 fps, but the fundamental problem is that the way my body reports my arm’s position back to my brain is radically inaccurate. My guess is that professional dancers and perhaps martial arts experts have finely tuned proprioception, but the average human has no idea what his arm is doing with any real accuracy.
It’s easy to try: Put on a MYO and point to the horizon. You probably think your arm is parallel to the ground. The MYO showed me that I was easily off by as much as 30 degrees.
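To put a number on that error yourself, you can compute the forearm’s pitch from the orientation quaternion the armband streams. A minimal sketch, assuming the usual w/x/y/z unit-quaternion convention (the actual MYO SDK type names and axis conventions may differ):

```cpp
#include <cmath>
#include <cstdio>

// Pitch (rotation about the lateral axis) extracted from a unit quaternion,
// using the standard Tait-Bryan conversion. 0 degrees = forearm parallel
// to the ground.
static float pitchDegrees(float w, float x, float y, float z) {
    float sinp = 2.0f * (w * y - z * x);
    if (sinp > 1.0f) sinp = 1.0f;   // clamp against rounding errors
    if (sinp < -1.0f) sinp = -1.0f;
    return asinf(sinp) * 180.0f / 3.14159265f;
}

int main() {
    // A reading taken while the wearer *believes* the arm is level:
    printf("arm pitch: %.1f degrees\n", pitchDegrees(0.966f, 0.0f, 0.259f, 0.0f));
    return 0;
}
```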
Wearable Sensors Create Virtual Instruments
I started to work on a DJ environment where virtual “instruments” surrounded me: for example, a loop roll on the left, a flanger on the right, and a few poses here and there for various specific DJ moves.
Sort of similar to what Imogen Heap did when developing her performances using ‘magical gloves’ and Kinect.
As it turned out, this approach doesn’t work for the DJ booth.
At all.
Believe me, I tried for two weeks. It might be great for other sorts of musical performers, but it is simply neither robust nor forgiving enough for DJ use: it needs lots and lots of calibration and can misalign too easily. (Maybe it just requires an even more foolproof future device.)
Another problem was finding the right, convenient yet foolproof gestures. Some gestures work great, but if you repeat the same gesture hundreds of times during the night, you’d better be doing P90X on your off days to handle the pain in your shoulder.
Sensing a Virtual 2D FX Table
So I created a more “traditional” setup: a virtual 2D FX table right in front of you. It’s still spectacular for your audience; after all, the most important part of being a DJ is entertaining the crowd.
(And feed your DJ ego, right?)
And this method is robust enough for the DJ booth and doesn’t need constant re-calibration.
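For the curious, here is a rough sketch of how such a mapping can work: capture a single center pose, then translate yaw/pitch offsets from that pose into normalized table coordinates. The names and ranges here are illustrative, not the actual DJ Player code.

```cpp
#include <algorithm>

// Maps forearm yaw/pitch (radians) to normalized X/Y coordinates on a
// virtual 2D fx table in front of the performer. The center pose is
// captured once, then every frame is interpreted relative to it -- no
// absolute-space alignment that could drift.
struct FxTable {
    float centerYaw = 0.0f, centerPitch = 0.0f;
    float range = 0.6f; // radians of arm travel spanning the whole table

    void calibrate(float yaw, float pitch) { centerYaw = yaw; centerPitch = pitch; }

    // x/y come back in [0, 1]; (0.5, 0.5) is the calibrated center pose.
    void position(float yaw, float pitch, float &x, float &y) const {
        x = std::min(1.0f, std::max(0.0f, 0.5f + (yaw - centerYaw) / range));
        y = std::min(1.0f, std::max(0.0f, 0.5f + (pitch - centerPitch) / range));
    }
};
```

X might then drive a flanger’s rate and Y its depth, for example; the single-pose calibration is what keeps the whole thing forgiving.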
Low Latency and Low CPU Usage
All the magic you see in the video above is based on uninterrupted, smooth 50 fps input, so processing the signal input with low latency and zero jitter is paramount. Any interruption or minor stall in arm position input directly affects the audio output, where even an untrained ear can easily hear small “hiccups”.
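One standard defensive trick (not specific to DJ Player) is to never feed a raw control value straight into the audio path, but to glide toward it with a one-pole smoother, so a late or dropped sensor frame degrades into a slightly longer glide instead of an audible step. A minimal sketch:

```cpp
#include <atomic>

// One-pole parameter smoother: the sensor thread overwrites the target at
// 50 fps, the audio thread glides toward it sample by sample. A late or
// dropped sensor frame then becomes a slightly longer glide instead of an
// audible step.
struct SmoothedParam {
    std::atomic<float> target{0.0f}; // written by the sensor thread
    float current = 0.0f;            // audio-thread state
    float coeff = 0.002f;            // per-sample glide speed (~11 ms at 44.1 kHz)

    float nextSample() {
        current += (target.load(std::memory_order_relaxed) - current) * coeff;
        return current;
    }
};
```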
So, how can you achieve that kind of smooth, low-jitter input in the first place? With extremely low CPU usage. You may think that if your app uses, say, only 50% of the CPU, you are good to go. Let me tell you: no, you are not.
At 50% CPU usage, you will likely miss a few frames, and even worse, you will have severe jitter (the variability of your event processing times).
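You can measure this yourself by timestamping each incoming event and comparing every interval against the nominal 20 ms period of a 50 fps stream. A quick sketch:

```cpp
#include <chrono>
#include <cstdio>

// Call once per incoming sensor event; prints how much each inter-event
// interval deviates from the nominal 20 ms (50 fps) period.
static void recordEvent() {
    using clock = std::chrono::steady_clock;
    static clock::time_point previous;
    static bool first = true;
    clock::time_point now = clock::now();
    if (!first) {
        double ms = std::chrono::duration<double, std::milli>(now - previous).count();
        printf("jitter: %+.2f ms\n", ms - 20.0);
    }
    first = false;
    previous = now;
}
```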
Even if you set your MYO event receiving thread to high priority, the “busy-ness” of your CPU may delay the processing of input events by a few milliseconds, and that can add up to bad jitter.
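On POSIX systems, raising a thread’s priority looks roughly like the sketch below. This is illustrative only: iOS and Android restrict which scheduling policies an app may actually use, so the exact mechanism differs per platform.

```cpp
#include <pthread.h>
#include <sched.h>

// Ask the scheduler to treat the sensor-event thread as (soft) real-time.
// SCHED_FIFO threads preempt normal threads, which cuts scheduling jitter --
// but only if the thread does very little work per event.
static bool makeThreadRealtime(pthread_t thread) {
    sched_param param;
    param.sched_priority = sched_get_priority_max(SCHED_FIFO) - 1;
    return pthread_setschedparam(thread, SCHED_FIFO, &param) == 0;
}
```

Even then, priority only helps if the system is idle enough to schedule you promptly, which is why low CPU usage is the other half of the equation.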
A good way to think about latencies is captured in this tweet on the time scale of system latencies:
Example Time Scale of System Latencies. pic.twitter.com/3o31b5J85b
— Anton (@PieCalculus) April 25, 2014
For CoreMIDI input, Superpowered can decrease jitter by around 4 ms, which is a big deal for pro musicians.
Jitter and Angry Birds
Can an end user even feel this? Certainly -- jitter affects the overall “smoothness” feeling of your app. Think about Angry Birds.
It can be surprisingly difficult to drop the egg at the right time with Matilda, the white bird, because of Angry Birds’ inherent jitter. Sometimes you simply don’t understand why Matilda didn’t drop the egg, because you tapped the touchscreen at the right time, damnit!
Having the lowest possible latency and jitter is really important, and it can be achieved with correct scheduling priorities and low CPU usage. This is one of the reasons we developed Superpowered: with our SDK, you can achieve the lowest CPU usage for audio and radically reduce jitter.
The Internet of Things is Really the Internet of Wearable Sensors
With the advent and convergence of powerful yet low-cost technologies like Android (a free OS) and ARM (low-cost processors), more and more devices will become ‘smart’. These smart devices, like MYO, will have all sorts of sensing capabilities as well as network access -- and letting them communicate with one another is what the Internet of Things is all about.
The Android Sensor overview makes this quite clear.
The Android platform supports three broad categories of sensors:
Motion sensors
These sensors measure acceleration forces and rotational forces along three axes. This category includes accelerometers, gravity sensors, gyroscopes, and rotational vector sensors.
Environmental sensors
These sensors measure various environmental parameters, such as ambient air temperature and pressure, illumination, and humidity. This category includes barometers, photometers, and thermometers.
Position sensors
These sensors measure the physical position of a device. This category includes orientation sensors and magnetometers.
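All of these are exposed to native code too; here is a minimal NDK sketch that subscribes to the accelerometer at 50 Hz (error handling omitted; ASensorManager_getInstance is the classic NDK entry point):

```cpp
#include <android/sensor.h>

// Subscribes to the accelerometer at 50 Hz through the Android NDK.
// 'looper' belongs to the thread that will consume the events.
static ASensorEventQueue *startAccelerometer(ALooper *looper) {
    ASensorManager *manager = ASensorManager_getInstance();
    const ASensor *accel =
        ASensorManager_getDefaultSensor(manager, ASENSOR_TYPE_ACCELEROMETER);
    ASensorEventQueue *queue =
        ASensorManager_createEventQueue(manager, looper, 1, nullptr, nullptr);
    ASensorEventQueue_enableSensor(queue, accel);
    ASensorEventQueue_setEventRate(queue, accel, 20000); // 20000 us = 50 Hz
    return queue;
}
```

The consuming thread then drains the queue with ASensorEventQueue_getEvents() from its ALooper poll loop.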
Currently, most wearable sensor data is pushed by the wearable device (typically via Bluetooth LE) to a smart device for processing. That is to say, the wearable (Android+ARM in some configuration, e.g. smart jewelry or a smart motorcycle helmet) senses and captures the data streams and then sends the data to your smartphone for recording, processing and visualization.
This is because it isn’t cost-effective for OEMs to power their devices with processors that could process the data efficiently -- that is, with low latency and a good performance-per-watt ratio. (They could use a more expensive processor capable of processing the data, but power consumption would be so high as to make the device effectively useless.)
As I wrote in my blog post about Superpowered Audio Digital Signal Processing:
These optimization methods use fewer CPU clock cycles, which means not only is the technology faster, but because it uses fewer clock cycles, it uses less power as well -- allowing a mobile device to run other non-audio related tasks concurrently.
It’s a bit like taking an ordinary car, say a VW Bug, adding magic technology to it, which would make it accelerate like a Porsche and get the fuel efficiency of a Prius at the same time.
We designed Superpowered to solve these sorts of problems on both sides: on the wearable side, we can superpower very low-power devices; and on the smartphone side, we can help devices like MYO perform better when they pipe their digital data into apps such as DJ Player, because apps that use Superpowered use less of the CPU, leaving more CPU resources free for other functions.
So, when can I try it?
I have good news: sooner than you expect!
The integration with the DJ Player app is done, and the MYO will start shipping later this year (2014). I hope that MYO + DJ Player (running on Superpowered, of course) finds a permanent place in the world of DJ gear.