ARM in Your Ear: Computers, Not Hearing Aids
Almost all commentary on the recent introduction of Apple’s AirPods misses the forest for the trees: hung up on the removal of the headphone jack from the iPhone, it overlooks the fact that Tim Cook and Jony Ive snuck a computer into the earbuds.
New products like Apple’s AirPods, Bragi’s Dash and Doppler Labs’ Here One are leapfrogging modern hearing aid and wireless headset technology by embedding actual general-purpose computers in our ears. Not fancy-schmancy, sexified hearing aids, but actual computers.
The mobile era is truly starting now.
Apple W1, Processing and Computation
At the heart of most digital hearing aids is highly specialized digital signal processing (“DSP”) hardware built to meet ultra-low-power, ultra-low-latency needs. DSP hardware is optimized for processing analog signals -- e.g. people talking to you. In a hearing aid, that means frequency shaping, noise suppression and multiband amplitude compression, to name a few, all processed in real time with sub-millisecond latency.
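To make “specialized DSP work” concrete, here is a minimal sketch of a single frequency-shaping band: a per-sample peaking-EQ biquad, with coefficients from the widely used Audio EQ Cookbook. It’s an illustration only -- a real hearing aid runs many such bands, plus compression and noise suppression, on dedicated low-power silicon. Swift is used here (and in the other sketches below) purely for readability.

```swift
import Foundation

// One frequency-shaping band as a peaking-EQ biquad (Audio EQ Cookbook
// coefficients). A hearing aid runs many of these, sample by sample.
struct PeakingEQ {
    private var b0 = 0.0, b1 = 0.0, b2 = 0.0, a1 = 0.0, a2 = 0.0
    private var x1 = 0.0, x2 = 0.0, y1 = 0.0, y2 = 0.0

    init(sampleRate: Double, frequency: Double, q: Double, gainDb: Double) {
        let A = pow(10.0, gainDb / 40.0)
        let w0 = 2.0 * .pi * frequency / sampleRate
        let alpha = sin(w0) / (2.0 * q)
        let a0 = 1.0 + alpha / A
        b0 = (1.0 + alpha * A) / a0
        b1 = (-2.0 * cos(w0)) / a0
        b2 = (1.0 - alpha * A) / a0
        a1 = (-2.0 * cos(w0)) / a0
        a2 = (1.0 - alpha / A) / a0
    }

    // One sample in, one sample out: the filter itself adds almost no delay.
    mutating func process(_ x: Double) -> Double {
        let y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2 = x1; x1 = x
        y2 = y1; y1 = y
        return y
    }
}

// Example: +6 dB around 2 kHz, where speech intelligibility lives.
var band = PeakingEQ(sampleRate: 48_000, frequency: 2_000, q: 0.7, gainDb: 6)
let tone = (0..<48).map { sin(2.0 * .pi * 2_000.0 * Double($0) / 48_000.0) }
let boosted = tone.map { band.process($0) }
print(boosted.last ?? 0)
```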
But ear computing means handling “smart” tasks similar to what your mobile phone does, such as running user applications and supporting an app store ecosystem. Specialized DSP hardware that is great for hearing aids is not flexible enough for that.
Of course, there is spare computing capacity available in your pocket, but none of the wireless connectivity technologies available today, such as WiFi or Bluetooth, can transfer an audio signal to your mobile phone, have it processed there, and send it back fast enough.
So due to latency, ear computing devices cannot delegate generalized processing tasks to the mobile phone, and have to perform general computing tasks directly “in ear”.
Meaning that computation cannot be left to specialized hardware like the dedicated DSP chips or FPGA-plus-firmware combinations found in hearing aids and headphones.
So what will earable manufacturers do?
ARM-ing Adapters?
Apple’s familiar $40 Lightning Digital AV Adapter, oddly enough, provides a helpful clue.
The Lightning Digital AV Adapter mirrors whatever is on your iPhone, iPad, or iPod screen to an HDMI-equipped display. You’d be forgiven for assuming that such a specialized adapter accessory is ‘dumb’, that it simply takes the signal stream, encodes it with the help of some firmware and passes it through.
But the Lightning Digital AV Adapter is "smart". How smart?
Well, it has a custom ARM SoC running XNU!
Got that? There is a computer in your adapter. Not just some DSP chip -- an actual computer. What’s the advantage of making an adapter smart?
Smart Software vs Hardware
Because with a smart adapter, all the complexity and heavy lifting lives in software, not hardware. The iDevice doesn’t need to know what it is being plugged into, and it can output to any device -- no matter the endpoint. It just sends the data down to the ARM SoC, which processes it for the endpoint. For consumers, this means you don’t need to buy a new iDevice as communications protocols advance, and you don’t need to fiddle with updating the protocol drivers that never quite worked on your PC or printer.
Now you can see why we speculate that the AirPods are similarly equipped with XNU on an ARM SoC.
The ARM computing platform is the perfect candidate: it is the standard for mobile computing and has low-power implementations with low heat dissipation -- a few of the reasons why the entire mobile ecosystem is already built around ARM.
(Anyone who follows Benedict Evans’ work will recognize shades of ARM+iOS and ARM+Android here -- let’s add ARM+XNU to the list.)
CPUs have an Exponential, not Linear, Impact on Heat and Battery Life
Let’s assume the AirPods use an ARM SoC. That means they can perform sophisticated computing tasks and also run an operating system (XNU) with general tasks.
It’s obvious that, due to limited space, ear computing devices can hold only a teeny-tiny battery, even smaller than the ones found in smartwatches. And just as important as power consumption: heat dissipation must also be kept ultra-low.
Regardless of how clever the internal heat-transfer structure of an ear computing device is, dissipating heat in your ear, or close to your ear and face, is a consumer non-starter (VR folks, take note!). So the solution is not better heat dissipation, but avoiding generating the heat at all!
When Jony Ive says "we’re at the beginning of a truly wireless future we’ve been working towards", that ain’t hype, girls and boys.
As the computational capacity of mobile devices grows, so, oddly, does the demand for computation. In economics, this is known as the Jevons paradox: as mobile SoCs become more efficient and deliver more performance per watt, computation becomes less expensive, and cheaper computation induces greater demand and more use of computation.
Suggesting that the era of mobile computing didn’t begin with the iPhone in 2007.
But is just beginning now.
We believe that he believes it -- and we also believe we’ll see earable device CPUs utilized for more and more tasks. Meaning they must be as powerful as possible, so they can finish computing as fast as possible, generate as little heat as possible, and draw on the battery for as short a time as possible.
Follow that? Mobile SoCs like the W1 will demonstrate significantly more performance per watt -- in short, using fewer CPU clock cycles to get more done.
REMEMBER: CPU load and memory bandwidth load have an exponential impact on both heat and battery life, not a linear one. Recommended articles for technical readers: The 1% Rule for Mobile App Power Consumption, Mobile Audio Processing and Memory Bus Bandwidth and Load.
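A rough model shows why fewer clock cycles win. The classic CMOS dynamic-power relation is P ≈ C·V²·f, and supply voltage has to rise with clock frequency, so power (and heat) grows much faster than linearly with clock speed. The numbers below are illustrative assumptions, not measured W1 figures.

```swift
// Dynamic CPU power: capacitance * voltage^2 * frequency.
// All values below are illustrative, not W1 specs.
func dynamicPower(capacitance: Double, voltage: Double, frequencyHz: Double) -> Double {
    capacitance * voltage * voltage * frequencyHz
}

let slow = dynamicPower(capacitance: 1e-9, voltage: 0.80, frequencyHz: 500e6)
let fast = dynamicPower(capacitance: 1e-9, voltage: 1.04, frequencyHz: 1e9)
print(fast / slow)  // ~3.4x the power for 2x the clock

// The flip side: finish the same work in half the cycles at the low clock,
// and you bank the difference as battery life and coolness.
```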
When your phone heats up, you can change your grip or put it down on your desk. But when your ear computing device heats up, game over.
Bluetooth’s Latency Problem
You may think that Bluetooth is sort of fine since it has been used for wireless headsets for many years now.
But it has a huge problem: latency.
The delay to transfer audio via Bluetooth is too high and causes several problems, such as breaking the flow of conversation.
Yes, the main reason we cut each other off over Skype or FaceTime is latency.
You think your friend is done speaking, so you begin -- but he has already started again. No, please go ahead. Okay, so I think… Yes, the solution is… Sorry. Go ahead. No, you go first.
Bluetooth transfer takes at least 0.1 seconds. And since Bluetooth bandwidth is not high enough for lossless audio, an audio codec must be added to the audio path... adding even more latency. The end result is 0.15 seconds of latency or more in one direction. But users need both directions (audio to the ear device’s speaker, audio from the ear device’s microphone), so we are at 0.3 seconds already.
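As a sanity check, here is the same arithmetic as a back-of-the-envelope budget; the figures are the ballpark numbers above treated as assumptions, not measurements.

```swift
// Round-trip budget for offloading audio processing to the phone.
let bluetoothTransfer = 0.10                 // seconds, one-way radio transfer
let codecDelay        = 0.05                 // seconds, encode + decode
let oneWay            = bluetoothTransfer + codecDelay  // 0.15 s
let roundTrip         = 2.0 * oneWay                    // 0.30 s

// Conversation falls apart well before that: roughly 20-30 ms one-way is
// where talkers start stepping on each other.
let conversationBudget = 0.03
print(roundTrip > conversationBudget)        // true -- offloading is out
```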
Even aptX Low Latency, the proprietary low-latency Bluetooth solution, still has 40 milliseconds of latency (in one direction), which is far from “real time”. It’s not supported by iOS, and Apple doesn’t seem keen to license it.
One can implement custom data transfer with Core Bluetooth on iOS to sidestep the audio codec problem, but Bluetooth’s transfer latency cannot be solved this way, and joining Apple’s MFi (Made for iPhone/iPod/iPad) program is required in this case. MFi certification is bureaucratic, opaque, slow and expensive.
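For the curious, here is a hedged sketch of what such a custom Core Bluetooth transfer can look like on the iOS side. The service and characteristic UUIDs are hypothetical placeholders -- nothing Apple or any earable vendor actually ships -- and note that nothing in this code can reduce the radio’s own scheduling latency.

```swift
import CoreBluetooth

// Minimal sketch: shipping custom audio frames over a BLE characteristic.
final class EarLink: NSObject, CBCentralManagerDelegate, CBPeripheralDelegate {
    private let audioServiceID = CBUUID(string: "FFE0")         // hypothetical
    private let audioCharacteristicID = CBUUID(string: "FFE1")  // hypothetical
    private var central: CBCentralManager!
    private var device: CBPeripheral?
    private var audioCharacteristic: CBCharacteristic?

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [audioServiceID], options: nil)
    }

    func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any], rssi RSSI: NSNumber) {
        device = peripheral          // keep a strong reference
        central.stopScan()
        central.connect(peripheral, options: nil)
    }

    func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
        peripheral.delegate = self
        peripheral.discoverServices([audioServiceID])
    }

    func peripheral(_ peripheral: CBPeripheral, didDiscoverServices error: Error?) {
        guard let service = peripheral.services?.first else { return }
        peripheral.discoverCharacteristics([audioCharacteristicID], for: service)
    }

    func peripheral(_ peripheral: CBPeripheral,
                    didDiscoverCharacteristicsFor service: CBService, error: Error?) {
        audioCharacteristic = service.characteristics?.first
    }

    // .withoutResponse skips the acknowledgement round trip, but the radio's
    // scheduling latency remains: that part cannot be coded around.
    func send(frame: Data) {
        guard let characteristic = audioCharacteristic, let device = device else { return }
        device.writeValue(frame, for: characteristic, type: .withoutResponse)
    }
}
```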
The takeaway: an ear computing device cannot and will not offload processing tasks to the mobile phone; it has to perform them "in the ear".
How will the new-new wireless future solve this? Two ways:
- Faster hardware: chip design keeps improving processing power, and Apple delivers around 30% year-on-year improvement with every iteration of its Ax CPUs. We cannot wait to see a W1 teardown and performance analysis.
- Radically better software: Superpowered shows that better algorithms and better implementations can deliver 400% - 20,000% improvements. For example, Superpowered outperforms the Core Audio reverb by 4x and the Native Instruments compressor by 20x.
Your takeaway: the numbers show there is more improvement to be found in software than in hardware for ear computing!
A holistic solution is not just a better hardware iteration but, more importantly, better software with better algorithms and better implementations -- a contrast sketched below.
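To make that concrete, here is a minimal sketch of the scalar-versus-vectorized gap, using Apple’s Accelerate framework. The speedup you measure will vary by device, and the figures above are Superpowered’s benchmarks, not something this toy example reproduces.

```swift
import Accelerate
import Foundation

// The same RMS level metering computed two ways: a plain Swift loop vs.
// one call into Accelerate's hand-tuned, SIMD-vectorized vDSP routines.
let signal: [Double] = (0..<1_048_576).map { sin(Double($0) * 0.01) }

// Naive scalar loop: one multiply-add per sample, no SIMD.
func rmsScalar(_ x: [Double]) -> Double {
    var sum = 0.0
    for v in x { sum += v * v }
    return (sum / Double(x.count)).squareRoot()
}

// Vectorized version: the whole buffer in a single vDSP call.
var rmsVector = 0.0
vDSP_rmsqvD(signal, 1, &rmsVector, vDSP_Length(signal.count))

print(rmsScalar(signal), rmsVector)  // same value, very different cycle counts
```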
Conclusion
We have no doubt that demand for computing "in the ear" will grow. In the case of the AirPods, it is enabled by an ARM+XNU hardware and OS combination -- but don’t forget how critical software is to the "truly wireless future we’ve been working towards".