I'm not too familiar with the hardware world, but does EML look like the kind of computation that's hardware-friendly? Would love for someone with more expertise to chime in here.
Yes actually, it is very regular, which usually lends itself to silicon implementations - the paper even talks about this briefly.
I think the bigger question is whether it will be more energy-optimal or silicon-density-optimal than the math libraries currently baked into these processors (FPUs).
There are also some edge cases, e.g. "exp(exp(x))" and infinities, that seem to result in something akin to "division by zero", where you need more than standard floating-point representations to compute - but these edge cases look like compiler workarounds rather than silicon issues.
A similar function operating on the real domain for powers and logs of 2 would be extremely hardware friendly. You can build it directly out of the floating-point format: the first K significand bits index a LUT. Do that for each argument and subtract the results.
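A rough software sketch of that LUT trick (K and the function name are my own choices, not from any paper): the exponent field of an IEEE-754 double gives the integer part of log2, and a table indexed by the top K significand bits gives the fractional part. Subtracting two such approximations then approximates the log of a quotient.

```python
import math
import struct

K = 8  # number of leading significand bits used to index the LUT (assumption)

# Precompute the LUT: log2(1 + frac) for each K-bit significand prefix.
LUT = [math.log2(1.0 + i / (1 << K)) for i in range(1 << K)]

def approx_log2(x: float) -> float:
    """Approximate log2(x) for x > 0 straight from the IEEE-754 bit layout."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    exponent = ((bits >> 52) & 0x7FF) - 1023        # unbiased exponent field
    top_bits = (bits >> (52 - K)) & ((1 << K) - 1)  # first K significand bits
    return exponent + LUT[top_bits]

# Subtracting two approximate logs approximates the log of the quotient:
#   approx_log2(a) - approx_log2(b) ≈ log2(a / b)
```

In hardware this is just a bit-slice, one table read per argument, and an adder - no multipliers - which is why it maps so cleanly to silicon. The worst-case error here is about 1/(2^K * ln 2), so K trades table size for precision.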
It gets a bit more difficult for the complex domain because you need rotation.