The paper somehow seems to be missing the most interesting part, i.e. the optimal constructions of functions from eml in a readable format.
Here is my attempt. I think these should be optimal up to around 15 eml nodes; the later ones might not be:
# 0
1=1
# 1
exp(x)=eml(x,1)
e-ln(x)=eml(1,x)
e=exp(1)
# 2
e-x=e-ln(exp(x))
# 3
0=e-e
ln(x)=e-(e-ln(x))
exp(x)-exp(y)=eml(x,exp(exp(y)))
# 4
id(x)=e-(e-x)
inf=e-ln(0)
x-ln(y)=eml(ln(x),y)
# 5
x-y=x-ln(exp(y))
-inf=e-ln(inf)
# 6
-ln(x)=eml(-inf,x)
ln(ln(x))=ln(ln(x)) # composing ln with itself
# 7
-x=-ln(exp(x))
-1=-1
x^-1=exp(-ln(x))
ln(x)+ln(y)=e-((e-ln(x))-ln(y))
ln(x)-ln(y)=ln(x)-ln(y) # using x - ln(y)
# 8
xy=exp(ln(x)+ln(y))
x/y=exp(ln(x)-ln(y))
# 9
x + y = ln(exp(x))+ln(exp(y))
2 = 1+1
# 10
ipi = ln(-1)
# 13
-ipi=-ln(-1)
x^y = exp(ln(x)y)
# 16
1/2 = 2^-1
# 17
x/2 = x/2 # using x/y and 2
x2 = x2 # i.e. x*2, using xy and 2
# 20
ln(sqrt(x)) = ln(x)/2
# 21
sqrt(x) = exp(ln(sqrt(x)))
# 25
sqrt(xy) = exp((ln(x)+ln(y))/2)
# 27
ln(i)=ln(sqrt(-1))
# 28
i = sqrt(-1)
-pi^2 = (ipi)(ipi)
# 31
pi^2 = (ipi)(-ipi)
# 37
exp(xi)=exp(xi)
# 44
exp(-xi)=exp(-(xi))
# 46
pi = (ipi)/i
# 90+x?
2cos(x)=exp(xi)+exp(-xi)
# 107+x?
cos(x) = (2cos(x))/2
# 118+x?
2sin(x)=(exp(xi)-exp(-xi))/i # using exp(x)-exp(y)
# 145+x?
sin(x) = (2sin(x))/2
# 217+3x?
tan(x) = 2sin(x)/(2cos(x))
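For anyone who wants to spot-check the table: a minimal Python sketch, assuming eml(x, y) = exp(x) - ln(y). That definition is my inference from the first two entries (exp(x) = eml(x, 1) and e - ln(x) = eml(1, x)), not something I've confirmed against the paper.

```python
import cmath

def eml(x, y):
    # Assumed definition of the paper's primitive: exp(x) - ln(y).
    return cmath.exp(x) - cmath.log(y)

x, y = 0.7, 1.3

# Base cases and a few derived entries from the list above.
assert abs(eml(x, 1) - cmath.exp(x)) < 1e-12                  # exp(x) = eml(x, 1)
assert abs(eml(1, 1) - cmath.e) < 1e-12                       # e = exp(1)
assert abs(eml(cmath.log(x), y) - (x - cmath.log(y))) < 1e-12  # x - ln(y) = eml(ln(x), y)
# exp(x) - exp(y) = eml(x, exp(exp(y)))
assert abs(eml(x, cmath.exp(cmath.exp(y))) - (cmath.exp(x) - cmath.exp(y))) < 1e-9
assert abs(cmath.log(-1) - cmath.pi * 1j) < 1e-12             # ipi = ln(-1)
```

Complex log is needed for the ipi/i/trig entries, which is why this uses cmath rather than math.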
Whoa, this is huge!
My dearest congrats to the author, in case s/he shows up around this site ^^.
I don't mean to shit on their result, but exp and ln are not really that elementary themselves... it's still an interesting result, but there's a reason practical approximations are all done with polynomial series (Taylor expansions).
Judging by the title, I thought I would have a good laugh, like when the doctor discovered numerical integration and published a paper.
But no...
This is about continuous math, not ones and zeroes. Assuming peer review proves it out, this is outstanding.
According to Gemini 3.1 Pro, this would shoot current weather-forecasting power through the roof (and math processing in general):
The plan is to use this new "structurally flawless mathematical primitive" EML (this is all beyond me; I was just having fun trying to make it cook things together) in TPUs built from logarithmic-number-system circuits. EML would use DAGs to tame the exponential-bloat problem, with CERN's tiny, fast "hardcoded models" as an inspiration. All of this would be bounded by the deductive causality of Pedro Domingos's Tensor Logic, and all of it would einsum like a mf. I hope it does.
Behold, The Weather Dominator!
The problem with symbolic regression is that ln(y) is undefined at 0 (and, over the reals, for negative y), so you can't freely generate expressions with it. We need to guard it with something like ln(1+y*y) or ln(1+|y|), or return undefined.
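A minimal sketch of those three guard options (the function names are mine, not from any particular symbolic-regression library):

```python
import math

def protected_log_sq(y):
    # ln(1 + y*y): defined for every real y, equals 0 at y = 0,
    # and grows like 2*ln|y| for large |y|.
    return math.log(1.0 + y * y)

def protected_log_abs(y):
    # ln(1 + |y|): also total over the reals, monotone in |y|.
    return math.log(1.0 + abs(y))

def checked_log(y):
    # Third option: propagate "undefined" (None) instead of raising,
    # so the search can discard the candidate expression.
    return math.log(y) if y > 0 else None
```

The protected variants keep every generated expression evaluable (at the cost of changing semantics near 0), while the checked variant keeps exact semantics but forces the search loop to handle undefined results.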