Working with tensor datatypes in numerical computing, I've been wondering whether it would be possible to add an extra dimension to tensors that serves as a "floating point precision" dimension, rather than encoding precision in the data type. After all, why couldn't the bit depth be one of the tensor dims? Maybe arbitrary floating point precision could be implemented that way?
This is somewhat in line with the approach taken by some softfloat libraries, e.g. https://bigfloat.org/architecture.html
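To make the idea concrete, here is a minimal sketch of "precision as an axis" using NumPy. It represents non-negative integers (not floats; a real arbitrary-precision float would also need per-number exponent handling) as a trailing axis of 16-bit limbs, so widening the last dimension widens the precision. The function names (`to_limbs`, `from_limbs`, `add_limbs`) are my own illustrative inventions, not from any library:

```python
import numpy as np

BASE = 1 << 16  # each limb along the "precision" axis holds 16 bits

def to_limbs(n, k):
    """Encode a non-negative int as k base-2**16 limbs, least significant first."""
    return np.array([(n >> (16 * i)) & (BASE - 1) for i in range(k)],
                    dtype=np.uint32)

def from_limbs(limbs):
    """Decode a limb vector back into a Python int."""
    return sum(int(d) << (16 * i) for i, d in enumerate(limbs))

def add_limbs(a, b):
    """Add two limb tensors elementwise along the last ("precision") axis,
    propagating carries between limbs. Overflow past the top limb is dropped,
    so callers must allocate enough limbs for the result."""
    s = a.astype(np.uint64) + b.astype(np.uint64)
    while np.any(s >= BASE):
        carry = s // BASE
        s = s % BASE
        s[..., 1:] += carry[..., :-1]  # shift each carry into the next limb
    return s.astype(np.uint32)

x = to_limbs(123456789012345, 4)
y = to_limbs(987654321098765, 4)
print(from_limbs(add_limbs(x, y)))  # 1111111110111110
```

The appeal of this layout is that the carry propagation is itself expressed as vectorized tensor ops, so it broadcasts over any leading batch dimensions for free; the cost is that every arithmetic op needs an explicit carry/normalization step, which is exactly the machinery softfloat libraries hide behind a scalar type.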