A couple of comments here mention using this in VR. FWIW, years back I played a bit with shallow-3D UIs for software dev. Shallow as in within a few cm of a laptop display, to minimize vergence-accommodation conflict (VAC) eye strain for all-day use. Think being able to layer and draw in color, but in 3D, rather than waving your arms around a room.
The 3D can be wiggle 3D, perspective from webcam head/eye tracking, stereo from shutter glasses, or XR HMDs. Wiggle is easiest: just rock the object's orientation back and forth. Cute but distracting. (Well, cross/parallel-eye free viewing is even easier, but limited; it's OK for little UI test swatches.) Perspective is more subtle and less intrusive. It can be simple, with a head tracker driving a single orientation, or you can go all in with eye pose (for distance) and window locations to do an accurate 3D render. App stereo pairs can be "I give you two windows, left-eye and right-eye", or "alternating L/R views, labeled/synced/polled", among other possibilities. Many of these need window system/manager/desktop support. I found a lot of leverage in using a stack of Electron and X.
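The head-tracked perspective variant comes down to an asymmetric (off-axis) frustum: the tracked eye position relative to the physical screen determines the frustum bounds each frame. A minimal sketch of that math (names and units are my own illustration, not from the comment), in the style of Kooima's generalized perspective projection:

```typescript
// Off-axis "fish tank" perspective: compute asymmetric frustum bounds at the
// near plane from a tracked eye position, expressed in screen space (metres,
// origin at screen centre, +z toward the viewer).
type Frustum = { left: number; right: number; bottom: number; top: number };

function offAxisFrustum(
  eye: { x: number; y: number; z: number }, // tracked eye position
  halfW: number,                            // screen half-width
  halfH: number,                            // screen half-height
  near: number                              // near-plane distance
): Frustum {
  // Scale the screen-plane extents (at distance eye.z) back to the near plane.
  const s = near / eye.z;
  return {
    left: (-halfW - eye.x) * s,
    right: (halfW - eye.x) * s,
    bottom: (-halfH - eye.y) * s,
    top: (halfH - eye.y) * s,
  };
}

// Eye dead centre, 0.5 m back: symmetric frustum, an ordinary render.
const centred = offAxisFrustum({ x: 0, y: 0, z: 0.5 }, 0.2, 0.15, 0.1);
// Eye shifted 0.1 m right: the frustum skews so parallax follows the head.
const shifted = offAxisFrustum({ x: 0.1, y: 0, z: 0.5 }, 0.2, 0.15, 0.1);
```

Feeding these bounds into the usual frustum projection matrix each frame is all a single-viewer perspective display needs; the stereo variants just run it once per eye.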
It's fun to displace text in 3D. Like colorization, but more so. And if you don't mind a cluttered appearance, you can add secondary information layers segregated by depth. And so on. Emacs where characters have a depth finally gets you something LispMs didn't have. Fun aside: to explore possibilities with code text, or anything not inherently 3D, it's far easier to prototype the UX with fg/bg colors, fonts, Unicode, and animation, or, in a browser, with overlaid divs and transparent 2D/3D canvases.
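The flat-prototype idea can be sketched concretely: give each glyph a depth layer, then fake the 3D in a plain 2D renderer by turning depth into parallax offset, draw order, and dimming. All names here are my own illustration, not anything from the post:

```typescript
// Depth-segregated text layers mocked up in 2D: depth drives parallax,
// painter's-order sorting, and a dimming factor for far layers.
type Glyph = { ch: string; x: number; y: number; depth: number };

function drawList(glyphs: Glyph[], headX: number) {
  return [...glyphs]
    .sort((a, b) => a.depth - b.depth)          // far layers drawn first
    .map(g => ({
      ch: g.ch,
      x: g.x + g.depth * headX * 4,             // parallax: near layers move more
      y: g.y,
      alpha: Math.max(0.3, 1 + g.depth * 0.2),  // dim the far layers
    }));
}

const out = drawList(
  [
    { ch: "x", x: 10, y: 0, depth: 0 },   // main code plane
    { ch: "#", x: 10, y: 0, depth: -2 },  // secondary annotation layer, set back
  ],
  0.5 // normalized head offset from a webcam tracker
);
```

That's enough to judge whether a depth-layered annotation scheme is readable before committing to a real 3D pipeline.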
I have a working, fully 3D, glyph-based text rendering system that I can't seem to get people to look at.
It's this. Every character is a 3D-placed quad, instance-rendered, so you get tens of millions of them and then some. They are individually addressable and mutable like any polygon. I use it to render entire GitHub repos in one go. I have two versions, native Apple and web. The web one has the basics of an IDE setup. Would love insight or thoughts.
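For readers unfamiliar with the technique: instanced glyph rendering means one shared unit quad plus a flat per-instance buffer that a single draw call consumes, which is why individual glyphs stay cheap to address and mutate. A rough sketch of the CPU-side data layout (my own assumed `[x, y, z, glyphIndex]` packing for illustration, not the author's actual code):

```typescript
// One instanced draw call reads a flat buffer of per-glyph attributes;
// mutating one character is just rewriting its 4-float slot.
const FLOATS_PER_INSTANCE = 4; // x, y, z, glyph atlas index

function layoutLine(text: string, y: number, z: number, advance = 0.6): Float32Array {
  const buf = new Float32Array(text.length * FLOATS_PER_INSTANCE);
  for (let i = 0; i < text.length; i++) {
    const o = i * FLOATS_PER_INSTANCE;
    buf[o + 0] = i * advance;        // x: monospace advance
    buf[o + 1] = y;                  // y: line position
    buf[o + 2] = z;                  // z: per-glyph depth, freely mutable
    buf[o + 3] = text.charCodeAt(i); // which glyph in the atlas
  }
  return buf;
}

// Nudge a single character in depth without touching the rest of the buffer.
function setDepth(buf: Float32Array, index: number, z: number): void {
  buf[index * FLOATS_PER_INSTANCE + 2] = z;
}

const line = layoutLine("let x", 0, 0);
setDepth(line, 4, 0.25); // pop the "x" toward the viewer
```

Because the per-instance data is one contiguous buffer, a full re-upload (or a partial one for edits) is a single transfer, which is what makes millions of glyphs feasible.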
https://ivanlugo.dev/ide