Hacker News

gorgoiler · today at 8:04 AM · 2 replies

I asked ChatGPT to draw the outline of an ellipse using Unicode braille. I asked for 30x8 and it absolutely nailed it. A beautiful piece of ascii (er, Unicode) art. But I wanted to mark the origin! So I asked for a 31x7 ellipse instead. It completely flubbed it, and for 31x9 too.

When a model gives a really good answer, does that just mean it’s seen the problem before? When it gives a crappy answer, is that not simply indicating the problem is novel?


Replies

Anon1096 · today at 12:34 PM

I wouldn't ask an LLM to output this directly. For an ASCII-art ellipse, I'd guess that having it write a Python program to generate the art, then running that program, would work much better. Using Claude Sonnet 4.6 on a free account, it seemed to work (sorry in advance if the Hacker News formatting is horrendous):

⠀⠀⠀⠀⠀⣀⣠⠤⠔⠒⠒⠋⠉⠉⠉⠉⠉⠉⠉⠙⠒⠒⠢⠤⣄⣀⠀⠀⠀⠀⠀
⠀⢀⡠⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠙⠲⢄⡀⠀
⣰⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⣆
⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸
⠹⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⠏
⠀⠈⠑⠦⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣠⠴⠊⠁⠀
⠀⠀⠀⠀⠀⠉⠙⠒⠢⠤⠤⣄⣀⣀⣀⣀⣀⣀⣀⣠⠤⠤⠔⠒⠋⠉⠀⠀⠀⠀⠀
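For the curious, here's a minimal sketch of the kind of script the model might have produced (the function name and structure are my own, not the actual generated code). Each braille character is a 2x4 dot cell starting at U+2800, so a 30x8 character grid gives a 60x32 pixel canvas; the script samples the ellipse outline and sets the corresponding dot bits:

```python
import math

# Bit for the dot at (row, col) inside a 2x4 braille cell (U+2800 block)
BRAILLE_BITS = {(0, 0): 0x01, (1, 0): 0x02, (2, 0): 0x04, (3, 0): 0x40,
                (0, 1): 0x08, (1, 1): 0x10, (2, 1): 0x20, (3, 1): 0x80}

def braille_ellipse(char_w=30, char_h=8):
    """Render an ellipse outline as a char_w x char_h block of braille."""
    px_w, px_h = char_w * 2, char_h * 4          # dot-level resolution
    cells = [[0] * char_w for _ in range(char_h)]
    cx, cy = (px_w - 1) / 2, (px_h - 1) / 2      # center of the dot grid
    rx, ry = (px_w - 1) / 2, (px_h - 1) / 2      # radii filling the grid
    steps = 4 * (px_w + px_h)                    # dense parametric sampling
    for i in range(steps):
        t = 2 * math.pi * i / steps
        x = round(cx + rx * math.cos(t))
        y = round(cy + ry * math.sin(t))
        cells[y // 4][x // 2] |= BRAILLE_BITS[(y % 4, x % 2)]
    return "\n".join("".join(chr(0x2800 + b) for b in row) for row in cells)

print(braille_ellipse(30, 8))
```

Because the geometry is computed rather than recalled, odd sizes like 31x7 or 31x9 are no harder than 30x8.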

ghusbands · today at 11:44 AM

Do you posit that there are enough examples of 30x8 ellipses encoded in braille online for ChatGPT to learn from but not 31x7 or 31x9 ellipses? That seems unlikely.