Hacker News

qsera · today at 10:57 AM

I didn't understand. Can you clarify?


Replies

red75prime · today at 11:26 AM

If LLMs' internal representations are essentially one-to-one mappings of input texts with no additional structure, how can those representations be useful for tasks like object manipulation in robotics?

And how would transfer learning be possible, where non-textual training data enhances performance on textual tasks?
