Hacker News

shybear · today at 4:58 AM · 9 replies

It seems like a lot of scientific advancements occurred by someone applying technique X from one field to problem Y in another. I feel like LLMs are much better at making these kinds of connections than humans because they (1) know about many more theories and approaches than any single human can, and (2) don't need to worry about looking silly in front of their peers.


Replies

esjeon · today at 6:56 AM

Exactly. Much of intellectual work is, in fact, intellectual labor. It's mostly about combining information from many sources in one place, which is exactly the task at which LLMs far outperform humans. People have traditionally misclassified this class of work as "creative". It's not, really.

squidbeak · today at 11:21 AM

As I understand it, models form connections (weak or strong) between everything in their training sets, even the smallest details. They've already made other breakthroughs directly because of this ability and this line of research is likely to be incredibly fruitful.

renticulous · today at 12:25 PM

> someone applying technique X from one field to problem Y in another

Witten is the canonical example of someone taking mathematics techniques and applying them to physics problems, but what made him legendary was the opposite direction: he used physical intuition and string theory to solve open problems in pure mathematics.

freakynit · today at 5:06 AM

This is what I personally consider "reasoning": knowledge generalization and application across domains.

bojo · today at 5:01 AM

This is what I have been doing. I don't think I've made any amazing breakthroughs, but at the same time I can't help but feel I've come across some whitepaper-worthy realizations. Being able to correlate across many domains that I intuitively understand but lack depth in has been a fun exercise in LLM experimentation.

some_furry · today at 5:54 AM

> It seems like a lot of scientific advancements occurred by someone applying technique X from one field to problem Y in another.

Yeah, you should look into the Langlands program sometime.

trhway · today at 6:54 AM

As a civilization we went down the left-brained, sequential, language-based way of thinking, with computers and AI as its crowning achievement. Personally, for example, I remember that around 3rd grade I switched from a whole-page-at-once reading mode into a word-by-word, line-by-line mode, and that mode has stuck with me ever since. (At some point at university, probably at the peak of my abilities, I had for a while a deeper, wider, non-linear perception of at least my area of math specialization, though I'm not sure whether that was mastery by the left brain or the right brain getting plugged in too.)

LLMs will definitely beat us at that sequential way of thinking. That makes me wonder whether we will have to push into whatever right-brainedness we still have left, and whether AI will get there faster too. Maybe we'll abandon the left brain completely and leave it to AI.

pelasaco · today at 9:31 AM

Accuracy and creativity are often quite difficult to achieve at the same time. It looks like LLMs can do both, though one can question how creative they really are...
