That said, random or exhaustive search is a more scientifically useful method than you might think.
The first commercial antibiotics (sulfa drugs) were found by systematically testing thousands of random chemicals on infected mice. This remained a major drug discovery method up until the 1970s or so, by which point most of the search space of biologically active small molecules had been covered.
Relatedly, I was talking to a computational chemist at a conference a few years ago. Their work was mostly at the intersection of ML and materials science.
An interesting concept they mentioned was "injected serendipity": when screening for novel materials with a certain target performance, they proceed as usual, but around 10% of the screened materials are sampled at random from the chemical space.
They claimed this had led them to several interesting candidates across a range of problems.
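For the curious, here's a minimal sketch of what that kind of batch selection could look like. The `score` function and the candidate list are stand-ins for whatever surrogate model and chemical library the group actually used, which I don't know:

```python
import random

def select_batch(candidates, score, batch_size=100, random_frac=0.10, seed=0):
    """Pick a screening batch: mostly top-scored candidates, plus a random slice.

    `candidates` is any list of structures; `score` is whatever model or
    heuristic ranks them (both hypothetical here).
    """
    rng = random.Random(seed)
    n_random = int(round(batch_size * random_frac))
    n_top = batch_size - n_random

    # Exploit: take the best candidates according to the model.
    ranked = sorted(candidates, key=score, reverse=True)
    top = ranked[:n_top]

    # "Injected serendipity": fill the rest of the batch with a uniform
    # random sample from everything the model did not already pick.
    remaining = ranked[n_top:]
    rand = rng.sample(remaining, min(n_random, len(remaining)))

    return top + rand
```

The point is just that the random slice keeps probing regions the model would never rank highly on its own.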
A few months ago I went to a similar talk. They took a carboxylic acid from a plant (I forget the name) that has some activity against a caterpillar that eats corn, and reacted it with 10 or 15 organic alcohols to make esters. They tested different doses on the caterpillars and then built a computer model to predict the activity of similar compounds (QSAR). The idea is to apply the model to a long list of other organic alcohols and try to find a better compound.
But they chose reactions that are routine in the lab, so they can be fairly confident the synthesis will actually work, and they kept most of the structure unchanged. So it's closer to what's classified here as looking near the known good points rather than a true random search.
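In case it helps, here's a rough sketch of that kind of QSAR step. The descriptors, the numbers, and the choice of a random forest are all placeholders; I have no idea what model or features the speakers actually used:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical data: one row of precomputed molecular descriptors per tested
# ester (e.g. logP, molecular weight, chain length) and the measured activity
# against the caterpillars. All values here are made-up placeholders.
X_train = np.array([
    [2.1, 310.4, 3],
    [2.8, 338.5, 4],
    [3.5, 366.5, 5],
])
y_train = np.array([0.42, 0.57, 0.61])  # measured activity (placeholder units)

# Fit a simple QSAR model: descriptors -> activity.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score candidate esters from the long list of other alcohols (descriptors
# computed the same way) and rank them, so only the most promising ones
# get synthesised and tested.
X_candidates = np.array([
    [3.0, 352.5, 4],
    [4.2, 394.6, 6],
])
predicted = model.predict(X_candidates)
ranking = np.argsort(predicted)[::-1]
print("best candidate index:", ranking[0], "predicted activity:", predicted[ranking[0]])
```

With only 10 or 15 measured compounds the model is obviously very rough, which is part of why staying close to the known-good scaffold makes sense.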