Several parts of your claim are incorrect.
First, modern LLMs are not "a huge table of phrases". They are neural networks with billions of learned parameters that generate text one token at a time by computing a probability distribution over the vocabulary given the prior context. There is no lookup table of stored sentences.
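To make "probability distribution over the vocabulary" concrete, here is a toy sketch (made-up vocabulary and scores, not a real model): the network produces a score per token, softmax turns those scores into probabilities, and the next token is sampled from that distribution.

```python
import math
import random

# Toy illustration only: a hypothetical 4-word vocabulary and made-up
# per-token scores (logits) that a model might compute from the context.
vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 0.5, 1.0, -1.0]

def softmax(xs):
    # Subtract the max for numerical stability, then normalize exponentials.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)          # a valid probability distribution (sums to 1)
next_token = random.choices(vocab, weights=probs, k=1)[0]  # sample the next token
```

Nothing here is retrieved from a table of phrases; the "knowledge" lives in how the scores are computed, and generation is sampling from the resulting distribution.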
Second, Eliza-style bots used explicit scripted pattern matching rules. LLMs instead learn statistical representations from large corpora and can generalize to produce novel sequences that were never present in the training data.
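For contrast, an Eliza-style bot really is just scripted rules. A minimal sketch (hypothetical rules, not Weizenbaum's original script): match a regex, fill a canned template, fall back to a stock deflection.

```python
import re

# The entire "intelligence" of an Eliza-style bot: hand-written
# pattern -> template rules. Nothing is learned from data.
rules = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
]

def eliza_reply(text):
    for pattern, template in rules:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1))
    return "Please go on."  # stock deflection when no rule matches

reply = eliza_reply("I am tired of this")  # "Why do you say you are tired of this?"
```

Every possible behavior of such a bot is enumerated by its author in advance; an LLM's outputs are not enumerated anywhere.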
For historical context, see Kent Pitman's Lisp Eliza from the MIT-AI ITS History Project (sites.google.com):
https://news.ycombinator.com/item?id=39373567
https://sites.google.com/view/elizagen-org/
https://sites.google.com/view/elizagen-org/original-eliza
Third, while "pattern matching" is sometimes used informally, it's technically misleading. Transformers perform high-dimensional vector computations, using attention over the context to model relationships between tokens. That's very different from rule-based pattern matching.
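The "attention" being referred to is a concrete vector operation, not rule lookup. A minimal sketch of scaled dot-product attention for a single query over a short context (all numbers made up for illustration):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    # Similarity of the query to each context position, scaled by sqrt(dim).
    d = len(query)
    scores = [dot(query, k) / math.sqrt(d) for k in keys]
    # Softmax turns similarities into a weighting over the context.
    weights = softmax(scores)
    # Output is a weighted blend of the value vectors, dimension by dimension.
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# Made-up 2-d vectors standing in for learned token representations.
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
out = attend(query, keys, values)  # a continuous mix of the context, not a rule firing
```

Every token's representation is a soft, weighted combination of the whole context; there is no rule that either fires or doesn't.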
You can certainly debate whether LLMs "think", but describing them as "Eliza with a big phrase table" is not an accurate description of how they work.
You have the resources at your fingertips to learn how LLMs actually work. You could start with Wikipedia, read Stephen Wolfram's article, or simply ask an LLM to explain how it works. It's quite good at that, while an Eliza bot certainly can't explain how it works, let alone write code.
What Is ChatGPT Doing … and Why Does It Work?
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...