The problem is not just NVD issuing inflated scores. That's their workaday MO: they are required to assume the worst possible combination of factors.
The real problem is that CVSS scoring is utterly divorced from reality. Even the 4.x standard is merely trying - and failing - to paper over the fundamental problems with its (much needed!) EPSS[ß] weighting. A JavaScript library that does something purely internal to a piece of software, and has no concept of authentication at all, is automatically graded "can be exploited without authentication". Congrats: your baseline CVE for some utterly mundane data transformation is now an unauthenticated attack vector. The same applies to exposure scoping: when everything is used over the internet[ĸ], every attack vector is remote and occurs over the network - the highest possible baseline.
This combination means that a large fraction of CVEs in user-facing software start from a CVSS score of 8.0 ("HIGH"), and many mildly amusing bugs get assigned 9.0 ("CRITICAL") by default.
The result? You end up with nonsense such as CVE-2024-24790[0] being given a 9.8 score, because the underlying assumption is that every piece of software using Go's 'net/netip' package is doing Is* checks for AUTHENTICATION (and/or admin access) purposes. Taken to its illogical extreme, we should mark every single if-condition as a call site for "critical security vulnerabilities".
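For reference, the bug itself was real but narrow: netip's Is* methods (IsLoopback, IsPrivate, etc.) ignored IPv4-mapped IPv6 addresses. A sketch of the kind of code that could actually be bitten - the `allowedTarget` SSRF-style filter here is a hypothetical example of mine, not anything from the CVE:

```go
package main

import (
	"fmt"
	"net/netip"
)

// allowedTarget is a hypothetical SSRF-style filter: reject requests
// aimed at internal addresses. This is the sort of security-relevant
// Is* usage the 9.8 score assumes every net/netip user has.
func allowedTarget(raw string) bool {
	addr, err := netip.ParseAddr(raw)
	if err != nil {
		return false
	}
	return !(addr.IsLoopback() || addr.IsPrivate() || addr.IsLinkLocalUnicast())
}

func main() {
	fmt.Println(allowedTarget("8.8.8.8"))        // true: public, allowed
	fmt.Println(allowedTarget("10.0.0.1"))       // false: private, blocked
	// Same private address in IPv4-mapped IPv6 form. On Go versions
	// affected by CVE-2024-24790 (before 1.21.11/1.22.4) IsPrivate
	// returned false here, so the filter was bypassed; patched
	// toolchains block it.
	fmt.Println(allowedTarget("::ffff:10.0.0.1"))
}
```

Even then, the impact depends entirely on whether the caller makes a trust decision from these methods - which is exactly the context the CVSS baseline refuses to consider.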
CVSS scoring has long been a well of problems. In the last few years it has become outright toxic.
ß: EPSS, the Exploit Prediction Scoring System - an estimate of how likely a vulnerability is to actually be exploited.
ĸ: Web browsers are the modern universal application OS.