I like this strategy a lot, but the performance of read queries suffers if they span partitions, correct?
The issue I'm facing is a very large table that is both write- and read-heavy, and the reads don't fall into a specific range of values for any particular column, so I don't think partitioning is an option.
Partitioning is not all that expensive, and it is definitely worth testing for your specific workload. We use TimescaleDB, which relies heavily on Postgres partitions, and we have a bit under 100 million rows in our active set (last 90 days) across 120 partitions (device*time); it works nicely. Over 100 partitions is probably a bit many for this workload, but since it works OK we haven't changed it.
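For context, a device*time layout in plain Postgres (without TimescaleDB) could be declared roughly like this. This is just a sketch; the table and column names (metrics, device_id, recorded_at) and the daily/4-way split are made up for illustration:

    -- Hypothetical schema: range-partitioned by time, hash sub-partitioned by device.
    CREATE TABLE metrics (
        device_id   bigint      NOT NULL,
        recorded_at timestamptz NOT NULL,
        value       double precision
    ) PARTITION BY RANGE (recorded_at);

    -- One partition per day, each split into 4 hash partitions on device_id.
    CREATE TABLE metrics_2024_06_01 PARTITION OF metrics
        FOR VALUES FROM ('2024-06-01') TO ('2024-06-02')
        PARTITION BY HASH (device_id);

    CREATE TABLE metrics_2024_06_01_d0 PARTITION OF metrics_2024_06_01
        FOR VALUES WITH (MODULUS 4, REMAINDER 0);
    -- ...and so on for remainders 1..3, and for each day in the retention window.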
Yes, partitioning will reduce read performance a bit for queries that are not correlated with the partition key. That's why you need to periodically merge smaller partitions into larger ones, so the overall partition count stays bounded.
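The merge is typically done by hand: detach the small partitions, create one coarser partition covering the same range, copy the rows over, and drop the old tables. A sketch, assuming plain daily range partitions and hypothetical names (run it in a maintenance window, since it takes locks on the parent):

    -- Roll three daily partitions up into one covering 2024-06-01..2024-06-03.
    BEGIN;
    ALTER TABLE metrics DETACH PARTITION metrics_2024_06_01;
    ALTER TABLE metrics DETACH PARTITION metrics_2024_06_02;
    ALTER TABLE metrics DETACH PARTITION metrics_2024_06_03;

    CREATE TABLE metrics_2024_06_w1 PARTITION OF metrics
        FOR VALUES FROM ('2024-06-01') TO ('2024-06-04');

    INSERT INTO metrics_2024_06_w1
        SELECT * FROM metrics_2024_06_01
        UNION ALL SELECT * FROM metrics_2024_06_02
        UNION ALL SELECT * FROM metrics_2024_06_03;

    DROP TABLE metrics_2024_06_01, metrics_2024_06_02, metrics_2024_06_03;
    COMMIT;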
It is a lot of admin work, but if you really need to scale up Postgres write throughput, I don't see many other options without increasing hardware costs.
I assume you have already picked the low-hanging fruit discussed in the neighboring comments - batch writes, make sure you are using COPY instead of INSERT, tune the Postgres parameters adequately, and put the WAL on the fastest disk you can get.
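For reference, the COPY path and the WAL-related knobs I have in mind look something like this. The table name and the parameter values are illustrative, not recommendations for your workload:

    -- Bulk-load with COPY instead of row-by-row INSERTs; from psql, \copy streams the file from the client.
    \copy metrics (device_id, recorded_at, value) FROM 'metrics.csv' WITH (FORMAT csv)

    -- postgresql.conf settings that commonly matter for write-heavy loads (values are illustrative):
    --   max_wal_size = 8GB                  -- fewer, larger checkpoints
    --   checkpoint_completion_target = 0.9  -- spread checkpoint I/O over the interval
    --   wal_compression = on                -- cheaper full-page writes
    --   synchronous_commit = off            -- only if losing the last few ms of commits on a crash is acceptable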