>NFP’s solution was to add redundancy, similar in principle to a data-storage technique called RAID (redundant array of independent disks). Instead of sending each piece of data once, we send extra information that allows missing or corrupted packets to be reconstructed.
A bit disappointing TBH if that is their solution. Seems like everything is trying to pigeonhole everyone into the proof-of-work scenario: if we have more power/energy than you, then we will beat you in everything — security, coding, and censorship circumvention.
I think they're just doing forward error correction (FEC). Not sure what that has to do with proof-of-work?
What other method do you propose to deal with data loss?
It's a sound solution. They are describing forward error correction with a variable code rate. Reducing the code rate increases the amount of redundancy, allowing the data to be recovered at a lower signal-to-noise ratio. It's a standard part of communications theory and has a strong theoretical basis. A low enough code rate will overcome almost any level of jamming at the cost of reduced data rate, provided the receiver does not saturate.
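To make the rate/robustness trade-off concrete, here is a toy sketch (my own illustration, not NFP's actual scheme — real systems use far stronger codes like LDPC or Reed-Solomon) using the simplest possible FEC, a repetition code, over a simulated noisy channel. Lowering the code rate (repeating each bit more times) drives the residual error rate down even when 20% of transmitted bits are corrupted:

```python
import random

def encode(bits, repeat):
    # Rate-1/repeat repetition code: transmit each bit `repeat` times.
    return [b for b in bits for _ in range(repeat)]

def decode(received, repeat):
    # Majority vote over each group of `repeat` copies.
    return [1 if sum(received[i:i + repeat]) > repeat // 2 else 0
            for i in range(0, len(received), repeat)]

def channel(bits, flip_prob, rng):
    # Binary symmetric channel: flip each bit with probability flip_prob
    # (a crude stand-in for noise or jamming).
    return [b ^ (rng.random() < flip_prob) for b in bits]

rng = random.Random(42)
msg = [rng.randint(0, 1) for _ in range(1000)]
for repeat in (1, 3, 9):  # code rates 1, 1/3, 1/9
    rx = decode(channel(encode(msg, repeat), 0.2, rng), repeat)
    errors = sum(a != b for a, b in zip(msg, rx))
    print(f"rate 1/{repeat}: {errors} residual bit errors out of {len(msg)}")
```

At rate 1 roughly 20% of bits arrive wrong; at rate 1/9 almost all errors are voted away — exactly the "more redundancy buys operation at worse SNR" trade described above, paid for in a 9x lower data rate.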