Hacker News

layer8 today at 1:35 PM

Better fill those files with random bytes, to ensure the filesystem doesn’t apply some “I don’t actually have to store all-zero blocks” sparse-file optimization. To my knowledge no non-compressing file system currently does this, but who knows about the future.
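One concrete way to see the zero-block concern is hole punching: `fallocate --dig-holes` (util-linux) deallocates any block that is all zeros, while random data is immune. A sketch, assuming a filesystem that supports hole punching; the filenames are made up:

```shell
# Make one all-zero file and one random file, both 8 MiB, fully allocated.
dd if=/dev/zero    of=zeros.bin  bs=1M count=8 2>/dev/null
dd if=/dev/urandom of=random.bin bs=1M count=8 2>/dev/null

fallocate -d zeros.bin    # digs holes: disk usage collapses toward 0
fallocate -d random.bin   # nothing to dig: disk usage stays ~8 MiB

du -h zeros.bin random.bin   # apparent sizes unchanged, usage differs
rm zeros.bin random.bin
```

Note that neither file's length changes; only the allocated blocks do, which is exactly why zero-filled "reserve" files are fragile.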


Replies

nyrikki today at 6:11 PM

XFS, ext4, Btrfs, etc. all support sparse files, so any app can cause problems. You can try it with:

    dd if=/dev/zero of=sparse_file.img bs=1M count=0 seek=1024

If you add conv=sparse to the dd command with a smaller block size, it will sparsify what you copy too; use the wrong cp flags and a sparse file will explode to its full size on the destination.
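For GNU cp specifically, the flag to watch is --sparse (auto, always, or never). A sketch of both directions, assuming GNU coreutils and a sparse-capable filesystem; filenames are made up:

```shell
# 16 MiB of zeros, fully allocated on disk.
dd if=/dev/zero of=zeros.img bs=1M count=16 2>/dev/null

cp --sparse=always zeros.img sparse_copy.img   # punches holes in zero runs
cp --sparse=never  zeros.img dense_copy.img    # allocates every block

# Same apparent size, very different disk usage.
du -h zeros.img sparse_copy.img dense_copy.img
rm zeros.img sparse_copy.img dense_copy.img
```

The default, --sparse=auto, only preserves holes when the source already looks sparse, which is why a round trip through the wrong tool can silently change allocation.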

This is a much harder problem to deal with than the filesystem layer, because the size reported by stat will usually look normal while the actual allocation (st_blocks, what du reports) is smaller.
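Reusing the dd command above, the mismatch is easy to see with stat and du (a sketch assuming GNU coreutils):

```shell
# Recreate the 1 GiB sparse file: count=0 seek=1024 extends the file
# without writing any data blocks.
dd if=/dev/zero of=sparse_file.img bs=1M count=0 seek=1024 2>/dev/null

stat -c 'apparent size: %s bytes'  sparse_file.img  # logical length: 1073741824
stat -c 'allocated: %b blocks'     sparse_file.img  # near 0 on a sparse-capable fs
du -h sparse_file.img                               # disk usage, not file length
rm sparse_file.img
```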

freedomben today at 2:12 PM

Yep, btrfs will happily do this to you. I verified it the hard way.

ape4 today at 1:49 PM

If I recall correctly:

    dd if=/dev/urandom of=/home/myrandomfile bs=1 count=N
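That works, but bs=1 issues one read/write syscall per byte, which is painfully slow for large N. A faster equivalent sketch, using head -c from GNU coreutils; the size and filename here are just examples:

```shell
# Write 10 MiB of random bytes in one stream instead of byte-at-a-time.
# N and the filename are illustrative; substitute your own.
N=$((10 * 1024 * 1024))
head -c "$N" /dev/urandom > myrandomfile
stat -c %s myrandomfile   # prints 10485760
rm myrandomfile
```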