A neat trick I was told is to always have ballast files on your systems. Just a few GiB of zeros that you can delete in cases like this. This won't fix the problem, but it will buy you time and free up space for things like lock files so you can get back to a working system.
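A minimal sketch of the idea, assuming a Linux box with fallocate available; the path and size below are placeholders (a few GiB in practice, kept small here for illustration):

```shell
# Hypothetical ballast file on the filesystem most likely to fill up.
BALLAST=/tmp/ballast.img
fallocate -l 16M "$BALLAST"   # reserves real blocks without writing them all
ls -lh "$BALLAST"
# Emergency: rm "$BALLAST" frees the space immediately.
```

Put it on whichever filesystem tends to fill (often / or /var), not on a tmpfs.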
Similarly, I always leave some space unallocated in LVM volume groups. It means I can temporarily expand a volume easily if needed.
It also leaves some space unused to help out the wear-levelling on the SSDs underlying the RAID array that acts as the PV¹ for LVM. I'm not 100% sure this is still needed² but I've not looked into it sufficiently, so until I do I'll keep the habit.
--------
[1] If there are multiple PVs, from different drives/arrays, in the VG, then you might need to manually skip a bit on each one, because LVM will naturally fill one before using the next. Just allocate a small LV on each and don't use it. You can remove one or all of them and add the extents to the full LV if/when needed. Giving them useful names also reminds you why that bit of space is carved out.
[2] SSDs already over-provision (keep some flash unexposed for wear-levelling) by default, IIRC
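A sketch of the LVM side; the VG name vg0, LV name data, and PV /dev/sdb1 are all made up, and these commands need root against real devices:

```shell
# Check unallocated space in the VG (the "VFree" column):
vgs vg0
# Grow an LV by 10 GiB and resize its ext4 filesystem in one go:
lvextend -r -L +10G /dev/vg0/data
# Footnote-1 style placeholder LV pinned to one PV, never mounted;
# its extents can be handed to a full LV later:
lvcreate -L 4G -n spare-do-not-use vg0 /dev/sdb1
```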
> A neat trick I was told is to always have ballast files on your systems.
ZFS has a "reservation" mechanism that's handy:
> The minimum amount of space guaranteed to a dataset, not including its descendants. When the amount of space used is below this value, the dataset is treated as if it were taking up the amount of space specified by refreservation. The refreservation reservation is accounted for in the parent datasets' space used, and counts against the parent datasets' quotas and reservations.
* https://openzfs.github.io/openzfs-docs/man/master/7/zfsprops...
Quotas prevent users/groups/directories (ZFS datasets) from using too much space, but reservations ensure that particular areas always have a minimum amount set aside for them.
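A sketch of setting one up, with a made-up pool and dataset name (needs root and a real pool):

```shell
# Guarantee 8 GiB to a dataset; it counts against the parent's space.
zfs create tank/reserved
zfs set refreservation=8G tank/reserved
zfs get refreservation tank/reserved
# When the pool fills, release the guarantee to recover:
# zfs set refreservation=none tank/reserved
```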
I always called it a “bit-mass”. Like a thermal mass used in freezers in places where the power is not very stable.
I knew I didn't invent the concept, as there are so many systems that cannot recover if the disk is totally full (many systems need to perform a write before they can even execute a command that removes things gracefully).
The latest place I hit this issue is Unreal Engine's Horde build system: it's so tightly coupled with caches, object files and database references that a manual clean-up is extremely difficult and likely to leave an unstable system. But you can configure it to keep fewer build artefacts around, and then it will clear itself out gracefully. It just needs to be able to write to the disk to do it.
Now that I think about it, I don’t do this for inodes, but you can run out of those too and end up in a weird “out of disk” situation despite having lots of usable capacity left.
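Checking for that case is cheap, since df reports inodes separately from blocks:

```shell
# Block usage vs inode usage: IUse% at 100% while Use% is low means the
# disk is "full" of file metadata, not data.
df -h /
df -i /
```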
I did this too, but I also zipped the file; turns out it had a great packing ratio!
Interesting strategy, can't believe I've never heard of this one before.
Would it be more pragmatic to allocate a swap file instead? Something that provides a theoretical benefit in the short term vs a static reservation.
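A sketch of the swap-file variant (root required; /swapfile is a placeholder path). One caveat: swapon rejects files with holes on some filesystems, so dd rather than fallocate is the safe way to preallocate:

```shell
dd if=/dev/zero of=/swapfile bs=1M count=2048   # 2 GiB, fully written out
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# Disk-full emergency: swapoff /swapfile && rm /swapfile
```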
This is the snippet I use a lot. If in doubt, when even rm won't work, just reboot.
Disk Space Insurance File
fallocate -l 8G /tmp/DELETE_IF_OUT_OF_SPACE.img
https://gist.github.com/klaushardt/9a5f6b0b078d28a23fd968f75...

This is why I never empty the Rubbish Bin/Trash Can on my Linux laptop until the disk fills.
Sounds like something straight out of Dilbert
Similar to the old game development trick of hiding some memory away and then freeing it up near the end of development when the budget starts getting tight.
I did this recently, i.e. docker image prune. Can confirm, saved the day.
Surely a 50% warning alarm on disk usage covers this without manual intervention?
Some filesystems can be unable to delete a file when the disk is full. Something to be a bit worried about.
> A neat trick I was told is to always have sleep statements in your code. Just a few sleep statements that you can delete in cases like this. This won't fix the problem, but will buy you time and free up latency for stuff like slow algorithms so you can get faster code.
FTFY ;)
Would another way be to drop the reserved space (typically 1% to 5% on an ext file system)?
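That reserved pool is tunable; a sketch with a hypothetical device name (the reserve is root-only by default, which already makes it a kind of built-in ballast):

```shell
# Inspect the current reservation (default is 5% of blocks):
tune2fs -l /dev/sda2 | grep -i 'reserved block'
# Lower it to 1% to hand the difference back to ordinary users:
tune2fs -m 1 /dev/sda2
```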
Better fill those files with random bytes, to ensure the filesystem doesn’t apply some “I don’t actually have to store all-zero blocks” sparse-file optimization. To my knowledge no non-compressing file system currently does this, but who knows about the future.
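The difference is easy to demonstrate by comparing apparent size against blocks actually allocated (paths are placeholders):

```shell
truncate -s 16M /tmp/sparse.img    # a hole: 16M apparent, no blocks allocated
dd if=/dev/urandom of=/tmp/solid.img bs=1M count=16 status=none  # real bytes
du -h /tmp/sparse.img /tmp/solid.img   # sparse shows ~0, solid ~16M
# rm /tmp/sparse.img /tmp/solid.img    # clean up when done
```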