My dmesg is already constantly full of

    INFO: task btrfs:103945 blocked for more than 120 seconds.
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

Until eventually

    Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings

So I'm looking forward to getting an actual count of how often this happens without needing to babysit the warning suppressions and count the incidents myself.
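(For reference, both knobs named in those messages are ordinary sysctls under /proc/sys/kernel/, present when the kernel is built with CONFIG_DETECT_HUNG_TASK. A rough, purely illustrative C sketch of poking them: read the current timeout, then write -1 to hung_task_warnings, which the kernel documentation describes as reporting an unlimited number of warnings, so at least the reports stop being suppressed even if you still have to count them yourself.)

    #include <stdio.h>
    #include <stdlib.h>

    /* Sysctl files documented in Documentation/admin-guide/sysctl/kernel.rst;
     * both require CONFIG_DETECT_HUNG_TASK, and writing requires root. */
    #define TIMEOUT_PATH  "/proc/sys/kernel/hung_task_timeout_secs"
    #define WARNINGS_PATH "/proc/sys/kernel/hung_task_warnings"

    int main(void)
    {
        long timeout;
        FILE *f = fopen(TIMEOUT_PATH, "r");
        if (f && fscanf(f, "%ld", &timeout) == 1)
            printf("hung task timeout: %ld seconds (0 disables the check)\n", timeout);
        if (f)
            fclose(f);

        /* -1 = never stop reporting (the default is a small finite budget). */
        f = fopen(WARNINGS_PATH, "w");
        if (!f) {
            perror(WARNINGS_PATH);
            return EXIT_FAILURE;
        }
        fprintf(f, "-1\n");
        fclose(f);
        puts("hung_task_warnings set to -1: future reports will not be suppressed");
        return EXIT_SUCCESS;
    }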
In my limited experience this often comes from an overloaded virtualization platform that the VM sits on. It can be VMware or Proxmox; on Proxmox it sometimes happens while the VM is being live-migrated to another virtualization host. It can also happen when the backend storage the VM lives on is busy serving other hosts.
EDIT: none of the VMs where I have ever seen this had btrfs; the follow-up error message always mentioned ext4, so it's a pretty much filesystem-agnostic issue. If this is on hardware, however, then I don't know what's going on there.
NFS client code occasionally triggered this in the (distant) past, but that's just how it is when the 'hard' mount option is chosen and the server becomes inaccessible for a prolonged time. The solution there is to get a reliable, sufficiently capable server and network (NOT to use 'soft' mounts, as that can lead to silent data corruption), and to avoid cross-ocean mounts ;-}
But yes, running a VM on a grossly overloaded (over-committed memory?) host might trip timeout warnings like this as well.
> If this is on hardware, however, then I don't know what's going on there.
It is. Bare metal install on the server in my closet, with plenty of resources (CPU/memory) to spare.
You could leave this problem behind by switching to a filesystem that isn't full of deadlock bugs.
A background thread performing blocking I/O is an implementation detail, not a bug. Other filesystems don't have or need that sort of bookkeeping, so if a block device stalls badly enough to trigger these warnings it will be attributed to application threads (if at all) rather than to btrfs worker threads; but regardless, the stall very much still happens.
> if a block device stalls badly
That's really the issue at heart, because I've seen these on zfs as well... but you'd think the filesystem would report some progress to keep bumping the timer so it doesn't start spamming dmesg. /shrug
I am curious - is this message indicative of a problem in the fs? I would have assumed anything marked "INFO" is, tautologically, not an error, but surely a filesystem shouldn't be locking up? Or is it just suggestive of high system load or poor hardware performance?
In my experience, "hung task" is almost always due to running out of RAM and the scheduler constantly thrashing instead of doing useful work. I rarely actually reach the point of seeing the message, since I'll sysrq-kill if I catch it early enough, or else hard-reboot.
Note also that modern filesystems do a lot of background work that doesn't strictly need to be done immediately for correctness.
(of course, it also seems common for people to completely disregard the well-documented "this feature is unreliable, don't use it" warnings that btrfs has, then complain that they have problems without mentioning that they ignored the warnings until everyone is halfway through the thread)
The only problems I've encountered in all my years of using btrfs are:
* when (all copies of) a file bitrots on disk, you can't read it at all, rather than being able to copy the mostly-correct file and see if you can hand-correct it into something usable
* if you enable new compression algorithms on your btrfs volume, you can't read your data from old kernels (often on liveusb recovery disks)
* fsync is slow. Like, really really slow. And package managers designed for shitty CoW-less filesystems use fsync a lot.
> In my experience, "hung task" is almost always due to running out of RAM
In my case, I don't think this machine ever commits more than around 5GB of its 32GB available memory, so I doubt it's that.
> it also seems common for people to completely disregard the well-documented "this feature is unreliable, don't use it" warnings that btrfs has
Now that I am definitely doing. I won't give up raid6 until it eats all my data for a fourth time.
Hung tasks due to low memory are a bug, not a feature. Any time you put the Linux kernel under memory pressure you trigger its wealth of defects in error-handling paths, none of which are tested and most of which are rarely exercised in practice. For example, squashfs used to have a resource leak under memory pressure where it would exit a function without releasing a lock, after which all block operations system-wide would hang forever until reboot. Linux is absolutely crawling with that type of defect, but not uniformly: some subsystems have more than others, and btrfs is unusually dense with them.
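To make that class of defect concrete, here is a generic userspace sketch of the pattern being described, with made-up names and a pthread mutex standing in for a kernel lock (this is not the actual squashfs code): an allocation failure on a rarely exercised error path returns while the lock is still held, so every later caller blocks forever and eventually shows up as a hung task. The fixed variant uses the kernel's usual single-exit "goto out_unlock" style.

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Buggy pattern: the error path forgets to drop the lock. */
    static int read_block_buggy(size_t len, void **out)
    {
        pthread_mutex_lock(&cache_lock);
        void *buf = malloc(len);
        if (!buf)
            return -1;                 /* BUG: cache_lock is still held */
        *out = buf;
        pthread_mutex_unlock(&cache_lock);
        return 0;
    }

    /* Fixed pattern: one exit path releases the lock on every branch. */
    static int read_block_fixed(size_t len, void **out)
    {
        int ret = 0;
        pthread_mutex_lock(&cache_lock);
        void *buf = malloc(len);
        if (!buf) {
            ret = -1;
            goto out_unlock;
        }
        *out = buf;
    out_unlock:
        pthread_mutex_unlock(&cache_lock);
        return ret;
    }

    int main(void)
    {
        void *p = NULL;
        if (read_block_buggy(4096, &p) == 0)   /* succeeds here; only the out-of-memory path leaks the lock */
            free(p);
        p = NULL;
        if (read_block_fixed(4096, &p) == 0)
            free(p);
        return 0;
    }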
The in-kernel btrfs code locking up should never happen at all. There is a rumor going around that btrfs never reached maturity and suffers from design issues.
That's why I use ext4 exclusively on Linux. I've never once had a filesystem issue.
ext4 works fine on my Linux laptop and I agree, it's proven itself over many years to be supremely reliable, though it doesn't compare in features to the more complex filesystems.
On my home media server, however, I'm using ZFS in a RAID array, with regular scrubs and snapshots. ZFS has many features like RAID, scrubs, COW, snapshots, etc. that you just don't get on ext4. However, unlike btrfs, ZFS seems to have a great reputation for reliability with all its features.
I use ext4 on my home media server (24TB). I'm using LVM and MD, and it's been rock solid for a couple decades now, surviving all sorts of hardware failures.
I haven't missed out on any zfs or btrfs features. Yes, I know about their benefits, and no, I don't care if a few bits flip here or there over time.
Granted it was at least a decade ago but the team I was on had a terrible experience with ZFS and that bad taste still lingers. And I don’t need any of its features.
Could I ask you to expand on your problems with ZFS? Code bugs, data loss, operational problems, ...? (Asking because I use it and would like to learn from your problems rather than having to experience the pain myself.)
Given the mailing list history with Linus, I wouldn't be surprised.
It could be any of the above. I'd say it's INFO because the kernel itself is not in an error state; it's information about a process doing something unusual.
I was planning on it but the filesystem I wanted to switch to keeps getting set back by the author's CoC drama
What did you want to switch to?
I suppose the author at least isn't a murderer :)
The drama part was most likely referring to bcachefs.
Oh? What happened there?
What counts as a hung task? Blocking on unsatisfiable I/O for more than X seconds? Scheduler hasn’t gotten to it in X seconds?
If a server process is blocking on accept(), wouldn't it count as hung until a remote client connects? Or do only certain operations count?
torvalds/linux, kernel/hung_task.c:
static void check_hung_task(struct task_struct *t, unsigned long timeout) https://github.com/torvalds/linux/blob/9f16d5e6f220661f73b36...
static void check_hung_uninterruptible_tasks(unsigned long timeout) https://github.com/torvalds/linux/blob/9f16d5e6f220661f73b36...
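Reading those two functions: the watchdog only examines tasks sleeping uninterruptibly ("D" state), and it reports a task when its voluntary-plus-involuntary context-switch count hasn't changed for the whole timeout window. A server parked in accept() sleeps interruptibly, so it is never considered. Below is a heavily simplified userspace paraphrase of that logic; the function and field names loosely follow hung_task.c, while the fake clock and task list are invented for illustration.

    #include <stdio.h>

    #define TASK_INTERRUPTIBLE   0x0001
    #define TASK_UNINTERRUPTIBLE 0x0002

    struct task {
        const char   *comm;              /* command name */
        unsigned int  state;
        unsigned long nvcsw, nivcsw;     /* voluntary / involuntary context switches */
        unsigned long last_switch_count; /* snapshot taken on the previous scan */
        unsigned long last_switch_time;  /* "jiffies" at that snapshot */
    };

    static unsigned long jiffies;        /* fake clock: here 1 tick == 1 second */
    static unsigned long timeout = 120;  /* stand-in for hung_task_timeout_secs */

    static void check_hung_task(struct task *t)
    {
        unsigned long switch_count = t->nvcsw + t->nivcsw;

        if (switch_count != t->last_switch_count) {
            /* The task was switched since the last scan, so it is not hung. */
            t->last_switch_count = switch_count;
            t->last_switch_time = jiffies;
            return;
        }
        if (jiffies - t->last_switch_time < timeout)
            return;                      /* blocked, but not for long enough yet */

        /* The real code also decrements hung_task_warnings and dumps a stack. */
        printf("INFO: task %s blocked for more than %lu seconds.\n", t->comm, timeout);
    }

    static void check_hung_uninterruptible_tasks(struct task *tasks, int n)
    {
        /* khungtaskd only looks at uninterruptible ("D") sleepers; a server
         * waiting in accept() sleeps interruptibly and is never considered. */
        for (int i = 0; i < n; i++)
            if (tasks[i].state == TASK_UNINTERRUPTIBLE)
                check_hung_task(&tasks[i]);
    }

    int main(void)
    {
        struct task tasks[] = {
            { "btrfs", TASK_UNINTERRUPTIBLE, 10, 2, 12, 0 },  /* stuck in D state */
            { "nginx", TASK_INTERRUPTIBLE,    5, 1,  6, 0 },  /* parked in accept() */
        };

        for (jiffies = 0; jiffies <= 240; jiffies += 60)      /* pretend periodic scans */
            check_hung_uninterruptible_tasks(tasks, 2);
        return 0;
    }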
Just to double check my understanding (because being wrong on the internet is perhaps the fastest way to get people to check your work):
Is this saying that regular tasks that haven't been scheduled for two minutes and tasks that are uninterruptible (truly so, not idle or also killable despite being marked as uninterruptible) that haven't been woken up for two minutes are counted?
The comment in the code says two minutes, but the time actually depends on the timeout passed in as a parameter (the kernel.hung_task_timeout_secs sysctl quoted at the top of the thread, which defaults to 120 seconds).
Your and the Llama's explanations would make good comments for the source and/or the docs if true.
And there's https://en.wikipedia.org/wiki/Zombie_process too
Not the same thing by any means - zombies don't indicate that something is wrong with the kernel or hardware.
The zombie process state is a normal transient state for all exiting processes where the only remaining function of the process is as a container for the exiting process's id and exit status; they go away once the parent process calls some flavor of the "wait" system call to collect the exit status. A pileup of zombies indicates a userspace bug: a negligent parent process that isn't collecting the exit status in a timely manner.
Additionally, zombie processes hold a few more pieces of process accounting (rusage) until they are reaped. See wait3(2), wait4(2) and getrusage(2).
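A minimal userspace illustration of the reaping described above, assuming you just want to watch the mechanics: the child is a zombie from the moment it exits until the parent calls one of the wait family, and wait4(2) collects the exit status and the rusage in a single call.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return EXIT_FAILURE;
        }
        if (pid == 0) {
            /* Child: exit immediately. Until the parent waits, it remains
             * a zombie holding only its pid, exit status and rusage. */
            _exit(42);
        }

        sleep(1);  /* window in which the child sits in the zombie state */

        int status = 0;
        struct rusage ru;
        /* wait4() reaps the zombie and hands back exit status plus rusage. */
        if (wait4(pid, &status, 0, &ru) < 0) {
            perror("wait4");
            return EXIT_FAILURE;
        }
        if (WIFEXITED(status))
            printf("child %d exited with %d, user time %ld.%06lds\n",
                   (int)pid, WEXITSTATUS(status),
                   (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
        return EXIT_SUCCESS;
    }

While the parent is in its one-second sleep, ps -o pid,stat,comm shows the child in state Z (defunct); after wait4() returns, it is gone.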