amd64/154228: md getting stuck in wdrain state

Carl k0802647 at
Sat Jan 22 23:00:22 UTC 2011

>Number:         154228
>Category:       amd64
>Synopsis:       md getting stuck in wdrain state
>Confidential:   no
>Severity:       critical
>Priority:       high
>Responsible:    freebsd-amd64
>State:          open
>Class:          sw-bug
>Submitter-Id:   current-users
>Arrival-Date:   Sat Jan 22 23:00:21 UTC 2011
>Originator:     Carl
>Release:        FreeBSD-8.1-RELEASE-amd64
FreeBSD xxxxxxxx 8.1-RELEASE FreeBSD 8.1-RELEASE #0: Mon Jul 19 02:36:49 UTC 2010     root at  amd64
If I try to observe a 'dd' process in action whilst using it to generate a file inside a particular file-backed memory device, I end up with unkillable hung processes. It is at least faintly reminiscent of this old report:

and may be related to bug reports kern/45558 and kern/127420, neither of which appears to have ever been addressed.

My scenario goes like this. I have a disk image in a large sparse file (60GiB apparent, 28GiB used). The image is taken from an MBR-sliced SSD containing one 34GiB slice housing a bsdlabel. The bsdlabel contains 1 swap and 5 UFS partitions. With the aid of mdconfig, I am mounting only one of the UFS partitions to /media. That partition is 1GiB in size and happens to consist of few or no sparse blocks. All I am trying to do is to zero that partition's unused space with the following:

  dd if=/dev/zero of=/media/zero bs=1M

Because this process seems to be quite slow, I switch to another window (I'm using 'screen') and run "ls /media" or "df". Both of these commands, and any other command I issue that references the file-backed memory disk in question, immediately hang and become unkillable. The 'dd' process is also hung and unkillable. I have no recourse but an undignified reboot, because the system as a whole hangs when I try to shut it down. This happens every time with that particular disk image file on this particular host.

The host is running FreeBSD-8.1-RELEASE (amd64) on an Intel Xeon E3110 with 4GiB DRAM, a matched pair of Seagate Constellation ES hard drives, GPT partitions which are gmirrored, and gjournalled UFS2 file systems. It is remote and used by others too, so hanging it is a bad thing.

Refer to "How to repeat the problem" for a test script I wrote which did reproduce the failure once. Here's the relevant process stats after the last time that script hung:

# ps -axl | egrep 'me\dia|ST\AT'
    0  7472  7398   0  51  0  7856  2096 wdrain D+     2    0:00.81 dd if=/dev/zero of=/media/zero bs=1M
    0  7509  7398   0  76  0  8224  1576 suspfs DE+    2    0:00.00 ls /media
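
For anyone who can reproduce this, it would be worth capturing kernel stacks of the stuck processes before rebooting. The helper below is not part of the original report; it filters FreeBSD `ps -axl` output for the wchan values seen above (PID is the second column in that layout), and `procstat -kk` is assumed to be available, as it ships with FreeBSD 8.x:

```shell
# Hypothetical helper, not from the original report: read `ps -axl`
# output on stdin and print the PIDs of processes whose WCHAN column
# shows wdrain or suspfs (PID is field 2 in FreeBSD's ps -axl layout).
extract_stuck_pids() {
    awk '/wdrain|suspfs/ { print $2 }'
}

# On the affected host, one could then dump kernel stacks with:
#   ps -axl | extract_stuck_pids | xargs -n 1 procstat -kk
```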

I ran the script in a loop for 12 hours on a different FreeBSD-8.1-RELEASE-i386 host equipped with Intel Celeron 1.06GHz and 1GiB DRAM, but that system has yet to fail. This second host is obviously very much slower hardware, has a single Intel X25-V G2 SSD with no gjournalling, and is essentially idle.

The same script was also run for about an hour without failure on another old Pentium 4 3GHz with 2GiB DRAM and FreeBSD-8.1-RELEASE-i386, a single hard disk and again no gjournalling or gmirror.

I do not have a second FreeBSD-8.1-RELEASE-amd64 host on which to test this.

I am hoping others can reproduce the problem using the above script or some variation on the concept.
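
A looping harness like the 12-hour run mentioned above can be driven by a small wrapper along these lines (a sketch, not the reporter's actual harness; `repro.sh` is an assumed filename for the script under "How to repeat the problem"):

```shell
# Hypothetical wrapper (not from the original report): run a command
# repeatedly, stopping when it fails or after a fixed number of runs.
run_until_failure() {
    max=$1; shift
    i=1
    while [ "$i" -le "$max" ]; do
        if ! "$@"; then
            echo "failed on run $i"
            return 1
        fi
        i=$((i + 1))
    done
    echo "no failure in $max runs"
}

# On the affected host: run_until_failure 1000 ./repro.sh
# Note that if the bug triggers, the current run simply never returns,
# since the stuck processes are unkillable.
```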

Carl                                             / K0802647

In an effort to make the problem reproducible for reporting purposes, I tried to devise a script that would approximate my situation. The script below did eventually fail after numerous runs on the same amd64 host, but it usually runs to completion successfully, unlike my original scenario; this suggests a timing-sensitive bug. Because the failure rate is low with this script, and because I must email someone at the remote site to forcibly reboot the machine once the processes become unkillable, I have been unable to simplify it further, although I am sure quite a few simplifications are possible:

---------- begin script ----------

#!/bin/sh -ve
truncate -s 1G img.img
mdconfig -f img.img -S 512 -y 16 -x 63 -u 11
gpart create -s MBR md11
gpart add -t freebsd md11
# I expect making the image bootable should be unnecessary.
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 md11
gpart set -a active -i 1 md11
bsdlabel -w md11s1
bsdlabel md11s1 | sed -e '/^ *a:/s/unused/4.2BSD/' > /tmp/b.l
bsdlabel -R md11s1 /tmp/b.l ; rm /tmp/b.l
newfs /dev/md11s1a
# The next 2 lines are weird and probably unnecessary,
# but it is the original scenario.
mdconfig -d -u 11
mdconfig -f img.img -S 512 -y 255 -x 63 -u 11
mount /dev/md11s1a /media || exit
df -h | egrep 'Size|md11'
dd if=/dev/zero of=/media/zero bs=1M &
ps -axl | egrep 'ST\AT|d\d if' || true
# Poll until the background dd exits.
while jobid > /dev/null
do
sleep 1
ls /media > /dev/null
df -h | egrep 'md11'
ps -axl | egrep 'ST\AT|d\d if' || true
done
umount /media
mdconfig -d -u 11
rm img.img

---------- end script ----------

No known fix.

