zfs destroy / dataset is busy

Written by Walter Doekes

Just now, I tried to remove a ZFS dataset, and it reported dataset is busy for no apparent reason.

# zfs list -r data
NAME                      USED  AVAIL  REFER  MOUNTPOINT
data                     3.12T   405G   251M  /data
data/kubernetes-logging  2.08T   405G  2.08T  /data/kubernetes/logging
data/rook-config         36.5M   405G  36.5M  /data/rook-config
data/rook-data           1.03T   708G   753G  -
# zfs destroy data/kubernetes-logging
cannot destroy 'data/kubernetes-logging': dataset is busy

The usual suspects were checked (recapped as a quick session below the list):

  • The dataset was not mounted (cat /proc/mounts | grep kubernetes). It could be mounted and unmounted just fine though.
  • There were no clones: zdb data | grep '%' turned up nothing.
  • lsof | grep data/kubernetes turned up nothing either.
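
For reference, this is roughly what those checks looked like as one session; all three came back empty:

# cat /proc/mounts | grep kubernetes
# zdb data | grep '%'
# lsof | grep data/kubernetes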

After some time wasted with zdb, it turned out that the mount point / directory was still held by a Docker container:

# grep data/kubernetes /proc/*/mounts
grep: /proc/11941/mounts: Invalid argument
/proc/16986/mounts:data /data/kubernetes zfs rw,noatime,xattr,posixacl 0 0
/proc/16986/mounts:data/kubernetes-logging /data/kubernetes/logging zfs rw,noatime,xattr,posixacl 0 0
grep: /proc/18343/mounts: No such file or directory
grep: /proc/18365/mounts: No such file or directory

A-ha!
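
So the host's /proc/mounts was clean, but PID 16986 still had the filesystems mounted in its own mount namespace. A quick way to confirm that a process lives in a separate mount namespace is to compare the namespace links in /proc (the inode numbers below are illustrative, not from the original session):

# readlink /proc/self/ns/mnt /proc/16986/ns/mnt
mnt:[4026531840]
mnt:[4026532459]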

# ps faxu | grep 16986 -B1
16961  ?  Sl   jul08   9:04  \_ containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/2e5ff94b9e13a139eb125eeeddf31044a74f97c74adec2e781d5f33b6d3149e1 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
16986  ?  Ssl  jul08  77:36  |   \_ /hostpath-provisioner
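
(As an aside: if the docker CLI is available, it can also map a PID back to its container directly, as an alternative to the ps approach used above. This prints every container's init PID and name, filtered to the one we care about:

# docker ps -q | xargs docker inspect --format '{{.State.Pid}} {{.Name}}' | grep '^16986 '
)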

Interestingly, the container itself did not seem to be touching the actual mount point:

# docker inspect 2e5ff94b9e13a | grep data/kubernetes -B2
        "HostConfig": {
            "Binds": [
                "/data/kubernetes:/data/kubernetes",

(See, no /logging. The bind mount of /data/kubernetes is recursive though, so presumably the logging filesystem, already mounted below it when the container started, was copied into the container's mount namespace along with it, and stayed mounted there even after it was unmounted on the host.)
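
If you want to see how Docker set up that bind, including its propagation mode, something like this should show it (output shape varies a bit per Docker version):

# docker inspect 2e5ff94b9e13a --format '{{json .Mounts}}' | python3 -m json.tool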

But killing that hostpath-provisioner did the trick:

# kill 16986
# zfs destroy data/kubernetes-logging

(No error. Lots of free space again.)
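
In hindsight, killing the process is rather blunt. A gentler option, untested in this case, would be to unmount the stale mount from inside the container's mount namespace; with a reasonably recent util-linux (2.33 or newer), the host's umount can do that directly:

# umount --namespace /proc/16986/ns/mnt /data/kubernetes/logging
# zfs destroy data/kubernetes-logging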

