git: 71f8b70ba8 - main - Convert ZFS chapter to active voice and remove weasel/unnecessary words

From: Benedict Reuschling <bcr_at_FreeBSD.org>
Date: Sun, 14 Nov 2021 14:06:36 UTC
The branch main has been updated by bcr:

URL: https://cgit.FreeBSD.org/doc/commit/?id=71f8b70ba8bc089344114edf4600ec551bc9b27f

commit 71f8b70ba8bc089344114edf4600ec551bc9b27f
Author:     Benedict Reuschling <bcr@FreeBSD.org>
AuthorDate: 2021-11-14 14:00:01 +0000
Commit:     Benedict Reuschling <bcr@FreeBSD.org>
CommitDate: 2021-11-14 14:00:01 +0000

    Convert ZFS chapter to active voice and remove weasel/unnecessary words
    
    I used [1] to find the passive voice sentences and weasel words.
    However, reviewers pointed out that some sentences were better off in
    the passive voice and that some of these weasel words were useful for
    understanding the text. In those instances, I kept the original
    sentences as they were.
    
    The review became quite long in the process, but I think that the text
    has improved a lot and that the concepts are better explained now.
    
    Thanks to all the reviewers for their perseverance and good suggestions.
    
    Reviewed by:            debdrup, ceri, ygy, pauamma_gundo.com
    Differential Revision:  https://reviews.freebsd.org/D31707
    
    [1] https://github.com/btford/write-good
---
 .../content/en/books/handbook/zfs/_index.adoc      | 1099 ++++++++++----------
 1 file changed, 539 insertions(+), 560 deletions(-)

diff --git a/documentation/content/en/books/handbook/zfs/_index.adoc b/documentation/content/en/books/handbook/zfs/_index.adoc
index 650d70dbdc..fd3323a4c2 100644
--- a/documentation/content/en/books/handbook/zfs/_index.adoc
+++ b/documentation/content/en/books/handbook/zfs/_index.adoc
@@ -3,7 +3,7 @@ title: Chapter 20. The Z File System (ZFS)
 part: Part III. System Administration
 prev: books/handbook/geom
 next: books/handbook/filesystems
-description: The Z File System, or ZFS, is an advanced file system designed to overcome many of the major problems found in previous designs
+description: ZFS is an advanced file system designed to solve major problems found in previous storage subsystem software
 tags: ["ZFS", "filesystem", "administration", "zpool", "features", "terminology", "RAID-Z"]
 ---
 
@@ -15,68 +15,65 @@ tags: ["ZFS", "filesystem", "administration", "zpool", "features", "terminology"
 :icons: font
 :sectnums:
 :sectnumlevels: 6
-:sectnumoffset: 20
-:partnums:
 :source-highlighter: rouge
 :experimental:
-:images-path: books/handbook/zfs/
+:skip-front-matter:
+:xrefstyle: basic
+:relfileprefix: ../
+:outfilesuffix:
+:sectnumoffset: 20
 
-ifdef::env-beastie[]
-ifdef::backend-html5[]
-:imagesdir: ../../../../images/{images-path}
+ifeval::["{backend}" == "html5"]
+:imagesdir: ../../../../images/books/handbook/zfs/
 endif::[]
-ifndef::book[]
-include::shared/authors.adoc[]
-include::shared/mirrors.adoc[]
-include::shared/releases.adoc[]
-include::shared/attributes/attributes-{{% lang %}}.adoc[]
-include::shared/{{% lang %}}/teams.adoc[]
-include::shared/{{% lang %}}/mailing-lists.adoc[]
-include::shared/{{% lang %}}/urls.adoc[]
-toc::[]
-endif::[]
-ifdef::backend-pdf,backend-epub3[]
-include::../../../../../shared/asciidoctor.adoc[]
+
+ifeval::["{backend}" == "pdf"]
+:imagesdir: ../../../../static/images/books/handbook/zfs/
 endif::[]
+
+ifeval::["{backend}" == "epub3"]
+:imagesdir: ../../../../static/images/books/handbook/zfs/
 endif::[]
 
-ifndef::env-beastie[]
+include::shared/authors.adoc[]
+include::shared/releases.adoc[]
+include::shared/en/mailing-lists.adoc[]
+include::shared/en/teams.adoc[]
+include::shared/en/urls.adoc[]
+
 toc::[]
-include::../../../../../shared/asciidoctor.adoc[]
-endif::[]
 
-The _Z File System_, or ZFS, is an advanced file system designed to overcome many of the major problems found in previous designs.
+ZFS is an advanced file system designed to solve major problems found in previous storage subsystem software.
 
 Originally developed at Sun(TM), ongoing open source ZFS development has moved to the http://open-zfs.org[OpenZFS Project].
 
 ZFS has three major design goals:
 
-* Data integrity: All data includes a <<zfs-term-checksum,checksum>> of the data. When data is written, the checksum is calculated and written along with it. When that data is later read back, the checksum is calculated again. If the checksums do not match, a data error has been detected. ZFS will attempt to automatically correct errors when data redundancy is available.
-* Pooled storage: physical storage devices are added to a pool, and storage space is allocated from that shared pool. Space is available to all file systems, and can be increased by adding new storage devices to the pool.
-* Performance: multiple caching mechanisms provide increased performance. <<zfs-term-arc,ARC>> is an advanced memory-based read cache. A second level of disk-based read cache can be added with <<zfs-term-l2arc,L2ARC>>, and disk-based synchronous write cache is available with <<zfs-term-zil,ZIL>>.
+* Data integrity: All data includes a <<zfs-term-checksum,checksum>> of the data. ZFS calculates checksums and writes them along with the data. When reading that data later, ZFS recalculates the checksums. If the checksums do not match, meaning one or more data errors were detected, ZFS will attempt to correct the errors automatically when ditto, mirror, or parity blocks are available.
+* Pooled storage: adding physical storage devices to a pool, and allocating storage space from that shared pool. Space is available to all file systems and volumes, and increases by adding new storage devices to the pool.
+* Performance: caching mechanisms provide increased performance. <<zfs-term-arc,ARC>> is an advanced memory-based read cache. ZFS provides a second level disk-based read cache with <<zfs-term-l2arc,L2ARC>>, and a disk-based synchronous write cache named <<zfs-term-zil,ZIL>>.
 
-A complete list of features and terminology is shown in <<zfs-term>>.
+A complete list of features and terminology is in <<zfs-term>>.
 
 [[zfs-differences]]
 == What Makes ZFS Different
 
-ZFS is significantly different from any previous file system because it is more than just a file system.
+More than a file system, ZFS is fundamentally different from traditional file systems.
 Combining the traditionally separate roles of volume manager and file system provides ZFS with unique advantages.
 The file system is now aware of the underlying structure of the disks.
-Traditional file systems could only be created on a single disk at a time.
-If there were two disks then two separate file systems would have to be created.
-In a traditional hardware RAID configuration, this problem was avoided by presenting the operating system with a single logical disk made up of the space provided by a number of physical disks, on top of which the operating system placed a file system.
-Even in the case of software RAID solutions like those provided by GEOM, the UFS file system living on top of the RAID transform believed that it was dealing with a single device.
-ZFS's combination of the volume manager and the file system solves this and allows the creation of many file systems all sharing a pool of available storage.
-One of the biggest advantages to ZFS's awareness of the physical layout of the disks is that existing file systems can be grown automatically when additional disks are added to the pool.
-This new space is then made available to all of the file systems.
-ZFS also has a number of different properties that can be applied to each file system,
-giving many advantages to creating a number of different file systems and datasets rather than a single monolithic file system.
+Traditional file systems could exist on a single disk at a time.
+If there were two disks, creating two separate file systems was necessary.
+A traditional hardware RAID configuration avoided this problem by presenting the operating system with a single logical disk made up of the space provided by physical disks, on top of which the operating system placed a file system.
+Even with software RAID solutions like those provided by GEOM, the UFS file system living on top of the RAID believes it's dealing with a single device.
+ZFS' combination of the volume manager and the file system solves this and allows the creation of file systems that all share a pool of available storage.
+One big advantage of ZFS' awareness of the physical disk layout is that existing file systems grow automatically when adding extra disks to the pool.
+This new space then becomes available to the file systems.
+ZFS can also apply different properties to each file system. This makes it useful to create separate file systems and datasets instead of a single monolithic file system.
 
 [[zfs-quickstart]]
 == Quick Start Guide
 
-There is a startup mechanism that allows FreeBSD to mount ZFS pools during system initialization.
+FreeBSD can mount ZFS pools and datasets during system initialization.
 To enable it, add this line to [.filename]#/etc/rc.conf#:
 
 [.programlisting]
@@ -116,9 +113,8 @@ devfs               1       1        0   100%    /dev
 example      17547136       0 17547136     0%    /example
 ....
 
-This output shows that the `example` pool has been created and mounted.
-It is now accessible as a file system.
-Files can be created on it and users can browse it:
+This output shows the creation and mounting of the `example` pool, and that it is now accessible as a file system.
+Create files for users to browse:
 
 [source,shell]
 ....
@@ -132,7 +128,7 @@ drwxr-xr-x  21 root  wheel  512 Aug 29 23:12 ..
 -rw-r--r--   1 root  wheel    0 Aug 29 23:15 testfile
 ....
 
-However, this pool is not taking advantage of any ZFS features.
+This pool is not using any advanced ZFS features and properties yet.
 To create a dataset on this pool with compression enabled:
 
 [source,shell]
@@ -144,7 +140,7 @@ To create a dataset on this pool with compression enabled:
 The `example/compressed` dataset is now a ZFS compressed file system.
 Try copying some large files to [.filename]#/example/compressed#.
 
-Compression can be disabled with:
+Disable compression with:
 
 [source,shell]
 ....
@@ -178,7 +174,7 @@ example             17547008       0 17547008     0%    /example
 example/compressed  17547008       0 17547008     0%    /example/compressed
 ....
 
-The pool and file system may also be observed by viewing the output from `mount`:
+Running `mount` shows the pool and file systems:
 
 [source,shell]
 ....
@@ -190,10 +186,10 @@ example on /example (zfs, local)
 example/compressed on /example/compressed (zfs, local)
 ....
 
-After creation, ZFS datasets can be used like any file systems.
-However, many other features are available which can be set on a per-dataset basis.
-In the example below, a new file system called `data` is created.
-Important files will be stored here, so it is configured to keep two copies of each data block:
+Use ZFS datasets like any file system after creation.
+Set other available features on a per-dataset basis when needed.
+The example below creates a new file system called `data`.
+It assumes the file system contains important files and configures it to store two copies of each data block.
 
 [source,shell]
 ....
@@ -201,7 +197,7 @@ Important files will be stored here, so it is configured to keep two copies of e
 # zfs set copies=2 example/data
 ....
 
-It is now possible to see the data and space utilization by issuing `df`:
+Use `df` to see the data and space usage: 
 
 [source,shell]
 ....
@@ -215,11 +211,11 @@ example/compressed  17547008       0 17547008     0%    /example/compressed
 example/data        17547008       0 17547008     0%    /example/data
 ....
 
-Notice that each file system on the pool has the same amount of available space.
-This is the reason for using `df` in these examples, to show that the file systems use only the amount of space they need and all draw from the same pool.
-ZFS eliminates concepts such as volumes and partitions, and allows multiple file systems to occupy the same pool.
+Notice that all file systems in the pool have the same available space.
+Using `df` in these examples shows that the file systems use only the space they need and all draw from the same pool.
+ZFS gets rid of concepts such as volumes and partitions, and allows several file systems to share the same pool.
 
-To destroy the file systems and then destroy the pool as it is no longer needed:
+To destroy the file systems and then the pool that is no longer needed:
 
 [source,shell]
 ....
@@ -231,7 +227,8 @@ To destroy the file systems and then destroy the pool as it is no longer needed:
 [[zfs-quickstart-raid-z]]
 === RAID-Z
 
-Disks fail. One method of avoiding data loss from disk failure is to implement RAID.
+Disks fail.
+One way to avoid data loss from disk failure is to use RAID.
 ZFS supports this feature in its pool design.
 RAID-Z pools require three or more disks but provide more usable space than mirrored pools.
 
@@ -246,7 +243,7 @@ This example creates a RAID-Z pool, specifying the disks to add to the pool:
 ====
 Sun(TM) recommends that the number of devices used in a RAID-Z configuration be between three and nine.
 For environments requiring a single pool consisting of 10 disks or more, consider breaking it up into smaller RAID-Z groups.
-If only two disks are available and redundancy is a requirement, consider using a ZFS mirror.
+If only two disks are available and redundancy is a requirement, use a ZFS mirror.
 Refer to man:zpool[8] for more details.
 ====
 
@@ -258,7 +255,7 @@ This example makes a new file system called `home` in that pool:
 # zfs create storage/home
 ....
 
-Compression and keeping extra copies of directories and files can be enabled:
+Enable compression and store an extra copy of directories and files:
 
 [source,shell]
 ....
@@ -279,18 +276,17 @@ To make this the new home directory for users, copy the user data to this direct
 Users data is now stored on the freshly-created [.filename]#/storage/home#.
 Test by adding a new user and logging in as that user.
 
-Try creating a file system snapshot which can be rolled back later:
+Create a file system snapshot to roll back to later:
 
 [source,shell]
 ....
 # zfs snapshot storage/home@08-30-08
 ....
 
-Snapshots can only be made of a full file system, not a single directory or file.
+ZFS creates snapshots of a dataset, not a single directory or file.
 
 The `@` character is a delimiter between the file system name or the volume name.
-If an important directory has been accidentally deleted, the file system can be backed up,
-then rolled back to an earlier snapshot when the directory still existed:
+If an important directory was deleted by accident, back up the file system, then roll back to an earlier snapshot in which the directory still exists:
 
 [source,shell]
 ....
@@ -298,23 +294,24 @@ then rolled back to an earlier snapshot when the directory still existed:
 ....
 
 To list all available snapshots, run `ls` in the file system's [.filename]#.zfs/snapshot# directory.
-For example, to see the previously taken snapshot:
+For example, to see the snapshot taken:
 
 [source,shell]
 ....
 # ls /storage/home/.zfs/snapshot
 ....
 
-It is possible to write a script to perform regular snapshots on user data.
-However, over time, snapshots can consume a great deal of disk space.
-The previous snapshot can be removed using the command:
+Write a script to take regular snapshots of user data.
+Over time, snapshots can use up a lot of disk space.
+Remove the previous snapshot using the command:
 
 [source,shell]
 ....
 # zfs destroy storage/home@08-30-08
 ....
 
-After testing, [.filename]#/storage/home# can be made the real [.filename]#/home# using this command:
+After testing, make [.filename]#/storage/home# the real
+[.filename]#/home# with this command:
 
 [source,shell]
 ....
@@ -341,8 +338,7 @@ storage/home  26320512       0 26320512     0%    /home
 ....
 
 This completes the RAID-Z configuration.
-Daily status updates about the file systems created can be generated as part of the nightly man:periodic[8] runs.
-Add this line to [.filename]#/etc/periodic.conf#:
+Generate daily status updates about the created file systems as part of the nightly man:periodic[8] runs by adding this line to [.filename]#/etc/periodic.conf#:
 
 [.programlisting]
 ....
@@ -353,7 +349,7 @@ daily_status_zfs_enable="YES"
 === Recovering RAID-Z
 
 Every software RAID has a method of monitoring its `state`.
-The status of RAID-Z devices may be viewed with this command:
+View the status of RAID-Z devices using:
 
 [source,shell]
 ....
@@ -367,7 +363,7 @@ If all pools are <<zfs-term-online,Online>> and everything is normal, the messag
 all pools are healthy
 ....
 
-If there is an issue, perhaps a disk is in the <<zfs-term-offline,Offline>> state, the pool state will look similar to:
+If there is a problem, such as a disk in the <<zfs-term-offline,Offline>> state, the pool state will look like this:
 
 [source,shell]
 ....
@@ -391,22 +387,22 @@ config:
 errors: No known data errors
 ....
 
-This indicates that the device was previously taken offline by the administrator with this command:
+"OFFLINE" shows the administrator took [.filename]#da1# offline using:
 
 [source,shell]
 ....
 # zpool offline storage da1
 ....
 
-Now the system can be powered down to replace [.filename]#da1#.
-When the system is back online, the failed disk can replaced in the pool:
+Power down the computer now and replace [.filename]#da1#.
+Power up the computer and return [.filename]#da1# to the pool:
 
 [source,shell]
 ....
 # zpool replace storage da1
 ....
 
-From here, the status may be checked again, this time without `-x` so that all pools are shown:
+Next, check the status again, this time without `-x` to display all pools:
 
 [source,shell]
 ....
@@ -432,17 +428,17 @@ In this example, everything is normal.
 === Data Verification
 
 ZFS uses checksums to verify the integrity of stored data.
-These are enabled automatically upon creation of file systems.
+Creating file systems automatically enables them.
 
 [WARNING]
 ====
-Checksums can be disabled, but it is _not_ recommended! Checksums take very little storage space and provide data integrity.
-Many ZFS features will not work properly with checksums disabled.
-There is no noticeable performance gain from disabling these checksums.
+Disabling checksums is possible but _not_ recommended!
+Checksums take little storage space and provide data integrity.
+Most ZFS features will not work properly with checksums disabled.
+Disabling these checksums will not increase performance noticeably.
 ====
 
-Checksum verification is known as _scrubbing_.
-Verify the data integrity of the `storage` pool with this command:
+Verify the data checksums (an operation called _scrubbing_) on the `storage` pool with:
 
 [source,shell]
 ....
@@ -451,8 +447,8 @@ Verify the data integrity of the `storage` pool with this command:
 
 The duration of a scrub depends on the amount of data stored.
 Larger amounts of data will take proportionally longer to verify.
-Scrubs are very I/O intensive, and only one scrub is allowed to run at a time.
-After the scrub completes, the status can be viewed with `status`:
+Since scrubbing is I/O intensive, ZFS allows a single scrub to run at a time.
+After scrubbing completes, view the status with `zpool status`:
 
 [source,shell]
 ....
@@ -472,7 +468,7 @@ config:
 errors: No known data errors
 ....
 
-The completion date of the last scrub operation is displayed to help track when another scrub is required.
+Displaying the completion date of the last scrub operation helps decide when to start another.
 Routine scrubs help protect data from silent corruption and ensure the integrity of the pool.
 
 Refer to man:zfs[8] and man:zpool[8] for other ZFS options.
@@ -480,20 +476,20 @@ Refer to man:zfs[8] and man:zpool[8] for other ZFS options.
 [[zfs-zpool]]
 == `zpool` Administration
 
-ZFS administration is divided between two main utilities.
-The `zpool` utility controls the operation of the pool and deals with adding, removing, replacing, and managing disks.
-The <<zfs-zfs,`zfs`>> utility deals with creating, destroying, and managing datasets, both <<zfs-term-filesystem,file systems>> and <<zfs-term-volume,volumes>>.
+ZFS administration uses two main utilities.
+The `zpool` utility controls the operation of the pool and allows adding, removing, replacing, and managing disks.
+The <<zfs-zfs,`zfs`>> utility allows creating, destroying, and managing datasets, both <<zfs-term-filesystem,file systems>> and <<zfs-term-volume,volumes>>.
 
 [[zfs-zpool-create]]
 === Creating and Destroying Storage Pools
 
-Creating a ZFS storage pool (_zpool_) involves making a number of decisions that are relatively permanent because the structure of the pool cannot be changed after the pool has been created.
-The most important decision is what types of vdevs into which to group the physical disks.
+Creating a ZFS storage pool (_zpool_) requires permanent decisions, as the pool structure cannot change after creation.
+The most important decision is which types of vdevs to group the physical disks into. 
 See the list of <<zfs-term-vdev,vdev types>> for details about the possible options.
-After the pool has been created, most vdev types do not allow additional disks to be added to the vdev.
-The exceptions are mirrors, which allow additional disks to be added to the vdev, and stripes, which can be upgraded to mirrors by attaching an additional disk to the vdev.
-Although additional vdevs can be added to expand a pool, the layout of the pool cannot be changed after pool creation.
-Instead, the data must be backed up and the pool destroyed and recreated.
+After creating the pool, most vdev types do not allow adding disks to the vdev.
+The exceptions are mirrors, which allow adding new disks to the vdev, and stripes, which upgrade to mirrors by attaching a new disk to the vdev.
+Although adding new vdevs expands a pool, the pool layout cannot change after pool creation.
+Instead, back up the data, destroy the pool, and recreate it.
 
 Create a simple mirror pool:
 
@@ -515,8 +511,7 @@ config:
 errors: No known data errors
 ....
 
-Multiple vdevs can be created at once.
-Specify multiple groups of disks separated by the vdev type keyword, `mirror` in this example:
+To create more than one vdev with a single command, specify groups of disks separated by the vdev type keyword, `mirror` in this example:
 
 [source,shell]
 ....
@@ -539,13 +534,13 @@ config:
 errors: No known data errors
 ....
 
-Pools can also be constructed using partitions rather than whole disks.
+Pools can also use partitions rather than whole disks.
 Putting ZFS in a separate partition allows the same disk to have other partitions for other purposes.
-In particular, partitions with bootcode and file systems needed for booting can be added.
+In particular, it allows adding partitions with bootcode and file systems needed for booting.
 This allows booting from disks that are also members of a pool.
-There is no performance penalty on FreeBSD when using a partition rather than a whole disk.
+ZFS adds no performance penalty on FreeBSD when using a partition rather than a whole disk.
 Using partitions also allows the administrator to _under-provision_ the disks, using less than the full capacity.
-If a future replacement disk of the same nominal size as the original actually has a slightly smaller capacity, the smaller partition will still fit, and the replacement disk can still be used.
+If a future replacement disk of the same nominal size as the original actually has a slightly smaller capacity, the smaller partition will still fit, allowing use of the replacement disk.
 
 Create a <<zfs-term-vdev-raidz,RAID-Z2>> pool using partitions:
 
@@ -571,27 +566,28 @@ config:
 errors: No known data errors
 ....
 
-A pool that is no longer needed can be destroyed so that the disks can be reused.
-Destroying a pool involves first unmounting all of the datasets in that pool.
-If the datasets are in use, the unmount operation will fail and the pool will not be destroyed.
-The destruction of the pool can be forced with `-f`, but this can cause undefined behavior in applications which had open files on those datasets.
+Destroy a pool that is no longer needed to reuse the disks.
+Destroying a pool requires unmounting the file systems in that pool first.
+If any dataset is in use, the unmount operation fails without destroying the pool.
+Force the pool destruction with `-f`.
+This can cause undefined behavior in applications which had open files on those datasets.
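+
+For example, a minimal sketch assuming a scratch pool named mypool that is no longer needed:
+
+[source,shell]
+....
+# zpool destroy mypool
+....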
 
 [[zfs-zpool-attach]]
 === Adding and Removing Devices
 
-There are two cases for adding disks to a zpool: attaching a disk to an existing vdev with `zpool attach`, or adding vdevs to the pool with `zpool add`.
-Only some <<zfs-term-vdev,vdev types>> allow disks to be added to the vdev after creation.
+Two ways exist for adding disks to a zpool: attaching a disk to an existing vdev with `zpool attach`, or adding vdevs to the pool with `zpool add`.
+Some <<zfs-term-vdev,vdev types>> allow adding disks to the vdev after creation.
 
 A pool created with a single disk lacks redundancy.
-Corruption can be detected but not repaired, because there is no other copy of the data.
+It can detect corruption but cannot repair it, because there is no other copy of the data.
 The <<zfs-term-copies,copies>> property may be able to recover from a small failure such as a bad sector,
 but does not provide the same level of protection as mirroring or RAID-Z.
-Starting with a pool consisting of a single disk vdev, `zpool attach` can be used to add an additional disk to the vdev, creating a mirror.
-`zpool attach` can also be used to add additional disks to a mirror group, increasing redundancy and read performance.
-If the disks being used for the pool are partitioned, replicate the layout of the first disk on to the second.
-`gpart backup` and `gpart restore` can be used to make this process easier.
+Starting with a pool consisting of a single disk vdev, use `zpool attach` to add a new disk to the vdev, creating a mirror.
+Also use `zpool attach` to add new disks to a mirror group, increasing redundancy and read performance.
+When partitioning the disks used for the pool, replicate the layout of the first disk on to the second.
+Use `gpart backup` and `gpart restore` to make this process easier.
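+
+A minimal sketch of that process, assuming [.filename]#ada0# is the existing partitioned disk and [.filename]#ada1# is the new disk to receive the same layout:
+
+[source,shell]
+....
+# gpart backup ada0 | gpart restore -F ada1
+....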
 
-Upgrade the single disk (stripe) vdev _ada0p3_ to a mirror by attaching _ada1p3_:
+Upgrade the single disk (stripe) vdev [.filename]#ada0p3# to a mirror by attaching [.filename]#ada1p3#:
 
 [source,shell]
 ....
@@ -607,13 +603,11 @@ config:
 
 errors: No known data errors
 # zpool attach mypool ada0p3 ada1p3
-Make sure to wait until resilver is done before rebooting.
+Make sure to wait until resilvering finishes before rebooting.
 
-If you boot from pool 'mypool', you may need to update
-boot code on newly attached disk 'ada1p3'.
+If you boot from pool 'mypool', you may need to update boot code on newly attached disk 'ada1p3'.
 
-Assuming you use GPT partitioning and 'da0' is your new boot disk
-you may use the following command:
+Assuming you use GPT partitioning and 'da0' is your new boot disk, you may use the following command:
 
         gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
 # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
@@ -652,17 +646,18 @@ errors: No known data errors
 ....
 
 When adding disks to the existing vdev is not an option, as for RAID-Z, an alternative method is to add another vdev to the pool.
-Additional vdevs provide higher performance, distributing writes across the vdevs. Each vdev is responsible for providing its own redundancy.
-It is possible, but discouraged, to mix vdev types, like `mirror` and `RAID-Z`.
+Adding vdevs provides higher performance by distributing writes across the vdevs.
+Each vdev provides its own redundancy.
+Mixing vdev types like `mirror` and `RAID-Z` is possible but discouraged.
 Adding a non-redundant vdev to a pool containing mirror or RAID-Z vdevs risks the data on the entire pool.
-Writes are distributed, so the failure of the non-redundant disk will result in the loss of a fraction of every block that has been written to the pool.
+Distributing writes means a failure of the non-redundant disk will result in the loss of a fraction of every block written to the pool.
 
-Data is striped across each of the vdevs.
+ZFS stripes data across each of the vdevs.
 For example, with two mirror vdevs, this is effectively a RAID 10 that stripes writes across two sets of mirrors.
-Space is allocated so that each vdev reaches 100% full at the same time.
-There is a performance penalty if the vdevs have different amounts of free space, as a disproportionate amount of the data is written to the less full vdev.
+ZFS allocates space so that each vdev reaches 100% full at the same time.
+Having vdevs with different amounts of free space will lower performance, as more data writes go to the less full vdev.
 
-When attaching additional devices to a boot pool, remember to update the bootcode.
+When attaching new devices to a boot pool, remember to update the bootcode.
 
 Attach a second mirror group ([.filename]#ada2p3# and [.filename]#ada3p3#) to the existing mirror:
 
@@ -704,8 +699,8 @@ config:
 errors: No known data errors
 ....
 
-Currently, vdevs cannot be removed from a pool, and disks can only be removed from a mirror if there is enough remaining redundancy.
-If only one disk in a mirror group remains, it ceases to be a mirror and reverts to being a stripe, risking the entire pool if that remaining disk fails.
+Removing vdevs from a pool is impossible, and removing disks from a mirror is possible only if enough redundancy remains.
+If a single disk remains in a mirror group, that group ceases to be a mirror and becomes a stripe, risking the entire pool if that remaining disk fails.
 
 Remove a disk from a three-way mirror group:
 
@@ -745,9 +740,9 @@ errors: No known data errors
 === Checking the Status of a Pool
 
 Pool status is important.
-If a drive goes offline or a read, write, or checksum error is detected, the corresponding error count increases.
+If a drive goes offline or ZFS detects a read, write, or checksum error, the corresponding error count increases.
 The `status` output shows the configuration and status of each device in the pool and the status of the entire pool.
-Actions that need to be taken and details about the last <<zfs-zpool-scrub,`scrub`>> are also shown.
+Actions to take and details about the last <<zfs-zpool-scrub,`scrub`>> are also shown.
 
 [source,shell]
 ....
@@ -773,19 +768,19 @@ errors: No known data errors
 [[zfs-zpool-clear]]
 === Clearing Errors
 
-When an error is detected, the read, write, or checksum counts are incremented.
-The error message can be cleared and the counts reset with `zpool clear _mypool_`.
+When detecting an error, ZFS increases the read, write, or checksum error counts.
+Clear the error message and reset the counts with `zpool clear _mypool_`.
 Clearing the error state can be important for automated scripts that alert the administrator when the pool encounters an error.
-Further errors may not be reported if the old errors are not cleared.
+Without clearing old errors, the scripts may fail to report further errors.
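+
+For example, a sketch assuming the mypool pool used in this chapter:
+
+[source,shell]
+....
+# zpool clear mypool
+....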
 
 [[zfs-zpool-replace]]
 === Replacing a Functioning Device
 
-There are a number of situations where it may be desirable to replace one disk with a different disk.
+It may be desirable to replace one disk with a different disk.
 When replacing a working disk, the process keeps the old disk online during the replacement.
 The pool never enters a <<zfs-term-degraded,degraded>> state, reducing the risk of data loss.
-`zpool replace` copies all of the data from the old disk to the new one.
-After the operation completes, the old disk is disconnected from the vdev.
+Running `zpool replace` copies the data from the old disk to the new one.
+After the operation completes, ZFS disconnects the old disk from the vdev.
 If the new disk is larger than the old disk, it may be possible to grow the zpool, using the new space.
 See <<zfs-zpool-online,Growing a Pool>>.
 
@@ -807,13 +802,11 @@ config:
 
 errors: No known data errors
 # zpool replace mypool ada1p3 ada2p3
-Make sure to wait until resilver is done before rebooting.
+Make sure to wait until resilvering finishes before rebooting.
 
-If you boot from pool 'zroot', you may need to update
-boot code on newly attached disk 'ada2p3'.
+When booting from the pool 'zroot', update the boot code on the newly attached disk 'ada2p3'.
 
-Assuming you use GPT partitioning and 'da0' is your new boot disk
-you may use the following command:
+Assuming GPT partitioning is used and 'da0' is the new boot disk, use the following command:
 
         gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
 # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
@@ -856,16 +849,16 @@ errors: No known data errors
 === Dealing with Failed Devices
 
 When a disk in a pool fails, the vdev to which the disk belongs enters the <<zfs-term-degraded,degraded>> state.
-All of the data is still available, but performance may be reduced because missing data must be calculated from the available redundancy.
-To restore the vdev to a fully functional state, the failed physical device must be replaced.
+The data is still available, but with reduced performance because ZFS computes missing data from the available redundancy.
+To restore the vdev to a fully functional state, replace the failed physical device.
 ZFS is then instructed to begin the <<zfs-term-resilver,resilver>> operation.
-Data that was on the failed device is recalculated from available redundancy and written to the replacement device.
+ZFS recomputes data on the failed device from available redundancy and writes it to the replacement device.
 After completion, the vdev returns to <<zfs-term-online,online>> status.
 
-If the vdev does not have any redundancy, or if multiple devices have failed and there is not enough redundancy to compensate, the pool enters the <<zfs-term-faulted,faulted>> state.
-If a sufficient number of devices cannot be reconnected to the pool, the pool becomes inoperative and data must be restored from backups.
+If the vdev does not have any redundancy, or if devices have failed and there is not enough redundancy to compensate, the pool enters the <<zfs-term-faulted,faulted>> state.
+Unless enough devices reconnect, the pool becomes inoperative, requiring a data restore from backups.
 
-When replacing a failed disk, the name of the failed disk is replaced with the GUID of the device.
+When replacing a failed disk, the name of the failed disk changes to the GUID of the failed device.
 A new device name parameter for `zpool replace` is not required if the replacement device has the same device name.
 
 Replace a failed disk using `zpool replace`:
@@ -928,9 +921,9 @@ errors: No known data errors
 [[zfs-zpool-scrub]]
 === Scrubbing a Pool
 
-It is recommended that pools be <<zfs-term-scrub,scrubbed>> regularly, ideally at least once every month.
-The `scrub` operation is very disk-intensive and will reduce performance while running.
-Avoid high-demand periods when scheduling `scrub` or use <<zfs-advanced-tuning-scrub_delay,`vfs.zfs.scrub_delay`>> to adjust the relative priority of the `scrub` to prevent it interfering with other workloads.
+Routinely <<zfs-term-scrub,scrub>> pools, ideally at least once every month.
+The `scrub` operation is disk-intensive and will reduce performance while running.
+Avoid high-demand periods when scheduling `scrub` or use <<zfs-advanced-tuning-scrub_delay,`vfs.zfs.scrub_delay`>> to adjust the relative priority of the `scrub` to keep it from slowing down other workloads.
 
 [source,shell]
 ....
@@ -956,24 +949,22 @@ config:
 errors: No known data errors
 ....
 
-In the event that a scrub operation needs to be cancelled, issue `zpool scrub -s _mypool_`.
+To cancel a scrub operation if needed, run `zpool scrub -s _mypool_`.
 
 [[zfs-zpool-selfheal]]
 === Self-Healing
 
 The checksums stored with data blocks enable the file system to _self-heal_.
 This feature will automatically repair data whose checksum does not match the one recorded on another device that is part of the storage pool.
-For example, a mirror with two disks where one drive is starting to malfunction and cannot properly store the data any more.
-This is even worse when the data has not been accessed for a long time, as with long term archive storage.
-Traditional file systems need to run algorithms that check and repair the data like man:fsck[8].
-These commands take time, and in severe cases, an administrator has to manually decide which repair operation must be performed.
-When ZFS detects a data block with a checksum that does not match, it tries to read the data from the mirror disk.
-If that disk can provide the correct data, it will not only give that data to the application requesting it,
-but also correct the wrong data on the disk that had the bad checksum.
+For example, consider a mirror configuration with two disks where one drive is starting to malfunction and cannot properly store the data any more.
+This is worse when the data has not been accessed for a long time, as with long-term archive storage.
+Traditional file systems need to run commands that check and repair the data like man:fsck[8].
+These commands take time, and in severe cases, an administrator has to decide which repair operation to perform.
+When ZFS detects a data block with a mismatched checksum, it tries to read the data from the mirror disk.
+If that disk can provide the correct data, ZFS will give that to the application and correct the data on the disk with the wrong checksum.
 This happens without any interaction from a system administrator during normal pool operation.
 
-The next example demonstrates this self-healing behavior.
-A mirrored pool of disks [.filename]#/dev/ada0# and [.filename]#/dev/ada1# is created.
+The next example shows this self-healing behavior by creating a mirrored pool of disks [.filename]#/dev/ada0# and [.filename]#/dev/ada1#.
 
 [source,shell]
 ....
@@ -996,8 +987,7 @@ NAME     SIZE  ALLOC   FREE   CKPOINT  EXPANDSZ   FRAG   CAP  DEDUP  HEALTH  ALT
 healer   960M  92.5K   960M         -         -     0%    0%  1.00x  ONLINE  -
 ....
 
-Some important data that have to be protected from data errors using the self-healing feature are copied to the pool.
-A checksum of the pool is created for later comparison.
+Copy some important data to the pool to protect it from data errors using the self-healing feature, and create a checksum of the pool for later comparison.
 
 [source,shell]
 ....
@@ -1010,16 +1000,16 @@ healer   960M  67.7M   892M     7%  1.00x  ONLINE  -
 SHA1 (/healer) = 2753eff56d77d9a536ece6694bf0a82740344d1f
 ....
 
-Data corruption is simulated by writing random data to the beginning of one of the disks in the mirror.
-To prevent ZFS from healing the data as soon as it is detected, the pool is exported before the corruption and imported again afterwards.
+Simulate data corruption by writing random data to the beginning of one of the disks in the mirror.
+To keep ZFS from healing the data when detected, export the pool before the corruption and import it again afterwards.
 
 [WARNING]
 ====
-This is a dangerous operation that can destroy vital data.
-It is shown here for demonstrational purposes only and should not be attempted during normal operation of a storage pool.
-Nor should this intentional corruption example be run on any disk with a different file system on it.
+This is a dangerous operation that can destroy vital data, shown here for demonstration alone.
+*Do not try* it during normal operation of a storage pool.
+Nor should this intentional corruption example run on any disk with a different file system on it.
 Do not use any other disk device names other than the ones that are part of the pool.
-Make certain that proper backups of the pool are created before running the command!
+Ensure proper backups of the pool exist and test them before running the command!
 ====
 
 [source,shell]
@@ -1035,7 +1025,7 @@ Make certain that proper backups of the pool are created before running the comm
 The pool status shows that one device has experienced an error.
 Note that applications reading data from the pool did not receive any incorrect data.
 ZFS provided data from the [.filename]#ada0# device with the correct checksums.
-The device with the wrong checksum can be found easily as the `CKSUM` column contains a nonzero value.
+To find the device with the wrong checksum, look for one whose `CKSUM` column contains a nonzero value.
 
 [source,shell]
 ....
@@ -1059,7 +1049,7 @@ The device with the wrong checksum can be found easily as the `CKSUM` column con
 errors: No known data errors
 ....
 
-The error was detected and handled by using the redundancy present in the unaffected [.filename]#ada0# mirror disk.
+ZFS detected the error and handled it by using the redundancy present in the unaffected [.filename]#ada0# mirror disk.
 A checksum comparison with the original one will reveal whether the pool is consistent again.
 
 [source,shell]
@@ -1070,12 +1060,12 @@ SHA1 (/healer) = 2753eff56d77d9a536ece6694bf0a82740344d1f
 SHA1 (/healer) = 2753eff56d77d9a536ece6694bf0a82740344d1f
 ....
 
-The two checksums that were generated before and after the intentional tampering with the pool data still match.
+The checksums generated before and after the intentional tampering with the pool data still match.
 This shows how ZFS is capable of detecting and correcting any errors automatically when the checksums differ.
-Note that this is only possible when there is enough redundancy present in the pool.
+Note this is possible only with enough redundancy present in the pool.
 A pool consisting of a single device has no self-healing capabilities.
-That is also the reason why checksums are so important in ZFS and should not be disabled for any reason.
-No man:fsck[8] or similar file system consistency check program is required to detect and correct this and the pool was still available during the time there was a problem.
+That is also the reason why checksums are so important in ZFS; do not disable them for any reason.
+ZFS requires no man:fsck[8] or similar file system consistency check program to detect and correct this, and keeps the pool available while there is a problem.
 A scrub operation is now required to overwrite the corrupted data on [.filename]#ada1#.
 
 [source,shell]
@@ -1103,8 +1093,7 @@ config:
 errors: No known data errors
 ....
 
-The scrub operation reads data from [.filename]#ada0# and rewrites any data with an incorrect checksum on [.filename]#ada1#.
-This is indicated by the `(repairing)` output from `zpool status`.
+The scrub operation reads data from [.filename]#ada0# and rewrites any data with a wrong checksum on [.filename]#ada1#, shown by the `(repairing)` output from `zpool status`.
 After the operation is complete, the pool status changes to:
 
 [source,shell]
@@ -1129,8 +1118,7 @@ config:
 errors: No known data errors
 ....
 
-After the scrub operation completes and all the data has been synchronized from [.filename]#ada0# to [.filename]#ada1#,
-the error messages can be <<zfs-zpool-clear,cleared>> from the pool status by running `zpool clear`.
+After the scrubbing operation completes with all the data synchronized from [.filename]#ada0# to [.filename]#ada1#, <<zfs-zpool-clear,clear>> the error messages from the pool status by running `zpool clear`.
 
 [source,shell]
 ....
@@ -1150,30 +1138,29 @@ config:
 errors: No known data errors
 ....
 
-The pool is now back to a fully working state and all the errors have been cleared.
+The pool is now back to a fully working state, with all error counts at zero.
 
 [[zfs-zpool-online]]
 === Growing a Pool
 
-The usable size of a redundant pool is limited by the capacity of the smallest device in each vdev.
-The smallest device can be replaced with a larger device.
+The smallest device in each vdev limits the usable size of a redundant pool.
+Replace the smallest device with a larger device.
 After completing a <<zfs-zpool-replace,replace>> or <<zfs-term-resilver,resilver>> operation, the pool can grow to use the capacity of the new device. 
 For example, consider a mirror of a 1 TB drive and a 2 TB drive.
 The usable space is 1 TB.
-When the 1 TB drive is replaced with another 2 TB drive, the resilvering process copies the existing data onto the new drive.
-As both of the devices now have 2 TB capacity, the mirror's available space can be grown to 2 TB.
+When replacing the 1 TB drive with another 2 TB drive, the resilvering process copies the existing data onto the new drive.
+As both of the devices now have 2 TB capacity, the mirror's available space grows to 2 TB.
 
-Expansion is triggered by using `zpool online -e` on each device.
-After expansion of all devices, the additional space becomes available to the pool.
+Start expansion by using `zpool online -e` on each device.
+After expanding all devices, the extra space becomes available to the pool.
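+
+A minimal sketch, assuming a mirror pool named mypool whose members [.filename]#ada0p3# and [.filename]#ada1p3# were both replaced with larger disks:
+
+[source,shell]
+....
+# zpool online -e mypool ada0p3
+# zpool online -e mypool ada1p3
+....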
 
 [[zfs-zpool-import]]
 === Importing and Exporting Pools
 
-Pools are _exported_ before moving them to another system.
-All datasets are unmounted, and each device is marked as exported but still locked so it cannot be used by other disk subsystems.
-This allows pools to be _imported_ on other machines, other operating systems that support ZFS,
-and even different hardware architectures (with some caveats, see man:zpool[8]).
-When a dataset has open files, `zpool export -f` can be used to force the export of a pool.
+_Export_ pools before moving them to another system.
+ZFS unmounts all datasets, marking each device as exported but still locked to prevent use by other disk subsystems.
+This allows pools to be _imported_ on other machines, other operating systems that support ZFS, and even different hardware architectures (with some caveats, see man:zpool[8]).
+When a dataset has open files, use `zpool export -f` to force exporting the pool.
 Use this with caution.
 The datasets are forcibly unmounted, potentially resulting in unexpected behavior by the applications which had open files on those datasets.
 
@@ -1185,10 +1172,10 @@ Export a pool that is not in use:
 ....
 
 Importing a pool automatically mounts the datasets.
-This may not be the desired behavior, and can be prevented with `zpool import -N`.
-`zpool import -o` sets temporary properties for this import only.
+If this is undesired behavior, use `zpool import -N` to prevent it.
+`zpool import -o` sets temporary properties for this specific import.
 `zpool import altroot=` allows importing a pool with a base mount point instead of the root of the file system.
-If the pool was last used on a different system and was not properly exported, an import might have to be forced with `zpool import -f`.
+If the pool was last used on a different system and was not properly exported, force the import using `zpool import -f`.
 `zpool import -a` imports all pools that do not appear to be in use by another system.
 
 List all available pools for import:
@@ -1220,11 +1207,10 @@ mypool               110K  47.0G    31K  /mnt/mypool
 [[zfs-zpool-upgrade]]
 === Upgrading a Storage Pool
 
-After upgrading FreeBSD, or if a pool has been imported from a system using an older version of ZFS,
-the pool can be manually upgraded to the latest version of ZFS to support newer features.
-Consider whether the pool may ever need to be imported on an older system before upgrading.
+After upgrading FreeBSD, or if importing a pool from a system using an older version, manually upgrade the pool to the latest ZFS version to support newer features.
+Consider whether the pool may ever need importing on an older system before upgrading.
 Upgrading is a one-way process.
-Older pools can be upgraded, but pools with newer features cannot be downgraded.
+Upgrading older pools is possible, but downgrading pools with newer features is not.
 
 Upgrade a v28 pool to support `Feature Flags`:
 
@@ -1251,10 +1237,8 @@ errors: No known data errors
 # zpool upgrade
 This system supports ZFS pool feature flags.
 
-The following pools are formatted with legacy version numbers and can
-be upgraded to use feature flags.  After being upgraded, these pools
-will no longer be accessible by software that does not support feature
-flags.
+The following pools are formatted with legacy version numbers and can be upgraded to use feature flags.
+After being upgraded, these pools will no longer be accessible by software that does not support feature flags.
 
 VER  POOL
 ---  ------------
@@ -1274,10 +1258,9 @@ Enabled the following features on 'mypool':
 ....
 
 The newer features of ZFS will not be available until `zpool upgrade` has completed.
-`zpool upgrade -v` can be used to see what new features will be provided by upgrading,
-as well as which features are already supported.
+Use `zpool upgrade -v` to see what new features the upgrade provides, as well as which features are already supported.
 
-Upgrade a pool to support additional feature flags:
+Upgrade a pool to support new feature flags:
 
 [source,shell]
 ....
@@ -1332,9 +1315,9 @@ Enabled the following features on 'mypool':
 
 [WARNING]
 ====
-The boot code on systems that boot from a pool must be updated to support the new pool version.
+Update the boot code on systems that boot from a pool to support the new pool version.
 Use `gpart bootcode` on the partition that contains the boot code.
-There are two types of bootcode available, depending on way the system boots: GPT (the most common option) and EFI (for more modern systems).
+Two types of bootcode are available, depending on the way the system boots: GPT (the most common option) and EFI (for more modern systems).
 
 For legacy boot using GPT, use the following command:
 
@@ -1357,9 +1340,8 @@ See man:gpart[8] for more information.
 [[zfs-zpool-history]]
 === Displaying Recorded Pool History
 
-Commands that modify the pool are recorded.
-Recorded actions include the creation of datasets, changing properties, or replacement of a disk.
-This history is useful for reviewing how a pool was created and which user performed a specific action and when.
+ZFS records commands that change the pool, including creating datasets, changing properties, or replacing a disk.
+Reviewing history about a pool's creation is useful, as is checking which user performed a specific action and when.
 History is not kept in a log file, but is part of the pool itself.
 The command to review this history is aptly named `zpool history`:
 
@@ -1373,12 +1355,11 @@ History for 'tank':
 2013-02-27.18:51:18 zfs create tank/backup
 ....
 
-The output shows `zpool` and `zfs` commands that were executed on the pool along with a timestamp.
-Only commands that alter the pool in some way are recorded.
+The output shows `zpool` and `zfs` commands altering the pool in some way along with a timestamp.
 Commands like `zfs list` are not included.
-When no pool name is specified, the history of all pools is displayed.
+When specifying no pool name, ZFS displays the history of all pools.
 
-`zpool history` can show even more information when the options `-i` or `-l` are provided.
+`zpool history` can show even more information when providing the options `-i` or `-l`.
 `-i` displays user-initiated events as well as internally logged ZFS events.
 
 [source,shell]
@@ -1394,8 +1375,8 @@ History for 'tank':
 2013-02-27.18:51:18 zfs create tank/backup
 ....
 
-More details can be shown by adding `-l`.
-History records are shown in a long format, including information like the name of the user who issued the command and the hostname on which the change was made.
+Show more details by adding `-l`.
+This shows history records in a long format, including information like the name of the user who issued the command and the hostname on which the change happened.
 
 [source,shell]
 ....
@@ -1409,21 +1390,19 @@ History for 'tank':
 
 The output shows that the `root` user created the mirrored pool with disks [.filename]#/dev/ada0# and [.filename]#/dev/ada1#.
 The hostname `myzfsbox` is also shown in the commands after the pool's creation.
-The hostname display becomes important when the pool is exported from one system and imported on another.
-The commands that are issued on the other system can clearly be distinguished by the hostname that is recorded for each command.
+The hostname display becomes important when exporting the pool from one system and importing on another.
+It's possible to distinguish the commands issued on the other system by the hostname recorded for each command.
 
-Both options to `zpool history` can be combined to give the most detailed information possible for any given pool.
-Pool history provides valuable information when tracking down the actions that were performed or when more detailed output is needed for debugging.
+Combine both options to `zpool history` to give the most detailed information possible for any given pool.
+Pool history provides valuable information when tracking down the actions performed or when needing more detailed output for debugging.
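+
+For example, a sketch combining both options for the tank pool shown above:
+
+[source,shell]
+....
+# zpool history -il tank
+....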
 
 [[zfs-zpool-iostat]]
 === Performance Monitoring
 
 A built-in monitoring system can display pool I/O statistics in real time.
-It shows the amount of free and used space on the pool,
-how many read and write operations are being performed per second,
-and how much I/O bandwidth is currently being utilized.
-By default, all pools in the system are monitored and displayed.
-A pool name can be provided to limit monitoring to just that pool.
+It shows the amount of free and used space on the pool, read and write operations performed per second, and I/O bandwidth used.
+By default, ZFS monitors and displays all pools in the system.
+Provide a pool name to limit monitoring to that pool.
 A basic example:
 
 [source,shell]
@@ -1435,16 +1414,14 @@ pool        alloc   free   read  write   read  write
 data         288G  1.53T      2     11  11.3K  57.1K
 ....
 
-To continuously monitor I/O activity, a number can be specified as the last parameter,
-indicating a interval in seconds to wait between updates.
-The next statistic line is printed after each interval.
+To continuously see I/O activity, specify a number as the last parameter, indicating an interval in seconds to wait between updates.
+The next statistic line prints after each interval.
 Press kbd:[Ctrl+C] to stop this continuous monitoring.
-Alternatively, give a second number on the command line after the interval to specify the total number of statistics to display.
+Give a second number on the command line after the interval to specify the total number of statistics to display.
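+
+For example, a sketch using the data pool from the output above, printing five reports at ten second intervals:
+
+[source,shell]
+....
+# zpool iostat data 10 5
+....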
 
-Even more detailed I/O statistics can be displayed with `-v`.
-Each device in the pool is shown with a statistics line.
-This is useful in seeing how many read and write operations are being performed on each device,
-and can help determine if any individual device is slowing down the pool.
+Display even more detailed I/O statistics with `-v`.
+Each device in the pool appears with a statistics line.
+This is useful for seeing read and write operations performed on each device, and can help determine if any individual device is slowing down the pool.
 This example shows a mirrored pool with two devices:
 
 [source,shell]
@@ -1463,30 +1440,29 @@ data                      288G  1.53T      2     12  9.23K  61.5K
 [[zfs-zpool-split]]
 === Splitting a Storage Pool
 
-A pool consisting of one or more mirror vdevs can be split into two pools.
-Unless otherwise specified, the last member of each mirror is detached and used to create a new pool containing the same data.
-The operation should first be attempted with `-n`.
-The details of the proposed operation are displayed without it actually being performed.
+ZFS can split a pool consisting of one or more mirror vdevs into two pools.
+Unless otherwise specified, ZFS detaches the last member of each mirror and creates a new pool containing the same data.
+Be sure to make a dry run of the operation with `-n` first. 
+This displays the details of the requested operation without actually performing it.
 This helps confirm that the operation will do what the user intends.
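+
+A minimal sketch, assuming a mirror pool named mypool and newpool as the name for the pool split off from it, first as a dry run and then for real:
+
+[source,shell]
+....
+# zpool split -n mypool newpool
+# zpool split mypool newpool
+....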
 
 [[zfs-zfs]]
 == `zfs` Administration
 
-The `zfs` utility is responsible for creating, destroying, and managing all ZFS datasets that exist within a pool.
-The pool is managed using <<zfs-zpool,`zpool`>>.
*** 1279 LINES SKIPPED ***