Oracle Solaris 11.4 ZFS Device Removal Example
One of the new features in the recent Oracle Solaris 11.4 release (which really rocks) is ZFS Device Removal.
Below I am going to demonstrate one example of how you can use ZFS Device Removal.
The example shows how to migrate a pool from a raidz1 to a mirrored configuration.
First, let's create a test directory to hold the test files:

mkdir test && cd test
Next, let's create seven 175 MB test files to use as devices in this test:

for i in {1..7}; do mkfile 175m file$i; done
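As an aside, mkfile is Solaris-specific. If you want to replay this exercise on a Linux box (for example, against OpenZFS), a rough equivalent sketch using GNU coreutils' truncate to create sparse files of the same size might look like this:

```shell
# Portable stand-in for the Solaris mkfile loop above.
# truncate (GNU coreutils) creates sparse files; mkfile writes real blocks.
mkdir -p test
for i in 1 2 3 4 5 6 7; do
  truncate -s 175M "test/file$i"   # 175M = 175 * 1024 * 1024 bytes
done
ls -l test/file1 test/file7
```

Sparse files are fine for a throwaway test pool, since ZFS only ever writes the blocks it actually uses.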
Now, let's create a test pool with a raidz1 vdev built from three of the files:

zpool create testPool raidz1 /root/test/file1 /root/test/file2 /root/test/file3
Let's look at the newly created raidz1 pool:

zpool status testPool
  pool: testPool
 state: ONLINE
  scan: none requested
config:

        NAME                  STATE     READ WRITE CKSUM
        testPool              ONLINE       0     0     0
          raidz1-0            ONLINE       0     0     0
            /root/test/file1  ONLINE       0     0     0
            /root/test/file2  ONLINE       0     0     0
            /root/test/file3  ONLINE       0     0     0

errors: No known data errors
The goal of the next exercise is to convert the testPool from raidz1 to a mirrored configuration.
To accomplish that, we are going to add a new mirror to the existing pool.
zpool add testPool mirror /root/test/file4 /root/test/file5 mirror /root/test/file6 /root/test/file7
vdev verification failed: use -f to override the following errors:
mismatched replication level: pool uses raidz and new vdev is mirror
Unable to build pool from specified devices: invalid vdev configuration
Running the above produces a warning about mixing RAID types, which is typically not good practice in a normal environment.
Since the mixed configuration is only a prerequisite for the migration/removal, let's force-add the new mirrors by adding -f:

zpool add -f testPool mirror /root/test/file4 /root/test/file5 mirror /root/test/file6 /root/test/file7
Let's take a look at the pool:

zpool status testPool
  pool: testPool
 state: ONLINE
  scan: none requested
config:

        NAME                  STATE     READ WRITE CKSUM
        testPool              ONLINE       0     0     0
          raidz1-0            ONLINE       0     0     0
            /root/test/file1  ONLINE       0     0     0
            /root/test/file2  ONLINE       0     0     0
            /root/test/file3  ONLINE       0     0     0
          mirror-1            ONLINE       0     0     0
            /root/test/file4  ONLINE       0     0     0
            /root/test/file5  ONLINE       0     0     0
          mirror-2            ONLINE       0     0     0
            /root/test/file6  ONLINE       0     0     0
            /root/test/file7  ONLINE       0     0     0

errors: No known data errors
As you can see from the zpool status output above, the pool now contains a mix of raidz and mirror vdevs.
We are now ready for the prime-time test, so let's remove the raidz vdev.
You do that by simply running:

zpool remove testPool raidz1-0
Now, let's take another look at the zpool status.
As you can see below, we are left with only the mirrored configuration.

zpool status testPool
  pool: testPool
 state: ONLINE
  scan: resilvered 17.5K in 1s with 0 errors on Tue Aug 28 12:33:14 2018
config:

        NAME                  STATE     READ WRITE CKSUM
        testPool              ONLINE       0     0     0
          mirror-1            ONLINE       0     0     0
            /root/test/file4  ONLINE       0     0     0
            /root/test/file5  ONLINE       0     0     0
          mirror-2            ONLINE       0     0     0
            /root/test/file6  ONLINE       0     0     0
            /root/test/file7  ONLINE       0     0     0

errors: No known data errors
That's all it takes to trigger the ZFS device removal feature.
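Device removal copies the existing data onto the remaining vdevs, so nothing should be lost. One simple way to convince yourself is to checksum a file before and after the removal. A minimal sketch of that comparison (using sha256sum, which is not Solaris-specific; sample.dat is a hypothetical stand-in for a file stored on the pool, and the zpool step is shown as a comment only):

```shell
# Sketch: verify data integrity across a pool reconfiguration by comparing
# checksums taken before and after the change.
dd if=/dev/urandom of=sample.dat bs=1024 count=64 2>/dev/null
before=$(sha256sum sample.dat | awk '{print $1}')
# ... here you would run: zpool remove testPool raidz1-0
# ... and wait for the removal to complete ...
after=$(sha256sum sample.dat | awk '{print $1}')
[ "$before" = "$after" ] && echo "data intact" || echo "checksum mismatch"
```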
Cleaning up.
Just run the below to destroy the pool and remove the test files:

zpool destroy testPool
rm file[1-7]
Hey Eli, Great blog about Solaris ZFS device removal. It works great with real devices too. 🙂
Thanks, Cindy
Great article! I heard that even with data in the pool, ZFS redistributes the data during the removal process as long as there is enough space on the remaining devices.