8024 mdb_ctf_vread() needn't be so strict about unions
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com>
Reviewed by: Robert Mustacchi <rm@joyent.com>
Approved by: Dan McDonald <danmcd@omniti.com>
NEX-6850 FMA messages need updating, badly
Reviewed by: Cynthia Eastham <cynthia.eastham@nexenta.com>
8490 Remove Sun/Solaris references from FMA messages
Reviewed by: Jason King <jason.king@joyent.com>
Reviewed by: Elijah Zupancic <elijah.zupancic@joyent.com>
Reviewed by: Robert Mustacchi <rm@joyent.com>
Reviewed by: Jerry Jelinek <jerry.jelinek@joyent.com>
Reviewed by: Eric Sproul <eric.sproul@circonus.com>
Reviewed by: Dale Ghent <daleg@elemental.org>
Approved by: Gordon Ross <gordon.w.ross@gmail.com>
NEX-5736 implement autoreplace matching based on FRU slot number
NEX-6200 hot spares are not reactivated after reinserting into enclosure
NEX-9403 need to update FRU for spare and l2cache devices
NEX-9404 remove lofi autoreplace support from syseventd
NEX-9409 hotsparing doesn't work for vdevs without FRU
NEX-9424 zfs`vdev_online() needs better notification about state changes
Portions contributed by: Alek Pinchuk <alek@nexenta.com>
Portions contributed by: Josef 'Jeff' Sipek <josef.sipek@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Steve Peng <steve.peng@nexenta.com>
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
NEX-7397 Hotspare didn't kick in automatically when one of the drive in pool went "Faulty"
Reviewed by: Sanjay Nadkarni <sanjay.nadkarni@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
NEX-2846 Enable Automatic/Intelligent Hot Sparing capability
Reviewed by: Jeffry Molanus <jeffry.molanus@nexenta.com>
Reviewed by: Roman Strashkin <roman.strashkin@nexenta.com>
Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com>
NEX-941 zfs doesn't replace "UNAVAIL" disk from spares in pool
Fix up some merges where we wanted the upstream version.
re #6853 rb1787 remove references to sun.com

          --- old/usr/src/cmd/fm/dicts/ZFS.po
          +++ new/usr/src/cmd/fm/dicts/ZFS.po
   1    1  #
   2      -# Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
   3      -# Use is subject to license terms.
   4      -#
   5    2  # CDDL HEADER START
   6    3  #
   7    4  # The contents of this file are subject to the terms of the
   8    5  # Common Development and Distribution License (the "License").
   9    6  # You may not use this file except in compliance with the License.
  10    7  #
  11    8  # You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
  12    9  # or http://www.opensolaris.org/os/licensing.
  13   10  # See the License for the specific language governing permissions
  14   11  # and limitations under the License.
  15   12  #
  16   13  # When distributing Covered Code, include this CDDL HEADER in each
  17   14  # file and include the License file at usr/src/OPENSOLARIS.LICENSE.
  18   15  # If applicable, add the following below this CDDL HEADER, with the
  19   16  # fields enclosed by brackets "[]" replaced with your own identifying
  20   17  # information: Portions Copyright [yyyy] [name of copyright owner]
  21   18  #
  22   19  # CDDL HEADER END
  23   20  #
       21 +
       22 +
  24   23  #
  25      -# DO NOT EDIT -- this file is generated by the Event Registry.
       24 +# Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
       25 +# Use is subject to license terms.
       26 +# Copyright 2017 Nexenta Systems, Inc.
  26   27  #
       28 +
  27   29  #
  28   30  # code: ZFS-8000-14
  29   31  # keys: ereport.fs.zfs.pool.corrupt_cache
  30   32  #
  31   33  msgid "ZFS-8000-14.type"
  32   34  msgstr "Error"
  33   35  msgid "ZFS-8000-14.severity"
  34   36  msgstr "Critical"
  35   37  msgid "ZFS-8000-14.description"
  36      -msgstr "The ZFS cache file is corrupted  Refer to %s for more information."
       38 +msgstr "The ZFS cache file is corrupted."
  37   39  msgid "ZFS-8000-14.response"
  38   40  msgstr "No automated response will be taken."
  39   41  msgid "ZFS-8000-14.impact"
  40   42  msgstr "ZFS filesystems are not available"
  41   43  msgid "ZFS-8000-14.action"
  42      -msgstr "\nZFS keeps a list of active pools on the filesystem to avoid having to\nscan all devices when the system is booted.  If this file is corrupted, then\nnormally active pools will not be automatically opened.  The pools can be\nrecovered using the 'zpool import' command:\n\n\n# zpool import\n  pool: test\n    id: 12743384782310107047\n state: ONLINE\naction: The pool can be imported using its name or numeric identifier.\nconfig:\n\n        test              ONLINE\n          c0t0d0          ONLINE\n\n\nThis will automatically scan /dev/dsk for any\ndevices part of a pool.  If devices have been made available in an alternate\nlocation, use the '-d' option to 'zpool import' to search for devices in a\ndifferent directory.\n\nOnce you have determined which pools are available for import, you can\nimport the pool explicitly by specifying the name or numeric\nidentifier:\n\n\n# zpool import test\n\n\nAlternately, you can import all available pools by specifying the\n'-a' option.  Once a pool has been imported, the ZFS cache will be repaired so\nthat the pool will appear normally in the future.\n       "
        44 +msgstr "Re-import the pool(s) to recreate the ZFS cache file."
  43   45  #
  44   46  # code: ZFS-8000-2Q
  45   47  # keys: ereport.fs.zfs.device.missing_r
  46   48  #
  47   49  msgid "ZFS-8000-2Q.type"
  48   50  msgstr "Error"
  49   51  msgid "ZFS-8000-2Q.severity"
  50   52  msgstr "Major"
  51   53  msgid "ZFS-8000-2Q.description"
  52      -msgstr "A device in a replicated configuration could not be\n       opened.  Refer to %s for more information."
       54 +msgstr "A device in a replicated configuration could not be opened."
  53   55  msgid "ZFS-8000-2Q.response"
  54   56  msgstr "A hot spare will be activated if available."
  55   57  msgid "ZFS-8000-2Q.impact"
  56      -msgstr "The pool is no longer providing the configured level of\n           replication."
       58 +msgstr "The pool is no longer providing the configured level of replication."
  57   59  msgid "ZFS-8000-2Q.action"
  58      -msgstr "\nFor an active pool\n\nIf this error was encountered while running 'zpool import', please see\nthe section below.  Otherwise, run 'zpool status -x' to determine which pool has\nexperienced a failure:\n\n\n# zpool status -x\n  pool: test\n state: DEGRADED\nstatus: One or more devices could not be opened.  Sufficient replicas exist for\n        the pool to continue functioning in a degraded state.\naction: Attach the missing device and online it using 'zpool online'.\n   see: http://illumos.org/msg/ZFS-8000-2Q\n scrub: none requested\nconfig:\n\n        NAME                  STATE     READ WRITE CKSUM\n        test                  DEGRADED     0     0     0\n          mirror              DEGRADED     0     0     0\n            c0t0d0            ONLINE       0     0     0\n            c0t0d1            FAULTED      0     0     0  cannot open\n\nerrors: No known data errors\n\n\nDetermine which device failed to open by looking for a FAULTED device\nwith an additional 'cannot open' message.  If this device has been inadvertently\nremoved from the system, attach the device and bring it online with 'zpool\nonline':\n\n\n# zpool online test c0t0d1\n\n\nIf the device is no longer available, the device can be replaced using\nthe 'zpool replace' command:\n\n\n# zpool replace test c0t0d1 c0t0d2\n\n\nIf the device has been replaced by another disk in the same physical\nslot, then the device can be replaced using a single argument to the 'zpool\nreplace' command:\n\n\n# zpool replace test c0t0d1\n\n\nExisting data will be resilvered to the new device.  Once the\nresilvering completes, the device will be removed from the pool.\n\nFor an exported pool\n\nIf this error is encountered during a 'zpool import', it means that one\nof the devices is not attached to the system:\n\n\n# zpool import\n  pool: test\n    id: 10121266328238932306\n state: DEGRADED\nstatus: One or more devices are missing from the system.\naction: The pool can be imported despite missing or damaged devices.  The\n        fault tolerance of the pool may be compromised if imported.\n   see: http://illumos.org/msg/ZFS-8000-2Q\nconfig:\n\n        test              DEGRADED\n          mirror          DEGRADED\n            c0t0d0        ONLINE\n            c0t0d1        FAULTED   cannot open\n\n\nUnlike when the pool is active on the system, the device cannot be\nreplaced while the pool is exported.  If the device can be attached to the\nsystem, attach the device and run 'zpool import' again.\n\nAlternatively, the pool can be imported as-is, though it will be placed\nin the DEGRADED state due to a missing device.  The device will be marked as\nUNAVAIL.  Once the pool has been imported, the missing device can be replaced as\ndescribed above.\n   "
       60 +msgstr "Replace the bad device."
  59   61  #
  60   62  # code: ZFS-8000-3C
  61   63  # keys: ereport.fs.zfs.device.missing_nr
  62   64  #
  63   65  msgid "ZFS-8000-3C.type"
  64   66  msgstr "Error"
  65   67  msgid "ZFS-8000-3C.severity"
  66   68  msgstr "Critical"
  67   69  msgid "ZFS-8000-3C.description"
  68      -msgstr "A device could not be opened and no replicas are available.  Refer to %s for more information."
       70 +msgstr "A device could not be opened and no replicas are available."
  69   71  msgid "ZFS-8000-3C.response"
  70   72  msgstr "No automated response will be taken."
  71   73  msgid "ZFS-8000-3C.impact"
  72   74  msgstr "The pool is no longer available"
  73   75  msgid "ZFS-8000-3C.action"
  74      -msgstr "\nFor an active pool\n\nIf this error was encountered while running 'zpool import', please see\nthe section below.  Otherwise, run 'zpool status -x' to determine which pool\nhas experienced a failure:\n\n\n# zpool status -x\n  pool: test\n state: FAULTED\nstatus: One or more devices could not be opened.  There are insufficient\n      replicas for the pool to continue functioning.\naction: Attach the missing device and online it using 'zpool online'.\n   see: http://illumos.org/msg/ZFS-8000-3C\n scrub: none requested\nconfig:\n\n        NAME                  STATE     READ WRITE CKSUM\n        test                  FAULTED      0     0     0  insufficient replicas\n          c0t0d0              ONLINE       0     0     0\n          c0t0d1              FAULTED      0     0     0  cannot open\n\nerrors: No known data errors\n\n\nIf the device has been temporarily detached from the system, attach the\ndevice to the system and run 'zpool status' again.  The pool should\nautomatically detect the newly attached device and resume functioning.  You may\nhave to mount the filesystems in the pool explicitly using 'zfs\nmount -a'.\n\nIf the device is no longer available and cannot be reattached to the\nsystem, then the pool must be destroyed and re-created from a backup\nsource.\n\nFor an exported pool\n\nIf this error is encountered during a 'zpool import', it means that one\nof the devices is not attached to the system:\n\n\n# zpool import\n  pool: test\n    id: 10121266328238932306\n state: FAULTED\nstatus: One or more devices are missing from the system.\naction: The pool cannot be imported.  Attach the missing devices and\n    try again.\n   see: http://illumos.org/msg/ZFS-8000-3C\nconfig:\n\n        test              FAULTED   insufficient replicas\n          c0t0d0          ONLINE\n          c0t0d1          FAULTED   cannot open\n\n\nThe pool cannot be imported until the missing device is attached to the\nsystem.  If the device has been made available in an alternate location, use the\n'-d' option to 'zpool import' to search for devices in a different directory.\nIf the missing device is unavailable, then the pool cannot be imported.\n        "
       76 +msgstr "If the device is no longer available and cannot be reattached to the system, then the pool must be destroyed and re-created from a backup source."
  75   77  #
  76   78  # code: ZFS-8000-4J
  77   79  # keys: ereport.fs.zfs.device.corrupt_label_r
  78   80  #
  79   81  msgid "ZFS-8000-4J.type"
  80   82  msgstr "Error"
  81   83  msgid "ZFS-8000-4J.severity"
  82   84  msgstr "Major"
  83   85  msgid "ZFS-8000-4J.description"
  84      -msgstr "A device could not be opened due to a missing or invalid\n          device label.  Refer to %s for more information."
       86 +msgstr "A device could not be opened due to a missing or invalid device label."
  85   87  msgid "ZFS-8000-4J.response"
  86   88  msgstr "A hot spare will be activated if available."
  87   89  msgid "ZFS-8000-4J.impact"
  88      -msgstr "The pool is no longer providing the configured level of\n           replication."
       90 +msgstr "The pool is no longer providing the configured level of replication."
  89   91  msgid "ZFS-8000-4J.action"
  90      -msgstr "\nFor an active pool\n\nIf this error was encountered while running 'zpool import', please see\nthe section below.  Otherwise, run 'zpool status -x' to determine which pool\nhas experienced a failure:\n\n\n\n# zpool status -x\n  pool: test\n state: DEGRADED\nstatus: One or more devices could not be used because the label is missing or\n        invalid.  Sufficient replicas exist for the pool to continue\n        functioning in a degraded state.\naction: Replace the device using 'zpool replace'.\n   see: http://illumos.org/msg/ZFS-8000-4J\n scrub: none requested\nconfig:\n\n        NAME                  STATE     READ WRITE CKSUM\n        test                  DEGRADED     0     0     0\n          mirror              DEGRADED     0     0     0\n            c0t0d0            ONLINE       0     0     0\n            c0t0d1            FAULTED      0     0     0  corrupted data\n\nerrors: No known data errors\n\n\nIf the device has been temporarily detached from the system, attach the\ndevice to the system and run 'zpool status' again.  The pool should\nautomatically detect the newly attached device and resume functioning.\n\nIf the device is no longer available, it can be replaced using 'zpool\nreplace':\n\n\n# zpool replace test c0t0d1 c0t0d2\n\n\nIf the device has been replaced by another disk in the same physical\nslot, then the device can be replaced using a single argument to the 'zpool\nreplace' command:\n\n\n# zpool replace test c0t0d1\n\n\nZFS will begin migrating data to the new device as soon as the replace\nis issued.  Once the resilvering completes, the original device (if different\nfrom the replacement) will be removed, and the pool will be restored to the\nONLINE state.\n\nFor an exported pool\n\nIf this error is encountered while running 'zpool import', the pool can\nbe still be imported despite the failure:\n\n\n# zpool import\n  pool: test\n    id: 5187963178597328409\n state: DEGRADED\nstatus: One or more devices contains corrupted data.  The fault tolerance of\n     the pool may be compromised if imported.\naction: The pool can be imported using its name or numeric identifier.\n   see: http://illumos.org/msg/ZFS-8000-4J\nconfig:\n\n        test              DEGRADED\n          mirror          DEGRADED\n            c0t0d0        ONLINE\n            c0t0d1        FAULTED   corrupted data\n\n\nTo import the pool, run 'zpool import':\n\n\n# zpool import test\n\n\nOnce the pool has been imported, the damaged device can be replaced\naccording to the above procedure.\n       "
       92 +msgstr "Replace the bad device."
  91   93  #
  92   94  # code: ZFS-8000-5E
  93   95  # keys: ereport.fs.zfs.device.corrupt_label_nr
  94   96  #
  95   97  msgid "ZFS-8000-5E.type"
  96   98  msgstr "Error"
  97   99  msgid "ZFS-8000-5E.severity"
  98  100  msgstr "Critical"
  99  101  msgid "ZFS-8000-5E.description"
 100      -msgstr "A device could not be opened due to a missing or invalid\n          device label and no replicas are available.  Refer to %s for more information."
      102 +msgstr "A device could not be opened due to a missing or invalid device label and no replicas are available."
 101  103  msgid "ZFS-8000-5E.response"
 102  104  msgstr "No automated response will be taken."
 103  105  msgid "ZFS-8000-5E.impact"
 104      -msgstr "The pool is no longer available"
      106 +msgstr "The pool is no longer available."
 105  107  msgid "ZFS-8000-5E.action"
 106      -msgstr "\nFor an active pool\n\nIf this error was encountered while running 'zpool import', please see\nthe section below.  Otherwise, run 'zpool status -x' to determine which pool\nhas experienced a failure:\n\n\n# zpool status -x\n  pool: test\n state: FAULTED\nstatus: One or more devices could not be used because the the label is missing \n        or invalid.  There are insufficient replicas for the pool to continue\n        functioning.\naction: Destroy and re-create the pool from a backup source.\n   see: http://illumos.org/msg/ZFS-8000-5E\n scrub: none requested\nconfig:\n\n        NAME                  STATE     READ WRITE CKSUM\n        test                  FAULTED      0     0     0  insufficient replicas\n          c0t0d0              FAULTED      0     0     0  corrupted data\n          c0t0d1              ONLINE       0     0     0\n\nerrors: No known data errors\n\n\nThe device listed as FAULTED with 'corrupted data' cannot be opened due\nto a corrupt label.  ZFS will be unable to use the pool, and all data within the\npool is irrevocably lost.  The pool must be destroyed and recreated from an\nappropriate backup source.  Using replicated configurations will prevent this\nfrom happening in the future.\n\nFor an exported pool\n\nIf this error is encountered during 'zpool import', the action is the\nsame.  The pool cannot be imported - all data is lost and must be restored from\nan appropriate backup source.\n   "
      108 +msgstr "The pool must be destroyed and recreated from an appropriate backup source."
 107  109  #
 108  110  # code: ZFS-8000-6X
 109  111  # keys: ereport.fs.zfs.pool.bad_guid_sum
 110  112  #
 111  113  msgid "ZFS-8000-6X.type"
 112  114  msgstr "Error"
 113  115  msgid "ZFS-8000-6X.severity"
 114  116  msgstr "Critical"
 115  117  msgid "ZFS-8000-6X.description"
 116      -msgstr "One or more top level devices are missing.  Refer to %s for more information."
      118 +msgstr "One or more top level devices are missing."
 117  119  msgid "ZFS-8000-6X.response"
 118  120  msgstr "No automated response will be taken."
 119  121  msgid "ZFS-8000-6X.impact"
 120      -msgstr "The pool cannot be imported"
      122 +msgstr "The pool cannot be imported."
 121  123  msgid "ZFS-8000-6X.action"
 122      -msgstr "\nRun 'zpool import' to list which pool cannot be imported:\n\n\n# zpool import\n  pool: test\n    id: 13783646421373024673\n state: FAULTED\nstatus: One or more devices are missing from the system.\naction: The pool cannot be imported.  Attach the missing\n      devices and try again.\n   see: http://illumos.org/msg/ZFS-8000-6X\nconfig:\n\n        test              FAULTED   missing device\n          c0t0d0          ONLINE\n\n        Additional devices are known to be part of this pool, though their\n        exact configuration cannot be determined.\n\n\nZFS attempts to store enough configuration data on the devices such\nthat the configuration is recoverable from any subset of devices.  In some\ncases, particularly when an entire toplevel virtual device is not attached to\nthe system, ZFS will be unable to determine the complete configuration.  It will\nalways detect that these devices are missing, even if it cannot identify all of\nthe devices.\n\nThe pool cannot be imported until the unknown missing device is\nattached to the system.  If the device has been made available in an alternate\nlocation, use the '-d' option to 'zpool import' to search for devices in a\ndifferent directory.  If the missing device is unavailable, then the pool cannot\nbe imported.\n      "
      124 +msgstr "Attach the missing devices and try again.  The pool cannot be imported until the unknown missing device is attached to the system."
 123  125  #
 124  126  # code: ZFS-8000-72
 125  127  # keys: ereport.fs.zfs.pool.corrupt_pool
 126  128  #
 127  129  msgid "ZFS-8000-72.type"
 128  130  msgstr "Error"
 129  131  msgid "ZFS-8000-72.severity"
 130  132  msgstr "Critical"
 131  133  msgid "ZFS-8000-72.description"
 132      -msgstr "The metadata required to open the pool is corrupt.  Refer to %s for more information."
      134 +msgstr "The metadata required to open the pool is corrupt."
 133  135  msgid "ZFS-8000-72.response"
 134  136  msgstr "No automated response will be taken."
 135  137  msgid "ZFS-8000-72.impact"
 136  138  msgstr "The pool is no longer available"
 137  139  msgid "ZFS-8000-72.action"
 138      -msgstr "\nEven though all the devices are available, the on-disk data\nhas been corrupted such that the pool cannot be opened.  If a recovery\naction is presented, the pool can be returned to a usable state.\nOtherwise, all data within the pool is lost, and the pool must be\ndestroyed and restored from an appropriate backup source.  ZFS\nincludes built-in metadata replication to prevent this from happening\neven for unreplicated pools, but running in a replicated configuration\nwill decrease the chances of this happening in the future.\n\nIf this error is encountered during 'zpool import', see the\nsection below.  Otherwise, run 'zpool status -x' to determine which\npool is faulted and if a recovery option is available:\n\n\n# zpool status -x\n  pool: test\n    id: 13783646421373024673\n state: FAULTED\nstatus: The pool metadata is corrupted and cannot be opened.\naction: Recovery is possible, but will result in some data loss.\n        Returning the pool to its state as of Mon Sep 28 10:24:39 2009\n        should correct the problem.  Approximately 59 seconds of data\n        will have to be discarded, irreversibly.  Recovery can be\n        attempted by executing 'zpool clear -F test'.  A scrub of the pool\n        is strongly recommended following a successful recovery.\n   see: http://illumos.org/msg/ZFS-8000-72\nconfig:\n\n        NAME                  STATE     READ WRITE CKSUM\n        test                  FAULTED      0     0     2  corrupted data\n            c0t0d0            ONLINE       0     0     2\n            c0t0d1            ONLINE       0     0     2\n\n\nIf recovery is unavailable, the recommended action will be:\n\n\naction: Destroy the pool and restore from backup.\n\n\nIf this error is encountered during 'zpool import', and if no\nrecovery option is mentioned, the pool is unrecoverable and cannot be\nimported.  The pool must be restored from an appropriate backup\nsource.  If a recovery option is available, the output from 'zpool\nimport' will look something like the following:\n\n\n# zpool import share\ncannot import 'share': I/O error\n        Recovery is possible, but will result in some data loss.\n        Returning the pool to its state as of Sun Sep 27 12:31:07 2009\n        should correct the problem.  Approximately 53 seconds of data\n        will have to be discarded, irreversibly.  Recovery can be\n        attempted by executing 'zpool import -F share'.  A scrub of the pool\n        is strongly recommended following a successful recovery.\n\n\nRecovery actions are requested with the -F option to either\n'zpool clear' or 'zpool import'.  Recovery will result in some data\nloss, because it reverts the pool to an earlier state.  A dry-run\nrecovery check can be performed by adding the -n option, affirming if\nrecovery is possible without actually reverting the pool to its\nearlier state.\n"
      140 +msgstr "Try recovery of the pool if it's possible.  Otherwise the pool must be destroyed and restored from an appropriate backup source."
 139  141  #
 140  142  # code: ZFS-8000-8A
 141  143  # keys: ereport.fs.zfs.object.corrupt_data
 142  144  #
 143  145  msgid "ZFS-8000-8A.type"
 144  146  msgstr "Error"
 145  147  msgid "ZFS-8000-8A.severity"
 146  148  msgstr "Critical"
 147  149  msgid "ZFS-8000-8A.description"
 148      -msgstr "A file or directory could not be read due to corrupt data.  Refer to %s for more information."
      150 +msgstr "A file or directory could not be read due to corrupt data."
 149  151  msgid "ZFS-8000-8A.response"
 150  152  msgstr "No automated response will be taken."
 151  153  msgid "ZFS-8000-8A.impact"
 152  154  msgstr "The file or directory is unavailable."
 153  155  msgid "ZFS-8000-8A.action"
 154      -msgstr "\nRun 'zpool status -x' to determine which pool is damaged:\n\n\n# zpool status -x\n  pool: test\n state: ONLINE\nstatus: One or more devices has experienced an error and no valid replicas\n        are available.  Some filesystem data is corrupt, and applications\n        may have been affected.\naction: Destroy the pool and restore from backup.\n   see: http://illumos.org/msg/ZFS-8000-8A\n scrub: none requested\nconfig:\n\n        NAME                  STATE     READ WRITE CKSUM\n        test                  ONLINE       0     0     2\n          c0t0d0              ONLINE       0     0     2\n          c0t0d1              ONLINE       0     0     0\n\nerrors: 1 data errors, use '-v' for a list\n\n\nUnfortunately, the data cannot be repaired, and the only choice to\nrepair the data is to restore the pool from backup.  Applications attempting to\naccess the corrupted data will get an error (EIO), and data may be permanently\nlost.\n\nOn recent versions of illumos, the list of affected files can be\nretrieved by using the '-v' option to 'zpool status':\n\n\n# zpool status -xv\n  pool: test\n state: ONLINE\nstatus: One or more devices has experienced an error and no valid replicas\n        are available.  Some filesystem data is corrupt, and applications\n        may have been affected.\naction: Destroy the pool and restore from backup.\n   see: http://illumos.org/msg/ZFS-8000-8A\n scrub: none requested\nconfig:\n\n        NAME                  STATE     READ WRITE CKSUM\n        test                  ONLINE       0     0     2\n          c0t0d0              ONLINE       0     0     2\n          c0t0d1              ONLINE       0     0     0\n\nerrors: Permanent errors have been detected in the following files:\n\n        /export/example/foo\n\n\nDamaged files may or may not be able to be removed depending on the\ntype of corruption.  If the corruption is within the plain data, the file should\nbe removable.  If the corruption is in the file metadata, then the file cannot\nbe removed, though it can be moved to an alternate location.  In either case,\nthe data should be restored from a backup source.  It is also possible for the\ncorruption to be within pool-wide metadata, resulting in entire datasets being\nunavailable.  If this is the case, the only option is to destroy the pool and\nre-create the datasets from backup.\n       "
      156 +msgstr "Try recovery of the pool if it's possible.  Otherwise the pool must be destroyed and restored from an appropriate backup source."
 155  157  #
 156  158  # code: ZFS-8000-9P
 157  159  # keys: ereport.fs.zfs.device.failing
 158  160  #
 159  161  msgid "ZFS-8000-9P.type"
 160  162  msgstr "Error"
 161  163  msgid "ZFS-8000-9P.severity"
 162  164  msgstr "Minor"
 163  165  msgid "ZFS-8000-9P.description"
 164      -msgstr "A device has experienced uncorrectable errors in a\n        replicated configuration.  Refer to %s for more information."
      166 +msgstr "A device has experienced uncorrectable errors in a replicated configuration."
 165  167  msgid "ZFS-8000-9P.response"
 166  168  msgstr "ZFS has attempted to repair the affected data."
 167  169  msgid "ZFS-8000-9P.impact"
 168      -msgstr "The system is unaffected, though errors may indicate future\n       failure.  Future errors may cause ZFS to automatically fault\n          the device."
      170 +msgstr "The system is unaffected, though errors may indicate future failure.  Future errors may cause ZFS to automatically fault the device."
 169  171  msgid "ZFS-8000-9P.action"
 170      -msgstr "\nRun 'zpool status -x' to determine which pool has experienced\nerrors:\n\n\n# zpool status\n  pool: test\n state: ONLINE\nstatus: One or more devices has experienced an unrecoverable error.  An\n        attempt was made to correct the error.  Applications are unaffected.\naction: Determine if the device needs to be replaced, and clear the errors\n        using 'zpool online' or replace the device with 'zpool replace'.\n   see: http://illumos.org/msg/ZFS-8000-9P\n scrub: none requested\nconfig:\n\n        NAME                  STATE     READ WRITE CKSUM\n        test                  ONLINE       0     0     0\n          mirror              ONLINE       0     0     0\n            c0t0d0            ONLINE       0     0     2\n            c0t0d1            ONLINE       0     0     0\n\nerrors: No known data errors\n\n\nFind the device with a non-zero error count for READ, WRITE, or CKSUM.\nThis indicates that the device has experienced a read I/O error, write I/O\nerror, or checksum validation error.  Because the device is part of a mirror or\nRAID-Z device, ZFS was able to recover from the error and subsequently repair\nthe damaged data.\n\nIf these errors persist over a period of time, ZFS may determine the\ndevice is faulty and mark it as such.  However, these error counts may or may\nnot indicate that the device is unusable.  It depends on how the errors were\ncaused, which the administrator can determine in advance of any ZFS diagnosis.\nFor example, the following cases will all produce errors that do not indicate\npotential device failure:\n\n\nA network attached device lost connectivity but has now\nrecovered\nA device suffered from a bit flip, an expected event over long\nperiods of time\nAn administrator accidentally wrote over a portion of the disk using\nanother program\n\n\nIn these cases, the presence of errors does not indicate that the\ndevice is likely to fail in the future, and therefore does not need to be\nreplaced.  If this is the case, then the device errors should be cleared using\n'zpool clear':\n\n\n# zpool clear test c0t0d0\n\n\nOn the other hand, errors may very well indicate that the device has\nfailed or is about to fail.  If there are continual I/O errors to a device that\nis otherwise attached and functioning on the system, it most likely needs to be\nreplaced.   The administrator should check the system log for any driver\nmessages that may indicate hardware failure.  If it is determined that the\ndevice needs to be replaced, then the 'zpool replace' command should be\nused:\n\n\n# zpool replace test c0t0d0 c0t0d2\n\n\nThis will attach the new device to the pool and begin resilvering data\nto it.  Once the resilvering process is complete, the old device will\nautomatically be removed from the pool, at which point it can safely be removed\nfrom the system.  If the device needs to be replaced in-place (because there are\nno available spare devices), the original device can be removed and replaced\nwith a new device, at which point a different form of 'zpool replace' can be\nused:\n\n\n# zpool replace test c0t0d0\n\n\nThis assumes that the original device at 'c0t0d0' has been replaced\nwith a new device under the same path, and will be replaced\nappropriately.\n\nYou can monitor the progress of the resilvering operation by using the\n'zpool status -x' command:\n\n\n# zpool status -x\n  pool: test\n state: DEGRADED\nstatus: One or more devices is currently being replaced.  The pool may not be\n     providing the necessary level of replication.\naction: Wait for the resilvering operation to complete\n scrub: resilver in progress, 0.14% done, 0h0m to go\nconfig:\n\n        NAME                  STATE     READ WRITE CKSUM\n        test                  ONLINE       0     0     0\n          mirror              ONLINE       0     0     0\n            replacing         ONLINE       0     0     0\n              c0t0d0          ONLINE       0     0     3\n              c0t0d2          ONLINE       0     0     0  58.5K resilvered\n            c0t0d1            ONLINE       0     0     0\n\nerrors: No known data errors\n\n      "
      172 +msgstr "Determine the cause of the errors, and replace the device if needed."
 171  173  #
 172  174  # code: ZFS-8000-A5
 173  175  # keys: ereport.fs.zfs.device.version_mismatch
 174  176  #
 175  177  msgid "ZFS-8000-A5.type"
 176  178  msgstr "Error"
 177  179  msgid "ZFS-8000-A5.severity"
 178  180  msgstr "Major"
 179  181  msgid "ZFS-8000-A5.description"
 180      -msgstr "The on-disk version is not compatible with the running\n            system.  Refer to %s for more information."
      182 +msgstr "The on-disk version is not compatible with the running system."
 181  183  msgid "ZFS-8000-A5.response"
 182  184  msgstr "No automated response will occur."
 183  185  msgid "ZFS-8000-A5.impact"
 184  186  msgstr "The pool is unavailable."
 185  187  msgid "ZFS-8000-A5.action"
 186      -msgstr "\nIf this error is seen during 'zpool import', see the section below.\nOtherwise, run 'zpool status -x' to determine which pool is faulted:\n\n\n# zpool status -x\n  pool: test\n state: FAULTED\nstatus: The ZFS version for the pool is incompatible with the software running\n        on this system.\naction: Destroy and re-create the pool.\n scrub: none requested\nconfig:\n\n        NAME                  STATE     READ WRITE CKSUM\n        test                  FAULTED      0     0     0  incompatible version\n          mirror              ONLINE       0     0     0\n            c0t0d0            ONLINE       0     0     0\n            c0t0d1            ONLINE       0     0     0\n\nerrors: No known errors\n\n\nThe pool cannot be used on this system.  Either move the storage to the\nsystem where the pool was originally created, upgrade the current system\nsoftware to a more recent version, or destroy the pool and re-create it from\nbackup.\n\nIf this error is seen during import, the pool cannot be imported on the\ncurrent system.  The disks must be attached to the system which originally\ncreated the pool, and imported there.\n\nThe list of currently supported versions can be displayed using 'zpool\nupgrade -v'.\n "
      188 +msgstr "Either move the storage to the system where the pool was originally created, upgrade the current system software to a more recent version, or destroy the pool and re-create it from backup."
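The removed long-form text for ZFS-8000-A5 walked through diagnosing the version mismatch before choosing among the three remedies; a minimal sketch of that diagnosis, assuming a pool named 'test' (a placeholder):

```shell
# Show which pool is faulted and why ('incompatible version').
zpool status -x

# List the on-disk format versions supported by the running software;
# an A5 fault means the pool was created with a newer version than
# any shown here, so the pool cannot be imported on this system.
zpool upgrade -v
```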
 187  189  #
 188  190  # code: ZFS-8000-CS
 189  191  # keys: fault.fs.zfs.pool
 190  192  #
 191  193  msgid "ZFS-8000-CS.type"
 192  194  msgstr "Fault"
 193  195  msgid "ZFS-8000-CS.severity"
 194  196  msgstr "Major"
 195  197  msgid "ZFS-8000-CS.description"
 196      -msgstr "A ZFS pool failed to open.  Refer to %s for more information."
      198 +msgstr "A ZFS pool failed to open."
 197  199  msgid "ZFS-8000-CS.response"
 198  200  msgstr "No automated response will occur."
 199  201  msgid "ZFS-8000-CS.impact"
 200      -msgstr "The pool data is unavailable"
      202 +msgstr "The pool data is unavailable."
 201  203  msgid "ZFS-8000-CS.action"
 202      -msgstr "Run 'zpool status -x' and attach any missing devices, follow\n     any provided recovery instructions or restore from backup."
      204 +msgstr "Attach any missing devices, follow any provided recovery instructions or restore from backup."
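The removed ZFS-8000-CS text pointed at 'zpool status -x' as the starting point; a hypothetical recovery session, with 'test' and 'c0t0d0' as placeholder pool and device names:

```shell
# Identify the pool that failed to open and any missing devices.
zpool status -x

# After reattaching the missing hardware, bring the device back online.
zpool online test c0t0d0
```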
 203  205  #
 204  206  # code: ZFS-8000-D3
 205  207  # keys: fault.fs.zfs.device
 206  208  #
 207  209  msgid "ZFS-8000-D3.type"
 208  210  msgstr "Fault"
 209  211  msgid "ZFS-8000-D3.severity"
 210  212  msgstr "Major"
 211  213  msgid "ZFS-8000-D3.description"
 212      -msgstr "A ZFS device failed.  Refer to %s for more information."
      214 +msgstr "A ZFS device failed."
 213  215  msgid "ZFS-8000-D3.response"
 214      -msgstr "No automated response will occur."
      216 +msgstr "A hot spare will be activated if available."
 215  217  msgid "ZFS-8000-D3.impact"
 216  218  msgstr "Fault tolerance of the pool may be compromised."
 217  219  msgid "ZFS-8000-D3.action"
 218      -msgstr "Run 'zpool status -x' and replace the bad device."
      220 +msgstr "Replace the bad device."
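The two forms of 'zpool replace' described in the ZFS-8000-9P text above apply here as well; a sketch, with placeholder pool and device names:

```shell
# Find the faulted device.
zpool status -x

# Replace it with a different device, resilvering onto the new one.
zpool replace test c0t0d0 c0t0d2

# Or, if the failed disk was physically swapped out in the same slot:
zpool replace test c0t0d0
```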
 219  221  #
 220  222  # code: ZFS-8000-EY
 221  223  # keys: ereport.fs.zfs.pool.hostname_mismatch
 222  224  #
 223  225  msgid "ZFS-8000-EY.type"
 224  226  msgstr "Error"
 225  227  msgid "ZFS-8000-EY.severity"
 226  228  msgstr "Major"
 227  229  msgid "ZFS-8000-EY.description"
 228      -msgstr "The ZFS pool was last accessed by another system  Refer to %s for more information."
      230 +msgstr "The ZFS pool was last accessed by another system."
 229  231  msgid "ZFS-8000-EY.response"
 230  232  msgstr "No automated response will be taken."
 231  233  msgid "ZFS-8000-EY.impact"
 232      -msgstr "ZFS filesystems are not available"
      234 +msgstr "ZFS filesystems are not available."
 233  235  msgid "ZFS-8000-EY.action"
 234      -msgstr "\n\nThe pool has been written to from another host, and was not cleanly exported\nfrom the other system.  Actively importing a pool on multiple systems will\ncorrupt the pool and leave it in an unrecoverable state.  To determine which\nsystem last accessed the pool, run the 'zpool import' command:\n\n\n# zpool import\n  pool: test\n    id: 14702934086626715962\nstate:  ONLINE\nstatus: The pool was last accessed by another system.\naction: The pool can be imported using its name or numeric identifier and\n        the '-f' flag.\n   see: http://illumos.org/msg/ZFS-8000-EY\nconfig:\n\n        test              ONLINE\n          c0t0d0          ONLINE\n\n# zpool import test\ncannot import 'test': pool may be in use from other system, it was last\naccessed by 'tank' (hostid: 0x1435718c) on Fri Mar  9 15:42:47 2007\nuse '-f' to import anyway\n\n\n\nIf you are certain that the pool is not being actively accessed by another\nsystem, then you can use the '-f' option to 'zpool import' to forcibly\nimport the pool.\n\n "
      236 +msgstr "If you are certain that the pool is not being actively accessed by another system, forcibly import the pool."
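The removed long-form ZFS-8000-EY text showed the check-then-force sequence; condensed, with 'test' as a placeholder pool name:

```shell
# Show importable pools and which host last accessed each one.
zpool import

# Only after confirming no other host is actively using the pool --
# importing it on two systems at once corrupts it -- force the import.
zpool import -f test
```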
 235  237  #
 236  238  # code: ZFS-8000-FD
 237  239  # keys: fault.fs.zfs.vdev.io
 238  240  #
 239  241  msgid "ZFS-8000-FD.type"
 240  242  msgstr "Fault"
 241  243  msgid "ZFS-8000-FD.severity"
 242  244  msgstr "Major"
 243  245  msgid "ZFS-8000-FD.description"
 244      -msgstr "The number of I/O errors associated with a ZFS device exceeded\n             acceptable levels.  Refer to %s for more information."
      246 +msgstr "The number of I/O errors associated with a ZFS device exceeded acceptable levels."
 245  247  msgid "ZFS-8000-FD.response"
 246      -msgstr "The device has been offlined and marked as faulted.  An attempt\n            will be made to activate a hot spare if available. "
      248 +msgstr "The device has been offlined and marked as faulted.  An attempt will be made to activate a hot spare if available."
 247  249  msgid "ZFS-8000-FD.impact"
 248  250  msgstr "Fault tolerance of the pool may be compromised."
 249  251  msgid "ZFS-8000-FD.action"
 250      -msgstr "Run 'zpool status -x' and replace the bad device."
      252 +msgstr "Replace the bad device."
 251  253  #
 252  254  # code: ZFS-8000-GH
 253  255  # keys: fault.fs.zfs.vdev.checksum
 254  256  #
 255  257  msgid "ZFS-8000-GH.type"
 256  258  msgstr "Fault"
 257  259  msgid "ZFS-8000-GH.severity"
 258  260  msgstr "Major"
 259  261  msgid "ZFS-8000-GH.description"
 260      -msgstr "The number of checksum errors associated with a ZFS device\nexceeded acceptable levels.  Refer to %s for more information."
      262 +msgstr "The number of checksum errors associated with a ZFS device exceeded acceptable levels."
 261  263  msgid "ZFS-8000-GH.response"
 262      -msgstr "The device has been marked as degraded.  An attempt\nwill be made to activate a hot spare if available."
      264 +msgstr "The device has been marked as degraded.  An attempt will be made to activate a hot spare if available."
 263  265  msgid "ZFS-8000-GH.impact"
 264  266  msgstr "Fault tolerance of the pool may be compromised."
 265  267  msgid "ZFS-8000-GH.action"
 266      -msgstr "Run 'zpool status -x' and replace the bad device."
      268 +msgstr "Replace the bad device."
 267  269  #
 268  270  # code: ZFS-8000-HC
 269  271  # keys: fault.fs.zfs.io_failure_wait
 270  272  #
 271  273  msgid "ZFS-8000-HC.type"
 272  274  msgstr "Error"
 273  275  msgid "ZFS-8000-HC.severity"
 274  276  msgstr "Major"
 275  277  msgid "ZFS-8000-HC.description"
 276      -msgstr "The ZFS pool has experienced currently unrecoverable I/O\n          failures.  Refer to %s for more information."
      278 +msgstr "The ZFS pool has experienced currently unrecoverable I/O failures."
 277  279  msgid "ZFS-8000-HC.response"
 278  280  msgstr "No automated response will be taken."
 279  281  msgid "ZFS-8000-HC.impact"
 280  282  msgstr "Read and write I/Os cannot be serviced."
 281  283  msgid "ZFS-8000-HC.action"
 282      -msgstr "Make sure the affected devices are connected, then run\n            'zpool clear'."
      284 +msgstr "Make sure the affected devices are connected, then clear the pool's device errors."
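"Clear the pool's device errors" here corresponds to the 'zpool clear' command the old text named; a sketch, assuming a pool named 'test':

```shell
# Verify the affected devices are visible to the system again.
zpool status test

# Clear the persistent error state so I/O to the pool can resume.
zpool clear test
```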
 283  285  #
 284  286  # code: ZFS-8000-JQ
 285  287  # keys: fault.fs.zfs.io_failure_continue
 286  288  #
 287  289  msgid "ZFS-8000-JQ.type"
 288  290  msgstr "Error"
 289  291  msgid "ZFS-8000-JQ.severity"
 290  292  msgstr "Major"
 291  293  msgid "ZFS-8000-JQ.description"
 292      -msgstr "The ZFS pool has experienced currently unrecoverable I/O\n          failures.  Refer to %s for more information."
      294 +msgstr "The ZFS pool has experienced currently unrecoverable I/O failures."
 293  295  msgid "ZFS-8000-JQ.response"
 294  296  msgstr "No automated response will be taken."
 295  297  msgid "ZFS-8000-JQ.impact"
 296  298  msgstr "Read and write I/Os cannot be serviced."
 297  299  msgid "ZFS-8000-JQ.action"
 298      -msgstr "Make sure the affected devices are connected, then run\n            'zpool clear'."
      300 +msgstr "Make sure the affected devices are connected, then clear the pool's device errors."
 299  301  #
 300  302  # code: ZFS-8000-K4
 301  303  # keys: fault.fs.zfs.log_replay
 302  304  #
 303  305  msgid "ZFS-8000-K4.type"
 304  306  msgstr "Error"
 305  307  msgid "ZFS-8000-K4.severity"
 306  308  msgstr "Major"
 307  309  msgid "ZFS-8000-K4.description"
 308      -msgstr "A ZFS intent log device could not be read.  Refer to %s for more information."
      310 +msgstr "A ZFS intent log device could not be read."
 309  311  msgid "ZFS-8000-K4.response"
 310  312  msgstr "No automated response will be taken."
 311  313  msgid "ZFS-8000-K4.impact"
 312  314  msgstr "The intent log(s) cannot be replayed."
 313  315  msgid "ZFS-8000-K4.action"
 314      -msgstr "Either restore the affected device(s) and run 'zpool online',\n     or ignore the intent log records by running 'zpool clear'."
 315      -
      316 +msgstr "Either restore the affected device(s) and online them, or ignore the intent log records by clearing the pool's device errors."
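The two alternatives in the ZFS-8000-K4 action map onto the commands the removed text named; a sketch, with 'test' and 'c0t5d0' as placeholder pool and log-device names:

```shell
# Option 1: the log device is back -- bring it online so the
# intent log can be replayed.
zpool online test c0t5d0

# Option 2: give up on the unreadable log and discard its records.
zpool clear test
```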
      317 +#
      318 +# code: ZFS-8000-LR
      319 +# keys: fault.fs.zfs.vdev.not_spared
      320 +#
      321 +msgid "ZFS-8000-LR.type"
      322 +msgstr "Fault"
      323 +msgid "ZFS-8000-LR.severity"
      324 +msgstr "Major"
      325 +msgid "ZFS-8000-LR.description"
      326 +msgstr "A suitable spare was not found."
      327 +msgid "ZFS-8000-LR.response"
      328 +msgstr "No automated response will be taken."
      329 +msgid "ZFS-8000-LR.impact"
      330 +msgstr "Pool redundancy may be compromised."
      331 +msgid "ZFS-8000-LR.action"
      332 +msgstr "Replace the failed device and clear the pool's device errors."
      333 +#
      334 +# code: ZFS-8000-M2
      335 +# keys: fault.fs.zfs.vdev.dumb_spared
      336 +#
      337 +msgid "ZFS-8000-M2.type"
      338 +msgstr "Fault"
      339 +msgid "ZFS-8000-M2.severity"
      340 +msgstr "Major"
      341 +msgid "ZFS-8000-M2.description"
      342 +msgstr "Spare matching based on sparegroup and FRU has failed."
      343 +msgid "ZFS-8000-M2.response"
      344 +msgstr "No automated response will be taken."
      345 +msgid "ZFS-8000-M2.impact"
      346 +msgstr "Pool redundancy may be compromised."
      347 +msgid "ZFS-8000-M2.action"
      348 +msgstr "Inspect the activated spare to confirm that pool redundancy is intact."
    