hard drive sector testing on Ubuntu
lazer100
lazer100 at talktalk.net
Wed Oct 10 08:18:48 UTC 2012
On 04-Oct-12 19:28:07 Steve Flynn wrote:
>On 4 October 2012 10:43, lazer100 <lazer100 at talktalk.net> wrote:
>>>Also note that by default, badblocks repeats the test with 4 different
>>>patterns. On a modern >1TB drive, that ... can take a while, so I would
>>>usually use -t 0xaa (or -t random) to only do 1 pass.
>>
>> there is an option to set the number of passes, but the program rejected
>> the command line when I tried this!
>What command did you issue?
$
$ sudo badblocks -p=1 -sn /dev/sdc
badblocks: invalid number of clean passes - =1
$
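looking at the man page again, I think the problem is just that badblocks
wants the value as a separate argument rather than the --option=value
style, so presumably

$ sudo badblocks -p 1 -sn /dev/sdc

would be accepted. (also, as far as I can tell -p is the number of
consecutive clean passes required before it stops, not the number of test
patterns - the patterns seem to be what -t controls, as you suggested
above.)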
>>>Do not even think of doing this on an SSD. Probably not a good idea on
>>>those hybrid drives either. Reasons are outside the scope of this quick
>>>e-mail, just don't do it.
>>
>> now I'm curious what the reasons are!
>SSDs have a limited number of read/write cycles. Running badblocks
>against one for any length of time would do little other than burn up
>its life.
when you say limited number, what order of magnitude?
by SSD I assume you mean solid state drive?
are SD cards regarded as SSD?
one camera uses an SDHC card and another uses SD.
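is there an easy way to see how much of an SSD's life has been used up?
I gather smartctl from the smartmontools package can show a wear attribute
on many drives, e.g.

$ sudo smartctl -A /dev/sda

though the attribute name seems to vary by vendor (Wear_Leveling_Count,
Media_Wearout_Indicator and so on), and /dev/sda here is just a
placeholder.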
>> not sure what you mean by a hybrid drive.
>A spinning platter drive backed up by a smaller SSD on-board. Heavily
>used blocks are cached into the SSD, thereby speeding up access to
>them.
>> the drive would appear to have removed the bad sectors, but I just wonder
>> what happens if a bad sector is part of a file. The drive cannot just
>> replace that sector as that part of the file would become junk.
>>
>> or are the bad sectors removed on a write?
>Bad blocks are found on a write. The data is written to the block, and
>then read back and compared to the write buffer.
the hardware always does this verify test?
>If the two match, the
>data is good on the drive and the next block is written. If the two
>differ, that block cannot be trusted so it's moved into the badblocks
>list on the drive and the data is written to another block where the
>process repeats. This is why you can hear a failing drive struggling
>to write blocks cleanly and the heads zipping about. It's also why
>failing drives get slower - they spend more time trying to find a good
>place to stash the data.
what is needed is a way to specify a limit on the number of retries,
so you know it's time to replace the drive!
presumably eventually the drive runs out of spare sectors
(unless the drive is a portal to another universe)
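I suppose the drive's own SMART counters give some idea of how many spares
have already been used - assuming smartmontools is installed, something
like

$ sudo smartctl -A /dev/sdc | grep -i -e reallocated -e pending

should show the Reallocated_Sector_Ct and Current_Pending_Sector
attributes (if the drive reports them).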
>It's not coincidence that people normally think there's something
>wrong with a failing drive because it "sounds funny" and "it's much
>slower".
>Of course, If a block goes "bad" (maybe from a head touchdown) after
>it's already got data on it, then you are indeed looking at a trashed
>file.
>> if the drive has started to malfunction I'm a bit wary about continuing
>> to use the drive,
>I would be too. Only way to find out is to run badblocks in write mode
>and see how many (if any) blocks are written to the badblocks list. On
>big drives, this takes some time, as you have discovered.
I'll have to delay trying this because it takes so long.
As badblocks has so many alternative usages,
what command line do you recommend?
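my guess, going by the -t 0xaa suggestion earlier in the thread, would be
something like

$ sudo badblocks -wsv -t 0xaa -o sdc-badblocks.txt /dev/sdc

for the destructive write test (which wipes everything on the drive), or

$ sudo badblocks -nsv -o sdc-badblocks.txt /dev/sdc

for the non-destructive read-write mode, which keeps the data but is
slower - but I'd welcome a correction if that isn't what you had in mind.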
>> would you say that zeroing a modern drive would then remove the bad sectors?
>It would re-write the badblocks list, but as soon as those blocks are
>written to again (and they are still bad) they'd be put back onto the
>badblocks list.
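(by zeroing I assume you mean something along the lines of

$ sudo dd if=/dev/zero of=/dev/sdc bs=4M

i.e. writing zeros over every block, which would give the drive a chance
to remap anything it can't write cleanly - at the cost of destroying all
the data on it.)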