RAID Question

I have a non-Linode related question I was hoping to get answered by someone who's an expert in mdadm. I have (or had) a RAID5 array with four 250GB drives. While I was poking around inside the computer with it running, the system froze and I had to hard reset it. After booting back up, the RAID array appears to be messed up: it looks as if one of the drives was removed. Here's what mdadm says:

# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Dec 27 08:47:02 2007
     Raid Level : raid5
    Device Size : 244198464 (232.89 GiB 250.06 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Feb 11 22:04:35 2009
          State : active, degraded, Not Started
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 654d2ad9:ca55af99:8b394a0b:cda00542
         Events : 0.5

    Number   Major   Minor   RaidDevice State
       0      33        0        0      active sync   /dev/hde
       1      33       64        1      active sync   /dev/hdf
       2      34        0        2      active sync   /dev/hdg
       3       0        0        3      removed

and this is what dmesg says:

# dmesg | grep md0
md: md0 stopped.
md: md0: raid array is not clean -- starting background reconstruction
raid5: cannot start dirty degraded array for md0
raid5: failed to run raid set md0

and some more info

# mdadm -E /dev/md0
mdadm: No md superblock detected on /dev/md0.

Do I still have hope, or did I lose all my data? I've stopped messing around with it before I do something bad and mess it up even more (if that's even possible).

Any help would be greatly appreciated.

  • George

2 Replies

I found this on a forum. Would it help in my case? I really don't know much about mdadm and I want to make sure I don't mess things up any more than they already are. Maybe changing the state manually isn't that bad, though? I don't know. Anybody?

[root@ornery ~]# cat /sys/block/md0/md/array_state
inactive
[root@ornery ~]# echo "clean" > /sys/block/md0/md/array_state
[root@ornery ~]# cat /sys/block/md0/md/array_state
clean
[root@ornery ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdc1[1] sdi1[7] sdh1[6] sdg1[5] sdf1[4] sde1[3] sdd1[2]
      2344252416 blocks level 6, 256k chunk, algorithm 2 [8/7] [_UUUUUUU]

unused devices: <none>
[root@ornery ~]# mount -o ro /dev/md0 /data
[root@ornery ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda2             226G   46G  168G  22% /
/dev/hda1             251M   52M  187M  22% /boot
/dev/shm              2.9G     0  2.9G   0% /dev/shm
/dev/sda2              65G   35G   27G  56% /var
/dev/md0              2.2T  307G  1.8T  15% /data
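
If that trick applies here, I'm guessing the equivalent for my four-drive RAID5 would look roughly like this (just a sketch based on the snippet above; I haven't run any of it yet, and /mnt/raid is only a placeholder mount point):

# cat /sys/block/md0/md/array_state            # expect "inactive" while the dirty, degraded array refuses to start
# echo "clean" > /sys/block/md0/md/array_state
# cat /proc/mdstat                             # array should now show as active but degraded, [4/3]
# mount -o ro /dev/md0 /mnt/raid               # mount read-only first, just to check the data is intact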

Is the "removed" drive still present? Is it listed when you run fdisk -l? If it's not, then you probably need to start looking at the hardware and finding out why Linux doesn't know it's there anymore…
