Category:

Fixing software RAID mount problems

linux-security

In this post I will show how to fix the issue of a software RAID array being renamed, and walk through replacing a failed hard disk in the array.

After creating the array and rebooting the system, the array comes up under a different name.

fdisk -l

The output in the console:

Disk /dev/md127: 10 GiB, 10726932480 bytes, 20951040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x38ff5a9e
Device       Boot Start      End  Sectors Size Id Type
/dev/md127p1       2048 20951039 20948992  10G 83 Linux

The same name shows up in the output of the blkid command.

On newer systems, an array that cannot be matched against the mdadm configuration at boot is given a fallback device number instead, starting at 127 and counting down (hence /dev/md127).

To update the RAID name, run in the console:

update-initramfs -u

(-u regenerates the initramfs for the currently running kernel)

After rebooting the system, the RAID comes up under its proper name.
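On Debian and Ubuntu the name only survives reboots if the array is also listed in /etc/mdadm/mdadm.conf before the initramfs is rebuilt; mdadm --detail --scan prints a suitable line. A sketch of the entry, using the name and UUID from this post (treat the exact fields as illustrative for your own setup):

```
# /etc/mdadm/mdadm.conf -- append the line printed by `mdadm --detail --scan`,
# then run update-initramfs -u again:
ARRAY /dev/md0 metadata=1.2 name=ubuntu-server:0 UUID=d988e415:1b7f30f9:3d246a5d:70ed725e
```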

If a hard disk in the RAID fails, the system reports an error at boot.

Press S to skip or M for manual recovery (on newer versions, press Enter for a maintenance shell or Ctrl+D to continue booting).

Or you can restart the server and skip the error message.

Check the RAID:

cat /proc/mdstat

You will see that the RAID is inactive.

It must be stopped:

mdadm --stop /dev/md0

(this can be done from recovery mode with root privileges)

Then try to assemble it again by running:

mdadm --assemble --scan

You can then mount the array manually or reboot the server.

To add a new disk to the RAID:

Add an empty disk and start the server.

After a reboot everything comes up as before, but when you run:

cat /proc/mdstat

you can see that the RAID is degraded and running on only one disk:

root@ubuntu-server:~# cat /proc/mdstat 
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb1[0]
      10475520 blocks super 1.2 [2/1] [U_]
      
unused devices: <none>

We can see the details:

mdadm --detail /dev/md0

From the console:

root@ubuntu-server:~# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Aug 21 16:53:34 2018
        Raid Level : raid1
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent
       Update Time : Tue Aug 21 23:23:07 2018
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : resync
              Name : ubuntu-server:0  (local to host ubuntu-server)
              UUID : d988e415:1b7f30f9:3d246a5d:70ed725e
            Events : 47
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       -       0        0        1      removed

Partition the new hard drive the same way as when the array was created (Link) (partition type fd, Linux RAID autodetect).

After the partition is created, add it to the RAID:

mdadm /dev/md0 --add /dev/sdc1

The output from the console:

root@ubuntu-server:~# mdadm /dev/md0 --add /dev/sdc1 
mdadm: added /dev/sdc1

Then check the RAID again:

root@ubuntu-server:~# cat /proc/mdstat 
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc1[2] sdb1[0]
      10475520 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>


Posted: 2018-09-30
