Loads of very detailed info is available on Wikipedia. Just in case, RAID means: Redundant Array of Independent Disks.
More or less, you can create arrays of hard drives to make things run faster, safer, or bigger (pick what you need!) The safer one is what I use: RAID1. The faster one is RAID0, and if any one drive dies, bye bye your data! And the bigger one is RAID5. Those are the main levels Linux offers, so you do not have to waste your time reading about the others.
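For reference, here is how those three levels map onto the mdadm command that shows up later in the second chapter. This is only a sketch; the /dev/sdX1, /dev/sdY1 and /dev/sdZ1 names are placeholders for your own partitions.

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1              # RAID1: safer (mirror)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdX1 /dev/sdY1              # RAID0: faster (stripe, no redundancy)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdX1 /dev/sdY1 /dev/sdZ1    # RAID5: bigger (needs 3+ drives)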
[note, if you already installed your system and want to switch it into a RAID, check the next chapter]
Not too surprisingly, it's not exactly straightforward. When installing Ubuntu, it requires a few steps and these are not exactly in the order you would expect.
First of all, select "Manually edit the partition table" when asked how you want to partition the disks.
Now, you see the drives. If you do not need the data left on the drive (easier!) then just delete any existing partitions.
Now define the list of partitions that you want to create. This is important because you will do it twice. Here is an example of default partitions:
/ /boot /home swap /tmp /usr /var
I usually only use /, /home, /tmp, /var and of course a swap partition. Just in case.
The /tmp and swap partitions, I do not create as RAID, to save time. Also, the /tmp partition, these days, often remains in memory only. So if you have a failure, you are good for a reboot in many cases.
Once you know the partitions, define the size (the amount of MB, GB, TB, PB you want to assign to each partition.) Note that you only do that once since you will be duplicating that layout on both drives.
Okay, now you can start creating the partitions. You can always create them in the order you defined them, but I suggest putting the /tmp and swap partitions in the middle of the drive. That way you have less head travel (if you do not yet have flash drives, that is.)
Now, select "Create new partition". I suggest you use 3 primary partitions, the 1st one being / (or /boot), and mark it as bootable (i.e. it will boot). Then change the file system type from "Ext3 journaling file system" to "Physical volume for RAID". This may feel really weird since the partition becomes what I would call "not much". In other words, it does not give you a way to assign a directory or any other format and mount point information.
Don't worry, that comes in the next step. Repeat this step for each partition, and don't forget that having the swap on RAID is not very likely to work. So if you want to get more swap or just make it faster, avoid the duplication (it is faster to write; reading can be slower, but that will depend on your hardware.) Anyway, if you swap a lot, you may want to add more RAM.
Now that you are done with the partitions, they should all say "K raid". Look closely at the menu, at the very top (yep! usually well hidden), and you will see the entry: Configure Software RAID.
This option will let you attach the different partitions together. So if you created the exact same partitions on, say, sda and sdb, you will attach sda1 with sdb1, sda2 with sdb2, etc.
Note that only RAID partitions are shown. Also, partitions already attached together do not show up as a choice for the following attachments.
Once all the RAID partitions are attached, exit that menu and you will see a new set of entries in the list of drives. These are the software RAID drives. Hit Enter on them to edit their info.
Yes! This step looks like a normal partition setup, except that by default it says "do not use" instead of "Ext3 ...". Change the partition type and you get the mount point, the different options, etc. Change these to your liking.
Finish the installation and enjoy!
Ah! If you want to install GRUB on the 2nd drive, since that's not done automatically, you may want to do some extra work. I do not have detailed instructions here (I had a link which has now gone bye bye!)
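Just as a sketch, assuming your second drive shows up as /dev/sdb and you are on a GRUB2 based release, something along these lines should put a boot sector on it too (at your own risk):

sudo grub-install /dev/sdb     # install GRUB in the MBR of the second drive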
Also, you may want to test, test, test. Just in case.
This time around, I have to install a RAID from an existing system. Here is the story in brief, and then the steps to install my new RAID. So... I had a system break down; that is, the old computer would just auto-shutdown at random times. The first time, I thought maybe it was the UPS. But after a couple more "let's go bye bye now!" moments, it was clear that I needed a new computer.
To go fast, I built the new computer with just one drive using the default hard drive installation with something like 10 partitions. I used the Ubuntu 10.04 installer and did not even know that I was creating a new kind of system: one GPT partition, which means the system was installed to run an LVM partitioning system. At first I thought that looked okay and we could just move forward, but when I wanted to create my software RAID (I much prefer software RAID because it's much more flexible,) I had no clue how to proceed. So on the new drive, I used fdisk to create similar partitions, but created them as "Linux raid autodetect". Then I copied the first drive onto the second, and finally re-formatted the first drive and did the same again (copying from the new drive to the first via the md system.)
So...
If you are not too sure about this, know that one wrong manipulation could destroy all your data. Just be warned. If you're not sure, you may want to first back up all your important data (and even the not so important) so that you can rescue your system if required.
I will assume that you know how to open your computer and install the new drive (i.e. connect the data line and power line after you turned off the computer and power supply, etc.)
Now, reboot. You may want to stop in the BIOS to make sure that the new drive is properly recognized. At this point the new drive is just a slave (i.e. you're not booting off of it.)
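If you prefer to double check from Linux once it is up, the following should list everything the kernel sees (the new drive will typically show up as /dev/sdb if /dev/sda is your first drive):

sudo fdisk -l            # lists all drives and their partition tables
cat /proc/partitions     # lists the block devices the kernel knows about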
So first we want to partition the new drive like the first one. The partitions don't need to be exactly the same size. Just make sure it's the same total number of partitions, unless you want to drop one or two from the original drive. Obviously, the new partitions should be large enough so the data of the existing partitions will fit.
On my end, I use fdisk to partition the drives. Now, I like fdisk, but I have no clue how you can properly compute the size of a partition ahead of time. You'll have to do some testing if you use that tool too. One idea is to create one partition, Partition 1, that you then format, mount, and check to see whether it's the right size. You could write a little script for the purpose (AGAIN, KNOW THAT FORMATTING A HARD DRIVE MEANS DELETING THE DATA THAT'S ON IT. USE MY SCRIPT SAMPLE AT YOUR OWN RISK!)
sudo fdisk /dev/sdb < afdisk
sudo mkfs -t ext4 /dev/sdb1
sudo mount /dev/sdb1 /mnt/test
df -B512
sudo umount /dev/sdb1
So... My script first partitions the disk /dev/sdb (change sdb and sdb1 into whatever you need to make it work on your system!) It uses a text file named afdisk that includes the commands that are sent to fdisk to create the partition. That file needs to be edited each time you want to change the size of the partition being tested. The example creates a new partition spanning cylinders 1 to 17675 (those numbers are cylinder positions, not blocks.)
d
n
p
1
1
17675
t
fd
p
w
(one command per line, and don't include any spaces in your file!) The keys mean: (d)elete existing partition [the first time you will get an error on that one], (n)ew partition, (p)rimary partition [instead of extended], partition number (1), start cylinder (1), end cylinder (17675), (t)ype of partition, (fd) Linux RAID auto-detect, (p)rint result, (w)rite and exit.
This file expects that you have exactly one partition defined on the drive. If you messed around and created more or got some old drive, you may want to delete the other partitions first because this script will otherwise likely fail.
After the fdisk command, we run mkfs to create the file system. The type is set to ext4 in this example; you may want to change that type in your case. Note that at this point you can test even the swap partition with ext4, that's fine. We'll have to re-format it from the RAID device (i.e. /dev/md1) anyway.
The mount attaches the new partition to your file system. Notice the /mnt/test destination folder. You will need to create that folder or change the script to use the folder of your choice (newer systems like to use the folder named /media/... for extra mounts.)
The df -B512 prints out the table of mounted partitions and shows their sizes in blocks of 512 bytes, which is still the basic unit of hard drives (even though the hard drive may internally use a much larger block size like 16Kb or even 64Kb.)
Finally, once we're done with the partition, we unmount it (which is important for the next iteration to work; otherwise fdisk can change the partition and save it, but it won't be taken into account by the system.)
Okay, now that you have created your table, using (n)ew partition for each partition, we can see the resulting table using the (p)rint command. Here is how my 2Tb drive looks once partitioned:
Command (m for help): p

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x24796452

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1       12158    97659103+  fd  Linux raid autodetect
/dev/sdb2           12159       60788   390620475   fd  Linux raid autodetect
/dev/sdb3           60789       75378   117194175   fd  Linux raid autodetect
/dev/sdb4           75379      243201  1348038247+   5  Extended
/dev/sdb5           75379      111851   292969341   fd  Linux raid autodetect
/dev/sdb6          111852      112459     4883728+  fd  Linux raid autodetect
/dev/sdb7          112460      116106    29294496   fd  Linux raid autodetect
/dev/sdb8          116107      128264    97659103+  fd  Linux raid autodetect
/dev/sdb9          128265      176895   390628476   fd  Linux raid autodetect
/dev/sdb10         176896      207289   244139773+  fd  Linux raid autodetect
/dev/sdb11         207290      225526   146488671   fd  Linux raid autodetect
/dev/sdb12         225527      243201   141974406   fd  Linux raid autodetect
IMPORTANT: Don't forget to quit using the (w)rite command and not the (q)uit command since the (q)uit command discards all your changes. As long as you're modifying the right drive, using the (w)rite command won't hurt anything. If you're doing that on the wrong drive, that's really too bad... because there isn't really any way to restore the correct table (although I think they save it somewhere, but I do not know how to rescue the partition table in such a case.)
Notice that the Blocks column shows a + once in a while. I still have no clue how to eliminate that, and I'm not even sure that on today's systems this is really a problem (I think I am wasting a bit of space or something like that. Big deal on a 2Tb drive using Linux RAID auto-detect with ext4 on top...)
To mark the first partition as the boot partition (necessary as we will be booting from that device soon) use the (a) command and select 1.
Note that after creating partitions 1, 2 and 3, I create an extended partition that represents the rest of the drive. This way you can create up to 15 partitions (really only 14 usable ones since the extended partition itself only acts as a container.) If you only need 4 partitions or less, then you do not have to create an extended partition.
Frankly, I do not know... I think it's mainly habit and a way to clearly separate the different data. One reason to have /home separate is so your data can easily be copied without having to copy the system with it. Similarly, /var should be separated as it includes a lot of system data. Some people separate /boot from / which I do not do because that can cause problems (i.e. if somehow the booting system cannot mount / then you cannot boot at all.)
Also, if you want to have a swap partition, well... you need a separate partition for it (very much preferable!) Note that it is much safer to create a Linux RAID auto-detect partition for your swap partition (type fd). You could also create a standard Linux swap partition (type 82). The problem with a standard partition is when a drive fails: at that point, if that swap was in use, your system will lock up or crash. This is why I recommend that you create only Linux RAID auto-detect partitions at this point. Later, we'll format the swap partition as swap so the system can use it as such.
Now we have our new drive. Note that you do not have to format the partitions yet. First you should create the Array (RAID). Then format the /dev/md# devices. These are different from the standard /dev/sda# partitions.
First, you can check whether you already have an Array using the following command:
cat /proc/mdstat
If you do have some Array defined, it will show up with a size and a status for each partition (each /dev/md# partition.) At this point, I will assume that you do not have any Array (i.e. you're creating your first one on that system,) although there should be no problem keeping an existing Array going.
So... to get a new partition into an Array device, use the mdadm command line:
mdadm --create --verbose /dev/md3 --level=1 --force --raid-devices=1 /dev/sdb3
mdadm: /dev/sdb3 appears to contain an ext2fs file system
    size=117194172K  mtime=Wed Dec 15 17:52:18 2010
mdadm: size set to 117194048K
Continue creating array? y
mdadm: array /dev/md3 started.
This command line assigns the new Array to device /dev/md3. It marks the array as a Level 1 RAID; this is the only type I use (i.e. mirroring on each drive.) The --force makes sure that the --raid-devices=1 request is accepted. By default mdadm refuses to create an array with just one drive, but since the 2nd drive is not ready yet, we cannot attach it to the array at this time. The following is the error message you get without the --force option:
mdadm: '1' is an unusual number of drives for an array, so it is probably a mistake. If you really mean it you will need to specify --force before setting the number of drives.
Finally, the command line ends with a list of devices to include in the array. Obviously, in our case we want just the one partition. Notice that I used /dev/md3 for /dev/sdb3; that way, it makes it easy to track what's what. However, if you have multiple arrays, you just won't always be able to do that. For example, I have my /dev/sdb5 partition also duplicated on /dev/sdc1. So the numbers are not always a 1 to 1 match, which in a software RAID is just fine (this being said, both partitions must be exactly the same size; with fdisk, create partitions with the exact same number of blocks, as checked below.)
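A quick way to compare sizes is to look at /proc/partitions; the block counts of the two partitions you want to pair up should be identical (the sdb5/sdc1 pair here is just my own example, use your device names):

grep -E 'sdb5|sdc1' /proc/partitions    # compare the "blocks" column of both partitions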
Note that mdadm warns me, in this example, because I formatted the partition with ext4 before adding it to the array. This can be ignored. As I mentioned earlier, we'll have to format it again from /dev/md3.
Repeat this operation for each one of your partitions.
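In other words, you end up with one mdadm --create per partition, something like this (the device numbers are from my own layout, adapt them to yours):

mdadm --create --verbose /dev/md1 --level=1 --force --raid-devices=1 /dev/sdb1
mdadm --create --verbose /dev/md2 --level=1 --force --raid-devices=1 /dev/sdb2
# ...and so on for each remaining RAID partition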
Now we have a complete Array, but it is not yet initialized with data. First we want to format it. You can do so on the command line with mkfs as in:
mkfs -t ext4 /dev/md3
There are ways to set a label, place blocks with inode information, etc. Check out the mkfs man page or the specialized man pages such as mkfs.ext4.
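For example, assuming /dev/md5 is destined to become your /home, you could set a label right at format time (the label name is up to you):

mkfs -t ext4 -L home /dev/md5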
The swap partition requires the special command mkswap and the device name:
mkswap -L Swap\ Space /dev/md6
If the command tells you that you shouldn't do that because there is already a file system on that partition, then use the -f option.
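If you want to make sure the new swap area is usable right away (optional; the new system will pick it up from fstab later), you could turn it on by hand:

sudo swapon /dev/md6     # enable the new swap area
swapon -s                # list the swap areas currently in use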
The array is ready to receive data now. You can copy the data from your existing hard drive to the new array. Note that you don't copy to the new hard drive per se. Instead, you copy to the /dev/md1, /dev/md2... partitions. That builds the array data. It is that simple.
There are several commands that can be used to copy partitions. However, do NOT even try to use dd since a raw copy would clobber the file systems you just created on the RAID devices. Plus, you're really not likely to have the exact same sizes on both sides.
Instead, I suggest rsync (probably the simplest in this situation), dump | restore (slow), or tar c | tar x (complicated and slow). I do not recommend cp! Copying special devices with it is complicated and most often fails.
IMPORTANT NOTES: Make sure you get the source and destination right, and that you do not copy the source into a sub-folder of the destination. Also, with rsync, make sure to use the -x (--one-file-system) option and copy each partition one by one (see the sketch after the notes below.)
Obviously:
1. All the data should be copied by root to make sure that everything is preserved (i.e. creation time, modification time, owner, group, permissions and any other attributes.)
2. You should not be working on your system, or at least not in any of the partitions that were copied or the one being copied (i.e. watch a movie while copying your home folder which you probably want to copy last.)
3. If you have a server with Apache, Postfix, MySQL, etc. it is wise to turn off those daemons before the copy to avoid problems on reboot.
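Here is a minimal sketch of what one such copy could look like, assuming /dev/md3 will hold your /var and using a hypothetical scratch mount point /mnt/md3; repeat the same idea for each partition:

sudo mkdir -p /mnt/md3                 # hypothetical temporary mount point
sudo mount /dev/md3 /mnt/md3           # mount the RAID device, not the raw /dev/sdb3
sudo rsync -aHAXx /var/ /mnt/md3/      # -a preserves owners/permissions/times, -x stays on one file system
sudo umount /mnt/md3

Note the trailing slash on the source: it copies the contents of /var into the root of the new partition rather than creating a /var sub-folder there.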
Needless to say, you've already done a lot of work here! You're really close to being done with phase 1.
Remember that I asked you to make /dev/sdb1 a boot partition. Now we have to install the "boot sector" (really that's several sectors, but anyway...) This can be done using GRUB directly on the RAID device. This has been possible since 8.10 (and was apparently backported to 8.04.) So if you have 10.04 or 10.10, it will work as is.
sudo grub-install /dev/md0
Note that the copy (rsync) you've done before will include all the data necessary for grub-install to work. Although that data will be found via the main drive, since both drives are alike it will work just fine.
One last thing you have to do on your new drive: edit the /etc/fstab to switch from the existing /dev/sda partitions to the new drive partitions. If your fstab currently uses UUID=... you can replace them with /dev/md1, /dev/md2, etc. Or determine the UUID of each Array partition using the following command line:
ls -l /dev/disk/by-uuid
Or use the vol_id command to determine the UUID of a specific partition. Note that I had problems using UUIDs: when you change drives in a certain way, the UUID changes too. That means the mounting fails on the next reboot, forcing you to grab a Linux Live CD to go edit the file... Not too good if you ask me. In contrast, your /dev/md1 device will not change (but I think they created the UUID feature because they had the problem the other way around... go figure!)
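As an illustration only (the device to mount point mapping is from my own setup, yours will differ), the edited fstab entries end up looking something like this:

# <file system>  <mount point>  <type>  <options>           <dump>  <pass>
/dev/md1         /              ext4    errors=remount-ro   0       1
/dev/md3         /var           ext4    defaults            0       2
/dev/md6         none           swap    sw                  0       0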
Okay... time to test now. So reboot, and in your BIOS make sure to swap the boot sequence to the new RAID drive. You will later be able to swap it back. If necessary, swap the drive connectors after you have turned off the computer, although modern BIOSes can boot from any drive and Linux is a smart system that can continue its boot sequence without messing up.
Once you have rebooted, verify that everything works. You should know better than I what you have to test on your system. Especially, make sure you are running on the /dev/md# devices (i.e. use the df command.)
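A quick sanity check could be as simple as this:

df -h | grep /dev/md     # your real partitions should now show up as /dev/md# devices
cat /proc/mdstat         # and the arrays should all be listed as active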