Imagine you are driving to your favorite park, but the only road leading there is blocked by a fallen tree. You would be stuck, right? In the world of computers, we solve this problem by building extra roads. This is exactly what multipathing does for your Linux servers!
Multipathing is a clever technique used in Linux to provide redundancy and high availability for storage devices. Instead of having just one physical connection between a server and its data, multipathing allows the system to access a single storage device through several different physical paths. This is commonly used with Storage Area Networks (SANs), over transports such as iSCSI, Fibre Channel, and SAS.
The primary goal of multipathing is to ensure your data is always reachable. If a network card fails, a cable gets unplugged, or a switch breaks, traffic automatically switches to a different path without interrupting the applications using the storage. This is called “failover.” Additionally, multipathing helps with “load balancing,” which means it can spread I/O across multiple paths to improve throughput. Finally, it ensures “high availability,” keeping critical storage accessible even during hardware repairs.
To understand how this works, we need to look at Device Mapper Multipath (DM Multipath). This is a framework in the Linux kernel that detects all the different paths leading to the same storage device. It then bundles them into one single “logical” device, usually found under /dev/mapper (for example, /dev/mapper/mpatha). When you save a file, the system talks to this logical device, and the multipath driver decides which physical road to take.
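You can see this layering for yourself on a machine where multipathing is already set up. A short sketch, assuming a logical device named mpatha backed by two example path disks (your device names will differ):

```shell
# Show the block-device tree: the path disks (e.g. sdb and sdc)
# appear with the same mpatha device stacked on top of each
lsblk /dev/mapper/mpatha

# Inspect the device-mapper table; the target type "multipath"
# confirms the dm-multipath driver is handling this device
dmsetup table mpatha
```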
There are four key components you should know about. First is the Device Mapper, the kernel framework that maps one block device to another. Second is multipathd, which is a “daemon” or a background program that monitors the paths and handles errors. Third is the multipath.conf file, where all the rules and settings are kept. Finally, there is the dm-multipath kernel driver itself, which does the heavy lifting of moving data.
Setting this up on a modern system like RHEL 10 involves two main parts: the Target and the Initiator. The “Target” is the server that holds the data, and the “Initiator” is the client machine that wants to use it.
On the Target server, we use a tool called targetcli. First, we create a virtual disk using a command like dd to make a 5GB file. Then, we use the targetcli utility to create a “Backstore” and a “LUN” (Logical Unit Number). Think of a LUN as a specific slice of storage. We must also set up “Portals,” which are the IP addresses the client will use to connect. In RHEL 10, it is important to configure firewall rules using firewall-cmd to allow iSCSI traffic, or the machines won’t be able to talk to each other.
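The steps above can be sketched as a command sequence. Everything here is illustrative: the file path, target IQN, and portal IP addresses are example values, and depending on your targetcli version you may need to remove the default 0.0.0.0 portal before adding specific ones.

```shell
# 1. Create a 5 GB backing file for the virtual disk (example path)
dd if=/dev/zero of=/var/lib/iscsi/disk0.img bs=1M count=5120

# 2. Register it as a fileio backstore and build the target
targetcli /backstores/fileio create name=disk0 file_or_dev=/var/lib/iscsi/disk0.img
targetcli /iscsi create iqn.2025-01.com.example:target0

# 3. Export the backstore as a LUN
targetcli /iscsi/iqn.2025-01.com.example:target0/tpg1/luns \
    create /backstores/fileio/disk0

# 4. Create two portals on different networks -- these become the two paths
targetcli /iscsi/iqn.2025-01.com.example:target0/tpg1/portals create 192.168.10.10 3260
targetcli /iscsi/iqn.2025-01.com.example:target0/tpg1/portals create 192.168.20.10 3260
targetcli saveconfig

# 5. Allow iSCSI traffic through the firewall
firewall-cmd --permanent --add-port=3260/tcp
firewall-cmd --reload
```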
On the Initiator (client) side, we need to install two packages: iscsi-initiator-utils and device-mapper-multipath. Once installed, we use the iscsiadm command to “discover” the storage. If everything is connected correctly, the command will show the same target reachable through multiple IP addresses. After logging in, we enable multipathing using the mpathconf --enable command and start the multipathd service.
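On the client, the whole sequence looks roughly like this (the portal IP addresses are the example values from the target setup; adjust them to your own network):

```shell
# Install the iSCSI initiator tools and the multipath tools
dnf install -y iscsi-initiator-utils device-mapper-multipath

# Discover the target through both portals -- each one is a separate path
iscsiadm -m discovery -t sendtargets -p 192.168.10.10
iscsiadm -m discovery -t sendtargets -p 192.168.20.10

# Log in to every discovered portal
iscsiadm -m node --login

# Generate /etc/multipath.conf with sane defaults and start the daemon
mpathconf --enable --with_multipathd y
systemctl enable --now multipathd
```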
You can verify your work by running lsblk or multipath -ll. You will see that even though there are two or more physical disks listed (like sdb and sdc), they are both linked to one multipath device. If you manually turn off one of the network paths, the storage will remain active. This proves that your “backup roads” are working perfectly!
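To test the failover described above, you can deliberately take down one of the paths and watch multipathd react. The interface name below is an example; substitute whichever NIC carries one of your iSCSI paths:

```shell
# Both path disks and the logical device should be visible
lsblk
multipath -ll

# Simulate a path failure by downing one interface (example name)
ip link set eth1 down

# The storage stays usable; the lost path is typically reported
# as faulty in the multipath status output
multipath -ll
```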
The multipath.conf file allows for even more control. You can use a “blacklist” to prevent the system from trying to multipath your local hard drive (like sda), which usually only has one path anyway. You can also enable “user friendly names” so that instead of a long hexadecimal identifier (the WWID), you see a simple name like mpatha.
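A minimal /etc/multipath.conf illustrating both ideas might look like this (the device name in the blacklist is an example; match it to your actual system disk):

```
defaults {
    user_friendly_names yes    # show mpatha instead of the raw WWID
    find_multipaths     yes    # only claim devices that really have multiple paths
}

blacklist {
    devnode "^sda$"            # never multipath the local system disk
}
```

After editing the file, reload the daemon with `systemctl reload multipathd` so the new rules take effect.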
By mastering multipathing, you are learning how to build “fault-tolerant” systems. This means your computer systems are strong enough to keep working even when parts of them break. This is a vital skill for anyone who wants to manage big data centers or cloud servers. Now that you understand the basics of iSCSI targets and initiators, I recommend practicing this in a virtual environment. Try creating different scenarios where you “break” a connection to see how fast the system recovers. For your next step, you might want to look into “Network Bonding,” which does something similar but for your network interfaces instead of your storage paths!
