Designing Today's SANs


Storage networking has undergone fundamental and rapid change since its introduction a decade ago. The first storage area networks (SANs) were primarily based on Fibre Channel arbitrated loop, a topology very similar to Token Ring. These configurations were fairly small, with just a few servers and storage devices on a single shared network. Today, enterprise SANs are typically constructed with multiple large Fibre Channel directors and may provide shared connectivity to thousands of servers, large storage arrays and centralized tape libraries. Designing storage networks has become an extremely complex undertaking. Fortunately, new technologies have evolved to assist the storage architect in designing and implementing today's sophisticated SANs. SAN management platforms, SAN security features, SAN routing, dynamic partitioning, IP-enabled SAN transports and iSCSI now provide a rich set of tools for designing storage networks that align with and reinforce customer business application requirements. In addition, SAN technology is becoming more affordable for departments and medium and small business operations, which can now leverage shared storage to reduce administrative overhead and streamline storage operations.

SAN Design Considerations
Selection of the appropriate host connectivity, SAN interconnect, storage devices and software requires careful analysis of your current and future application requirements. A configuration for one department or application may be inappropriate for another. iSCSI, for example, is suitable for moderate- to low-performance applications where low cost of connectivity is a budgetary priority. iSCSI is probably not the best choice, though, for high-performance applications that require high availability and the fastest throughput. Some common guidelines should be followed to ensure that the technical solution that is put into place aligns with business goals and will adapt to your growing storage requirements over time.

Know your application
SAN technology provides the infrastructure to support reliable, robust and high performance access to storage data by upper layer applications. Ironically, however, although customers may spend millions of dollars implementing an enterprise SAN, few customers know what their upper layer applications actually require. Bandwidth may be over-provisioned, storage capacity under-utilized, or security issues overlooked.

Current Fibre Channel speeds, for example, are at 2 gigabits per second, soon to be increased to 4 gigabits per second. The vast majority of business applications, however, require far less than 1 gigabit per second for efficient operation. And yet, customers often deploy Fibre Channel-attached servers with redundant, 2 Gbps host bus adapters. Why? Because vendors of both Fibre Channel HBAs and switches will readily sell 2 Gbps products at the same cost that 1 Gbps products previously commanded. Increasing the speed of connectivity has thus kept Fibre Channel pricing at a fairly consistent level, instead of the gradual decline in costs one might anticipate over time. Although higher speeds are quite useful for simplifying cabling between Fibre Channel switches or directors and for providing higher bandwidth to storage ports, most applications do not need or benefit from multi-gigabit throughput at the server.
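The mismatch between link speed and application demand can be made concrete with some back-of-the-envelope arithmetic. The figures below are illustrative assumptions, not measurements from any particular environment:

```python
# Rough utilization of a 2 Gbps Fibre Channel link by a typical
# business application. All inputs are illustrative assumptions.

link_gbps = 2.0                    # raw line rate of a 2 Gbps HBA
link_mb_s = link_gbps * 1000 / 8   # ~250 MB/s ceiling (ignoring 8b/10b encoding overhead)

app_mb_s = 40.0                    # assumed sustained demand of a typical business app

utilization = app_mb_s / link_mb_s
print(f"Link ceiling: {link_mb_s:.0f} MB/s")
print(f"App demand:   {app_mb_s:.0f} MB/s -> {utilization:.0%} of the link")
```

Even with a generous demand estimate, the application occupies only a small fraction of the link, which is the over-provisioning the paragraph above describes.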

Understanding application performance and availability requirements can help the storage architect classify server connectivity and match technology to the application. High availability mechanisms such as alternate pathing are well-established in Fibre Channel but have not demonstrated maturity in iSCSI. If an upper layer application is critical to business operations, it will typically be hosted on an enterprise-class server platform with redundant Fibre Channel HBAs for fail-over. Although 2 Gbps connectivity may be excessive for the application's needs, there is value in providing proven fail-over capability and Fibre Channel's efficient fabric switching support. The higher performance 2 Gbps driven at the storage port likewise gives the SAN designer more flexibility in deciding how many hosts can be configured to a given storage port (i.e., the fan-in ratio).
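A first-order estimate of the fan-in ratio mentioned above divides the storage port's bandwidth ceiling by the average sustained demand per host. This is only a sizing sketch with assumed numbers; real designs would also account for burst behavior and queue depths:

```python
# First-order fan-in (hosts per storage port) sizing sketch.
# Both inputs are assumptions for illustration.

storage_port_mb_s = 250.0   # a 2 Gbps storage port, ~250 MB/s ceiling
avg_host_mb_s = 25.0        # assumed average sustained demand per attached host

fan_in = int(storage_port_mb_s // avg_host_mb_s)
print(f"Supportable fan-in ratio: {fan_in}:1")   # -> 10:1 with these assumptions
```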

If, by contrast, an application is less critical to business operations, it may not require high availability or high performance connectivity. Second-tier servers, for example, are typically lower cost platforms running business applications that can withstand some outages. In many enterprise environments, these second-tier servers still have direct-attached storage (DAS) and a separate tape backup process.

If the server itself only costs $3,000 to $5,000, it's difficult to justify installing one or more $1,500 HBAs for shared storage access. This class of applications is an ideal candidate for iSCSI connectivity. Intel-based platforms running Windows or Linux can be economically incorporated into the SAN by using iSCSI device drivers (for free), iSCSI accelerator cards (~$500) and iSCSI-to-Fibre Channel gateways. The investment in the total SAN infrastructure is thus amortized over a much higher population of servers, and administrative overhead for second-tier servers can be dramatically reduced.
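Using the prices cited above, the per-server attach cost comparison works out roughly as follows (the server cost is taken as the midpoint of the stated range):

```python
# Per-server attach cost comparison using the figures cited in the text.

server_cost = 4000       # midpoint of the $3k-$5k second-tier server range
fc_hba_cost = 1500       # per Fibre Channel HBA (article figure)
iscsi_accel = 500        # iSCSI accelerator card (~$500, article figure)
iscsi_driver = 0         # software iSCSI initiator is free

fc_dual_path = 2 * fc_hba_cost   # redundant HBAs for fail-over
print(f"FC dual-HBA attach:  ${fc_dual_path} ({fc_dual_path / server_cost:.0%} of server cost)")
print(f"iSCSI accel attach:  ${iscsi_accel} ({iscsi_accel / server_cost:.0%} of server cost)")
print(f"iSCSI software-only: ${iscsi_driver}")
```

Redundant Fibre Channel attach approaches the cost of the server itself, while iSCSI attach is a small fraction of it, which is the economic argument for tiering connectivity.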

Knowing the requirements of upper layer applications enables the SAN architect to cost-effectively design solution sets that deliver various levels of performance and availability. As new applications are introduced, it is then a matter of selecting the most appropriate connectivity. This server strategy mirrors the new industry focus on classes of storage and the introduction of more economical storage arrays based on Serial ATA (SATA) or Serial-attached SCSI (SAS). Cost, which was previously the main barrier to entry for secondary applications, is thus no longer an absolute limit for extending the benefits of shared storage throughout the enterprise.

Tiered SAN Infrastructures
The first-generation SANs were typically based on flat fabrics, with one or more Fibre Channel directors or switches connected by expansion ports (E_Ports). As more ports were required, an additional switch would be added to the SAN. For high availability within the SAN transport, multiple switches would be configured in a meshed configuration, with each switch attached to multiple neighbors. In some cases, customers would end up with 30 or more switches in a single fabric in an attempt to provide adequate ports for servers and storage.

The difficulty of implementing large flat fabrics (essentially bridged networks) is in maintaining stability of the entire configuration. In a large multi-switch fabric, state change notifications may be broadcast throughout the fabric when a storage device enters or leaves the fabric. The introduction of a new switch may trigger a disruptive fabric reconfiguration, causing all storage transactions to be momentarily suspended. Convergence time is also an issue, as it takes longer for multiple switches to exchange switch-to-switch protocols, build the fabric and stabilize.

For large fabrics today, the recommended architecture is a tiered infrastructure using central core directors and a fan-out of additional departmental directors or switches to support storage and servers. As shown in Figure 1, this tiered structure supports multiple departments while enabling sharing of large centralized assets such as tape libraries. Since more storage data remains local to each departmental application, high performance is provided on a department by department basis. At the same time, departments may need to share some data or resources, and the core director becomes the transit point for shared data and resources within the data center.
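The port arithmetic behind a core-edge design is straightforward: inter-switch links (ISLs) consume ports at both the core and the edge, and what remains is available for servers and shared assets. The topology parameters below are illustrative assumptions, not a recommended configuration:

```python
# Usable-port arithmetic for a simple core-edge (tiered) fabric.
# All topology parameters are illustrative assumptions.

core_ports = 64        # ports on the core director
edge_switches = 8      # departmental edge switches fanned out from the core
edge_ports = 32        # ports per edge switch
isls_per_edge = 2      # redundant ISLs from each edge switch to the core

core_used_for_isls = edge_switches * isls_per_edge
core_free = core_ports - core_used_for_isls       # left for shared assets (tape, arrays)
edge_usable = edge_switches * (edge_ports - isls_per_edge)

print(f"Core ports consumed by ISLs:    {core_used_for_isls}")
print(f"Core ports for shared assets:   {core_free}")
print(f"Edge ports for servers/storage: {edge_usable}")
```

The same arithmetic makes it easy to see when adding another edge switch would exhaust the core's ISL budget.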

Figure 1: Building a tiered SAN infrastructure that scales throughout the enterprise

In addition to sharing of assets between various departments, it may be desirable to segregate departments for security or data isolation. Core directors may support dynamic partitioning so that, for example, a large 256-port director can be divided into functionally independent SANs. Hardware-enforced partitioning ensures that a reset of one partition does not affect the others, even though all partitions are housed in the same director chassis. This allows, for example, human resources and engineering to be connected to the same director, with no possibility of data exposure between the two.

The complement to partitioning for asset isolation is SAN Routing for asset sharing. A large data center SAN configuration may include one or more SAN Routers so that both selective isolation and selective sharing can be performed. In the example given above, it may be necessary to absolutely isolate sensitive human resource data from engineering and yet enable both departments to share a data center tape library for backup. With partitioning alone, it would not be possible to share centralized assets from two logically independent SANs. With SAN Routing, however, it is now possible to selectively share designated assets while still maintaining the independence and autonomy of each departmental SAN. Only those centralized assets that have been specifically authorized by the SAN administrator would be made visible to each partitioned SAN.
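The visibility model described above, where a device in one SAN is seen by another SAN only if an administrator has explicitly authorized it, can be sketched as a simple lookup. The names and data structure here are entirely hypothetical, purely to illustrate the policy:

```python
# Toy model of SAN Routing's selective sharing: a device owned by one SAN
# is visible to another SAN only if explicitly authorized by the
# administrator. Names and structure are hypothetical illustrations.

authorized_exports = {
    # (owning SAN, device) -> set of SANs allowed to see it
    ("datacenter", "tape_library"): {"hr", "engineering"},
    ("hr", "payroll_array"): set(),   # sensitive data, never shared
}

def visible(device_key, requesting_san):
    owner, _device = device_key
    if requesting_san == owner:
        return True   # a SAN always sees its own devices
    return requesting_san in authorized_exports.get(device_key, set())

print(visible(("datacenter", "tape_library"), "hr"))     # True: shared backup asset
print(visible(("hr", "payroll_array"), "engineering"))   # False: isolated
```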

Enterprise-wide storage networking
Fibre Channel is a channel architecture, designed for very high performance and reliability within the fairly small circumference of the data center. It was not originally designed to span long distances or to provide fault isolation between multiple remote sites. Large companies, however, typically have multiple branch locations or data centers, often geographically dispersed and sometimes international in reach. Although native Fibre Channel was not designed to support enterprise-wide storage networking, new technologies can now drive storage data between multiple locations separated by thousands of kilometers.

The wide area network (WAN) depicted in Figure 1 may be dark fiber, DWDM, SONET, ATM, Gigabit Ethernet or a routed IP network. Previously, native Fibre Channel extension did not support multi-point connectivity and could not be driven beyond metropolitan distances. With new IP storage protocols, much greater distances can be accommodated for enterprise storage. The Internet Fibre Channel Protocol (iFCP), for example, is a natively routable IP protocol that supports multi-point connectivity. It also includes network address translation (NAT) that enables SAN Routing for fault isolation between connected SANs. Customers today are able to deploy storage networking technology that can drive block data between the US and Europe, or Europe and Asia, so that even multi-national companies can share storage data regardless of geography.

This new capability is enabling new types of storage applications. Branch offices, for example, have previously performed local tape backup, but with few means to monitor whether backups were actually performed or whether local tapes were restorable. Storage over distance now enables those remote branch offices to back up to a central data center, where both the integrity of the backup and its restorability can be verified. This is helping companies comply with government regulations for data integrity and accessibility while giving IT managers more control over dispersed assets.

Likewise, disaster recovery (DR) planning can now be extended throughout the enterprise. Previously, disaster recovery solutions were limited by the distance restrictions of native Fibre Channel extension. Performing DR within limited metropolitan distances often did not provide a sufficient circumference of safety against natural or political disturbances. Now it is possible to drive asynchronous disk-to-disk data replication over thousands of kilometers, well outside an area of potential disruption. Consequently, companies may now consider using synchronous replication within a local geography and supplemental asynchronous replication well outside a danger zone. In the event of a regional power outage or other mishap, a safe copy of data is secured in some other region, country or continent.
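The reason synchronous replication stays within a local geography is propagation delay: every write must wait for a round trip to the remote array before it is acknowledged. A standard rule of thumb is roughly 5 microseconds per kilometer one way in optical fiber; the sketch below applies that figure (an approximation, ignoring equipment and protocol latency):

```python
# Why synchronous replication is distance-limited: each acknowledged write
# waits for a round trip to the remote array. ~5 us/km one way in optical
# fiber is a common rule of thumb; equipment latency is ignored here.

us_per_km_one_way = 5.0

def sync_write_penalty_ms(distance_km):
    # round-trip propagation delay added to every acknowledged write
    return 2 * distance_km * us_per_km_one_way / 1000.0

for km in (50, 300, 3000):
    print(f"{km:>5} km -> +{sync_write_penalty_ms(km):.1f} ms per write")
```

At metropolitan distances the penalty is a fraction of a millisecond, but at intercontinental distances it reaches tens of milliseconds per write, which is why the long leg of a multi-site design is asynchronous.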

The evolution of SAN technology continues to provide richer functionality, more flexibility and greater economies for both large and small SAN implementations. Today, SAN technologies can be sized to particular application requirements so that both mission-critical and secondary business applications can be cost-effectively serviced. Designing very large data center SANs is enhanced by dynamic partitioning and SAN Routing technologies, while IP storage facilitates deployment of enterprise-wide storage networking on a global scale. The continued development of SAN management tools, storage virtualization and new classes of storage will provide more value for SAN solutions and enable more powerful and more productive use of shared storage in the coming years.

McData is exhibiting at Storage Expo, the UK's largest and most important event dedicated to data storage. Now in its 5th year, the show features a comprehensive free education programme and over 90 exhibitors at the National Hall, Olympia, London, 12-13 October 2005.
