Storage 2

Growing together
Current trends in storage networking and storage systems
The fence between the SAN and NAS camps is getting more and more holes. Quite a few vendors are active on both sides of the border, and some are even trying to tear down the barriers altogether.

For a long time, SAN and NAS were regarded as opposites, and users had to take sides: either one or the other. This was due on the one hand to the different technical approaches and on the other to the interests of individual manufacturers who had originally opted for one variant of storage networking and promoted it loudly in their marketing. Only later did each side add one function or the other, which gradually dissolved the original conflict, sometimes even combining both in a single device. Today almost all manufacturers have almost everything on offer. Whenever start-ups with exclusive offerings managed to move out of their market niche, EqualLogic, LeftHand Networks or Data Domain for example, they were quickly bought up by one of the industry giants. What has not changed to this day is that users still consider SAN and NAS to be opposites, a late triumph of sustained marketing campaigns, if you will. The best example is NetApp (formerly Network Appliance), which users still generally identify with NAS products, although the company now offers far more, and would like to offer even more, as the failed acquisition of Data Domain showed, a contest that competitor EMC ultimately won. NetApp co-founder Dave Hitz had already put it this way in a 2005 interview with Project 57: "NAS and SAN are like two flavors of ice cream, say chocolate and strawberry. And everything a customer needs to know about the two techniques is simply that both are used to store data. What sensible person would be bothered that someone else does not like chocolate but prefers strawberry ice cream?"

Declared dead: Direct Attached Storage
What both approaches have in common is that they free the mass storage, the hard disk drives, from the immediate vicinity of the server. Electronic data is not just processed once by applications and processes, but is usually then kept on storage media for a longer period. In the early days of computing these were magnetic tapes, which have since been largely replaced by hard disks [2]. Direct Attached Storage (DAS) was the norm: from the mainframe to the application server, the storage device was attached directly to the machine, which in turn was accessed by individual users (clients) over a corporate network or Local Area Network (LAN). There were several reasons for moving the disks out of the server, or out of a server-attached storage device (the term Direct Attached Storage covers both), to a more distant location reachable only via a network connection. A fixed assignment of servers to storage often means that disks reserved as spare capacity or for peak loads are not fully used during normal operation. With such a structure, storage utilization typically ends up at around 40 percent.

Distributed storage lies idle
Other reasons why the silo world of directly attached server-plus-storage must be considered obsolete can be outlined as follows: with the wave of decentralization that began 15 to 20 years ago, more and more computing power migrated onto users' desks. That was ideal for developers with their workstations or for small workgroups, but extremely tedious for anyone who had to manage a data center or a few central servers together with their storage devices. Administrators quickly lost track of all the machines, and if a new hard disk had to be installed in a server, all attached computers and applications had to be shut down, a huge effort for only a modest gain in capacity. Moreover, the SCSI connection technology common at the time supported only a limited number of hard drives. The parallel Small Computer System Interface (SCSI) also turned out to be physically inadequate in the long run: every increase in transfer speed had to be paid for with a shorter cable, until the T10 Technical Committee responsible for standardization (www.t10.org) finally gave up on the parallel transmission method at 320 MBytes/s and a maximum distance of 12 meters; such a short distance is no longer viable for data centers. Only the SCSI command set has survived, running today over other transport layers. Ethernet in the LAN and the prevalent TCP/IP protocols allow longer connections, but they were originally designed to transport relatively short messages rather than large amounts of data. In particular, IP data packets can get lost, which the higher protocol layers must counter by resending the data. This results in unwanted delays that reduce the effective transfer rate.

Reliable Connections
The need to move stored data reliably back and forth between devices led in the late 1990s, especially in large companies, to a separate network just for storage: the storage area network (SAN), driven largely by the Fibre Channel standard adopted in 1994. Fibre Channel uses a serial transmission method; at the command level, an evolution of the SCSI protocol governs the exchange. A SAN makes it possible to consolidate storage. Servers no longer need internal disks or dedicated storage arrays connected via a SCSI adapter; instead they access a group of external storage systems through a host bus adapter (HBA), and usually several servers share the same systems. In between sit directors or switches that steer the data flows and have been equipped over time with more and more "intelligence" and additional features such as zoning or virtualization. Unlike with DAS, servers can be assigned a storage volume during operation, sometimes incorrectly referred to as a LUN (Logical Unit Number), or even entire drives from multiple arrays. Some operating systems also allow volumes to be grown or shrunk while in use. The storage size can thus be adapted to actual demand, a flexibility that DAS sorely lacks.
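To make the point concrete, here is a minimal sketch of such an online expansion, assuming a Linux server whose SAN LUNs are pooled in an LVM volume group that still has free extents; the device name vg_san/lv_data and the ext4 file system are hypothetical examples, not taken from the article.

#!/usr/bin/env python3
"""Sketch: grow a volume online so capacity follows actual demand.
Assumes a Linux host whose SAN LUNs feed an LVM volume group with
free extents; vg_san/lv_data and ext4 are made-up names."""
import subprocess

DEVICE = "/dev/vg_san/lv_data"   # hypothetical logical volume backed by SAN LUNs

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Extend the logical volume by 100 GiB from the pooled SAN capacity ...
run(["lvextend", "-L", "+100G", DEVICE])
# ... then grow the mounted ext4 file system in place; applications keep running.
run(["resize2fs", DEVICE])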

10GE is catching on only slowly
On top of that, Fibre Channel offers high data throughput, and losses and delays such as those that generally occur with IP are absent. The transfer rate in FC networks has also been raised continuously, from 1 Gbit/s through 2 and 4 to the current 8 Gbit/s. Ethernet, by contrast, stayed at 1 Gbit/s for years, and the young 10-Gigabit Ethernet is catching on only slowly. Standards for 40 and 100 Gbit/s are currently in development, but their introduction is likely to drag on. Ljubo Cemeras, Technology Business Consultant at EMC, blames the additional investment this requires. Fibre Channel SANs have acquired a reputation for demanding a great deal of expertise and for being hard to install and manage. FC administrators and specialists deny this and attribute the claim to the marketing of the Ethernet camp. What is undisputed, however, is that an FC-based SAN installation is expensive: it uses high-quality hard drives, and arrays, switches, HBAs and so on cost considerably more than comparable components from the Ethernet world. Despite contrary propaganda from the manufacturers' association SNIA (Storage Networking Industry Association, www.snia.org), standardization in the SAN world still leaves much to be desired, with the result that almost all vendors cook their own soup when it comes to administration tools. For years there have been attempts to reduce the cost of SANs by using comparatively inexpensive Ethernet as the connection technology. So far only iSCSI has caught on. Its architecture uses the classic TCP/IP infrastructure to transport data. Administrators must, however, bring together two areas of responsibility: the actual administration of the IP network for message transport and that of the iSCSI storage network. Sales figures for iSCSI equipment rose even in the crisis years, with Dell securing the largest share of the pie through its acquisition of the iSCSI specialist EqualLogic. Some devices today offer both iSCSI and FC, a form of "unified storage" that leaves it to the customer which connection technology to use, or whether to run both permanently in parallel depending on application and performance requirements. Gartner has meanwhile stopped tracking and reporting iSCSI as a market in its own right alongside SAN and NAS. In its report "iSCSI MarketScope Is Retired, But iSCSI-Based Disk Storage Expands Market Presence", the analysts explain: "The iSCSI-based SAN market has changed in the six years since the standard was adopted. In 2003/2004, NetApp and a handful of start-ups delivered the template for a new market. About two years later, large storage vendors including Dell, EMC, HDS, HP and IBM entered the scene with low-end iSCSI arrays to compete with the start-ups. But the last few years have seen a consolidation of products and manufacturers. NetApp was the model even before the consolidation, offering iSCSI as an equal option alongside FC and the various NAS protocols. Today, many arrays support at least iSCSI and FC."
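How little exotic infrastructure iSCSI needs can be illustrated with a short sketch using the open-iscsi command-line tools on a Linux host; the portal address is a placeholder and the parsing of the discovery output is deliberately naive.

#!/usr/bin/env python3
"""Sketch: attach an iSCSI LUN over the ordinary IP network with the
open-iscsi tools. Portal address and target are made up."""
import subprocess

PORTAL = "192.0.2.10:3260"   # example iSCSI portal, TCP port 3260

def iscsiadm(*args):
    return subprocess.run(["iscsiadm", *args], check=True,
                          capture_output=True, text=True).stdout

# SendTargets discovery: ask the array which targets it offers.
targets = iscsiadm("-m", "discovery", "-t", "sendtargets", "-p", PORTAL)
print(targets)

# Log in to the first reported target; the LUN then shows up on the host
# as an ordinary SCSI block device (e.g. /dev/sdX).
first_iqn = targets.split()[1]   # naive parse of "portal,tag iqn" lines
iscsiadm("-m", "node", "-T", first_iqn, "-p", PORTAL, "--login")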

More choice for users
Exactly this trend, several alternative interfaces in one device, has in Gartner's view removed the compulsion to decide between an iSCSI and an FC SAN. Users today often get both in one box and can therefore choose whichever connection option is cheaper for them. Gartner also notes that the most successful iSCSI start-ups changed hands during the same period. Dell bought EqualLogic in 2007 and catapulted itself into first place in the iSCSI market; by 2008 its share was already 36.2 percent. HP took over LeftHand Networks in 2008 and now holds second place. The market has thus turned completely: among the 15 manufacturers in the iSCSI market there is no longer a single one that offers only iSCSI. The volume of the iSCSI segment grew from 267 million US dollars in 2005 to over 1.4 billion dollars in 2008. In total, however, the providers of SAN and NAS storage systems took in 15.3 billion dollars in 2008. In 2006 Gartner had drawn up a list of the disadvantages or limitations of iSCSI compared with Fibre Channel and predicted at the same time that they would all be gone by 2008. The disadvantages concerned performance, security, management tools and scalability. Today, according to Gartner, 10-Gigabit Ethernet (10GE) is available for iSCSI at a cost corresponding to that of 1-Gigabit NICs, and its built-in offload engines allow high performance at no extra charge. The first hardware implementations of IPsec, which offer more protection for iSCSI, are also available, and management software for storage arrays as well as resource management tools now support both SAN variants. Both FC and iSCSI are block-oriented protocols that make complete volumes available to the attached computers. NAS storage systems, by contrast, have their own file system and provide access to individual objects within it, for example pictures or Word documents. Directories serve as the instruments of organization. Shares can usually be released individually for read or write access via one or more protocols, and individual users or user groups can be given different access rights. Two protocols have prevailed, or survived: Microsoft's CIFS (Common Internet File System) for the Windows world and the Network File System (NFS), originally developed by Sun, for Unix and Linux systems.

NAS used everywhere
Modern NAS systems can also present volumes, which has increasingly blurred the old contrast between block-based SAN storage and file-oriented NAS. The maxim "fast databases only in a SAN" no longer applies today, says Mika Kotro from EMC's marketing consulting. There are also customers who have chosen to run an Oracle database in a NAS environment. For users, a first decision criterion is therefore the price in euros per MByte or GByte. In addition, "soft" factors play a role: earlier decisions and preferences for particular IT infrastructures, constraints imposed by increasingly frequent acquisitions, or cost-cutting measures that cap spending. Measured in euros per MByte, the costs of a SAN are higher than those of NAS by at least a factor of two, says EMC consultant Cemeras. In return, a SAN offers higher throughput and guaranteed delivery for demanding applications. That remains affordable as long as the space needed is 500 GBytes for a database, or perhaps a few terabytes. Growing data volumes keep the search for cheaper alternatives alive. Does it always have to be a SAN? Isn't there something a little cheaper? Even before the advent of iSCSI there was such an alternative, all the more so since NAS systems used cheaper disks. Even when the SAN world countered with Serial ATA (SATA), that could not halt the rise of NAS, especially in a crisis. And with that we have almost arrived at "unified storage". The recipe is simple: take a storage array and add different connection options, Fibre Channel for the SAN, Gigabit or 10GE Ethernet for iSCSI and NAS.

One storage for all (and everything)
Unified means that the block-based techniques with the FC and iSCSI protocols must be reconciled with the file-oriented NAS protocols CIFS and NFS. In other words, one and the same storage system must be usable via different protocols. The currently prevailing solution is for the storage system to carry a file system on which the volumes reside in the form of files and through which the block-based protocols have to pass. For the IBM storage specialists Ralf Colbus and Axel Koester this has the following consequence: "The performance of the unified storage system depends on the implementation and the nature of this file system. File systems are typically optimized for many concurrent requests, only rarely for the fastest possible processing of a single application (such as a database). This shows up in the characteristic response time, or latency. Purely block-based storage systems can play all their aces here thanks to their short instruction path." Anyone facing the choice must therefore weigh the purpose and the type of applications. In the SME segment, the IBM people argue, a unified storage system can make sense, because only one device has to be bought and only one administration has to be learned. In larger environments such as data centers the two worlds remain separate, because specialized systems fine-tuned to the requirements of the respective applications are preferred there, and often different departments are even in charge of managing them. IBM observes that unified storage is met with a certain skepticism: "A unified storage system is like a Swiss Army knife with blade, magnifying glass and corkscrew. You would never operate with such a thing; what is needed here is a scalpel, or rather a machete in the jungle."
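What "volumes in the form of files" means in practice can be reproduced on any Linux box with the LIO target stack: the block device an iSCSI initiator sees is backed by nothing more than a large file in the filer's own file system. The paths, sizes and target name below are invented, and real unified systems implement this far more elaborately; the sketch only illustrates the principle.

#!/usr/bin/env python3
"""Sketch: a block 'volume' as a file, exported via Linux LIO (targetcli).
All names, paths and sizes are made up for illustration."""
import subprocess

BACKING_FILE = "/srv/filer/vol1.img"      # lives in the ordinary file system
IQN = "iqn.2010-01.org.example:vol1"      # made-up iSCSI target name

def sh(*cmd):
    subprocess.run(cmd, check=True)

# The 'volume' is just a sparse 100 GB file ...
sh("truncate", "-s", "100G", BACKING_FILE)

# ... which LIO presents to iSCSI initiators as a block device. Every block
# I/O therefore takes the detour through the underlying file system that the
# IBM specialists describe.
sh("targetcli", f"/backstores/fileio create name=vol1 file_or_dev={BACKING_FILE}")
sh("targetcli", f"/iscsi create {IQN}")
sh("targetcli", f"/iscsi/{IQN}/tpg1/luns create /backstores/fileio/vol1")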

Large providers are skeptical
At Hewlett-Packard the argument is made more cautiously, stressing the role of applications and purposes. It is important to use storage systems that were developed precisely for their purpose; otherwise architecture-related restrictions ensure that the systems do not meet the requirements, says Guido Klenner, Business Manager in HP's StorageWorks division, who draws a parallel with cars: "The sports car here stands for block data storage, while the truck stands for file-serving storage. Even if both have an equally powerful engine, they cannot and should not deliver the same thing: while this engine brings the sports car to its top speed, it enables the truck to carry heavy loads. The reverse does not work, because each is designed from the outset for its application and all components are matched to it, not just the engine." Although the storage networking world thus seems provisionally sorted into firm camps, SAN here (FC and iSCSI), NAS there, and finally the world of unified storage that promises to make everyone happy, another would-be simplifier has appeared on the horizon: Fibre Channel over Ethernet (FCoE). The basic standards were adopted in June of this year, and early adopters are already busy experimenting with the first products. At least that is what the interested parts of the industry claim. In a first step, FCoE is to materialize in servers in the form of so-called Converged Network Adapters. Only the FCoE switches, the so-called top-of-rack switches, will then cleanly separate the two worlds of IP and FC. In the next step, says Cemeras of EMC, users will see FCoE end to end, across servers, network and storage.

Savior FCoE
Packet loss as under TCP/IP is then supposed to be a thing of the past. The standards promise a lot: on the IP side, Data Center Bridging (DCB) is to guarantee the loss-free transmission of every packet. The FC world claims for itself to have laid the foundation for seamless collaboration, finally, one might add. Whether FCoE will prevail is open; cost and general acceptance have already decided against many a technical innovation that would have brought the worlds of the various networks a little closer together. There are also still many different protocols, and perhaps more will arrive from the application side. In addition there are clusters, virtual machines and the cloud, which in turn have their own requirements. Unified storage is in any case currently popular among manufacturers, and a degree of caution is therefore advisable: in the end they want to sell what they have always sold. Whether there is more behind the words than a new decorative guise for marketing may be doubted.

New broom
Sense and nonsense of unified storage
Combining file- and block-oriented mass storage in a single device is a logical step that brings advantages. However, the currently available members of this genus by no means meet all user requirements.

If one listens to the market, or to its significant protagonists, unified storage stands for a continuous, integrated storage infrastructure that serves as a unification engine supporting Fibre Channel and IP storage networks (SAN) as well as Network Attached Storage (NAS) alike. Strictly speaking, though, a different definition applies: unified storage is a data repository that implements an externally controllable block storage access method and at the same time provides at least one data access method that includes an application layer in the sense of the OSI model. It is obvious that the talk of unified storage is oriented toward the solutions actually available and their functions. These like to suggest that they represent the ultimate in storage technology and convey to users the impression that unified storage is a complete, closed and consistent solution, an all-inclusive package that removes all their concerns and needs automatically and at a stroke. That this is not the case becomes apparent at the latest in the detailed system design, when IAM (identity and access management), backup, recovery and archiving have to be implemented. Unified storage systems are a step in the right direction. So far, however, they mainly bundle IT management tasks, which is not the same as reducing the administrative burden. They do not free the user from other tasks, above all the careful planning and administration of storage resources.

No holiday for administrators
Above all, the issue of latency in IP networks is usually underestimated. Delays of just one millisecond can cause considerable difficulties for data transport. The latency of the classical channel protocols such as ESCON, Fibre Channel and FCoE (Fibre Channel over Ethernet) lies in the microsecond range. With storage systems built on solid state disks (SSD), this advantage of storage channel protocols such as FC and SAS becomes even more pronounced. Careful network planning that takes the special characteristics of storage protocols into account is another factor. This applies especially to iSCSI, because there two management domains exist: the administration of logical volumes (LUNs) and the assignment of names and addresses. iSCSI therefore initially increases staffing requirements and demands higher qualifications from administrators: a networker has to deal with storage management, and a storage administrator has to learn the characteristics of a TCP/IP-based transport service. By comparison, configuring an FC switch is an exercise for elementary school; the effort is essentially limited to the static configuration of a "power strip", because FC needs no routing to speak of and the assignment to other nodes in a classic SAN is largely static. Unfortunately, interested parties, namely the manufacturers' professional services, like to portray SAN configuration as complex rocket science.
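The two management domains can be pictured in a few lines; all IQNs, addresses and LUN numbers below are invented and serve only to show which side owns which piece of information.

#!/usr/bin/env python3
"""Purely illustrative: the two iSCSI management domains side by side."""

# Domain 1: names and addresses, the networker's territory. Targets carry
# world-wide unique IQNs and are reached through IP portals on TCP port 3260,
# so addressing, VLANs and routing suddenly concern the storage team.
portals = {
    "iqn.2010-01.org.example.array:ctrl-a": ("192.0.2.10", 3260),
    "iqn.2010-01.org.example.array:ctrl-b": ("192.0.2.11", 3260),
}

# Domain 2: logical volumes, the classic storage administrator's territory.
# Which initiator may see which LUN (masking/mapping) is maintained
# independently of the IP layer.
lun_masking = {
    "iqn.2010-01.org.example.host:web01": [0, 1],
    "iqn.2010-01.org.example.host:db01":  [2],
}

def reachable(initiator_iqn, target_iqn):
    """An initiator gets storage only if a portal exists for the target
    (domain 1) and LUNs are masked to the initiator (domain 2)."""
    return target_iqn in portals and bool(lun_masking.get(initiator_iqn))

print(reachable("iqn.2010-01.org.example.host:db01",
                "iqn.2010-01.org.example.array:ctrl-a"))   # -> True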

File systems inadequate
The file system side has its pitfalls too, particularly because the underlying paradigms are oriented toward antiquated methods of data management. The file system paradigm as we know it today can be traced back to developments of the early 1960s, above all the Multics file system. Unfortunately, it lacks the additional protection mechanisms needed to ensure rule-compliant commercial documentation. Access to documents, for instance, can bypass IAM through the "back door" of the file system. Many users of a unified storage system want at least the option of integrating the file system seamlessly into an ECM system (enterprise content management) and of reliably preventing the ECM from being undermined, for example through direct access to the underlying file system. On the other hand, almost all ECM systems now form closed environments, which can lead to substantial costs, especially when changing systems. In general, higher-quality solutions could be realized with the access mechanisms defined in FTAM (File Transfer, Access and Management). The discussions of the T10 working group on object-based storage devices are also promising. Unfortunately, Sun has not developed the 5800 Object Store System (Honeycomb) any further and has instead concentrated on closed, ZFS-based solutions better known as "Fishworks" systems. Whether this unified storage system, which as a NAS system shows some interesting ideas, can establish itself as a block storage device remains to be seen. Currently only an iSCSI connector is available; native block storage interfaces (FC, SAS, InfiniBand) are slow in coming. It is also easily forgotten that the enormous growth in data requires almost unlimited scaling of the storage. The situation in businesses is often described as: "Users love their first appliance, they hate the sixth." Scale-out systems and storage pools, as offered by Isilon, LeftHand Networks (now HP) or ONStor (now LSI), can help. However, they are still far from being mainstream.

Many dogs
Alternative clustered NAS
NAS systems struggle with the restrictions of traditional file systems and with hardware bottlenecks. As with servers, however, performance can be increased by clustering.

Clustered NAS has existed for years but has remained largely unknown. The market segment was occupied primarily by start-ups, because traditional file systems are not particularly well suited to scaling. The analyst Josh Krischer sees a fundamental drawback of NAS in the fact that each NAS server has its own IP address. Anyone who wants to move stored data between filers must always go via the host, to the detriment of performance. In addition, a pack of NAS filers is difficult to manage, often only manually, and the result as installations scale is inefficient use of the filers. According to Krischer, file virtualization and clustered NAS offer solutions, and three approaches can be distinguished: platform-based integration, cluster-based storage subsystems, and virtualized namespaces moved into the network. Anyone toying with such solutions must consider whether the transition to a new, "disruptive" technology pays off, what the requirements for scalability are, and whether software agents or changes to the network structure are needed. Almost all the alternatives available today are characterized by building clusters from inexpensive standard components to achieve the necessary scaling effect and high throughput. The latter is required above all for editing, for design work in the automotive industry, or for oil exploration with their huge amounts of data. Powerful network connections are needed within the cluster, which is why TCP/IP accelerators (offload engines) are often used. Exanet (www.exanet.com) sells only software, which is then deployed on standard components such as blade servers. HP also leads with software: the company offers the technology acquired with PolyServe as software solutions for databases, specifically for Microsoft SQL Server. The software consolidates various SQL instances on one server, thereby saving licenses, and a clustered storage base allows SQL instances to be moved dynamically. Next year the integration with the likewise acquired IBridge technology is to be completed. HP thus wants to overcome the restriction to 16 nodes and 1 PByte of data and catch up with the smaller competitors. Isilon (www.isilon.com) has established a customer base in the media industry and in places competes aggressively against the NAS top dog NetApp. The main argument put forward is the restriction of the NetApp operating system to 16 TBytes per volume. With NetApp filers, scaling is only possible by connecting further filers, with performance declining and management costs rising. Isilon's own file system OneFS, on the other hand, is said to allow volumes from 10 TBytes to 5 PBytes and "seamless" scaling. Panasas (www.panasas.com) focuses primarily on hardware clusters, with its own cluster file system and some software tools for more I/O and network performance. The company thereby achieves volumes from 20 TBytes to 3.5 PBytes in its largest customer installations. According to Panasas, this approach is more scalable than pure software solutions such as Exanet or HP/PolyServe, a limitation that HP at least concedes. While many proprietary file systems are used in the context of clustered NAS, Panasas, like NetApp, has pushed for a standardization of parallel I/O requests, which is now also part of the IETF's NFSv4.1. Parallel NFS (pNFS) carries the ideas of parallel computing, practiced in the server world in part for decades, into the world of NAS storage in order to achieve the targeted scale-out on the basis of standard components.
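On the client side the entry barrier is low: pNFS ships as part of NFSv4.1, so a current Linux client merely requests that protocol version at mount time, provided the server offers pNFS layouts at all. The server name and export path in the sketch are, of course, invented.

#!/usr/bin/env python3
"""Sketch: mounting an NFSv4.1 export; with a pNFS-capable server the data
then flows in parallel to the storage nodes, past the metadata server.
Host name and paths are hypothetical."""
import subprocess

subprocess.run(
    ["mount", "-t", "nfs", "-o", "vers=4.1",
     "mds.example.com:/scratch",      # made-up metadata server and export
     "/mnt/scratch"],
    check=True,
)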

