When the good lies so close
Storage networks in the LAN
In data centers, Fibre Channel has long been the established way to network servers and storage farms. But the further the decoupling of each server's mass storage and its move into the network progresses in medium-sized and smaller environments, the more urgent the question of an easily manageable connection technology that admins already know: Ethernet. Yet storage systems cannot be integrated into the LAN quite that easily.
The development of transmission technologies for data storage is anything but a straight run. The result is well known: instead of one standard there are several to choose from, and the interest groups behind them each present theirs as the ultimate one. There can be no question of a uniform, universally applicable standard, and the sheer number of acronyms and special features of storage protocols presents challenges even for specialists. What follows is a small attempt at clarifying the terminology and the historical background.

From the very beginning, computers and mass storage devices were separate. Initially, dedicated connections dominated. Links were established via modem or over leased and dedicated lines. The latter were expensive, the special connections were costly and often did not really work. Larger distances could be bridged, but without transparent access to the remote storage devices. There was always a computer-to-computer coupling: one signed on to a remote server and then accessed its storage. FTP (File Transfer Protocol) still works this way today.

At the local level, storage drives were built directly into PCs, workstations, or servers, or connected to them via an external enclosure. For servers, parallel SCSI connections were the norm - a setup now called Direct Attached Storage (DAS). With the Small Computer System Interface (SCSI), a simple disk bundle or JBOD (Just a Bunch of Disks) is usually connected to one server and the disks are accessed individually. Only later did RAID controllers (Redundant Array of Independent Disks) come into use.

It is often overlooked that SCSI is an acronym that stands for two things: one is a language, or protocol, with which two devices communicate; the other concerns the underlying transport layer, including cable and signal definitions. In the beginning, storage devices were connected directly to the server via SCSI cables with parallel wires, and the SCSI protocol served as the common language for commands and data packets. Most systems today still "speak" SCSI, but over other transport protocols such as Fibre Channel (FC), TCP/IP in the form of iSCSI (Internet SCSI), or InfiniBand, which offers an iSCSI interface with iSER (iSCSI Extensions for RDMA).
SCSI - dead and still alive
In the view of Christian Bandulet, principal engineer at Sun, SCSI's dominance will not change for the foreseeable future: alongside the - more restricted - ATA command set of the PC world, SCSI is something like the optimum for communication in the storage area. Opinions are divided, however, over the transport routes, especially for the long haul: whether future Ethernet variants with iSCSI or Fibre Channel over Ethernet (FCoE) will win out, or InfiniBand. One transport layer, Bandulet expects, will gain the upper hand in the coming years - but the language, SCSI, now in version 3, will remain.

The SCSI bus failed at what once defined it: its parallel lines. Over time they caused high latency and packet losses; triggered by the corresponding protocol response from the receiver, the sender had to transmit the packets again. The basic dilemma of parallel buses could only be patched temporarily in the successive SCSI versions: bits sent in parallel over multiple lines do not arrive at the receiver at the same time, despite equally long wires, but staggered. These run-time differences (skew) force the receiver to keep a time window open in which it can collect all the bits of one word (latency). The sender must not undercut this window with its transmission cycle, otherwise the receiver has to decide again and again whether a bit received on a particular line belongs to the current, the preceding, or the following byte.
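A rough back-of-the-envelope sketch shows why skew caps the clock rate of a parallel bus. The numbers here are purely illustrative assumptions, not values from the article:

```python
# Illustrative sketch: how skew between parallel lines limits the clock rate.
# The numbers are assumptions chosen for demonstration, not measured values.

def max_transfer_rate(skew_ns: float, setup_margin_ns: float) -> float:
    """Highest transfer rate (in MT/s) at which a word can still be sampled
    unambiguously: the cycle time must stay above skew plus sampling margin."""
    min_cycle_ns = skew_ns + setup_margin_ns
    return 1000.0 / min_cycle_ns   # 1000 ns per microsecond -> MT/s

# Example: 2 ns of skew across the cable plus 1 ns sampling margin
print(max_transfer_rate(2.0, 1.0))   # ~333 MT/s ceiling
# Halving the cable length roughly halves the skew and raises the ceiling:
print(max_transfer_rate(1.0, 1.0))   # ~500 MT/s
```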
The end of a legend
As the clock rate rises, the distortion grows. The only countermeasure is to reduce the distance. With Ultra640 SCSI the bag of tricks was finally empty: the developers did not manage to keep the interference within tolerance - Ultra640 SCSI never left the laboratory. The second reason parallel SCSI is almost extinct lies in its nature as a bus architecture. As in the early days of Ethernet, only one participant could transmit on the bus at a time (half duplex); the other stations were blocked. This limitation disappears in a serial architecture with point-to-point links: the participants have their own channels and cannot block one another.

From the perspective of Axel Koester, storage technologist at IBM, SCSI development at the transport layer ended in a paradox: with every technological improvement in latency and timing, the cables - and with them the distance between server and storage device - became shorter. At first only 8, later up to 16, devices could be attached; compared with Fibre Channel and its millions of potential nodes, SCSI fell behind. In addition, technical difficulties arose with the connectors, or rather their correct, square seating, which likewise distorted the arrival of the data. With the swan song of parallel SCSI cabling, serial data transfer began its conquest (FireWire, Fibre Channel, iSCSI, SAS (Serial Attached SCSI)), while the SCSI protocol was retained as the language and continuously improved.

Ethernet not only evolved independently of the first attempts to connect storage devices to servers, it explicitly served a different purpose: the exchange of messages over long distances. The main difference to the transport of storage data is that Ethernet is, in principle, a protocol and a transport route for exchanging messages between hosts. Initially, Ethernet was designed as a bus on which all participants of a segment jostle for dominance of the communication line.
Ethernet - communication only
Mechanisms such as CSMA/CD (Carrier Sense Multiple Access/Collision Detection) were supposed to keep this flow of communication orderly. The bus structure has since given way to a star or mesh topology. At the latest with Gigabit Ethernet (GE), switches replaced hubs here as well; they establish dedicated communication paths between two partners and also act as temporary buffers for data packets. In general, says Koester of IBM, Ethernet is a transport protocol and is by definition only responsible for handing over to the next layer up.

Historically there were at first various proprietary approaches, among others from Xerox, Novell, and Digital Equipment. Out of the ARPANET (Advanced Research Projects Agency Network) came the Network Control Protocol (NCP). Only when the ARPANET grew ever more complex and stretched across states, telephone, and satellite links, including their gateways, did the developers split NCP into two layers: the Internet Protocol (IP) for reliable addressing of the communication partners, and the robust TCP (Transmission Control Protocol) for maintaining communication sessions across several different networks. TCP/IP is thus divided into a layer that handles the IP addressing of other hosts and the transmission control sitting on top of it. TCP also maintains the sessions for overlying application protocols such as CIFS (Common Internet File System), NFS (Network File System), HTTP (Hypertext Transport Protocol), and SMTP (Simple Mail Transport Protocol).
Besides Ethernet, there were initially star-shaped networks such as StarLAN from NCR/AT&T, or Token Ring, developed by IBM and used in its own networks. There, the packets travel around a logical ring, attached to a kind of baton, the token. The bus-based ARCNET or DEC's token bus worked in the same way: the token runs back and forth along the bus and thus forms a logical ring. With their deterministic response times and guaranteed throughput, these methods were for a long time much faster and more stable than the common 10Base2 Ethernet ("Cheapernet") running over coaxial cable. They lost this advantage, not least because of their proprietary nature, to 100BaseT, which runs at 100 Mbit/s and is still widely used on desktops.

For Bandulet, Ethernet prevailed over its competitors partly because it had some technical advantages, but in the end mainly because economies of scale and production effects were decisive. This is reminiscent of other examples from industry that show how great the influence of market factors can be, beyond the maturity of a product - see the victory of VHS over Betamax in video tape, or Blu-ray versus HD DVD among the DVD successors. Bandulet believes that customers ultimately have an intrinsic desire to simplify, to consolidate rather than run different systems side by side, and that they follow market trends and the market leaders. Sun itself would probably have bet on other, perhaps better solutions, even though Ethernet is the de facto standard.
From mainframe to Fibre Channel
For Mario Vosschmidt, technical consultant at LSI, it is obvious that Fibre Channel is an evolution of the data channel from the mainframe world. Disk subsystems hung on the mainframe's data channel, which was separated from the control channel (later ESCON and FICON) - not unlike the SCSI attachment on smaller computers. In the development of Fibre Channel, says Vosschmidt, the motivation was the same: not to depend on standard Ethernet network connections. High latencies and low performance were to be ruled out for the transport of storage data. Manufacturers such as EMC, IBM, and Emulex drove the development of FC and eventually created a kind of mixture of the mainframe data channel and Token Ring features such as Class of Service. The aim was to create deterministic connections of the kind that time-critical applications urgently need. These, after all, are not sluggish users at their workstations who do not even notice when the Ethernet traffic stalls for a couple of milliseconds.

FC resembles SCSI on a number of levels: it defines the protocol layers and even the physical transport media, including cables and connectors. Among the design goals of FC, which was originally planned as a backbone technology for connecting LANs, were serial transmission at high speed and over long distances, a low transmission error rate, low delay (latency) of the transmitted data, and the implementation of the FC protocol in hardware on host bus adapter cards (HBAs) to offload the server CPU.
Data transport on rails
Fibre Channel achieves this through its compact structure and through mechanisms that remain quite foreign to Ethernet: certain classes of service guarantee individual sessions a proportionate share of the rate; the chaining of frames into sequences and of sequences into exchanges forms a kind of freight traffic (one locomotive with many cars); and buffer-to-buffer credits together with end-to-end credits provide immediate flow control, i.e. transmitter, receiver, and switch ports first negotiate their capabilities (speed and the size of the buffers to be provided) and agree on a starting rate.

Fibre Channel storage networks are today often equated with Storage Area Networks (SANs). That is only partly correct, because a SAN can also be based on other transmission technologies such as iSCSI or FCoE. FC also had to contend for a while with a competing approach: IBM's own Serial Storage Architecture (SSA). It had a physical double ring and a logical bus topology, not unlike the token bus. If the data stream was interrupted, the token was sent back around the ring in the opposite direction. Unlike parallel SCSI, SSA did not rely on special cables and a custom protocol, but used the SCSI command set. It is considered a forerunner of FC, which, however, with the Fibre Channel Arbitrated Loop topology sent a ring of its own into the race - one that was especially popular in smaller installations in the early days. The third first-generation serial SCSI variant, by the way, was FireWire (IEEE 1394).
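To make the credit mechanism mentioned above more tangible, here is a minimal, schematic simulation - a simplification of my own, not FC-exact: the sender may only transmit while it holds credits, and each credit only comes back once the receiver has drained a buffer slot.

```python
# Schematic sketch of buffer-to-buffer credit flow control (simplified, not FC-exact).
from collections import deque

class Receiver:
    def __init__(self, buffer_slots: int):
        self.buffer = deque()
        self.slots = buffer_slots

    def accept(self, frame) -> bool:
        if len(self.buffer) >= self.slots:
            return False          # would overflow - cannot happen while credits are honored
        self.buffer.append(frame)
        return True

    def drain_one(self) -> bool:
        """Process one buffered frame; returning True stands for a credit going back."""
        if self.buffer:
            self.buffer.popleft()
            return True
        return False

# Login phase: the receiver advertises its buffer depth, the sender starts with that many credits.
receiver = Receiver(buffer_slots=4)
credits = 4
sent = dropped = 0

for frame in range(20):
    if credits == 0:
        # The sender stalls instead of flooding the port - no frame is ever dropped.
        if receiver.drain_one():
            credits += 1
    if credits > 0 and receiver.accept(frame):
        credits -= 1
        sent += 1
    else:
        dropped += 1

print(sent, dropped)   # 20 0 - every frame delivered, nothing discarded
```

Contrast this with plain Ethernet further below, where the receiver has no such lever and can only pause or drop.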
To connect SAN islands across continents over a WAN (Wide Area Network), there are two options: either the islands transmit over their own or leased dark-fibre cables, or they use the existing global IP network. For the latter, at least two implementations of IP storage have existed for some time: FCIP (Fibre Channel over IP) and iFCP (Internet Fibre Channel Protocol). Both wrap FC traffic in TCP/IP packets; the former relies on an IP tunnel between two points, while the latter acts as a kind of hybrid routing protocol that makes use of IP's own capabilities.
iSCSI - Ethernet conquers the SAN
iSCSI (the "i" stands for Internet, but should actually be "e" for its Ethernet) is a direct competitor to
FC. Some manufacturers said, since users already have their own LAN, they do not have a second, expensive
Network with its own hardware to operate in parallel. As data to SCSI protocol, but wanted to hold you. From this
Premise
is born out of thought, SCSI commands, instead of sending a separate
FC-net, to encapsulate in TCP / IP packets and can thus be transported
over the existing infrastructure of the company. FC eliminated as an intermediate layer. Nor
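Conceptually, the encapsulation looks roughly like the following toy sketch. The header layout is invented for illustration and is not the real iSCSI PDU format; a real initiator wraps the CDB in a 48-byte Basic Header Segment and ships it over a TCP connection, typically to port 3260.

```python
# Toy illustration of the iSCSI idea: a SCSI command wrapped for transport over TCP/IP.
# The header fields below are made up for clarity - NOT the actual iSCSI wire format.
import struct

def build_toy_pdu(cdb: bytes, lun: int, task_tag: int, data_length: int) -> bytes:
    header = struct.pack(
        ">BBHII",      # big-endian: opcode, flags, LUN, task tag, expected data length
        0x01,          # pretend opcode for "SCSI command"
        0x80,          # pretend flags
        lun,
        task_tag,
        data_length,
    )
    return header + cdb.ljust(16, b"\x00")   # pad the CDB into a fixed-size slot

# A READ(10) CDB: opcode 0x28, LBA 0, transfer length 8 blocks.
read10 = struct.pack(">BBIBHB", 0x28, 0, 0, 0, 8, 0)
pdu = build_toy_pdu(read10, lun=0, task_tag=1, data_length=8 * 512)

# This byte string would then be handed to an ordinary TCP socket; Ethernet, IP,
# and TCP add their own headers underneath - no Fibre Channel layer involved.
print(len(pdu), pdu.hex())
```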
Nor should it be forgotten that the serial SCSI variants, whether across the network or directly attached, have to measure up against the final parallel SCSI version: its 320 MB/s correspond to a net throughput - before the overhead of the underlying protocols - of 3.2 Gbit/s on a serial line using 8B/10B encoding. Serial Attached SCSI (SAS) started right away at 3 Gbit/s per lane (12 Gbit/s for the four-lane external connections, and 6 Gbit/s with SAS 2.0), while iSCSI requires at least dedicated Gigabit lines - many experts only take iSCSI seriously with 10-Gigabit Ethernet (10GE).
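The arithmetic behind these figures is easy to redo - a sketch of my own, using only the numbers quoted above:

```python
# Rough line-rate comparison behind the figures above (no protocol overhead considered).
ULTRA320_MB_S = 320                      # parallel SCSI net throughput in MB/s
payload_gbit = ULTRA320_MB_S * 8 / 1000  # 2.56 Gbit/s of payload
line_rate_8b10b = payload_gbit * 10 / 8  # 8B/10B puts 10 line bits on the wire per 8 data bits
print(line_rate_8b10b)                   # 3.2 Gbit/s - the serial equivalent of Ultra-320

# For comparison: one SAS lane at 3 Gbit/s line rate carries 2.4 Gbit/s (300 MB/s) of payload,
# while a single Gigabit Ethernet link offers at most 1 Gbit/s before TCP/IP and iSCSI overhead.
sas_lane_payload_gbit = 3.0 * 8 / 10
print(sas_lane_payload_gbit * 1000 / 8)  # ~300 MB/s per lane
```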
An initial difficulty of iSCSI also stems from the fact that TCP/IP was not designed for the block-based transfer of bulk data, but for the transmission of messages over long, uncertain, and alternative routes. The connection-oriented TCP, which requires the receiver to acknowledge every single packet, adds latency that is unwelcome for SCSI traffic. And the long headers of each protocol - Ethernet, IP, TCP - cause a large overhead that reduces the net data throughput.
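How much those headers cost per frame can be estimated with a simplified calculation; the figures are my own, and iSCSI PDU headers, digests, the Ethernet preamble, and the inter-frame gap are deliberately ignored:

```python
# Simplified estimate of per-frame header overhead for iSCSI over TCP/IP/Ethernet.
# Ignores iSCSI PDU headers/digests, the Ethernet preamble, and the inter-frame gap.
ETH_HEADER_AND_FCS = 14 + 4
IP_HEADER = 20
TCP_HEADER = 20

def payload_efficiency(mtu: int) -> float:
    payload = mtu - IP_HEADER - TCP_HEADER
    frame_on_wire = mtu + ETH_HEADER_AND_FCS
    return payload / frame_on_wire

print(round(payload_efficiency(1500), 3))   # ~0.962 with standard frames
print(round(payload_efficiency(9000), 3))   # ~0.994 with jumbo frames
```

This is also one reason why jumbo frames reappear later in the discussion of infrastructure requirements.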
SCSI, however, is not a communication or messaging protocol, but a command protocol that pushes bulk data in one direction, or pulls it from there, as fast as possible, following the freight-train principle. It works with a command generator or initiator on the SCSI host - usually the onboard or offboard controller driven by the operating system - on one side, and the command receivers or targets - the end devices such as hard disks, CD and tape drives, printers, or media changers - on the other.
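The role split can be pictured with a very small sketch - again a schematic simplification of my own, not a real SCSI stack: the initiator issues one command, the target answers with a whole train of data followed by a single status.

```python
# Schematic initiator/target interaction: one command, many data frames, one status.
class ToyTarget:
    """Stands in for a disk: answers a READ with consecutive data blocks."""
    def __init__(self, block_size: int = 512):
        self.block_size = block_size

    def handle_read(self, lba: int, num_blocks: int):
        for _ in range(num_blocks):
            yield bytes(self.block_size)      # data flows in one direction, back to back
        yield "GOOD"                          # a single status closes the exchange

class ToyInitiator:
    def __init__(self, target: ToyTarget):
        self.target = target

    def read(self, lba: int, num_blocks: int) -> bytes:
        data = bytearray()
        for item in self.target.handle_read(lba, num_blocks):
            if item == "GOOD":
                break
            data.extend(item)
        return bytes(data)

disk = ToyTarget()
host = ToyInitiator(disk)
print(len(host.read(lba=0, num_blocks=8)))    # 4096 bytes delivered for one command
```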
Packet packing is (computational) work
On top of that, the large overhead causes headaches somewhere else, namely before the cable, that is, on the server side: breaking the SCSI data stream into small pieces and packing it into TCP segments, IP packets, and Ethernet frames does not come for free in terms of computation. Right at the start of iSCSI, several "SCSI engines" or iSCSI initiators were offered for the servers - which, from the perspective of the iSCSI storage systems, are the clients. Adaptec and QLogic quickly had iSCSI HBAs for servers in their portfolios, with the iSCSI initiator and the "packing engine" on board in the form of firmware and a processor. Cisco and Microsoft, meanwhile, pushed the spread of freely available software initiators for Windows and Linux, which make do with a normal network card - which earned iSCSI the reputation of being the low-cost storage network. While Cisco fairly quickly handed its project over to the open source community (Open-iSCSI), other operating system makers followed suit, among them HP (OpenVMS, Tru64), IBM (AIX), Novell (NetWare), and Sun (Solaris). Today every server operating system ships with a software iSCSI initiator. The catch: work that is left to the operating system ultimately lands on the host CPU. For servers that are primarily meant to do one thing - handle SQL, HTTP/HTTPS, and similar client requests, or e-mail traffic - that is an unwelcome side effect. If in doubt, they even have to be dimensioned larger to absorb the new peak load - especially once 10GE arrives.
One might conclude that iSCSI initiators cast in hardware are the better, because faster, if more expensive alternative. But iSCSI benchmarks quickly showed that the underpowered processors built into them (ARM, Motorola) could not even begin to keep up with the software variants running on underutilized servers. Accordingly, no 10-Gbit/s iSCSI HBAs are in sight. In addition, interoperability left something to be desired, accompanied by unstable connections - particularly between software initiators and targets. The standards were in place, but still very open to interpretation. All of this initially confined iSCSI to a niche. The only bright spot at the moment are the TCP/IP offload engines available in 1-Gbit and 10-Gbit versions; they relieve the host CPU of at least the lower part of the protocol stack.
iSCSI has established itself as a serious storage networking technology in smaller environments where high availability is less of a concern or databases do not impose requirements of their own. The software initiators from Microsoft and open-iscsi.org have certainly contributed to this success. The acquisition of EqualLogic, a startup specializing in iSCSI RAID systems, by Dell was another small milestone: 1.5 billion dollars for a relatively unknown company, and the change of strategy by a manufacturer that had previously excelled in the assembly business toward storage technology of its own, indicate at the very least that relevant revenue can now be earned with a former niche offering.
And more work for the admin
The oft-cited advantage of existing, inexpensive, and familiar IP networks has, however, been leveled in many places. First, iSCSI brings its own services - such as iSNS (Internet Storage Name Service) - and its own naming scheme (for example iqn.2001-04.com.example:storage.disk1), which, as far as administrator training is concerned, reduces the neat equation TCP/IP + SCSI = iSCSI to absurdity. Second, it places demands on the infrastructure that are not exactly cheap to meet, such as separate lines and switches; and both the network interface cards (NICs) and the switches have to handle jumbo frames (large packets) - which knocks the cheap "unmanaged" switches off the short list. Third, mixing two fundamentally different networks - the LAN, which runs from the PCs (the employees) to the servers (the service providers), and the SAN, which runs from the servers (the employees) to the storage subsystems (the service providers) - leads to additional requirements. Different services such as iSCSI, VoIP, SQL, or HTTP fundamentally demand different Quality of Service (QoS), and those demands partly contradict one another. Doing them justice requires thorough and careful planning that cleanly separates LAN, SAN, and storage management systems. Moreover, the convergence confronts not only the admins but also crackers with the well-known LAN protocols, including all their loopholes and weaknesses, now inside the SAN - presenting admins with entirely new challenges in terms of security [7]. The backup and recovery paths also have to be cleanly planned and implemented.
Cost trap with good camouflage
In the past, the requirements created by running the same transmission technology for two different networks - the very thing touted as the benefit - have cost some users dearly: in the best case they recognized the effort involved quickly and left the restructuring and implementation to an external service provider. Those with less insight only fetched help from outside once bottlenecks and confusion had piled up.

Regarding speed, iSCSI advocates point to the roadmaps and the theoretical throughput. A favorite quote: according to the official roadmap, FC will raise its performance to 16 Gbit/s in 2011. For 2011, however, the Ethernet camp already expects 40 and 100 Gbit/s networks. But here, too, theory and practice diverge widely in several respects. First, the theoretical throughput of the non-deterministic Ethernet alone should be treated with extreme caution. Figures such as 40 or 100 Gbit/s read nicely and recall the fine gigahertz or terabyte battles fought for enthusiastic consumers; the speed records actually achieved with iSCSI over 10GE are a meager 4 to 8 Gbit/s, i.e. in the range of FC installations. Second, such roadmap citations often blur the difference between the installed base and the market introduction of new products or technologies - mostly immature hardware that is left to ripen at the customer's site. Third, the tidy mental constructs of market and technology strategists rarely coincide with actual market developments. In fact, 10-Gigabit Ethernet has - despite years of market maturity - not even established itself across the board in corporate backbones; 10GE is not found on server boards, and the cost of 10GE components - whether NICs or switches - hardly invites a migration. A look at the prices of 40-Gbit components, which have existed for years for providers and the Internet backbone, should add to the disillusionment. On the Fibre Channel side things are not much better: the latest speed upgrade to 8 Gbit/s is coming slowly, and the prices likewise give no cause for celebration. The additional investments make iSCSI less favorable than it appeared at first, and a move to 10GE networks can be expected to bring further costs. Bandulet even sees the possibility that this could drive the price of iSCSI up further, eventually even above that of FC.
FCoE - Ethernet takes over
The newest darling of the industry is Fibre Channel over Ethernet (FCoE). The main argument of the hype-makers, only slightly slowed by the turmoil of the economic crisis, is that FCoE - or rather the enhanced Ethernet underneath it - could bring a consolidation of the various data center networks while retaining the proven FC protocol for the storage interface: each server would then make do with a single Ethernet port. It should not be overlooked, however, that certain elements are still missing, among them precisely those Ethernet extensions. For one thing there is not even a single name, let alone finished standards: within the IEEE the work runs under Data Center Bridging (DCB), while some manufacturers call their partly divergent variants Converged Enhanced Ethernet (CEE) or Data Center Ethernet (DCE).

The first CNAs (Converged Network Adapters), i.e. DCB/Ethernet HBAs with built-in Fibre Channel tunneling, are now available. Servers so equipped can do without separate FC cards, but they need DCB-capable switches at the other end of the wire - or fiber. The first DCB/FCoE switches also have pure FC ports for attaching classic SANs; their firmware unpacks the FC frames from the DCB/Ethernet frames and forwards them through the FC network. In addition, the first controller chips for disk arrays with FCoE support are in development. Storage systems equipped with them no longer sit in a pure FC SAN but directly in the DCB LAN and unpack the received packets themselves: first the Ethernet and then the FC frames.
And Ethernet must adapt
So far the plan. What the developers do not yet have under control are the necessary improvements and extensions to the Ethernet standard itself. These in turn fall within the remit of the IEEE (Institute of Electrical and Electronics Engineers), whose task is to standardize the extensions and tie the whole package together. For example, Ethernet as a transport protocol currently has no mechanism for guaranteeing a minimum throughput. The Enhanced Transmission Selection (ETS, IEEE 802.1Qaz) now in progress is supposed to change that: it assigns data streams to different priority groups and assures each group a configurable minimum throughput.
Ethernet also lacks suitable flow control mechanisms. With Fibre Channel, the sender may only start transmitting once the receiver has signaled, via the buffer-to-buffer and end-to-end credits mentioned above, that it has enough capacity to receive and process the frames or sequences. In Ethernet, the transmitter simply sends away and, if need be, floods the receiver port. The result is so-called packet dropping: the receiver announces with a PAUSE frame that it has no resources left - and simply drops the packets, that is, throws them away. The transmitter has to be prepared to resend what was discarded, and the TCP above it, with its acknowledgment mechanism, makes sure that all packets really do arrive in the end. This interplay of pausing, dropping, and resending that the Ethernet standard allows works very well in communication networks. In SCSI traffic, however - iSCSI, for instance - it can already lead to nasty side effects, and for Fibre Channel traffic it is unacceptable. After all, FCoE strips down the oversized protocol stack of FCIP (Ethernet - IP - TCP - FC - SCSI) and puts FC in place of TCP/IP. But because FC's lower layers never allowed packet dropping, its upper layers never needed a connection-oriented session layer à la TCP. Moreover, packet dropping contradicts the throughput guarantee. New mechanisms such as Priority-based Flow Control (PFC, IEEE 802.1Qbb) and Congestion Notification (CN, IEEE 802.1Qau) are supposed to prevent at least the worst effects. The Link Layer Discovery Protocol (LLDP, IEEE 802.1AB-2005) and the Multiple Spanning Tree Protocol (MSTP, IEEE 802.1Q-2003) are also to be extended, which is currently making its way through the standardization bodies. That is why, in this context, one no longer speaks of Ethernet as we know it, but of Data Center Bridging, Data Center Ethernet, or Converged Enhanced Ethernet.
Changing of the guard
Bandulet thinks it would be wrong to speak of defects in Fibre Channel that are now leading to its replacement by FCoE. FC actually has no significant defects; only its transport is no longer state of the art, which mainly shows in the speed: at the moment most customers run 4-Gbit FC, some are planning the move to 8 Gbit/s - which will surely take one or two years until coverage is complete - and 16-Gbit FC is barely on the horizon. That does not mean FC networks will die out, but that - put simply - only the wiring changes. The interesting thing about FCoE is that in the FC SAN behind the DCB/FCoE switches, nothing changes for the time being from the point of view of administration, handling, and tools. Mechanisms such as zoning or LUN masking and mapping remain in place. DCB, on the other hand, raises plenty of questions: the responsibilities of network and storage administrators, for instance, are unclear, as are support and monitoring. And the topic of security is back on the agenda.