BladeCenter™

Transcript

BladeCenter™
Barcelona Supercomputing Center
26 September 2005
Albert Valls Badia
I & I solutions architect
Agenda
• Market trends
• BladeCenter
► Chassis
► Blades
• Connectivity options
► GbE/CPM switches
► FC switches
► OPM
• System management
• Power consumption
Market trends
Today's systems view
• Efficiency increase by consolidating servers
• High-density blades for infrastructure migration
• Performance and manageability increase
[Diagram: spectrum of form factors — tower, rack optimized, blades, big SMP systems.]
Rack-optimized and blade servers fulfill traditional and new customer requirements.
Blade market view
[Charts, 2002-2007 forecast: RISC and Intel server units (millions) and revenue (billions) split by tower, rack and blades; RISC and Intel blade units (thousands) and blade revenue (millions), with blades growing rapidly over the period.]
Source: IDC September 2003 Forecast
BladeCenter revenue share momentum
[Chart: revenue shares of 44% and 32%.]
IBM has held the #1 revenue/volume share position for the last 4 quarters.
Source: IDC and Systems and Technology Group Market Intelligence
IBM eServer architectures
[Diagram: spectrum from Scale Up / SMP computing (big SMP systems — i890 / p595 / z9, virtualization) to Scale Out / distributed computing (clustering, highly populated racks — BladeCenter™, x460, e326).]
BladeCenter: target environments
[Diagram: clients on the Internet, intranets, extranets and service providers reach edge servers, web presentation services, transaction and application servers, data servers, infrastructure servers and storage.]
• Edge Server workloads
► Edge-of-network apps
● DNS, caching, load balancing
► Web serving
● WebSphere, MS IIS, MS Content Server, BEA WebLogic, appliances
• Application Server workloads
► Collaboration
● Exchange, Domino, SendMail, Bynari
► Workgroup infrastructure
● File/print: Novell, MS, Samba
● Terminal serving: Citrix MetaFrame
● Small homegrown apps
● SteelEye LifeKeeper
► Advanced S&TC applications
● EDA: Cadence, Mentor
● Digital rendering farm
● Homegrown apps
• Application Server workloads II
► Commerce
► EAS front-end app servers
● SAP, PeopleSoft, Siebel, Kana
► Data marts
► Point industry apps
• Enterprise Server workloads
► Database
● DB2, SQL, Oracle
► Transaction
● EAS applications
BladeCenter key considerations:

• eServer BladeCenter technology is not only a new product:
- Cornerstone of the "scale-out" and distributed computing offering

• On Demand platform, based on open standards:
- Automation (IBM Director, PFA, EXA...)
- Virtualization (VMware, HPC Linux clusters...)
- Integration (Intel, PowerPC, Linux, Microsoft, Novell...)

• The core value-added offering is INTEGRATION:
- Servers, network, storage and applications
- Easy deployment with advanced systems management

[Graphic: IBM eServer innovation — autonomic (eLiza), open standards — around BladeCenter.]
Step 1: Consolidate servers
[Diagram: clients on the public Internet/intranet pass through firewalls, routers (Layer 3 switches), Layer 4-7 switches, SSL appliances, caching appliances and Layer 2 switches to reach network, security, application, web, file, security-gateway and WebSphere application servers; storage is attached through Fibre Channel switches to the SAN.]
Step 2: Integrate the first layer of the network (L2)
[Diagram: the Layer 2 switches move inside the BladeCenter chassis; firewalls, routers (Layer 3 switches), Layer 4-7 switches, SSL and caching appliances remain external, with storage still attached through Fibre Channel switches to the SAN.]
Step 3: Integrate the storage fabric
[Diagram: the Fibre Channel switches move inside the chassis, connecting directly to the SAN; firewalls, routers (Layer 3 switches), Layer 4-7 switches, SSL and caching appliances remain external.]
Step 4: Integrate the second layer of the network (L4-7)
[Diagram: the Layer 4-7 switching moves inside the chassis; firewalls, routers (Layer 3 switches), SSL and caching appliances remain external.]
Step 5: Consolidate applications
[Diagram: application workloads are consolidated onto blades; firewalls, routers (Layer 3 switches), SSL and caching appliances remain external, and the chassis connects to the SAN.]
Step 6: Result — consolidation collapses complexity
[Diagram: clients on the public Internet/intranet connect through firewalls and routers (Layer 3 switches) to BladeCenter, which connects directly to the SAN.]
Simplifying datacenter topology
Typical datacenter configuration:
1. Ten x86 1U 2-way servers
2. RISC-based 2-way server
3. HPQ 4-way server
4. Alteon L7 E'net switches
5. FC SAN switches / cables
plus the supporting infrastructure:
1. Layer 2 GbE switches
2. KVM switches
3. Ethernet cables
4. KVM cables
5. Power cables
Bladed datacenter configuration: IBM eServer BladeCenter
BladeCenter: where to use it
• Physical consolidation
• Rack/environment space-constrained customers
• Customers looking for fast and simple deployments
• High scalability customer requirements
• High availability and reliability customer requirements
• Remote management for the entire IT infrastructure
BladeCenter
What is a blade?
• A "server on a card" – each blade has its own:
► processor
► Ethernet
► memory
► optional storage
► etc.
[Photo: IBM blade in its own ruggedized chassis]
• The chassis (BladeCenter) provides shared:
► Redundant management module* (KVM: keyboard, video, mouse)
► Redundant power supplies*
► Redundant blowers*
► Redundant network and Fibre Channel switches, OPM*
► Physical* and virtual CD-ROM
► Physical* and virtual floppy
► USB port
► etc.
* Hot-swap devices
[Photos: IBM blade with its cover on, ready for insertion into the BladeCenter; IBM BladeCenter chassis – 7U]
IBM eServer BladeCenter
• Up to 4 processors per blade
• Up to 14 blades per chassis
• Up to six 7U chassis per rack
Full performance and manageability of rack-optimized platforms ... at TWICE the density of most comparable non-blade 1U servers.
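The "twice the density" claim is simple chassis arithmetic; a quick check, assuming a standard 42U rack:

\[
6\ \text{chassis} \times 14\ \tfrac{\text{blades}}{\text{chassis}} = 84\ \text{servers in } 6 \times 7\mathrm{U} = 42\mathrm{U},
\qquad \text{vs. } 42\ \text{1U servers in the same rack} \;\Rightarrow\; 2\times\ \text{the density.}
\]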
IBM eServer BladeCenter: chassis
• Gigabit Ethernet switches (Layer 2)
► Commodity-level networking
► Link aggregation
► VLAN creation and management
• Nortel Layer 2-7 switches
► Advanced networking
► Content-based routing
• Fibre Channel switches (2Gb FC fabric)
► Lower cost via integration
► Full support of FC-SW-2 standards
• Power (4 x 2000W, load balancing)
► Upgradeable as required
► Redundant and load balancing for high availability
• Calibrated, vectored cooling™
► Fully fault tolerant
► Allows maximum processor speeds
• KVM switches / management modules
► Full remote video redirection
► Out-of-band, lights-out systems management
[Photo, chassis rear: KVM switch / management module, redundant power, Nortel Layer 2-7 switch, Ethernet switch, redundant blower.]
Redundant midplane for server connectivity – a No Single Point Of Failure (NSPOF) design.
Sharing redundant power to blades
[Diagram: 1800/2000-watt power modules 1 and 2 feed blades 1 through 6; 1800/2000-watt power modules 3 and 4 feed blades 7 through 14; all connections run through a redundant middle plane.]
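As a compact restatement of the power-domain split above, here is a minimal sketch (my own illustration, not IBM tooling) that maps a blade bay to the redundant pair of power modules feeding it:

```python
def power_domain(blade_bay: int) -> tuple[int, int]:
    """Return the redundant pair of power modules feeding a blade bay (1-14)."""
    if not 1 <= blade_bay <= 14:
        raise ValueError("a BladeCenter chassis has blade bays 1-14")
    # Bays 1-6 sit on power domain 1 (modules 1 and 2);
    # bays 7-14 sit on power domain 2 (modules 3 and 4).
    return (1, 2) if blade_bay <= 6 else (3, 4)

if __name__ == "__main__":
    for bay in (1, 6, 7, 14):
        print(f"blade bay {bay}: fed by power modules {power_domain(bay)}")
```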
BladeCenter system cooling and airflow
• Two curved-impeller blowers
► Capable of 325 cubic feet per minute (CFM) each
► 250 CFM each in standard operation
► Hot swap, redundant
► Predictive blower failure by monitoring the blower RPM
► Back-flow dampers (louvres)
► Fan speed control
• Acoustic attenuation module
► Noise reduction for acoustically sensitive (or regulatory) environments
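The "predictive blower failure by monitoring the blower RPM" point boils down to comparing measured speed against commanded speed; a toy illustration of that idea (not IBM firmware, threshold chosen arbitrarily):

```python
def blower_degraded(measured_rpm: float, commanded_rpm: float, tolerance: float = 0.15) -> bool:
    """Flag a blower spinning more than `tolerance` (fraction) below its commanded speed."""
    return measured_rpm < commanded_rpm * (1.0 - tolerance)

print(blower_degraded(measured_rpm=2600, commanded_rpm=3200))  # True -> raise a PFA-style alert
print(blower_degraded(measured_rpm=3150, commanded_rpm=3200))  # False -> within tolerance
```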
BladeCenter Management Module (MM)
• Service processor
• Hot swap
• Interfaces via the midplane
► 10/100Mb Ethernet
► KVM (not supported on the JS20, which uses Serial over LAN)
► RS-485 interface
► I2C interfaces (serial-interconnect daisy-chain technology used for hardware-level functions)
• Optional redundant Management Module
Blade servers portfolio — one common chassis and infrastructure

HS20 2-way Xeon
• Features: Intel Xeon DP processors, EM64T, mainstream rack-dense blade server, supports Windows and Linux
• Target apps: edge and mid-tier workloads, web serving, collaboration, infrastructure

HS40 4-way Xeon
• Features: Intel Xeon MP processors, delivers bladed 4-way SMP capability
• Target apps: back-end workloads, large mid-tier apps, web serving

JS20 POWER-based
• Features: two PowerPC 970 processors, 64-bit performance at IA32 price, performance for VMX
• Target apps: deep computing clusters, 64-bit HPC
IBM BladeCenter HS20 — overhead view
[Diagram: SCSI expansion connector, processor slot 2 (air baffle / connector terminator), IDE connectors 1 and 2, upper and lower midplane connectors, daughter-card connectors, processor slot 1, DIMM sockets 1-4.]
HS20 blade – Intel Xeon DP processor
• Up to two Intel high-performance Xeon processors
► 2.8GHz, 3.06GHz, 3.2GHz-1M cache, all with 533MHz front-side bus
► 3.2GHz-2M cache (coming soon!)
• Integrated mirroring for local IDE drives (80GB total capacity)
• Support for an optional pair of local hot-swap SCSI drives with mirroring
• Support for one optional expansion card (FC, GbE, Myricom)
• Integrated pair of GbE connections
• Dedicated systems management connection
• OS support
► Windows 2000, 2003
► Red Hat and SUSE Linux
► VMware and Novell NetWare
BladeCenter HS20 – Intel Xeon processors
• IBM is the first blade vendor to ship an EM64T-enabled blade offering
► Intel Xeon 3.06GHz/1MB-L2, 533MHz FSB
► Based on the same proven Xeon HS20 architecture
• Substantial performance increase over Xeon 533MHz FSB
• Full complement of supported options
• Support for integrated networking and storage connectivity such as Cisco, Nortel, Brocade, etc.
• Dedicated systems management connection
• Concurrent Serial over LAN connectivity
• Complete list of supported OSs, including several 64-bit enabled systems
• Price parity with current 533MHz speed bins
[Photo: HS20 2-way Xeon blade]
Bottom line: HS20 with EM64T has the performance, compatibility, and pricing to make it production ready today and 64-bit enabled for tomorrow.
HS20 feature comparison

IBM eServer BladeCenter HS20 (533MHz)
• (8832) Dual Intel Xeon 2.8GHz / 3.06GHz / 3.2GHz-1MB / 3.2GHz-2MB with 533MHz front-side bus
• 14 blades per chassis (30mm blade width)
• 2 Gb Ethernet ports standard
• 4 DIMM slots
• Up to (2) 40GB IDE with IDE RAID 1 standard
• Internal switches (Enet/FC/KVM)
• Redundant/hot-swap fans standard
• Hot-swap power optional for bays 7-14
• Redundant/hot-swap mgmt optional
• Support for internal IDE and SCSI Storage Expansion Unit
• Support for IBM Director/RDM

IBM eServer BladeCenter HS20 (800MHz)
• Dual Intel Xeon EM64T 3.2GHz / 3.4GHz / 3.6GHz with 800MHz front-side bus
• 14 blades per chassis (30mm blade width)
• 2 Gb Ethernet ports standard
• 4 DIMM slots
• Up to (2) 73GB SFF SCSI with RAID 1 standard
• Internal switches (Enet/FC/KVM)
• Redundant/hot-swap fans standard
• Hot-swap power optional
• Redundant/hot-swap mgmt optional
• Support for the NEW SCSI Storage Expansion Unit
• Support for dual SCSI drives and expansion card
• Support for IBM Director/RDM
HS20 improvements – performance
• 800 MHz front-side bus (with Intel® EM64T)
► 1.5 times the system bus bandwidth compared to the 533MHz front-side bus
► Helps support faster web-site response times, more users, and greater business
• 64-bit CPU core extensions (EM64T)
► Improved throughput in targeted applications
► Full support for 64-bit OSs with legacy support for 32-bit and 16-bit
• DDR2 400 memory
► 20% increase in memory bandwidth over DDR333
► 40% reduction in the power required to run the memory
• PCI-Express expansion capability
Bottom line: operational enhancements to increase performance, efficiency and timing margins for high-performance computing.
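The quoted ratios follow from the bus and memory clocks, assuming the standard 64-bit channel width of these parts:

\[
\frac{800\ \mathrm{MHz}}{533\ \mathrm{MHz}} \approx 1.5,
\qquad
\frac{3.2\ \mathrm{GB/s}\ (\mathrm{DDR2\text{-}400,\ PC2\text{-}3200})}{2.7\ \mathrm{GB/s}\ (\mathrm{DDR\text{-}333,\ PC2700})} \approx 1.2 \;\Rightarrow\; \text{about 20\% more memory bandwidth.}
\]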
HS20 improvements – flexibility
• On-board SCSI HDDs replace IDE
► Two U320 small-form-factor non-hot-swap HDDs – 36 or 73GB
► Better performance, better reliability, and a choice of capacity
• Support for two HDDs plus a new SFF daughter card
► Improved I/O: no longer need to sacrifice an HDD to get Fibre or Ethernet connectivity
• SCSI RAID with the BSE-2 option
► RAID controller in the BSE delivers RAID 1 and RAID 1E
► Four additional I/O ports available when adding Ethernet expansion cards
► Two hot-swap U320 drives at capacities up to 144GB currently
• Smart power management
► The processor can adapt to changes in utilization, allowing reduced power consumption during non-peak hours
► Smarter power-management methods help customers reduce power infrastructure requirements
Bottom line: choice and flexibility to further the leadership position of IBM BladeCenter.
New HS20 options
• BladeCenter SCSI Expansion Unit 2
► New SCSI expansion for Xeon EM64T only
► Two hot-swap hard drives, increased RAID function, and more I/O capabilities
► Up to 8 ports per blade for network connectivity
• Set of processor options
► 2.8 through 3.6GHz 800MHz-FSB Xeon EM64T
• Two new SFF U320 HDDs
► 36GB and 73GB non-hot-swap offerings
• New 2000W power supply and DVD in the 3XX chassis already shipping
• Gigabit Ethernet Expansion Card
• Fibre Channel Expansion Card
Scale up meets scale out: HS40
• Uses the existing infrastructure
► Same chassis as HS20 and JS20
► Same options, 4 Gb Ethernet ports standard
► Seven 4-way systems in 7U
► Four with the local SCSI option
• Intel Xeon MP 2.8GHz and new "double cache" processors
• Application targets
► Back-end workloads (SAP, PeopleSoft, JD Edwards)
► Larger mid-tier applications (Exchange, Notes)
• Supports Microsoft Windows, Linux, and VMware
[Photo: HS40 4-way Xeon blade]
JS20 BladeCenter based on POWER4 — 64-bit POWER at a 32-bit price
• Two 2.2 GHz PowerPC 970 processors, derived from the POWER4 architecture
• VMX capabilities provide enhanced compute-intensive performance
• AIX 5L v5.2 supported today
• SUSE SLES 8 and SLES 9, Red Hat Enterprise Linux 3 U2 supported today
• IBM Director and Cluster Systems Management support
• Heterogeneous platforms integrated into a single chassis
• PowerPC 970 performance features
► 90-nanometer silicon-on-insulator
► 8-way superscalar design, issues up to 8 instructions per clock cycle
► Vector-processing unit with more than 160 specialized vector instructions
[Photo: JS20 POWER-based blade]
Feature comparison vs. the first model GAed

IBM eServer BladeCenter JS20 (first model)
• Dual IBM PowerPC 970 1.6 GHz processors with 800 MHz front-side bus
• SIMD VMX extensions providing exceptional performance for compute-intensive floating-point applications
• 14 blades per chassis (30mm blade width)
• 2 Gbps Ethernet ports standard
• 4 DIMM slots
• Up to (2) 40GB IDE with IDE RAID 1 standard
• Internal switches (Ethernet/Fibre Channel)
• Redundant/hot-swap fans standard
• Hot-swap power optional
• Redundant/hot-swap mgmt optional
• Support for IBM Director/CSM
• Support for Linux SLES 8

IBM eServer BladeCenter JS20 (since October 2004)
• Dual IBM PowerPC 970 2.2 GHz processors with 1.1 GHz front-side bus
• SIMD VMX extensions providing exceptional performance for compute-intensive floating-point applications
• 14 blades per chassis (30mm blade width)
• 2 Gbps Ethernet ports standard
• 4 DIMM slots
• Up to (2) 40GB IDE with IDE RAID 1 standard
• Internal switches (Ethernet/Fibre Channel)
• Redundant/hot-swap fans standard
• Hot-swap power optional
• Redundant/hot-swap mgmt optional
• Support for IBM Director/CSM
• Support for SLES 8, SLES 9
• Support for Red Hat Enterprise Linux AS 3
• Support for AIX 5L V5.2
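Both configurations are consistent with the PowerPC 970 front-side bus in these blades running at half the core clock:

\[
\frac{1.6\ \mathrm{GHz}}{2} = 800\ \mathrm{MHz},
\qquad
\frac{2.2\ \mathrm{GHz}}{2} = 1.1\ \mathrm{GHz}.
\]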
Connectivity options
Expanding BladeCenter ecosystem
• Wide range of companies convinced that the BladeCenter architecture will add value to their customers' solutions
• Industry-leading technology companies delivering innovative business solutions running on Windows, Linux, Novell
• More choices for customers

Expanding BladeCenter capabilities
• Software: IBM Director, RDM, Virtualization Engine, deployment/provisioning, partitioning/VMware, IBM Cluster Systems Management
• Blade solutions: 2-way PowerPC blade, 2-way Xeon blade, EM64T blade, 4-way Xeon MP blade, Telco chassis, hosted clients
• Networking: L2 Ethernet switch, L4/7 Ethernet switch, cluster switch, InfiniBand switch, Cisco switch
• Storage: local IDE & SCSI, iSCSI, Fibre Channel HBA, NAS, FC switch, Brocade fibre switch
Fibre Channel Expansion Cards
• High-performance host bus adapter supporting both 1 and 2 Gbps devices
• Provides TWO 2Gbps Fibre Channel port connections for the HS20, HS40 and JS20
• "Boot from SAN" support in a variety of storage environments
• Extensive certification from major storage and SAN manufacturers
• Equivalent function to the QLA2342; uses ISP2312 chip technology
Low-cost design for high-density BladeCenter servers
Myrinet Cluster Expansion Card overview
• Announce & general availability: 9/9/2003
• Myrinet is a high-speed/low-latency cluster interconnect for High Performance Computing (HPC) applications. It provides a high-performance system-to-system connection for distributing computing over several blades.
• The Myrinet Cluster Expansion card on IBM eServer BladeCenter provides the same functions and performance, and uses the same software, as the standard Myrinet PCI-X card on highly successful server clusters using xSeries rack-optimized servers.
• Myrinet is an ANSI standard, with open-source software, and is the market leader in high-performance, high-availability cluster interconnects. Approximately one third of the TOP500™ supercomputer sites use Myrinet technology.
• Single-port card
► The BladeCenter Myrinet HCA is designed to fit the BladeCenter form factor. It is analogous to the 2MB version of the PCI-XD interface
► High performance by distributing demanding computations across an array of cost-effective servers
► High availability by allowing a computation to proceed with a subset of the hosts. The interconnect is capable of detecting and isolating faults and using alternative communication paths
► http://www.myricom.com/
Myricom enables high-speed network connectivity
• High-speed/low-latency cluster interconnect for HPC applications
• Provides a high-performance system-to-system connection for distributing computing over several blades
► Single-port card
► BladeCenter Myrinet HCA designed to fit the BladeCenter form factor
► High performance by distributing demanding computations across an array of cost-effective servers
► High availability by allowing a computation to proceed with a subset of the hosts
► Interconnect capable of detecting and isolating faults and using alternative communication paths
High-performance connectivity for high-performance clusters
BladeCenter copper/Ethernet switch portfolio
• NEW — IBM eServer BladeCenter 4-port Gb Ethernet Switch Module
► Supplier: D-Link
► Layer 2 switching
► Trunking and link aggregation
• NEW — IBM eServer BladeCenter Copper Pass-thru Module
► Supplier: IBM
► GbE pass-thru
► No switching function
• Cisco Systems® Intelligent Gb Ethernet Switch Module
► Supplier: Cisco
► Layer 2 switching
► Layer 3/4 services
• Nortel Networks® L2-7 GbE Switch Module
► Supplier: Nortel
► Layer 2-7 functionality
► Layer 3/4 services
► Load balancing
► Routing / switching
► Advanced filtering
► Content intelligence
A full suite of integrated offerings to provide additional flexibility and choice!
IBM eServer BladeCenter Copper Pass-thru
[Photos: Copper Pass-thru Module and Copper Pass-thru Module cable.]
BladeCenter Copper Pass-thru Module
[Diagram: server bay – ports – cable assignments.]
Expanding BladeCenter networking hardware: Cisco Systems Intelligent Gigabit Ethernet Switch Module
• Integrates Cisco networking technology into BladeCenter
• Helps reduce datacenter complexity and networking complexity
• Comprehensive set of Layer 2 features with Layer 3/4 services
► Multicast – IGMP snooping
► QoS features
• Supports IOS (Cisco Internetworking Operating System)
• Reduces deployment and configuration time
• Only blade solution in the industry with embedded Cisco switching
Cisco Systems IGESM description
• Equivalent software feature set to the Cisco Systems® Catalyst 2950, providing Layer 2+ functionality
• High availability: enhanced Spanning Tree Protocol, IGMP snooping
• Enhanced security: 802.1x, port security, MAC address notification, RADIUS/TACACS+
• Advanced QoS: 802.1p, WRR, strict priority queuing
• Interfaces
► Wire-speed switching
► 4 x 1Gb external Ethernet (copper) interfaces
► 14 x 1Gb internal interfaces to the blades
• Management / monitoring
► Cisco IOS command-line interface
► Cluster Management Suite
► SNMP – Management Information Base (MIB) based applications such as CiscoWorks
► Management and power through the Management Module
► Console port on the faceplate
• Enhanced default configuration
► Multiple VLANs configured as default at power-up
Nortel Networks Layer 2-7 GbE Switch Module
• Availability
► Reduce unplanned application downtime in the event of a switch module, server blade, or chassis failure
► Reduce the need for planned application downtime
• Performance
► Enable on demand computing
► Better serve the processing demands of bandwidth-intensive applications
► Enhance application performance
• Manageability
► Reduce the time/effort required to deploy new datacenter infrastructure
► Simplify datacenter administration
• Greater infrastructure scalability
• Enhanced server security
• Integrating the L2-7 switch into the blade chassis reduces datacenter infrastructure TCO by as much as 65%
[Diagram: Layer 2/3, Layer 4 and Layer 5-7 functions combined in one module.]
BladeCenter optical module / SAN switch portfolio
• IBM eServer BladeCenter Optical Pass-thru Module
► Supplier: IBM
► Provides an unswitched / unblocked optical connection
► Up to 14 optical connections to an external SAN (requires the breakout cable option)
• NEW — QLogic™ Enterprise 6-port Fibre Channel Switch Module
► Supplier: QLogic
► Equivalent to the SANbox 5200
► 6 x 1/2Gb auto-sensing external ports
► Cascades to (239) switches
► FC-SW-2 compliant
• Brocade® Entry SAN Switch Module
► Supplier: Brocade
► Equivalent to the SilkWorm 3900
► 2 x 1/2Gb auto-sensing external ports
► Cascades to (2) switches
► Supports Brocade Advanced Feature Key options
• Brocade® Enterprise SAN Switch Module
► Supplier: Brocade
► Equivalent to the SilkWorm 3900
► 2 x 1/2Gb auto-sensing external ports
► Cascades to (239) switches
► Supports Brocade Advanced Feature Key options
► Supports performance monitoring and advanced zoning
A full suite of integrated offerings to provide additional flexibility and choice.
Fibre Channel Switch Module
• The Fibre Channel Switch Module supports the following:
► Two small form factor pluggable (SFP) ports
► Long-wave option
► Short-wave option
► Self-configuring according to the type of device attached
• Full interoperability with the Brocade fabric
[Photo: SFP transceiver (not included)]
Brocade® Switch Modules
• Deliver datacenter standards for Brocade customers
• Seamless connectivity to over 3.5M existing Brocade switch ports
• Compatibility with Brocade Fabric OS features
► Trunking, Advanced Performance Monitoring, Advanced Security, Zoning, Extended Fabric and Remote Switch
• Simplifies SAN management
► IBM Director integration – Q304
► Tivoli SAN Manager
► Fabric Manager and WebTools are also supported
• Modular scalability
► Available as a 2-domain or full-fabric switch module
• Flexible deployment
► High availability and ease of serviceability
A fabric switch delivering Brocade functions – performance, manageability, scalability and security – to support demanding SANs.
Optical Pass-thru Module
• Provides direct connectivity between server blades and external devices
► Ethernet network devices
► Fibre Channel network devices
► Myrinet cluster switches
TIP: OPM cables do not come with the OPM and must be ordered separately. The 4 SC or LC duplex optical connectors are keyed to the processor blade bays via the ports on the OPM.
Optical Pass-thru Module overview
• Announce: 9/9/2003; General availability: 9/9/2003
• The Optical Pass-thru Module (OPM) inserts into any switch module bay and provides connectivity to each blade bay. The OPM provides an unswitched/unblocked network connection to each blade server.
• The BladeCenter Optical Pass-thru Module can be used in conjunction with the Myrinet® Cluster Expansion card to deliver a high-performance, high-availability interconnect for High Performance Technical Computing and other cluster-computing applications.
• The BladeCenter Optical Pass-thru Module can be used in conjunction with the Fibre Channel Expansion card to allow an alternative connection to Storage Area Networks.
► The Optical Pass-thru Module can be an alternative to the IBM eServer BladeCenter 2-port Fibre Channel Switch Module. As with the integrated Fibre Channel switch module, the pass-thru module provides connectivity to the IBM TotalStorage™ family of products, including FAStT, Enterprise Storage Server, SAN switches and tape storage.
• The OPM includes (4) optical transceivers and (0) cables
► 02R9080 - IBM eServer BladeCenter Optical Pass-thru Module
► 73P5992 - IBM eServer BladeCenter Optical Pass-thru Module SC Cable
► 73P6033 - IBM eServer BladeCenter Optical Pass-thru Module LC Cable
OPM functions
• The OPM provides the ability to transmit and receive network data traffic between all (14) blade bays and the networking environments below
► Gb Ethernet
● To enable Ethernet, the OPM can be inserted into switch module bays 1, 2, 3 and 4
● If the OPM is inserted into switch module bays 1 or 2, the OPM interfaces with the integrated dual Gb Ethernet controllers on the blade server
● In order for the OPM to function in bays 3 and 4, the Gb Ethernet Expansion Card is required on the blade server
► Fibre Channel
● To enable Fibre Channel, the OPM can be inserted into switch module bays 3 and 4
● In order for the OPM to function in bays 3 and 4, the Fibre Channel Expansion Card is required on the blade server
► Myrinet
● The Myrinet® Cluster Expansion Card is a SINGLE-port card and is hardwired to switch module bay 4
● To enable Myrinet, the OPM must be inserted into switch module bay 4
● The Myrinet Cluster Expansion Card is required on the blade server
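The bay rules above can be summarized mechanically; the sketch below is illustrative only (my own encoding, not IBM software) and simply restates which bays carry which fabric and which expansion card each blade needs:

```python
RULES = {
    "ethernet": {"bays": {1, 2, 3, 4}, "card_needed_in_bays": {3, 4},
                 "card": "Gb Ethernet Expansion Card"},
    "fibre_channel": {"bays": {3, 4}, "card_needed_in_bays": {3, 4},
                      "card": "Fibre Channel Expansion Card"},
    "myrinet": {"bays": {4}, "card_needed_in_bays": {4},
                "card": "Myrinet Cluster Expansion Card"},
}

def opm_requirements(fabric: str, bay: int) -> str:
    """Explain whether an OPM in `bay` can carry `fabric`, and what the blade needs."""
    rule = RULES[fabric]
    if bay not in rule["bays"]:
        return f"an OPM in bay {bay} cannot carry {fabric}"
    if bay in rule["card_needed_in_bays"]:
        return f"OK, but each blade needs a {rule['card']}"
    return "OK, served by the blade's integrated dual Gb Ethernet controllers"

print(opm_requirements("fibre_channel", 3))  # needs the FC expansion card
print(opm_requirements("ethernet", 1))       # uses the on-board controllers
print(opm_requirements("myrinet", 2))        # not possible
```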
Switch module bay configurations supported
[BladeCenter chassis rear view: switch module bays 1-4, power supplies 1-4, blowers 1-2, management modules 1-2.]
• SM bays 1 & 2: L2 Gb Ethernet (IBM ESM), L2-7 Gb Ethernet (Nortel ESM), or OPM
• SM bays 3 & 4: L2-7 Gb Ethernet (Nortel ESM), Fibre Channel (IBM FCSM), or OPM
NOTE:
• Any combination of the modules listed under SM bays 1 & 2 can be inserted into bays 1 & 2
• The switch module in bay 4 must match the switch module in bay 3, and the corresponding I/O expansion card is required on the blade server to enable bays 3 and 4
High-level schematic of the standard OPM configuration
[Schematic: Molex module connector (to the BladeCenter midplane), Vitesse VSC3139, programmable clock, serial PROM, 8055 power regulator, and 4 standard transceivers with connections out to external devices (cables purchased separately).]
No cables are included in the base OPM option; fan-out cables must be ordered separately.
Fanout cable
[Photo: fanout cable with connections A, B, C and D.]
• The OPM supports up to (4) fanout cables (one shown above)
• Each cable breakout is labeled to identify the transceiver port
• One end of the fanout cable plugs into the transceiver; the other end fans out to four cables
• Depending on which cable is purchased, the connection at A, B, C, or D will be SC or LC
• Total cable length: 1.5 meters
Blade bay – transceiver configurations
• Each OPM transceiver and port has a dedicated connection to a blade bay

Blade bay | Transceiver | Transceiver port
1  | 1 | A
2  | 1 | B
3  | 1 | C
4  | 1 | D
5  | 2 | A
6  | 2 | B
7  | 2 | C
8  | 2 | D
9  | 3 | A
10 | 3 | B
11 | 3 | C
12 | 3 | D
13 | 4 | A
14 | 4 | B

[OPM rear view: transceivers 1-4. BladeCenter chassis front view: LED panel, blade bays 1-14, USB, CD-ROM and FDD.]
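The table is a straight base-4 enumeration of the blade bays; a throwaway helper (illustrative only) makes the pattern explicit and reproduces the rows above:

```python
def opm_port_for_bay(blade_bay: int) -> tuple[int, str]:
    """Return (transceiver number, port letter) wired to a given blade bay (1-14)."""
    if not 1 <= blade_bay <= 14:
        raise ValueError("blade bays are numbered 1-14")
    index = blade_bay - 1
    transceiver = index // 4 + 1   # four ports (A-D) per transceiver
    port = "ABCD"[index % 4]
    return transceiver, port

assert opm_port_for_bay(1) == (1, "A")
assert opm_port_for_bay(7) == (2, "C")
assert opm_port_for_bay(14) == (4, "B")
```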
OPM offers additional configuration flexibility
The OPM allows the chassis to support multiple expansion cards, given the direct link from each blade bay to an OPM transceiver port.
• Example 1: Blade #3 has a Fibre Channel connection on card port 1 to OPM 1 (SM bay 3), transceiver 1, port C, out to the SAN fabric
• Example 2: Blade #6 has a Gb Ethernet connection on card port 1 to OPM 1 (SM bay 3), transceiver 2, port B, out to the Ethernet network
• Example 3: Blade #10 has a Gb Ethernet connection on card port 2 to OPM 2 (SM bay 4), transceiver 3, port B, out to the Ethernet network
Management
BLADECENTER: MANAGEMENT CAPABILITIES
Native (HW) management, based on a web browser
- Total control via the Management Module
- Full management of Ethernet and fibre switches via the Management Module
- Remote firmware updates
IBM Director 4.12
- Simplified intelligent management solutions
- Optimization
- Deployment wizards
- Mass-configuration tools
- Event-driven task scheduling
Remote Deployment Manager
- Deployment tools
- Fast image restore and deployment
- Integration with IBM Director 4.12 (standby blade)
- Secure server retirement/decommissioning (data)
[Diagram: RDM image library — image capture from a preconfigured "donor" system, deployment to remote targets.]
HW-BASED MANAGEMENT
IBM DIRECTOR: BLADE-SPECIFIC TOOLS
Automatic discovery
- Uses SLP (Service Location Protocol) to automatically identify new chassis and blades
New inventory data
- Chassis vital data
- Occupied slots
- Chassis Management Module out-of-band IP address
New events
- Environmental alerts for temperatures, fan speeds, etc.
- Alerts for configuration changes such as blade insertion and removal
- Automatically enables a standby blade via RDM
New managed objects for chassis and blades
Two new dynamic groups
- BladeCenter chassis
- BladeCenter chassis and chassis members
BLADECENTER MANAGEMENT IN IBM DIRECTOR 4.12
Rack Manager
- Graphical interface including support for chassis and blades
- Add / remove chassis and blades from the rack
- System "health" status for chassis and blades
Management Module wizard
- Support for the BladeCenter Management Module and switch configuration
Other IBM Director extensions (Capacity Manager, Software Rejuvenation, System Availability)
- Work on individual blades
[Screenshot: drag-and-drop tasks can be run against an individual blade or against the BladeCenter group; the BladeCenter group contains blades running Notes for each workgroup.]
[Screenshot: detailed view of the selected blade; the BladeCenter with its installed blades highlighted; graphical view of the BladeCenter installed in the rack.]
Systems management overview
[Diagram: the management stack — systems management integrated in the server (service processor) / Management Module and Remote Supervisor Adapter (RSA II); IBM Director as the base environment; extensions such as Remote Deployment Manager, Server Plus Pack, Workload Management and Software Distribution; upward integration into Tivoli Enterprise.]
IBM Director Portfolio
• IBM Director v4.1 – basic hardware management, 5000 managed nodes
► Inventory, monitoring, alerting, group management, RAID Manager, management processor
► Upward integration (Tivoli, CA, HP, MS SMS, BMC, NetIQ)
► IBM Director Agent with help desk & support: remote control, remote session, file transfer, real-time diagnostics
• Server Plus Pack – advanced, predictive tools with self-managing technologies that deliver optimal server performance and high availability
► Capacity Manager, Software Rejuvenation, Rack Manager, System Availability, Active PCI Manager
• Remote Deployment Manager (RDM) – save time and money by remotely replicating the install of multiple systems, including blades
► Remote, unattended system deployment
► Updates system and option firmware
► No limitation on the number of system installs
► Restores system hard drives with PowerRestore
• Software Distribution Premium Edition (SWD) – easily distribute application packages remotely from a single console, saving travel and labor costs
► Can package and distribute software targeted to an end user or group of users
• Application Workload Manager (AWM) – higher server utilization by protecting the availability and performance of workloads on that server
► Allows multiple applications to share a server efficiently and reliably
► Manages resource contention
IBM Director topology
• IBM Director Server: application logic and database (Windows or Linux)
• Management console(s): Java GUI (Windows or Linux)
• Managed clients (servers, desktops, laptops) running the IBM Director Agent
• Basic hardware management: inventory, monitoring, alerting, group management, RAID Manager, management processor
• IBM Director v4.1: 5000 managed nodes; upward integration (Tivoli, CA, HP, MS SMS, BMC, NetIQ)
• Help desk & support: remote control, remote session, file transfer, real-time diagnostics
Centralized management console
► Intuitive management interface
► "System health" indicators identify systems with problems
► Tasks can be run against groups of systems with a simple drag-and-drop of the mouse
Server Plus Pack for IBM Director v4.1
• Capacity Manager
► Monitors hardware resource utilization (processor, memory, HDD, and network traffic); can detect and predict bottlenecks
• Rack Manager
• System Availability
► Provides information about the availability and downtime of a system or group of systems
• Software Rejuvenation
► Predicts possible exhaustion of OS resources
• Active PCI Manager
► Optimizes server performance by helping to design the optimal placement of PCI adapters
Server Plus Pack – Capacity Manager
• Diagnoses performance problems and analyzes possible bottlenecks
► Displays performance information graphically, in an easy-to-analyze format
► Graphs and data can be exported for use in other reporting tools
Remote Deployment Manager
- Server microcode updates / CMOS parameter updates / operating system installation
- Fast image deployment
- Secure data erasure
- Power Restore: hidden partition for backup
- Automated configuration of RAID controllers
Software Distribution
• IBM Director extension for remotely distributing software packages
• Can distribute software packages to users or groups of users
• Microsoft Windows Installer
• InstallShield
• Linux Red Hat Package Manager
Application Workload Manager
• IBM Director extension to control the use of server resources by multiple applications
• Protects against the exhaustion of resources by a single application
• Keeps a record of the total resources consumed by processes during their execution
• Enables better use of server resources by several applications

Power consumption
Contents
• Power trends
► Going to get more difficult before it gets better
• Where's the power used in a server?
► You might be surprised
• BladeCenter was designed to reduce power and heat
► BladeCenter is more than a 1U turned on its side
• Airflow requirements
The industry problem
• Moore's law continues – greater performance = more power
• Customers need help to make new kit fit into old data centers
• Processor power is expected to rise over coming generations until the arrival of smarter dual-core solutions in late 2006
[Chart, 2001-2010: CPU power (watts) and rack power (thousands of watts) both climbing steeply, with "this is where we are now" marked around 2005.]
Trends
• More powerful processors – Moore's law at work
• More power being used by HDDs
• More power being used by memory
► More DIMMs
► Future fully buffered DIMMs get even worse
• More power in = more heat out
• More air required to cool the servers
• 2006
► 1U servers are likely to have 700W power supplies
► 2U servers closer to 800W power supplies
► A BladeCenter chassis in 2006 can draw close to 7kW per chassis
What limits density in the data center?
• Traditionally, real estate or "U space" was the limiting factor
• Today it is far more likely that other factors drive down your density
• Power input
► UPS not large enough
► PDUs not capable of handling the load
► Municipal power issues
• Thermal limits
► Old data center
► Not enough cooling capacity
► Limited airflow capabilities of the room
• Weight limits of older floors
► Not designed to handle the 1500-2000 lbs of today's 1U/2U/blade racks
[Callout: 25% less power, 25% less heat, 33% less than 1U.]
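The three constraints above lend themselves to a simple capacity check. The sketch below is my own illustration (not an IBM sizing tool), and the limits and weights in the example call are hypothetical placeholders:

```python
from dataclasses import dataclass

@dataclass
class RoomLimits:
    max_power_w: float    # what the UPS/PDUs can feed this rack position
    max_cooling_w: float  # heat the room can remove for this rack position
    max_floor_lbs: float  # floor-loading limit for this rack position

def rack_violations(chassis_count: int, watts_per_chassis: float,
                    lbs_per_chassis: float, empty_rack_lbs: float,
                    limits: RoomLimits) -> list[str]:
    """Return the constraints a planned rack would violate (empty list = it fits)."""
    power = chassis_count * watts_per_chassis
    heat = power  # essentially all electrical input ends up as heat to remove
    weight = empty_rack_lbs + chassis_count * lbs_per_chassis
    problems = []
    if power > limits.max_power_w:
        problems.append("power input")
    if heat > limits.max_cooling_w:
        problems.append("cooling capacity")
    if weight > limits.max_floor_lbs:
        problems.append("floor loading")
    return problems

# Hypothetical example: six fully loaded chassis against a modest older room.
print(rack_violations(6, 4920, 250, 300, RoomLimits(25_000, 22_000, 2_000)))
```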
What's using the power?
• Processor power growth is the largest single contributor, but there are many other areas – the more you pack into a server, the more power it needs!
• "Other" covers AC-to-DC transitions, DC-to-DC deliveries, and fans and air movement – BladeCenter helps in this area
• Low-voltage processors help on the processor side
[Pie charts: power breakdown by processor (30-46%), other (25-44%), memory (11-13%), HDD (6-7%), planar (4-5%), PCI (3-4%) and standby (2%).]
What IBM can effect . . . we have:
• Power delivery
► Super energy-efficient power supplies deliver more power to the server – less wasted watts in the AC-to-DC transition
• Less parts
► Smarter shared-infrastructure design means fewer components that draw power – less hardware means fewer watts
• Smarter thermal solution
► Reduces the number of fans from 112 down to just 2 low-power blowers
• Low-voltage processor
► Full-performance 2.8GHz Xeon processor at substantial power savings over the standard Xeon
Chassis comparison (2.8GHz processors, 4GB memory, two 36GB U320 SCSI, two-port Fibre HBA, dual Ethernet – same performance):
► 2.8GHz full-power Nocona: 4920 watts per chassis
► 2.8GHz low-voltage Nocona: 3380 watts per chassis
How does that compare per blade?
► HS20 2.8GHz full-power Xeon EM64T: 351W
► HS20 2.8GHz low-voltage EM64T: 241W
► HS20 2.8GHz Xeon 400MHz: 266W
► HP blade 3.2GHz Xeon EM64T*: 469W – nearly half the power for the low-voltage HS20
*HP entry Xeon EM64T is 3.2GHz. Intel guidance is that all Xeon EM64T draw 103W per processor.
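A rough cross-check of the per-chassis figures against the per-blade figures (the shared chassis overhead is folded into the per-blade numbers on the slide):

\[
14 \times 351\ \mathrm{W} \approx 4914\ \mathrm{W} \approx 4920\ \mathrm{W/chassis},
\qquad
14 \times 241\ \mathrm{W} \approx 3374\ \mathrm{W} \approx 3380\ \mathrm{W/chassis},
\]
\[
1 - \frac{3380}{4920} \approx 0.31 \;\Rightarrow\; \text{roughly 31\% less power per fully populated chassis with the low-voltage blades.}
\]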
Airflow requirements
• BladeCenter blowers are completely redundant and hot-swappable
► Note: these two blowers take the place of the 112 individual fans found in a comparable 14-server 1U installation (8 fans per server)
• The MM controls the blowers; they respond to external temperature changes
• Inlet temperature is critical – temperatures at the back of the rack are less critical
• At 25°C (77°F) the airflow required by BladeCenter is 250 CFM (cubic feet per minute)
• As temperatures rise to 32°C the fans increase in speed to a maximum of 450 CFM; the blowers also go to maximum in the event of a management module failure
• The increase is linear with the temperature increase
• For a 3-BladeCenter installation under normal data center conditions we need 750 CFM of airflow
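A small sketch (my own helper, not IBM tooling) of the linear blower ramp described above, plus the aggregate draw for a row of chassis:

```python
def chassis_cfm(inlet_temp_c: float) -> float:
    """Airflow (CFM) one BladeCenter chassis demands: 250 CFM at or below 25 C,
    rising linearly to 450 CFM at 32 C and above (per the figures above)."""
    if inlet_temp_c <= 25.0:
        return 250.0
    if inlet_temp_c >= 32.0:
        return 450.0
    return 250.0 + (inlet_temp_c - 25.0) * (450.0 - 250.0) / (32.0 - 25.0)

def room_cfm(chassis_count: int, inlet_temp_c: float) -> float:
    """Total airflow a group of chassis will pull at a given inlet temperature."""
    return chassis_count * chassis_cfm(inlet_temp_c)

print(room_cfm(3, 24.0))  # 750.0 CFM -- the 3-chassis figure quoted above
print(room_cfm(3, 32.0))  # 1350.0 CFM at the top of the ramp
```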
Airflow comparisons
• BladeCenter: 265 CFM per chassis at <25°C, 28 processors → 9.5 CFM per processor
• x336: 37 CFM per server at <25°C, 2 processors → 18 CFM per processor
• x346: 52 CFM per server at <25°C, 2 processors → 26 CFM per processor
BladeCenter delivers IT integration
[Diagram: IBM BladeCenter at the center of servers, storage, networking and applications — integrated, open, virtualized, autonomic.]
Integration can help dramatically reduce infrastructure costs.
Thank you