
PowerEdge M1000e Administration and Configuration

A blade server is a server chassis housing multiple thin, modular electronic circuit boards, known as server blades. Each blade is a server in its own right, often dedicated to a single application. The blades are literally servers on a card, containing processors, memory, integrated network controllers, an optional Fibre Channel host bus adapter (HBA), and other input/output (I/O) ports.

Blade servers allow more processing power in less rack space, simplifying cabling and reducing power consumption. The PowerEdge M1000e solution is designed for the following markets:

    Corporate

    Public

    Small/Medium Business (SMB)

    Customers that require high performance, high availability, and manageability in the most rack-dense form factor.

Blade servers are implemented in:

    Virtualization environments

SAN applications (Exchange, database)

High-performance cluster and grid environments

    Front-end applications (web apps/Citrix/terminal services)

    File sharing access

    Web page serving and caching

SSL encryption of Web communication

    Audio and video streaming

Like most clustering applications, blade servers can also be managed to include load balancing and failover capabilities.


The PowerEdge M1000e Modular Server Enclosure solution is a fully modular blade enclosure optimized for use with all Dell M-series blades. The PowerEdge M1000e supports server modules; network, storage, and cluster interconnect modules (switches and pass-through modules); a high-performance, highly available passive midplane that connects server modules to the infrastructure components; power supplies; fans; an integrated KVM (iKVM); and Chassis Management Controllers (CMC). The PowerEdge M1000e uses redundant and hot-pluggable components throughout to provide maximum uptime. These technologies are packed into a highly available, rack-dense package that integrates into standard Dell or third-party 1000 mm depth racks.

Dell provides complete, scale-on-demand switch designs. With additional I/O slots and switch options, you have the flexibility you need to meet increasing demands for I/O consumption. Plus, Dell's FlexIO modular switch technology lets you easily scale to provide additional uplink and stacking functionality, with no need to waste your current investment on a rip-and-replace upgrade. Flexibility and scalability help minimize TCO.

The M1000e is best for environments needing to consolidate computing resources to maximize efficiency: ultra-dense servers that are easy to deploy and manage while minimizing energy and cooling consumption.

Note: The PowerEdge M1000e chassis was created as a replacement for the 1855/1955 chassis. The existing 8/9G blades will not fit or run in the 10G chassis, nor will the 10G blades fit or run in the 8/9G chassis.


IT IS ALL ABOUT EFFICIENCY. Built from the ground up, the M1000e delivers one of the most energy-efficient, flexible, and manageable blade server products on the market. The M1000e is designed to support future generations of blade technologies regardless of processor/chipset architecture. Built on Dell's Energy Smart technology, the M1000e can help customers increase capacity, lower operating costs, and deliver better performance per watt. The key areas of interest are power delivery and power management.

Shared power takes advantage of the large number of resources in the modular server, distributing power across the system without the excess margin required in dedicated rack-mount servers and switches. The PowerEdge M1000e introduces an advanced power budgeting feature, controlled by the CMC and negotiated in conjunction with the Integrated Dell Remote Access Controller (iDRAC) on every server module. Before any server module powers up, its iDRAC performs a power budget inventory for the server module, based upon its configuration of CPUs, memory, I/O, and local storage. Once this number is generated, the iDRAC communicates the power budget to the CMC, which confirms the availability of power at the system level, based upon a total chassis power inventory including power supplies, iKVM, I/O modules, fans, and server modules. Since the CMC controls when every modular system element powers on, it can set power policies at the system level. In coordination with the CMC, iDRAC hardware constantly monitors actual power consumption at each server module. This power measurement is used locally by the server module to ensure that its instantaneous power consumption never exceeds the budgeted amount. While the system administrator may never notice these features in action, they enable more aggressive utilization of the shared system power resources. The system is no longer flying blind with regard to power consumption, and there is no danger of exceeding available power capacity, which without these features could result in spontaneous activation of the power supplies' overcurrent protection.
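A minimal Python sketch of this budgeting handshake follows. The class names, wattage figures, and reservation rule are illustrative assumptions, not Dell firmware interfaces.

    class BladeIDRAC:
        """Per-blade controller: inventories the blade and reports a power budget."""

        # rough per-component wattage figures, purely illustrative
        WATTS = {"cpu": 95, "dimm": 8, "mezz": 15, "disk": 10}

        def __init__(self, slot, cpus, dimms, mezz_cards, disks):
            self.slot = slot
            self.config = {"cpu": cpus, "dimm": dimms, "mezz": mezz_cards, "disk": disks}

        def power_budget(self):
            # budget inventory based on the blade's CPU/memory/I/O/storage config
            base_watts = 50  # planar and miscellaneous draw (assumed)
            return base_watts + sum(self.WATTS[k] * n for k, n in self.config.items())

    class CMC:
        """Chassis controller: grants power-on only if the chassis inventory allows."""

        def __init__(self, capacity_watts):
            self.capacity = capacity_watts  # PSU capacity minus fans, iKVM, IOMs, etc.
            self.allocated = 0

        def request_power_on(self, blade):
            need = blade.power_budget()          # iDRAC reports its budget
            if self.allocated + need > self.capacity:
                return False                     # deny: chassis budget exceeded
            self.allocated += need               # reserve the budget before power-on
            return True

    cmc = CMC(capacity_watts=6000)
    blade = BladeIDRAC(slot=1, cpus=2, dimms=8, mezz_cards=2, disks=2)
    print(cmc.request_power_on(blade), cmc.allocated)  # True 354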

The cooling strategy for the PowerEdge M1000e supports a low-impedance, high-efficiency design. Lower airflow impedance allows the system to draw air through the system at a lower operating pressure than competitive systems. The lower backpressure reduces the fan power consumed to meet the airflow requirements of the system, which translates directly into power and cost savings. This high-efficiency design philosophy also extends into the layout of the subsystems within the PowerEdge M1000e. The server modules, I/O modules, and power supplies are incorporated into the system with independent airflow paths. This isolates these components from preheated air, reducing the required airflow consumption of each module. This hardware design is coupled with a thermal cooling algorithm that incorporates the following:

Server module thermal monitoring by the iDRAC

I/O module thermal health monitors

CMC monitoring and fan control (throttling)


Dell's PowerEdge M1000e modular server enclosure delivers major enhancements in management features. Each subsystem has been reviewed and adjusted to optimize efficiencies, minimizing the impact on existing management tools and processes and providing future growth opportunities toward standards-based management. The M1000e helps reduce the cost and complexity of managing computing resources so you can focus on growing your business or managing your organization, with features such as:

Centralized CMC modules providing redundant, secure access paths for IT administrators to manage multiple enclosures and blades from a single interface.

One of the only blade solutions with an integrated KVM switch, enabling easy setup and deployment and seamless integration into an existing KVM infrastructure.

Dynamic and granular power management, giving you the capability to set power thresholds to help ensure your blades operate within your specific power envelope.

Real-time reporting of enclosure and blade power consumption, as well as the ability to prioritize blade slots for power, providing you with optimal control over power resources.

Power is no longer just about power delivery; it is also about power management. Dynamic power management provides the capability to set high/low power thresholds to ensure blades operate within your power envelope. The PowerEdge M1000e system adds a number of advanced power management features that operate transparently to the user, while others require only a one-time selection of the desired operating modes. The system administrator sets priorities for each server module. The priority works in conjunction with CMC power budgeting and iDRAC power monitoring to ensure that the lowest-priority blades are the first to enter any power optimization mode, should conditions warrant the activation of this feature.
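The sketch below illustrates that priority rule under assumed inputs; the priority scale (1 = highest) and the wattage figures are hypothetical.

    def blades_to_throttle(blades, watts_to_shed):
        """Return blades to place in a power-optimization mode, lowest priority
        first, until the requested wattage reduction is covered."""
        selected, shed = [], 0
        # sort so the numerically largest (i.e., lowest) priority comes first
        for blade in sorted(blades, key=lambda b: b["priority"], reverse=True):
            if shed >= watts_to_shed:
                break
            selected.append(blade["slot"])
            shed += blade["savable_watts"]
        return selected

    blades = [
        {"slot": 1, "priority": 1, "savable_watts": 60},  # critical workload
        {"slot": 2, "priority": 9, "savable_watts": 80},  # lowest priority
        {"slot": 3, "priority": 5, "savable_watts": 70},
    ]
    print(blades_to_throttle(blades, watts_to_shed=100))  # [2, 3]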


Server blade modules are accessible from the front of the PowerEdge M1000e enclosure. Up to sixteen half-height or eight full-height server modules (or a mixture of the two blade types) are supported. At the bottom of the enclosure is a flip-out, multiple-angle LCD panel for local systems management configuration, system information, and status. The front of the enclosure also contains two USB connections for a USB keyboard and mouse (only; no USB flash or hard disk drives can be connected), a video connection, and the system power button. The front control panel's USB and video ports work only when the iKVM module is installed, as the iKVM provides the capability to switch the KVM between the blades.

Not visibly obvious, but important nonetheless, are fresh-air plenums at both the top and bottom of the chassis. The bottom plenum provides non-preheated air to the power supplies. The top plenum provides non-preheated air to the CMC, iKVM, and I/O modules. It is also important to note that any empty blade server slots should have filler modules installed to maintain proper airflow through the enclosure.

The high-speed midplane is completely passive, with no hidden stacking midplanes or interposers with active components. The midplane provides connectivity for I/O fabric networking, storage, and interprocess communications. Broad management capabilities include private Ethernet, serial, USB, and low-level management connectivity between the CMC, iKVM switch, and server modules. Finally, the midplane has a unique design in that it uses female connectors instead of male connectors: in case of bent pins, only the related module needs to be replaced, not the midplane.

System Control Panel features:

System control panel with LCD panel, two USB keyboard/mouse connections, and one video crash-cart connection.

The system power button turns the system on and off: press to turn on the system; press and hold for 10 seconds to turn off the system.

Caution: The system power button controls power to all of the blades and I/O modules in the enclosure.


The back panel of the M1000e enclosure supports:

Up to six I/O modules for three redundant fabrics. Available switches include Dell and Cisco 1Gb/10Gb Ethernet with modular bays, Dell 10Gb Ethernet with modular bays, Dell Ethernet pass-through, Brocade 4Gb Fibre Channel, Brocade 8Gb Fibre Channel, Fibre Channel pass-through, and Mellanox DDR and QDR Infiniband.

One or two (redundant) CMC modules, which provide high-performance, Ethernet-based management connectivity.

An optional iKVM module.

A choice of three or six hot-pluggable power supplies, with thorough power management capabilities, including shared power delivery to make the full capacity of the power supplies available to all modules.

Nine N+1 redundant fan modules, all standard.

All back panel modules are hot-pluggable.


The CMC provides multiple systems management functions for your modular server, including the M1000e enclosure's network and security settings, I/O module and iDRAC network settings, and power redundancy and power ceiling settings.

The optional Avocent iKVM analog switch module provides connections for a keyboard, video (monitor), and mouse. The iKVM can also be accessed from the front of the enclosure, providing front or rear panel KVM functionality, but not at the same time. For enhanced security, front panel access can be disabled using the CMC's interface. You can use the iKVM to access the CMC.

It should be noted that chassis management and monitoring on previous blade systems (1855/1955) was done using a DRAC installed directly into the chassis; the DRAC would then offer connectivity to the blades one by one. The PowerEdge M1000e blade server solution instead uses a CMC to manage and monitor the chassis, and each server module has its own onboard iDRAC. The iDRAC offers features in line with the DRAC 5 and allows remote control using virtual media and virtual KVM, which was provided by the KVM on previous models of the blade systems. The iKVM is now offered only as an option, as many customers do not access the blade servers locally.


The M1000e server solution offers a holistic management solution designed to fit into any customer data center. It features:

Dual redundant Chassis Management Controllers (CMC)

o Powerful management for the entire enclosure

o Includes real-time power management and monitoring; flexible security; and status, inventory, and alerting for blades, I/O, and the chassis

iDRAC

o One per blade, with full DRAC functionality like other Dell servers, including vMedia/KVM

o Integrates into the CMC or can be used separately

iKVM

o Embedded in the chassis for easy KVM infrastructure incorporation, allowing one admin per blade

o Control panel on the front of the M1000e for crash-cart access

Front LCD

o Designed for deployment and local status reporting

Management connections transfer health and control traffic throughout the chassis. The system management fabric is architected as 100BaseT Ethernet over differential pairs routed to each module. There are two 100BaseT interfaces between CMCs, one switched and one unswitched. All system management Ethernet is routed for 100 Mbps signaling. Every module has a management network link to each CMC, with redundancy provided at the module level. Failure of any individual link will cause failover to the redundant CMC.
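A conceptual sketch of that module-level failover decision follows, with hypothetical link-health inputs; the actual CMC election logic is not documented here.

    def active_cmc(link_to_cmc1_up, link_to_cmc2_up, preferred="CMC1"):
        """Pick which CMC manages this module, given per-link health."""
        if preferred == "CMC1" and link_to_cmc1_up:
            return "CMC1"
        if link_to_cmc2_up:
            return "CMC2"
        return "CMC1" if link_to_cmc1_up else None  # None: module unreachable

    print(active_cmc(True, True))   # CMC1: preferred link is healthy
    print(active_cmc(False, True))  # CMC2: link failure triggers failover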


The FlexAddress feature is an optional upgrade, introduced in CMC 1.1, that allows server modules to replace the factory-assigned World Wide Name and Media Access Control (WWN/MAC) network IDs with WWN/MAC IDs provided by the chassis.

Every server module is assigned unique WWN and MAC IDs as part of the manufacturing process. Before the FlexAddress feature was introduced, if you had to replace one server module with another, the WWN/MAC IDs would change, and Ethernet network management tools and SAN resources would need to be reconfigured to be aware of the new server module.

FlexAddress allows the CMC to assign WWN/MAC IDs to a particular slot and override the factory IDs. If the server module is replaced, the slot-based WWN/MAC IDs remain the same. This feature eliminates the need to reconfigure Ethernet network management tools and SAN resources for a new server module.

Additionally, the override action only occurs when a server module is inserted in a FlexAddress-enabled chassis; no permanent changes are made to the server module. If a server module is moved to a chassis that does not support FlexAddress, the factory-assigned WWN/MAC IDs are used.
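The sketch below models this override behavior; the MAC values, function name, and slot pool are made up for illustration.

    CHASSIS_SLOT_MACS = {1: "00:1E:C9:00:00:01", 2: "00:1E:C9:00:00:02"}  # made-up pool

    def effective_mac(slot, factory_mac, flexaddress_enabled):
        """Slot-based ID overrides the factory ID only in an enabled chassis;
        the server module itself is never permanently changed."""
        if flexaddress_enabled and slot in CHASSIS_SLOT_MACS:
            return CHASSIS_SLOT_MACS[slot]   # chassis-assigned, stays with the slot
        return factory_mac                   # non-FlexAddress chassis: factory ID

    print(effective_mac(1, "00:14:22:AA:BB:CC", flexaddress_enabled=True))
    print(effective_mac(1, "00:14:22:AA:BB:CC", flexaddress_enabled=False))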


Features and Benefits:

Feature: Lock the World Wide Name (WWN) of the Fibre Channel controller and the Media Access Control (MAC) address of the Ethernet and iSCSI controllers into a blade slot, instead of to the blade's hardware.
Benefit: Easily replace blades without network management effort.

Feature: Service or replace a blade or I/O mezzanine card and maintain all address mapping to Ethernet and storage fabrics.
Benefit: Ease of management.

Feature: Easy and highly reliable booting from Ethernet or Fibre Channel based Storage Area Networks (SANs).
Benefit: An almost no-touch blade replacement.

Feature: All MAC/WWN/iSCSI IDs in the chassis will never change.
Benefit: Fewer future address name headaches.

Feature: Fast and efficient integration into existing network infrastructure.
Benefit: No need to learn a new management tool; low cost versus a switch-based solution.

Feature: FlexAddress is simple and easy to implement.
Benefit: Simple and quick to deploy.

Feature: The FlexAddress SD card comes with a unique pool of MAC/WWNs and can be enabled on a single enclosure at a given time, until disabled.
Benefit: No need for the user to configure; no risk of duplicates on your network or SAN.

Feature: Works with all I/O modules, including Cisco, Brocade, and Dell PowerConnect switches, as well as pass-through modules.
Benefit: Choice is independent of switch or pass-through module.


Each M-series server module connects to traditional network topologies; these include Ethernet, Fibre Channel, and Infiniband. The M1000e enclosure uses three layers of I/O fabric to connect the server modules with the I/O modules via the midplane. Up to six hot-swappable I/O modules can be installed within the enclosure. The I/O modules include Fibre Channel switch and pass-through modules, Infiniband switches, and 1 GbE and 10 GbE Ethernet switch and pass-through modules.

The six I/O slots are classified as Fabrics A, B, or C. Each fabric contains two slots, numbered 1 and 2, resulting in A1 and A2, B1 and B2, and finally C1 and C2. The 1 and 2 relate to the ports found on the server-side I/O cards (LOM or mezzanine cards).

Fabric A connects to the hardwired LAN-on-Motherboard (LOM) interface. Currently, only Ethernet pass-through or switch modules may be installed in Fabric A. Fabrics B and C are 1 to 10 Gb/sec dual-port, quad-lane redundant fabrics which allow higher-bandwidth I/O technologies and can support Ethernet, Infiniband, and Fibre Channel modules. Fabrics B and C can be used independently of each other; for example, any of 1 GbE, 10 GbE, Fibre Channel, or Infiniband can be installed in Fabric B, and any one of the other types in Fabric C.

To communicate with an I/O module in the Fabric B or C slots, a blade must have a matching mezzanine card installed in the Fabric B or C mezzanine card location. Also, GbE I/O modules that would be used in Fabric A may also be installed in the Fabric B or C slots, provided a matching GbE mezzanine card is installed in that same fabric.

In summary, the only mandate is that Fabric A is always a GbE LOM. Fabrics B and C are similar in design, but an optional mezzanine card can be installed in one of the available Fabric B or C mezzanine slots located on the motherboard. There is no interdependency between the three fabrics: the choice for one fabric does not restrict, limit, or depend on the choice for any other fabric.
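A small Python model of this slot naming and the port-to-slot mapping, for illustration only:

    IO_SLOTS = [f"{fabric}{port}" for fabric in "ABC" for port in (1, 2)]
    print(IO_SLOTS)  # ['A1', 'A2', 'B1', 'B2', 'C1', 'C2']

    def io_slot_for(fabric, card_port):
        """I/O module slot reached by the given port of the LOM or mezzanine
        card assigned to `fabric` (port 1 -> slot 1, port 2 -> slot 2)."""
        return f"{fabric}{card_port}"

    print(io_slot_for("B", 2))  # mezzanine card B, port 2 -> I/O module slot B2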


To understand the PowerEdge M1000e architecture, it is necessary to first define four key terms: fabric, lane, link, and port.

A fabric is defined as a method of encoding, transporting, and synchronizing data between devices. Examples of fabrics are Gigabit Ethernet (GE), Fibre Channel (FC), and Infiniband (IB). Fabrics are carried inside the PowerEdge M1000e system, between server modules and I/O modules, through the midplane. They are also carried to the outside world through the physical copper or optical interfaces on the I/O modules.

A lane is defined as a single fabric data transport path between I/O end devices. In modern high-speed serial interfaces, each lane comprises one transmit and one receive differential pair. In reality, a single lane is four wires in a cable or traces of copper on a printed circuit board: a transmit positive signal, a transmit negative signal, a receive positive signal, and a receive negative signal. Differential-pair signaling provides improved noise immunity for these high-speed lanes. Various terminology is used by fabric standards when referring to lanes: PCIe calls this a lane, Infiniband calls it a physical lane, and Fibre Channel and Ethernet call it a link.

A link is defined here as a collection of multiple fabric lanes used to form a single communication transport path between I/O end devices. Examples are two-, four-, and eight-lane PCIe, or four-lane 10GBASE-KX4. PCIe, Infiniband, and Ethernet call this a link. The differentiation has been made here between lane and link to prevent confusion over Ethernet's use of the term link for both single- and multiple-lane fabric transports. Some fabrics, such as Fibre Channel, do not define links, as they simply run multiple lanes as individual transports for increased bandwidth. A link as defined here provides synchronization across the multiple lanes, so they effectively act together as a single transport.

A port is defined as the physical I/O end interface of a device to a link. A port can have single or multiple lanes of fabric I/O connected to it.
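The toy data model below restates these four terms in code, using the lane counts from the text; the structure is illustrative, not a real driver API.

    from dataclasses import dataclass

    @dataclass
    class Lane:
        """One data transport path: a TX differential pair plus an RX pair."""
        wires: int = 4  # TX+, TX-, RX+, RX-

    @dataclass
    class Link:
        """Multiple lanes synchronized to act as a single transport."""
        fabric: str
        lane_count: int

    @dataclass
    class Port:
        """Physical end interface of a device carrying one link."""
        name: str
        link: Link

    kx4 = Port(name="IOM uplink", link=Link(fabric="10GBASE-KX4", lane_count=4))
    print(kx4.link.lane_count * Lane().wires, "wires")  # 16 wires for a 4-lane link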

The PowerEdge M1000e system management hardware and software includes Fabric Consistency Checking, preventing the accidental activation of any misconfigured fabric device on a server module. Since mezzanine-to-I/O-module connectivity is hardwired yet fully flexible, a user could inadvertently hot-plug a server module with the wrong mezzanine into the system. For instance, if Fibre Channel I/O modules are located in the Fabric C I/O slots, then all server modules must have either no mezzanine in Fabric C or only Fibre Channel cards in Fabric C. If a GE mezzanine card is in a Mezzanine C slot, the system automatically detects this misconfiguration and alerts the user to the error. No damage occurs to the system, and the user has the ability to reconfigure the faulted module.
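A sketch of this consistency rule, with fabric types as illustrative strings:

    def check_fabric(io_module_type, mezzanine_type):
        """Return (ok, message) for one fabric (B or C) on one server module."""
        if mezzanine_type is None:
            return True, "no mezzanine installed: allowed"
        if mezzanine_type == io_module_type:
            return True, "fabric types match"
        # mismatch: the blade is flagged and the user alerted; no damage occurs
        return False, f"misconfigured: {mezzanine_type} card behind {io_module_type} IOM"

    print(check_fabric("FibreChannel", "GigabitEthernet"))  # (False, 'misconfigured: ...')
    print(check_fabric("FibreChannel", None))               # (True, 'no mezzanine ...')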


The iDRAC on each server module calculates the amount of airflow required at an individual server module level and sends a request to the CMC. This request is based on temperature conditions on the server module, as well as passive requirements due to the hardware configuration. Concurrently, each I/O module can send a request to the CMC to increase or decrease cooling to the I/O subsystem. The CMC interprets these requests and controls the fans as required to maintain server and I/O module airflow at optimal levels.
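The sketch below shows one plausible arbitration rule consistent with this description; the request units and the speed bounds are assumptions.

    def fan_speed_percent(blade_requests, iom_requests, floor=30, ceiling=100):
        """CMC picks one fan speed that covers the highest airflow request."""
        demand = max(blade_requests + iom_requests, default=0)  # worst case wins
        return min(ceiling, max(floor, demand))

    # three blades and two I/O modules reporting requested airflow as a percent
    print(fan_speed_percent([35, 40, 55], [30, 45]))  # 55: the hottest module wins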

Fans are loud when running at full speed, but it is rare that fans need to run at full speed; please ensure that components are operating properly if fans remain at full speed. The CMC will automatically raise and lower the fan speed to a setting that is appropriate to keep all modules cool.

If a single fan is removed, all fans will be set to 50% speed if the enclosure is in standby mode; if the enclosure is powered on, removal of a single fan is treated like a failure (nothing happens).

Re-installation of a fan will cause the rest of the fans to settle back to a quieter state.

Whenever communication to the CMC or iDRAC is lost, such as during a firmware update, the fan speed will increase and create more noise.


Flexible and scalable, the PowerEdge M1000e is designed to support future generations of blade technologies regardless of processor or chipset architecture. The M1000e has these advantages:

The M1000e blade enclosure helps reduce the cost and complexity of managing computing resources with some of the most innovative, most effective, easiest-to-use management features in the industry.

Speed and ease of deployment: each 1U server takes on average approximately 15 minutes to rack, not including cabling. The M1000e enclosure can be racked in approximately the same amount of time; then each blade takes seconds to physically install, and cable management is reduced. When you need to add additional blade servers, they slide right in.

High density: in 40U of rack space, customers can install 64 blades (4 enclosures by 16 slots) into four M1000e enclosures, versus 40 1U servers.

The M1000e is a leader in power efficiency, built on innovative Dell Energy Smart technology.

The M1000e is the only solution that supports mixing full- and half-height blades in adjacent slots within the same chassis without limitations or caveats.

Redundant Chassis Management Controllers (CMCs) provide a powerful systems management tool, giving comprehensive access to component status, inventory, alerts, and management.

Dell FlexIO modular switch technology lets you easily scale to provide additional uplink and stacking functionality, giving you the flexibility and scalability for today's rapidly evolving networking landscape without replacing your current environment.

Our FlexAddress technology ties Media Access Control (MAC) and World Wide Name (WWN) addresses to blade slots, not to servers or switch ports, so reconfiguring your setup is as simple as sliding a blade out of a slot and replacing it with another.

The M1000e's passive midplane design keeps critical active components on individual blades or as hot-swappable shared components within the chassis, improving reliability and serviceability.
