
2010/06/03


Slide 1/10: IaaS, CaaS, StaaS, PaaS, SaaS

What is virtualization? Per Wikipedia, virtualization interposes a layer (a virtual machine monitor, hypervisor, or virtualization layer) between the physical hardware and the operating system, presenting virtual devices to the guest operating systems.

[Diagram: hardware virtualization stack. Hardware, Virtualization Layer, Host OS, and several Guest OS / Application pairs; operating-system virtualization vs. hardware virtualization. Source: Mendel Rosenblum, Stanford U., 1998. Service elements: Thin Client, IP/MPLS VPN, Firewall/IDS, Data Protection; a Network connecting User A and User B.]

Virtual Desktop Infrastructure (VDI): [Diagram: Storage, Server, Desktop Clients, and Mobile Clients on a shared Network, with a Management Console and Management Server serving User 1 through User 4.]

Network virtualization elements: IP/MPLS VPN VRFs (Virtual Routing and Forwarding), ATM and Frame Relay (VPN), GRE Tunnel, Internet Gateway, Firewall / NAT / Security UTM, IPSec Gateway / SSL VPN, and Data Protection, connected over IP/MPLS VPN and the Internet.

x86 server virtualization: full virtualization with hardware assist (Intel VT, AMD-V), and para-virtualization, in which modified guests call a hypervisor API. Management features include DRS, VMotion, Storage VMotion, and high availability (HA).

Consolidation example: 1,000 VMs on 100 servers (10:1 VM-to-physical ratio); total data 100 TB (0.1 TB x 1,000). Raw storage figures from the slide: 270 TB under RAID 1+0, 150 TB under RAID 5, 4 TB after deduplication; data-to-storage ratios 1:2.7, 1:1.5, 25:1.

Hypervisor comparison: VMware VI / ESX 3.5 and VMware ESX 3i; Citrix XenServer (XenSource); Microsoft Hyper-V. Licensing ranges from one-time license fees to free editions, with separate license fees for the management consoles (Virtual Center, XenCenter, SCVMM/SCOM). Guest VM OS support: Windows, Linux, Solaris (varying by product).

Feature comparison: SRM, HA, VMotion, and a virtual switch on the VMware side; a virtual switch, Xen-Motion, and HA (next version) with APIs on the Xen side; Microsoft positions Hyper-V for the Dynamic Data Center.
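The RAID and deduplication arithmetic behind these figures can be sketched in Python. The VM count, servers, per-VM data size, and 1:2.7 dedup ratio come from the slide; the RAID overhead formulas are the standard ones, and the 5-disk RAID 5 group size is an assumption (the slide does not state its group size or spare/snapshot overheads, so the raw totals below differ from the slide's 270 TB / 150 TB).

```python
# Sketch of the consolidation/storage arithmetic on this slide.
# 1,000 VMs, 100 servers, and 0.1 TB (100 GB) per VM are the slide's
# figures; the RAID formulas are the standard ones, and the 5-disk
# RAID 5 group size is an assumption.

def raid10_raw(usable_tb):
    """RAID 1+0 mirrors every block: raw capacity is 2x usable."""
    return usable_tb * 2

def raid5_raw(usable_tb, disks_per_group=5):
    """RAID 5 spends one disk per group on parity."""
    return usable_tb * disks_per_group / (disks_per_group - 1)

vms, servers = 1000, 100
data_per_vm_gb = 100
total_tb = vms * data_per_vm_gb / 1000   # 100.0 TB, as on the slide

print(f"{vms // servers}:1 consolidation")   # 10:1
print(raid10_raw(total_tb))                  # 200.0 raw TB under RAID 1+0
print(raid5_raw(total_tb))                   # 125.0 raw TB under RAID 5
print(round(total_tb / 2.7, 1))              # 37.0 TB after 1:2.7 dedup
```

The point of the slide survives the missing labels: mirroring doubles raw storage while deduplication claws much of it back, and consolidation compresses 1,000 logical servers onto 100 physical ones.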

[Slide: cost comparison, physical vs. virtualized deployment (flattened table; most column labels lost). Figures include 200 servers; 17,000,000 vs. 9,032,000 and 5,000,000; 8 switches (1,600,000) vs. 2 switches (400,000); totals 18,600,000 vs. 14,432,000; 20 racks (360,000) vs. 5 racks (90,000); 40 KW vs. 10 KW of power; availability figures 22% / 83% / 75% / 91%; high availability (HA).]

Introduction to large-scale distributed computing (e.g. Google Search): Hadoop / MapReduce.

What is Hadoop? Hadoop is an Apache top-level open-source project for distributed computing, created by Doug Cutting. Its file system (HDFS) follows the Google File System design; the core is written in Java, with C++/Java/shell/command interfaces. Yahoo! has been a major contributor since 2006 and runs Hadoop at petabyte scale.

The stack, on a cluster of machines: the Hadoop Distributed File System (HDFS), MapReduce, and HBase, supporting cloud applications.

HDFS presents a single namespace over the whole cluster, stores files as blocks, and replicates each block across nodes (file replication) for fault tolerance.

Source: http://hadoop.apache.org/common/docs/r0.20.0/hdfs_design.html
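As a concrete illustration of block replication, here is a toy, rack-aware placement function in the spirit of HDFS's default policy (replication factor 3: first copy on the writer's node, remaining copies on a different rack). Node and rack names are invented, and real HDFS placement is considerably more involved.

```python
# Toy sketch of HDFS-style block replication (replication factor 3).
# Cluster topology and names are made up for illustration; this is not
# the actual HDFS placement code.

def place_replicas(block, writer_node, nodes_by_rack, factor=3):
    """Pick `factor` nodes for a block: the writer's own node first,
    then nodes on a different rack, so one rack failure cannot lose
    every copy."""
    writer_rack = next(r for r, ns in nodes_by_rack.items() if writer_node in ns)
    remote = [n for r, ns in nodes_by_rack.items() if r != writer_rack for n in ns]
    return ([writer_node] + remote)[:factor]

cluster = {"rack1": ["n1", "n2"], "rack2": ["n3", "n4"]}
print(place_replicas("blk_0001", "n1", cluster))  # ['n1', 'n3', 'n4']
```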

MapReduce: Hadoop's MapReduce follows Google's MapReduce model. A Map function takes a key/value pair and emits intermediate key/value pairs; the framework groups intermediate values by intermediate key; a Reduce function merges the intermediate values for each key into the final key/value output. (Source: http://hadoop.apache.org/core/)

Hadoop MapReduce data flow: input in HDFS is divided into splits (split 0 through split 4); map tasks process the splits; intermediate output is sorted, copied, and merged; reduce tasks write output partitions (part0, part1) back to HDFS. The JobTracker consults the NameNode to locate blocks and assigns map and reduce work to TaskTrackers, which report progress back to the JobTracker.

HBase: a sparse, multi-dimensional map modeled on Google's Bigtable, built on HDFS and scaling to petabytes; HBase tables can serve as input and output for Hadoop MapReduce jobs.
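The map/reduce flow above can be shown with the classic word-count example, here as a single-process Python sketch rather than a real distributed Hadoop job:

```python
# Minimal word-count in the MapReduce style described above: map emits
# intermediate (word, 1) pairs, the framework groups them by key
# (shuffle/sort), and reduce sums each word's values. A real Hadoop job
# would run the same two functions across many nodes.

from collections import defaultdict

def map_phase(split):
    for line in split:
        for word in line.split():
            yield word, 1          # intermediate key/value pair

def reduce_phase(key, values):
    return key, sum(values)        # merge values for one intermediate key

splits = [["the quick brown fox"], ["the lazy dog the end"]]

groups = defaultdict(list)         # shuffle/sort: group pairs by key
for split in splits:
    for key, value in map_phase(split):
        groups[key].append(value)

result = dict(reduce_phase(k, vs) for k, vs in groups.items())
print(result["the"])  # 3
```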

NIST definition of cloud computing (source: http://csrc.nist.gov/cyber-md-summit/documents/posters/cloud-computing.pdf): five essential characteristics (On-Demand Self-Service, Broad Network Access, Resource Pooling, Rapid Elasticity, Measured Service); four deployment models (Public Cloud, Private Cloud, Community Cloud, Hybrid Cloud); three service models (Infrastructure as a Service, IaaS; Platform as a Service, PaaS; Software as a Service, SaaS).

Cloud service stack (source: IBM, HP): Physical Infrastructure; Cloud Infrastructure Services (IaaS), i.e. virtualized servers, storage, and networking; Cloud Platform Services (PaaS), i.e. middleware, application servers, database servers, and portal servers; Cloud End-User Services (SaaS); spanning service users, cloud providers, and service providers across public, private, and hybrid clouds.

Service models (MIC, September 2009): IaaS (Infrastructure as a Service), PaaS (Platform as a Service), SaaS (Software as a Service).

IaaS (Infrastructure as a Service): examples include Amazon EC2 (Amazon Elastic Compute Cloud) and HiCloud CaaS / StaaS.

CaaS (compute as a service): IT workloads run in VMs, available 24h x 7d and managed through a browser; billed pay-as-you-go or by subscription. Market positioning: a segment between co-location and web hosting, targeting ICT customers moving workloads onto cloud VMs.
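The pay-as-you-go vs. subscription trade-off can be sketched with a toy cost model. The hourly and monthly rates below are purely illustrative, not HiCloud's actual pricing:

```python
# Hypothetical comparison of the two CaaS billing models named on the
# slide: metered pay-as-you-go vs. a flat subscription. Both rates are
# assumed values for illustration only.

HOURLY_RATE = 2.0      # currency units per VM-hour (assumed)
MONTHLY_FLAT = 1000.0  # flat subscription per VM-month (assumed)

def pay_as_you_go(hours):
    return hours * HOURLY_RATE

def cheaper_plan(hours_per_month):
    """Metered billing wins for bursty use; a flat subscription wins
    once the VM runs most of the month."""
    return "subscription" if pay_as_you_go(hours_per_month) > MONTHLY_FLAT else "pay-as-you-go"

print(cheaper_plan(100))      # pay-as-you-go: 200 < 1000
print(cheaper_plan(24 * 30))  # subscription: a 24x7 VM meters to 1440
```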

[Diagram: Web / FTP / Mail cloud servers and ASP applications, each hosted as a VM on shared infrastructure.]

CaaS security defaults: detection and blocking of TCP SYN flood, TCP port scan, UDP flood, UDP scan, and ICMP flood traffic, plus DDoS mitigation.
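As a rough illustration of the flood-detection idea (not any vendor's actual algorithm), a per-source threshold check might look like this; the window size and limit are made-up values:

```python
# Toy sketch of threshold-based flood detection of the kind listed on
# the slide (e.g. TCP SYN flood): count packets per source within a
# time window and flag sources over a limit. The limit is an assumed
# illustrative value, not a real product setting.

from collections import Counter

SYN_LIMIT_PER_WINDOW = 100  # assumed threshold

def flag_syn_flood(syn_sources):
    """syn_sources: iterable of source-IP strings seen in one window.
    Returns the sources whose SYN count exceeds the limit."""
    counts = Counter(syn_sources)
    return [src for src, n in counts.items() if n > SYN_LIMIT_PER_WINDOW]

window = ["10.0.0.5"] * 150 + ["10.0.0.9"] * 3
print(flag_syn_flood(window))  # ['10.0.0.5']
```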

CaaS architecture: customer VMs (VM 1, VM 2) sit behind server load balancing, a virtual firewall, and a virtual chassis switch, reachable from the Internet or over a VPN/MPLS network. Features: elastic computing, high-availability failover, dynamic balancing, and separate boot devices and data volumes, serving headquarters and branch sites.

StaaS (storage as a service): data protected with AES 128-bit encryption.

StaaS backup features: open-file backup via VSS; sources include file servers, NAS, and FTP; database backup via exported DB files for Oracle, MS SQL, DB2, and Informix, on Windows and Unix.

StaaS topology: headquarters and branch offices back up over the Internet / VPN; data is replicated between Storage A and Storage B, with a backup agent at each site.

StaaS positioning: compared with running your own file servers or enterprise storage (IBM, HP, EMC) or using consumer WebHD services, StaaS replaces tape-based backup with a service operated over HiNet.

PaaS (Platform as a Service): platforms for building and hosting web applications, e.g. Google App Engine, IBM Pangoo, Microsoft Azure.

Microsoft Azure exposes APIs to user applications; Google App Engine runs web applications on Google's own infrastructure; IBM Pangoo builds on WAS, DB2, and Tivoli.

PaaS example, Google App Engine: free quota of 500 MB of storage and up to 5 million page views a month, with a limit of 10 applications per developer account; supported languages are Python and Java; web applications only.

SaaS (Software as a Service): applications delivered over the Internet through a web browser and billed per use (pay per use). Examples: Google Docs, SaaS CRM, Hami, Salesforce.com. Familiar categories: search (Google, Yahoo, Bing), web mail (Hotmail, Gmail, HiBox), storage (StaaS, Amazon), social networking (Facebook, Twitter), and business applications (SaaS CRM, Salesforce).

Hami: a 24-hour mobile service (Hami 24, mPro); Hami Book for reading.

Hami personal information management (PIM), accessible from the web. [Screenshots of the PIM features; captions not recoverable.]

SaaS CRM: customer relationship management (CRM) delivered as SaaS.

Clients: Office applications, a web client, iPhone, and other smart phones. Excel connects to the CRM via ODBC; Word is used for document output; a workflow engine automates processes; eDM (electronic direct mail) campaigns run without an in-house IT burden.

Market outlook (MIC): IaaS adoption grows; SaaS with thin clients supports Green IT.

On-premise vs. SaaS: on-premise software requires licenses, hardware, a database, infrastructure, maintenance, dedicated staff, and upgrades; SaaS shifts these to the provider (source: MIC). SaaS offerings span SFA, CSS, and MA. A Forrester/MIC survey of CIOs (N=103, February 2009) reports SaaS adoption percentages of 15 and 32.

Trend (1/4): data centers move to on-demand provisioning. Trend (2/4): horizontal scaling (scale-out with load balancing, as in Hadoop and Google) rather than vertical scaling. Trend (3/4): thin clients, i.e. a browser plus 3G connectivity to services such as Google Docs, Facebook, and Flickr. Trend (4/4): [slide text not recoverable].
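The horizontal-scaling trend (scale out behind a load balancer instead of growing one big server) can be illustrated with a minimal round-robin balancer; the server names are invented:

```python
# Minimal round-robin load balancer illustrating scale-out: capacity
# grows by adding servers to the pool rather than by buying a bigger
# machine. Server names are illustrative.

from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self._next = cycle(servers)

    def pick(self):
        """Return the next server in rotation for the incoming request."""
        return next(self._next)

lb = RoundRobinBalancer(["web1", "web2", "web3"])
print([lb.pick() for _ in range(4)])  # ['web1', 'web2', 'web3', 'web1']
```

Real balancers add health checks and weighting, but the scale-out principle is the same: each new identical server raises total capacity roughly linearly.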

SaaS enablement (for the service provider or ISV): SaaS applications run on a service delivery platform providing identity management, access control, security and management logging, usage tracking, metering, order management, billing, provisioning, SLA monitoring (performance, availability, security), management agents and alerts, plus CRM and call-center support systems.

Roles: the ISV (Independent Software Vendor) supplies the application; the SP (Service Provider, e.g. CHT) operates the platform and exposes APIs (application programming interfaces) for SaaS delivery; target ICT applications include CRM, SCM, and BI.

Green IDC (1/2): data center built to the TIA-942 Tier-3 standard.

CHT data center certifications (2/2): LEED 3.0; TIA 942-2005 Tier 3; PUE of 1.43 to 1.67 (Green Grid, Tier-3 class); ISO 27001 Annex A.
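PUE (Power Usage Effectiveness), the Green Grid metric cited here, is total facility power divided by IT-equipment power; a quick sketch with example kW figures (the slide gives only the ratio range, so the absolute numbers below are invented):

```python
# PUE = total facility power / IT equipment power. A PUE of 1.43-1.67,
# the slide's stated range, means 43-67% of extra power goes to cooling,
# power distribution, lighting, etc. The kW figures are examples only.

def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

print(round(pue(1430.0, 1000.0), 2))  # 1.43, the low end of the range
print(round(pue(1670.0, 1000.0), 2))  # 1.67, the high end
```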

hicloud SOC: hicloud is monitored by the HiNet SOC. DDoS protection sits at the core switch; IPS devices and system logs feed the SOC; customers get status through a web portal.

hicloud architecture: edge switches connect through server load balancers (SLB) and firewalls to the virtualized server farm, where each physical server runs a hypervisor hosting multiple APP/OS stacks. The IPS and the core switch (with DDoS protection) report to the HiNet SOC.

Transcript:

There are x86 compute blades inside this system. And Cisco has developed actually specific blade-level differentiation in the product. JIM ANDERSON: Great, great. So Brian, that's great but how is this different from just the legacy systems out there today? BRIAN SCHWARZ: When you think about all these components as a single system, there's a bunch of challenges that you can solve that exist in the data center that you can't solve if network and compute and storage access and virtualization are kind of different silos. JIM ANDERSON: Okay. BRIAN SCHWARZ: There's a bunch of human peering or human process that has to go on amongst the teams that we can dramatically simplify around the management system, the UCS Manager management system. That's what gives us the agility and some of the service profile concepts we're going to talk about later. JIM ANDERSON: Great, great so Scott, another one for you. Are the 5k -- actually let me stop and do this one. How does unified fabric in the Nexus product family all work together to enable that Data Center 3.0 architecture we talk about a lot? SCOTT ROSE: Yes, so the UCS product shares a lot of the same architectural components as the 5k, the 2k and even the Nexus 1k can be a supported software switch on California. We take full advantage of the capabilities afforded to us by unified fabrics, reducing the number of adapters, reducing the cabling, having an integrated fabric switch for both Ethernet and Fibre Channel traffic. So it's really a nice complementary solution that extends the unified fabric capabilities but into the heart of the data center. JIM ANDERSON: Great, great so Scott, are the 5k and the UCS fabric interconnect the same box? SCOTT ROSE: So again, architecturally they share some of the same design components. But physically the Nexus 5k is a separate component from the UCS fabric interconnect switch. 
There could be a -- you can't do a field upgrade but there could be some sales concessions for swapping the switches out. But physically in ordering, they are separate devices. JIM ANDERSON: So you can just take a 5k and put it on one of these chassis and it works. SCOTT ROSE: Correct. JIM ANDERSON: Great so Brian, how does the 7k fit into this environment? BRIAN SCHWARZ: It's a good question, Jim. So the UCS fabric interconnect, kind of I'll say the heart and soul and kind of brain of this system, requires connectivity to the upper layer, either LAN, either the distribution aggregation layer or the core itself. And that's really where the Nexus 7k would come in. You essentially connect this to the 7k. You would also connect to the Catalyst 6500 in some data centers that already have it deployed. And on the Fibre Channel side it does FCoE internally, Fibre Channel over Ethernet internally. But it has native Fibre Channel uplinks that would connect up to the SAN core, kind of fabric A/fabric B probably like the MDS 9500 or something like that. JIM ANDERSON: That's interesting. You kind of mentioned this but what is the fabric interconnect? BRIAN SCHWARZ: The fabric interconnect is the top-of-rack networking element that is really the brain of the UCS system. It's where the UCS Manager runs. JIM ANDERSON: So it sits on top of the chassis? BRIAN SCHWARZ: It's a top-of-rack networking element. It's used, we actually have one here on the table. This is a 20-port version. It's used to connect down to the server chassis themselves as well as to the upper layers of the network. The important thing is it doesn't behave like a regular switch does. It has a very special set of management software, the UCS management software that's used to manage the entire system, not just the switch but all the chassis and blades. And it's also important to know that you only connect UCS components, UCS servers to this system. You don't connect your standard run-of-the-mill x86 servers. 
That's how we kind of fuse it into a tightly knit environment because we have special Cisco-developed firmware running on all of the components that connect to it. JIM ANDERSON: Interesting, but you connect it to the chassis. How many chassis can you connect to one of these? BRIAN SCHWARZ: It can support up to 40 chassis in the... JIM ANDERSON: Wow, it's a pretty big system. BRIAN SCHWARZ: Yes, absolutely, it's a multi-rack design. It can support up to 320 blade servers. JIM ANDERSON: Okay, 320 blades? BRIAN SCHWARZ: Yes. JIM ANDERSON: So you mentioned the blades. So tell me a little bit, Scott, about the compute blades as part of the system. SCOTT ROSE: Sure, so the compute blades are two-socket blades. In fact this is an example of one of the blades. This is our half-width system. It's a two-socket Nehalem-based design, fits inside a standard 19-inch data center rack. It's a Cisco-engineered, Cisco-architected system. Now with UCS, Cisco will actually be introducing two blade form factors. Again, this is the half-width and we will have a full-width system, a very exciting system which is our memory expanded blade with a very, very large memory footprint, 48 DIMM slots but still a two-socket Nehalem-based blade. JIM ANDERSON: Interesting, so how many of those can fit into a chassis, Scott? SCOTT ROSE: So each enclosure could fit eight of these half-width blades. If you went with a full-width memory blade which is dimensionally similar to the switch here, you could fit up to four of those. And then additionally, inside the system here, down here you see the actual mezzanine form factor adapter. And we'll have several versions of that, a cost version which will be an OEM adapter from Intel that Cisco will be selling, two compatibility adapters from QLogic and Emulex and also an adapter, a virtualized adapter that Cisco has built. JIM ANDERSON: Great, great. So I mean, it sounds like some great innovations. Brian, will people see this as proprietary? 
BRIAN SCHWARZ: It's a fair question. And certainly when we've done our initial customer briefings, it has come up. It's important to understand that we have a number of technology partners that have worked with us on the UCS system, so Intel, kind of industry standard processors. We work with Emulex and QLogic for adapters. And in addition, some of the Cisco-provided technology that's built into the UCS system is also available standalone in the Nexus product line. So you can buy unified fabric in the Nexus 5000 or the Fabric Extender in Nexus 2000. But there are certain benefits you get from the entire system being managed as a single entity that you can't get today. That's really where a bunch of the OpEX savings come from and a bunch of the agility you get with service profiles is getting the entire system. That's a benefit that only exists in the UCS family really. JIM ANDERSON: So the bottom line is it's an open system but obviously we've had innovation where there is some customer advantages we can get from that. BRIAN SCHWARZ: Absolutely, absolutely. JIM ANDERSON: Great so Scott, with UCS I get this question a lot. Will it be a complete swap-out to go to the system or can you leverage your existing infrastructure? SCOTT ROSE: Absolutely you can leverage your existing infrastructure. The LAN and SAN infrastructure, the Fibre Channel switches, the distribution switches, the core switches remain the same. We'll connect up to an MDS SAN. We'll of course connect to third-party SAN switches. We'll connect to Catalysts. We'll connect to Nexus 7k. So that is completely non-disruptive. This is about a server-based solution with integrated access switching. So this is about getting attached into new server deployments. But northbound to us, it's completely compatible with respect to existing configurations. JIM ANDERSON: Great, so it goes a lot to our theme about winning the architecture war. 
You can leverage your existing infrastructure, move over to this new infrastructure and architecture and get the benefits associated with that. SCOTT ROSE: Correct. JIM ANDERSON: Excuse me, with that. So Brian, one of the things about this is that we're going to have some key third-party relationships. Tell me a little bit about VMware in this platform. BRIAN SCHWARZ: It's probably the most important one to start with because certainly we're huge advocates of server virtualization. We appreciate the benefits it provides to customers around server consolidation and the things it's going to do in the future around making compute more virtual and mobile in the data center. And we really think that the virtual machine is kind of indeed the future building block of the data center. In some respects the ESX hypervisor, kind of the most popular hypervisor, will just run on our blade as a native server, the same as it would on a lot of x86 servers. There's two really key advantages that we have on our platform around virtualization and hypervisors. The first is VN-Link technology. So it was first introduced in the Nexus 1000V at VMworld last year. And certainly that's going to work on the California platform, both with the Nexus 1000V actually running inside the hypervisor, the same as it normally would, but also with the Palo adapter, this virtualized adapter that we're working on. JIM ANDERSON: The Palo adapter. BRIAN SCHWARZ: Yes, yes. JIM ANDERSON: Okay, great. BRIAN SCHWARZ: And the second thing in addition to VN-Link is the memory expansion blade that Scott talked about, the full-width blade. VMs are often memory-constrained environments. And we think that full-width blade is going to allow people to get higher levels of virtual machine consolidation. So less physical boxes to run the same workload, and that's a very big cost savings for customers. JIM ANDERSON: So the compute density, we're actually increasing it for customer environment. 
BRIAN SCHWARZ: Yes, yes so kind of spend less but run the same amount of applications that they need to run. It is important to know though that we're not ESX-specific. We can support other hypervisors. We are going to support Microsoft Hyper-V at FCS. And we'll probably be adding some of the other virtualization technologies over time as well. JIM ANDERSON: So the bottom line, it's a box that's optimized for virtualization. Obviously we've done a lot with VMware but it will support some other technologies out there. BRIAN SCHWARZ: Yes, yes, that's a good summary. JIM ANDERSON: Great, great so Scott, tell me a little bit about the Intel chip that's part of this platform. SCOTT ROSE: Yes, so I think first of all we should remember that we can't really talk publicly or externally about Intel's next-gen chip Nehalem until March 30th. That's their launch date for Nehalem. So, but generically this is a Nehalem-class system. Again, it's a two-socket system and will have two form factors, the half-width as well as the full-width memory expansion. Cisco will obviously now be an OEM for Intel chips. And we will follow very closely to the roadmap as they add more cores into Nehalem, as they go to additional and larger socket capacities. JIM ANDERSON: Great, great so Intel's a key partner here. SCOTT ROSE: The key partner, and one thing that Cisco should realize is the partnership with the Intel BDM is a very important relationship. So I believe some of those relationships have already begun. JIM ANDERSON: Exactly. SCOTT ROSE: That's a synergy that should be created and should be leveraged as we go out and become a force inside the server data center. JIM ANDERSON: All right, so let's switch up a little bit from the technology. Brian, tell me a little bit about customer business needs and how this system fits with that. BRIAN SCHWARZ: You know, probably the most prevalent thing you have to understand for customers today is cost savings. The economic environment is not great. 
And there's a number of areas where I think the Unified Computing System can really save people money. Around CapEx, so the capital spending, we get big benefits out of the unified fabric. This system was designed around it from the ground up and because of that we get good component reduction through the use of the Fabric Extender technology. And also you'll notice on the blade here there's only one adapter. It's a converged network adapter. So less adapters, less cabling, just overall less physical stuff. JIM ANDERSON: That's the infrastructure associated with this. BRIAN SCHWARZ: Yes, less physical infrastructure. JIM ANDERSON: And that's, it goes to green effects also, right? BRIAN SCHWARZ: That also gets you to one of the operational benefits which is it should consume less power as well. The biggest OpEX savings we're really going to get is around the management system now because although it's a bunch of distributed components, they're kind of nicely fused together with this UCS Manager. And that's a very different model than has been done in the past where each team had their own set of device managers and they used Remedy or Excel or email or something to kind of make the teams work well together. We're going to allow the multiple teams, kind of preserve the role specialization, but to work on the same piece of software to create these service profiles, deploy service profiles in a much more dynamic fashion. So that is going to turn out to be I think a huge OpEX savings for customers. That's also what gives us the ability to I think help business agility. And so I think a lot of the CEOs out there come to the CIO and say you need to be more responsive to the needs of the business. You need to roll out applications faster, things like that. 
This concept of the service profiles and how they work to fuse compute with the network and virtualization elements really is going to help deliver that, kind of orders of magnitude better than exists in a typical data center today. JIM ANDERSON: Great, great so just to build on that, Scott you know, the BU has spent a lot of time talking to customers out there. Tell me a little bit about the profile of a customer that can leverage this technology. SCOTT ROSE: Sure, so this is a data center solution. So it's really an ideal fit for the mid-size as well as the larger customers, enterprise-class data centers that have large amounts of x86 systems, hundreds, thousands. That's a common footprint for that type of customer. It's not an ideal fit for the SMB, for the smaller branch offices. Cisco obviously has a large channel but there's associated infrastructure with this product. So again the mid-size as well as the large-size data centers will probably be the more preferred customer fit. Obviously environments that are aggressive adopters of virtualization. That's one particular workload that we've spent a lot of time in building differentiated capabilities around. Multi-tenancy is another area, the service provider market. Organizations that are basically allocating resources across multiple divisions or even to multiple clients but all shared on the same infrastructure. We've built a lot of capabilities, both in the management architecture as well as in how the systems are interconnected to allow for that multi-tenancy type resource allocation. And then large physical applications running on x86 platforms. So these are the production systems, the transactional environments, the databases. This system is purpose-built to support those large physical applications in addition to the virtualization environments that we'll be designing around as well. JIM ANDERSON: Let me just build on that, throw another one your way. 
Tell me about the memory aspect and how that might play into the application world for this type of customer. SCOTT ROSE: Yes, so the expanded memory blade again is I think a fantastic feature set. The initial customer prospecting we've had has shown very, very solid interest. So obviously you can extend that to larger VM densities. JIM ANDERSON: How big can that get, that expanded memory, because I know some people that's new, you know, it's four times the number of (inaudible). SCOTT ROSE: Yes, so we've basically quadrupled the number of DIMM slots as compared to a 12-slot configuration. So fully populated using the largest DIMM densities -- or sorry, 8 gig DIMM densities on that board would give you about 384GB of memory. JIM ANDERSON: Wow, wow. SCOTT ROSE: For a two-socket configuration, so that's a vast amount of memory. JIM ANDERSON: That's going to be leading in the industry I would guess, right? SCOTT ROSE: Absolutely, it'll be leading. It'll be a large differentiation from our competition. So more VMs on a server, bigger VMs and large production applications that might require large memory footprints for better performance. BRIAN SCHWARZ: Yes, the other thing that the DIMM slots give you is you can reach pretty large memory capacities with really cheap DIMMs. So you could put in, because you have so many DIMM slots, you can put in 2 gig or 4 gig DIMMs which are really cheap relative to the 8 gig DIMMs that are pretty expensive. JIM ANDERSON: Exactly, that's a good point. BRIAN SCHWARZ: So you can get to a kind of comparable system much cheaper just because you have more DIMM slots. JIM ANDERSON: Interesting, so Brian, just to stick with you on that is what are the contingencies associated with this system? Are there any out there? BRIAN SCHWARZ: It's important to understand the dependencies that we kind of designed the system around. 
So most of the servers are going to use external storage, you know for the application data, for the VMDK files in ESX or something like that. JIM ANDERSON: So mostly a SAN environment? BRIAN SCHWARZ: So certainly it's designed for a SAN environment because it does implement the unified fabric. We have Fibre Channel uplinks to the SAN. It's going to work great for NAS environments too. JIM ANDERSON: NAS too, okay. BRIAN SCHWARZ: It uses 10Gb Ethernet so it's going to run screaming fast NAS as well. It's also important to know that the UCS Manager system itself runs kind of below the operating system. So we don't have any host agents in the operating system. Whatever you do with the operating system today in terms of how you provision it, how you patch it, what tools you use, whether they're from the OS vendors or you've bought some other industry tools, you would do those as you do today. That's largely unaffected by the use of our system. And also because we don't do a lot with the application space... JIM ANDERSON: So it can plug into the existing system management tools that are common with all of our customers? BRIAN SCHWARZ: Yes, most people have some type of automated OS deployment tool already, either they do one per operating system or they bought some tool that can support a heterogeneous set of systems. And we'll just seamlessly plug in. I think Scott's going to talk about our relationship with BMC a little bit later. The other thing to know is that we're running stock x86 operating systems, so Windows, Red Hat, ESX. We kind of get the applications for free. You shouldn't have to modify the applications. The applications get qualified against the operating system but since we're running an unmodified operating system we're going to have OEM relationships with the operating system vendors. So there's going to be the appropriate levels of support. 
Whatever the x86 applications people are running in their data center today they'll be able to run on the UCS system. We will be working to provide some best practices and design guides. And there's some vendors who like to do hardware certifications like Oracle we're working with. But for most application vendors, as long as it supports the right version of the operating system, it's going to be good to go on the platform. JIM ANDERSON: Interesting, so just to build on that a little bit, security's always a big issue in a data center. How does security fit in with this? BRIAN SCHWARZ: It's another good question and certainly one that's come up from a number of the folks in the Cisco team because Cisco has a big security business. And it's important to understand what's embedded in the platform versus what you need to do outside of the platform. Inside the platform we obviously have the classic role-based authentication and things like that, username/passwords and things, can authenticate against a RADIUS LDAP system like Active Directory or something. It also does from a security aspect layer 2 networking. So it does kind of classic network isolation like VLANs and things like that. There's a lot of stuff around security that it doesn't do though and people again should use their existing toolsets to do. It doesn't do any layer 3 IP security. So use your existing firewall, use your existing intrusion detection or prevention system. You still need to do good best practice around securing the operating system with patching and things like that. Those are all handled outside of the system by the existing toolset that already exists. And again, we should be able to fit in unaffected by not changing how those tools operate. JIM ANDERSON: Okay, so I want to make this interactive. I believe we have a live question from Bellevue. So let's take a look and see what we can get there. Bellevue, are you out there? AUDIENCE QUESTION: Yes sure, hey. Thanks, Jim. 
One of the questions that really pops out to me, knowing that we have a lot of sophisticated customers out there, if we announce our product on the 16th and then have to kind of keep mum for two weeks about Nehalem, I'm a little unsure how we're going to handle that. Can you address that? JIM ANDERSON: Sure, great question, so a couple of things just in general. So we have decided to go through and announce what we're calling our external announcement on March 16th. We also will be participating in the overall Intel announcement on March 30th. So there is a gap between our announcement on the 16th and the 30th. When we announce on the 16th what we'll talk about is a lot of the architectural concepts associated with this particular product. We just won't get into the key performance enhancements and things like that associated with the chip. Intel's a great partner. We want to make sure that we live up to their expectations. So we have an agreement with Intel that we will not discuss specific items around the chip. But we'll talk about our platform, the benefits of our platform and why people want to transition to that platform moving forward. Scott, Brian, do you want to add any more comments on that because it is a good question. BRIAN SCHWARZ: It is and I would keep the conversation about the Unified Computing System and the concept of how networking and storage access and virtualization and compute can all be fused together as a concept, and the benefits it can bring. And in most conversations it's best to stay out of the low-level details, so the speeds and feeds anyway. JIM ANDERSON: Right, right. BRIAN SCHWARZ: While you're still in the qualification process, and getting the high-level benefit translated to the customer so they can understand it before they get into how many cores and sockets it is and things like that. JIM ANDERSON: Okay. SCOTT ROSE: And one thing just to add is this is an architecture. It's not just the blade server. 
And if it was just the blade server, we may not have too much to talk about on the 16th. There is a lot of differentiation in the architectural design of this solution, notwithstanding what we're doing at the blade level. So we should consider this end-to-end, system-level architecture as well as what we can do at the compute tier and be able to talk about all those points. So the blade discussion obviously will happen after the 30th. But even before that there's going to be a lot of meat behind this announcement. JIM ANDERSON: Yes, and I think the key thing there is customers are working closely with Intel directly themselves. Intel can disclose to those customers via their sales force, so what they're like. And so I would stick to our platform, the architecture and let Intel handle discussing the performance of their chip until after the 30th. Another question just to keep it exciting here and I'll throw this one out to you, Scott. And it's from Carl Wiese, my boss, and he wants to know when will the full-size blade for extended memory be available? SCOTT ROSE: So right now that's going to be a post-FCS deliverable. The exact date's still being locked down. But it's going to be roughly about four months or so after the FCS date. So said with a little bit of disclaimer but it's about a quarter or maybe a little more than a quarter after initial release. JIM ANDERSON: Okay, and there's one for you, Brian, that came in (inaudible). Is UCS the official product name? BRIAN SCHWARZ: Unified Computing System is going to be the official product name. There's going to -- it's actually a product line name. There's a whole set of components that make up the UCS systems. There's obviously the fabric interconnect, the top-of-rack networking element. There's the blade chassis themselves, the blades that Scott talked about. There's a special version of the Fabric Extender. They're all going to be part of the UCS family. 
They have a numbering scheme that will describe the different components. It's 6100, B200, etc. So there's going to be a big product line that comes around the Unified Computing System brand. You kind of relate it to the Nexus brand as the data center switch product line from Cisco. UCS is going to be kind of the data center unified compute product line. JIM ANDERSON: Okay, all right, great. Great, so it is the official name. BRIAN SCHWARZ: It is. JIM ANDERSON: Okay so Scott, just to build on that, tell me a little bit about how this system scales. You mentioned it a little earlier but let's elaborate. SCOTT ROSE: So UCS again it's an architecture so you can think of it as an instance or as a domain. But basically we have two independent fabric interconnects, two independent planes and we can interconnect again up to 40 blade chassis which would each fit 8 of these half-width blades. So again we're talking 320 two-socket systems all within a single management domain and I want to be very clear there. The experience of managing one system is essentially identical to managing a large collection of systems. That's one of the capabilities that we've built into the architecture. We've basically designed this for customer needs. So it's not just the physical systems but the ability to manage thousands, or interact with thousands of VMs. You could be running two, four, maybe even a dozen virtual machines on any one of these physical blades. So we are talking about a large scale-out type implementation. Again dozens if not hundreds of physical systems, perhaps thousands of VMs running inside of a single management domain. JIM ANDERSON: Okay, great, great. So I believe we have another live question from our studio audience out there in Herndon, Virginia. Herndon, are you out there? AUDIENCE QUESTION: Yes Jim, can you hear us? JIM ANDERSON: I can hear you. How's it going? AUDIENCE QUESTION: All right, good times out here. JIM ANDERSON: Good. 
AUDIENCE QUESTION: Hey Jim, the question was what software packages are we going to launch with at the time of launch? And what type of benchmarks will go along with those software packages? JIM ANDERSON: Okay, great question. So I'll throw that your way first, Brian, and then Scott can go. BRIAN SCHWARZ: So it's probably best to differentiate two types of software. There's obviously software that's coming from Cisco that's part of the UCS Manager, and we also have management software in all of the components. I think the question was more around the external software. JIM ANDERSON: Yes, the third-party software. BRIAN SCHWARZ: The third-party software, operating systems and applications. Scott, do you want to take that? SCOTT ROSE: So yes, we'll be talking about this a little bit more. But essentially Cisco is becoming an OEM for the major x86-based operating systems. This is not a resell. This is not a generic relationship. Cisco now will become a tier one OEM for Microsoft Windows, for VMware Virtual Infrastructure, for Red Hat Enterprise Linux, for Novell SUSE Linux Enterprise Server. And we also will be OEMing certain infrastructure technologies, such as BMC and their BladeLogic product. So we've recognized that both the internal software as well as the external software pieces are extremely important components in order to sell a strong and relevant server-based architecture. JIM ANDERSON: Okay, and just... BRIAN SCHWARZ: Just to add onto that, there was a question about benchmarks too. JIM ANDERSON: Right. BRIAN SCHWARZ: What benchmarks are we working on? JIM ANDERSON: Exactly. BRIAN SCHWARZ: There's a whole set of benchmarks that are in process in the technical marketing team at this point, mostly around the things that are industry standards like SPECint, SPECfp. JIM ANDERSON: Yes. BRIAN SCHWARZ: SPECpower, SPECjbb. SCOTT ROSE: VMmark. JIM ANDERSON: VMmark, yes.
BRIAN SCHWARZ: VMmark, which is the VMware benchmark for Virtual Infrastructure. So there are a number of those that are going to be available right at FCS. And there's obviously a whole list of things we're going to have the team queued up to do over the coming weeks and months as well. SCOTT ROSE: And one more thing to add to that. We'll also be running internal transactional benchmarks that are analogous to TPC-C, TPC-H, TPC-E, to show basically the performance throughput for both transactional as well as large I/O-based applications. Those are very, very important and relevant application solutions that are running on x86. JIM ANDERSON: So just to summarize the benchmark strategy if I can here: obviously we're working closely with the operating system vendors, and we'll do all the standard testing and benchmarking associated with them. Our next strategy is with regard to what we'll call industry standard benchmarks. There are a series of them out there, including VMmark, associated with virtualization. And we plan to have all of them covered at FCS. And then we're going to work closely with applications like Oracle or Microsoft SQL Server that really drive the system level, and do some benchmarks with those types of applications and build from there. SCOTT ROSE: Correct. JIM ANDERSON: Is that correct? SCOTT ROSE: Yes, that's a good summary, yes. JIM ANDERSON: Okay, great, great. So Brian, let me just try to summarize some of this stuff. What are the five key differentiators that the team can take away with regards to this platform? BRIAN SCHWARZ: Yes, so like in any sales situation, one of the most important concepts is understanding how what you're offering is different than what your competitors are going to offer. And there are really five big differences between what we're going to offer and what customers can get from, let's say, an HP or an IBM. And I think we have a slide that's probably showing now.
The first one is around the unified fabric. The whole system is built around the unified fabric. There's a bunch of benefits we get from that: component reduction, power savings, etc. We talked about the memory expansion. That is Cisco technology, Cisco ASICs that were designed, that are on the board, that enable us to do that. It's going to be leading technology in the industry, absolutely clear. We have the specialized adapter we talked a little bit about, this virtualized adapter we're working on. JIM ANDERSON: That was the Palo? BRIAN SCHWARZ: The Palo adapter was the codename. It's going to implement the VN-Link technology in hardware. You can essentially create NICs and HBAs through software, very, very interesting technology. JIM ANDERSON: So you're actually virtualizing I/O a little bit, right, okay? BRIAN SCHWARZ: Yes, yes, that's a huge differentiator as well. There's not really any adapter in the industry that looks like that adapter today, and there's a bunch of benefits we get from that. A fourth one is the concept of service profiles, and that's really where our agility comes from, our business agility. You essentially capture what you want a server to look like and how it needs to access its LAN and SAN resources, what VLANs it needs to be on, what VSANs, what WWN it needs to use, etc. That's all captured in software, and you kind of deploy it into the compute fabric, I'll say. JIM ANDERSON: So you can have that dynamically define the characteristics of a server? BRIAN SCHWARZ: Yes, yes, essentially we've pulled the state out of the hardware and put it in the UCS Manager software. And it allows you to do this whole chain of deployment by associating a service profile with a blade. It configures the blade, the adapter, the Fabric Extender and the fabric interconnect all in one shot as an automated process. JIM ANDERSON: Interesting, okay.
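[The service-profile idea Brian describes, server identity held in software and bound to hardware in one associate step, can be sketched roughly as a data structure plus a bind operation. The class names and fields below are illustrative assumptions, not the actual UCS Manager object model:]

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ServiceProfile:
    # The identity a server should assume, held in software rather than
    # burned into any one blade (the "stateless computing" idea).
    name: str
    vlans: list = field(default_factory=list)   # LAN segments to join
    vsans: list = field(default_factory=list)   # SAN fabrics to reach
    wwn: str = ""                               # Fibre Channel World Wide Name

@dataclass
class Blade:
    chassis: int
    slot: int
    profile: Optional[ServiceProfile] = None

def associate(profile: ServiceProfile, blade: Blade) -> Blade:
    """Bind a profile to a blade. In UCS this single step would also
    configure the adapter, Fabric Extender and fabric interconnect."""
    blade.profile = profile
    return blade

web01 = ServiceProfile("web01", vlans=[10, 20], vsans=[100],
                       wwn="20:00:00:25:b5:00:00:01")
blade = associate(web01, Blade(chassis=1, slot=3))
print(blade.profile.name)  # web01
```

[Because the profile, not the blade, owns the identity, moving a workload to a spare blade is just another `associate` call; this is also what makes the cross-domain profile migration mentioned later conceptually simple.]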
BRIAN SCHWARZ: So that's kind of how we implement our stateless computing concept, if people have heard of that before. And the last thing is the UCS Manager itself, as kind of a single pane of glass, the device manager for all of the infrastructure. So you might have 20 chassis with blades in them. Through a single pane of glass you can go and see all the inventory, which DIMM slots are populated on all 100 servers, 200 servers, the health of all those things. JIM ANDERSON: So you can get a system view of everything. BRIAN SCHWARZ: Yes, across many different components and many types of components, which really hasn't been an easy thing to get before. Each team kind of had their own view of the world, their own dashboard, and it was hard to get a higher-level dashboard to knit them all together. JIM ANDERSON: Okay, okay, great. So you know, I want to move on into system management. But before I do that, I do have some more questions from the audience that we'll sort of take on this side. So will UCS eventually allow for existing Cisco technology, i.e. unified communications, ACE on a blade? What are our plans there? SCOTT ROSE: So there actually already is solution development going on with other Cisco BUs and UCS. Obviously Call Manager could be one of them. We're looking at some of the branch-based technologies. So obviously right now Cisco builds these solutions and we partner with other companies for the infrastructure. Cisco now has the internal infrastructure to host those products. So you can probably be assured you're going to see Cisco end-to-end solutions being built off of UCS. JIM ANDERSON: Okay, great. So before I move on to the next session, I do have one question from the live audience here. So let's take a shot. AUDIENCE QUESTION: Hi guys. So Scott, you were talking about the management domains and the fact that you can have one sort of Springfield instance managing like 40 chassis or 320 blades.
So if I sort of project it out to a typical mid-sized enterprise with thousands of servers, if you will, and let's just say that they all go to California. How are we talking about managing more than 320 servers? I mean, if we have thousands of servers, are we having some sort of a peering relationship between the Springfield instances? SCOTT ROSE: So first off, let's keep things in check here. UCS initially will be a domain that will manage the chassis that we've mentioned, up to 40 at full scale. So initially about 320 systems will be our maximum scale. We already have design plans to increase that through additional management capabilities. And actually we've already realized multi-UCS management, basically domain-to-domain, hundreds going into thousands of systems, via some of our external partnerships. In fact, through the partnership that we've executed with BMC, their console will be able to show physical UCS domains. So you'll be able to actually see service profiles, and take service profiles from one UCS domain and move them into another. So in a rudimentary way we're actually already at that multi-domain capability. AUDIENCE QUESTION: Perfect, thank you, great. JIM ANDERSON: So you mentioned -- go ahead, Scott. SCOTT CLARK: I was just going to say, following on what Scott said, and also the question about what we focus on between announce date and Intel's announcement: I think the core attributes that Brian laid out around the differentiation, around memory and so on, are interesting and important. But I think fundamentally the biggest opportunity for us is to start understanding the customers' environments that we're selling into and the management-level activities that are currently going on, because at the executive level that is going to be one of the major things that they're going to want to move to, because it will drive OpEx out. It will drive CapEx out. And that's really where our proposition starts to stick.
When we think about all the management activities that happen in the data center and what we can now do through UCS Manager, that's probably, for the two weeks after announcement, the thing that we should focus our discussions around. JIM ANDERSON: Very good point, very good point, Scott. So you mentioned management, and it's a hot topic. SCOTT CLARK: Yes. JIM ANDERSON: Let's talk about that a little bit, Scott. So tell me, what is our management story for UCS? SCOTT ROSE: Sure, so I believe we might have a graphic that sort of helps out in this picture. Essentially we are not just building a platform here, a hardware architecture.
