Hands on with the Cisco UCS C200 M2
The hype machine in Cisco channel land has been working overtime since Cisco started shipping its new UCS line of rack servers and blade chassis. If you believe what is being said, Cisco has essentially reinvented the server and now offers performance its competitors can't match. Its big claim to fame at the outset was its VMware VMmark scores, but as of this writing HP has bested Cisco in every category by small margins. The other key selling point is Cisco's Extended Memory Technology, which allows an increased amount of physical RAM in UCS servers and is aimed at providing greater virtual machine density.
Cisco, in my view, has never been a company overly concerned with sexiness in its hardware or software, although it certainly tried harder than usual with its flagship Nexus 7000 switch. The UCS C200 servers I have acquired will power a new virtualized Unified Communications (Call Manager) infrastructure, which is another major advancement in Cisco's product offerings. So while my use case will not push these servers to their theoretical performance limits, I will still get down and dirty with this new hardware platform.
Under the hood
My first impression of the C200 is that it looks remarkably similar to an older lower-end Dell PowerEdge or Supermicro white-box server: aesthetically pretty vanilla, at this level anyway. That said, the layout is simple and gets the job done in true minimalist fashion. All internal components are OEM'd from the usual suspects: Intel, Samsung, Seagate, LSI. Getting the cover off of this thing is truly a pain, requiring a lot of hard mashing of the release button and forceful downward pushing. Both of my C200s were like this, so it's definitely not a fluke.
Cisco Integrated Management Controller (CIMC)
CIMC is the remote out-of-band management solution (IPMI) provided with Cisco servers. With the very mature HP iLO and Dell DRAC remote management platforms having been around for years, Cisco's freshman attempt in this space is very impressive indeed. All of the basic data and functionality you would expect to find is here, plus a lot more. Access to the CIMC GUI requires Adobe Flash in a web browser, which is visually pretty but disappointing to see in an enterprise platform. Cisco certainly isn't the only major vendor trending in this direction (read: VMware View 4.5).
Performance is a bit sluggish on tabs where the hardware has to be polled and the display data refreshed, but when that data eventually trickles in, the level of detail is dense.
That was all just from the Inventory page! More great detail is revealed in the Sensors section with multiple readings and values for each core component of the server.
The user, session, log, and firmware management options include all the usual settings and variables. One other neat option in the Utilities submenu is the ability to reboot CIMC, reset it to factory defaults, and import configurations! That's huge and will make managing multiple servers much more coherent. All told, and bugs aside, the potential of CIMC is very impressive.
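Because CIMC speaks standard IPMI, the same data shown in the GUI can also be pulled from a script across several servers. Below is a minimal sketch using Python and ipmitool; the hostnames and credentials are placeholders, and it assumes IPMI over LAN has been enabled on each controller and that ipmitool is installed on the management workstation.

```python
#!/usr/bin/env python
"""Poll basic sensor data from several CIMC controllers over IPMI.

A rough sketch only: hostnames and credentials below are placeholders,
and IPMI over LAN must be enabled in each CIMC's communication settings.
"""
import subprocess

CIMC_HOSTS = ["cimc-c200-1.example.local", "cimc-c200-2.example.local"]
USER = "admin"          # placeholder credentials
PASSWORD = "password"


def read_sensors(host):
    """Return the raw sensor data record (SDR) listing for one CIMC."""
    cmd = [
        "ipmitool", "-I", "lanplus",   # IPMI v2.0 over LAN
        "-H", host, "-U", USER, "-P", PASSWORD,
        "sdr", "list",
    ]
    return subprocess.check_output(cmd).decode()


if __name__ == "__main__":
    for host in CIMC_HOSTS:
        print("=== %s ===" % host)
        print(read_sensors(host))
```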
Call Manager - the virtual edition
A major shift for Cisco, now available in CUCM version 8.x, is the ability to deploy the enterprise voice architecture inside VMware ESXi. Call Manager, and its sister voicemail service Unity Connection, are just Linux servers (RHEL 4) after all, so this makes perfect sense. You can now deploy Call Manager and Unity clusters in a virtual space while also leveraging the HA provided by VMware.
This of course doesn't come without caveats. Cisco currently does not support VMs living outside of Cisco servers, and that includes storage. So you will have to buy a Cisco server to deploy this solution, and keep the VMs on Cisco disk rather than your own corporate SAN. At least you can use your own VMware licensing and vCenter, which is a good thing. Once Cisco has established a comfortable foothold in the enterprise server market, look for these policies to ease a bit. Right now they need to sell servers!
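Deployment itself starts from the Cisco-provided OVA templates (linked in the references below). The sketch that follows is just one hedged way to push such a template to an ESXi host by shelling out to VMware's ovftool from Python; the OVA filename, VM name, datastore, network, and host credentials are all placeholders for your own environment.

```python
#!/usr/bin/env python
"""Deploy a CUCM OVA template to a standalone ESXi host via ovftool.

A rough sketch under stated assumptions: the OVA filename below is
hypothetical, and the datastore, network, and credentials are placeholders.
"""
import subprocess

OVA = "cucm_8.x.ova"                   # hypothetical name for the Cisco OVA
TARGET = "vi://root:password@esxi-c200-1.example.local"

cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--name=cucm-pub-01",              # VM name to create
    "--datastore=c200-local-ds1",      # local Cisco disk, per the support caveat
    "--network=VM Network",            # port group for the voice VLAN
    OVA,
    TARGET,
]
subprocess.check_call(cmd)
```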
Not purely Cisco-related, but a minor observation that others have noticed as well: ESXi incorrectly reports the status of Hyper-Threading support on non-HT Intel-based servers. My C200 is equipped with Xeon E5506 CPUs, which do not support HT. Not a big deal, just an observation. If HT were available on this CPU I would definitely enable it, as ESX(i) 4.1 can now schedule much more efficiently on the newer Intel CPU architectures.
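For what it's worth, the discrepancy is easy to spot through the vSphere API, since the physical core count and logical thread count are reported separately from the HT status flag. Here is a minimal sketch using pyVmomi against a single host; the hostname and credentials are placeholders.

```python
#!/usr/bin/env python
"""Compare physical core count to logical thread count on an ESXi host.

A minimal sketch, assuming pyVmomi is installed; hostname and credentials
are placeholders. If numCpuThreads equals numCpuCores, the CPU has no
Hyper-Threading regardless of what the HT status flag claims.
"""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esxi-c200-1.example.local", user="root",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        cpu = host.hardware.cpuInfo        # physical CPU inventory
        ht = host.config.hyperThread       # HT scheduling status
        print("%s: %d cores, %d threads, HT available=%s, HT active=%s" % (
            host.name, cpu.numCpuCores, cpu.numCpuThreads,
            ht.available, ht.active))
finally:
    Disconnect(si)
```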
Wrap
All in all, there's a lot to like about the new Cisco offerings. A commitment to virtualization and hardware optimized to run virtual workloads are smart investments to make right now. There are some physical design choices that I don't particularly care for, but this model sits at the bottom of the platform stack, so maybe more consideration was paid to the platforms at the top. CIMC was carefully constructed and, although buggy right now, shows some real innovation over competing platforms in this space. Companies that would not otherwise have been able to buy into a full-blown Call Manager cluster configuration can now do so with a reduced hardware investment.
References:
Cisco OVA templates
Great post. Very informative. Thanks!
Thanks for the post.
Hi,
Thank you for your informative post. I do have a question: is it possible to use non-Cisco memory and/or HDDs?
Best regards,
Dirk Adamsky
It is definitely possible; as you can see above, the regular Seagate ES drive is nothing special. The question will be what this does to your support contract. It may also not be possible to buy a server with no HDDs and no RAM.
I see "Memory Speed: 800 MHz" in the figure. Do 1333 MHz DIMMs run at 800 MHz on this server? So the memory is working at lower performance, or does it mean something else?
Good try.