
0 Replies Last post: Aug 30, 2010 1:33 PM by Cliff Kinard  

Firing Up VMware - Q&A with Robin Bloor - Analyst/Blogger and Joe Jakubowski - Sr. Engineer Virtualization Performance at IBM

 

There’s a common perception in the server marketplace that there’s little to distinguish the x86 commodity servers from the various server vendors. After all, they tend to use commodity components, build similar motherboards and then slide them into servers or blades. I discussed this view of the x86 server world with Joe Jakubowski, a Senior Engineer with IBM who leads IBM’s System x virtualization performance team. The conversation focused on virtual servers and their deployment. The following is an edited transcript of the conversation.

 

Bloor: So, with VMware deployments, how is IBM’s System x different from the competition?

 

Jakubowski: IBM has been doing server consolidation for decades and has carried out many server consolidation studies for its customers. From these studies we understand the hardware utilization characteristics of the workloads that customers want to consolidate. We understand the workload type: email server, database server, file/print server and so on. We also understand the average and peak CPU, average and peak memory footprint, disk I/O and network I/O. From that we can develop profiles of these workloads, and that gives us a yardstick by which we can meaningfully characterize virtualization performance. From these profiles we can work out how many of these workloads can be virtualized on the same host.
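The sizing exercise Jakubowski describes can be sketched as a small packing calculation. All workload figures and host capacities below are invented for illustration; they are not IBM’s study data.

```python
# Sketch of the consolidation sizing described above: given measured
# workload profiles, estimate how many fit on one virtualization host.
# All numbers are invented for illustration, not real study data.

# Per-workload profile: (name, peak CPU in GHz-equivalents, peak memory in GB)
profiles = [
    ("email server",      1.2, 4.0),
    ("database server",   2.5, 8.0),
    ("file/print server", 0.4, 2.0),
]

# Hypothetical host: 2 sockets x 4 cores x 2.93 GHz, 12 x 4 GB DIMMs
host_cpu_ghz = 2 * 4 * 2.93
host_mem_gb = 12 * 4

def fits(placed, candidate):
    """True if adding candidate stays within both CPU and memory budgets."""
    cpu = sum(p[1] for p in placed) + candidate[1]
    mem = sum(p[2] for p in placed) + candidate[2]
    return cpu <= host_cpu_ghz and mem <= host_mem_gb

# Greedily pack copies of the file/print profile onto the host
placed = []
while fits(placed, profiles[2]):
    placed.append(profiles[2])

print(f"host: {host_cpu_ghz:.1f} GHz-eq CPU, {host_mem_gb} GB RAM")
print(f"file/print VMs that fit: {len(placed)}")
```

Even with these made-up numbers, the memory budget (48 GB) caps the VM count long before the CPU budget does, which is exactly the bottleneck discussed next in the interview.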

 

Bloor: So how does that contribute to System x server design?

 

Jakubowski: One of the things that stood out from our analysis was that memory capacity is frequently a bottleneck in virtualization deployments. If you take a server that has two processors and only twelve DIMM slots, it tends to run out of memory capacity before it runs out of CPU. So IBM has differentiated itself as a solutions vendor by tackling the memory capacity issue on the virtualization host. We’ve tackled that in two ways. One is to have more physical DIMM slots, but a few other vendors have also done that.

 

The other way is that with our Intel Nehalem EX processor solutions we provide our MAX5 memory expansion drawer. On our two-processor and four-processor rack products we use QPI links from the Intel processors themselves to attach the memory expansion drawer. This provides 8 additional memory channels and 32 additional DIMM slots. In effect, on two-processor servers with 32 DIMM slots in the server, we can increase the memory capacity by 100 percent. Those 8 additional memory channels come from our proprietary silicon, which means we have a memory controller in the MAX5 memory box itself. That’s a major differentiation.
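The 100 percent figure can be checked with simple arithmetic. The slot counts come from the interview; the DIMM size is an arbitrary example value.

```python
# Back-of-envelope check of the MAX5 claim above: a 2-socket server with
# 32 DIMM slots plus a 32-slot MAX5 drawer doubles memory capacity.
# Slot counts are from the interview; the DIMM size is an example value.

dimm_gb = 8                      # example DIMM size
server_slots = 32                # 2-socket Nehalem EX rack server (per the text)
max5_slots = 32                  # MAX5 expansion drawer (per the text)

base_capacity = server_slots * dimm_gb
expanded_capacity = (server_slots + max5_slots) * dimm_gb

increase_pct = 100 * (expanded_capacity - base_capacity) / base_capacity
print(f"{base_capacity} GB -> {expanded_capacity} GB (+{increase_pct:.0f}%)")
```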

 

And by the way, it may also boost the performance of some non-virtualized workloads such as high-end OLTP database applications.

 

Bloor: Do none of the other vendors do anything like this?

 

Jakubowski: Not in the way that IBM has done it. The only thing even close is what Cisco has done with their Unified Computing System, and it’s a totally different approach. They don’t use QPI links, and it’s limited to their Nehalem EP products, and only in the two-socket space. If you look at the 3 memory channels that come off each EP processor, what Cisco has done is add what I call a “mux buffer” that multiplexes 4 DDR3 subchannels on each memory channel, expanding it from 3 DIMMs to 8 DIMMs. They get memory expansion through a different type of architecture, but it’s limited to their EP products. And with Cisco UCS you have to add processor sockets to scale memory capacity. With IBM’s QPI-based method of memory expansion, it is not necessary to add processor sockets to scale memory capacity.
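The structural difference between the two approaches can be put in numbers. Per-channel and per-drawer counts are taken from the interview; the per-socket figures are inferred from them and should be treated as illustrative.

```python
# Rough comparison of the two memory-expansion approaches described above.
# Channel and slot counts are taken from the interview; per-socket figures
# are inferred and illustrative only.

EP_CHANNELS_PER_SOCKET = 3       # Nehalem EP (per the text)

def ucs_dimms(sockets):
    """Cisco UCS: mux buffer takes each EP channel from 3 to 8 DIMMs,
    so memory capacity scales only by adding sockets."""
    return sockets * EP_CHANNELS_PER_SOCKET * 8

def ibm_dimms(sockets, drawers):
    """IBM: MAX5 drawer attaches over QPI, adding 32 slots per drawer
    without adding sockets (16 server slots per socket, per the text)."""
    return sockets * 16 + drawers * 32

print("UCS, 2 sockets:        ", ucs_dimms(2), "DIMM slots")
print("IBM, 2 sockets + MAX5: ", ibm_dimms(2, 1), "DIMM slots")
```

The point of the comparison is not the absolute slot counts but the second parameter of `ibm_dimms`: the drawer count varies independently of the socket count, which is the differentiation Jakubowski claims.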

 

If you study Intel’s products and the memory RAS (Reliability, Availability, Serviceability) characteristics of the EP line versus the EX line, the EX line is vastly superior in memory RAS features. If you’re a customer and you want to expand memory capacity, you should be concerned about memory RAS features. I would choose the EX processor family over the EP processor family every time.

 

Bloor: It occurs to me that this “memory starvation” problem is only going to get worse, as each new generation of processors has more cores than the previous one. Is that right?

 

Jakubowski: You’re correct. But you also have to look at memory technology itself. If you observe what’s happened over recent years, with memory technology going from megabit-class densities to one gigabit, two gigabit and now four gigabit parts, what we’ve seen is some correlation between processor power, DRAM technology and memory footprint. Memory footprint doubles on about a 3-year cycle, as does DRAM density. So we may go from an 8-core to a 16-core processor, but memory is almost keeping pace. This helps mitigate some of the problem, but it doesn’t solve it all.

 

Looking to the future, IBM regards the memory expansion drawer as today’s solution. We’re looking at other approaches. We’re always looking to innovate.

 

Bloor: So how savvy are the buyers of x86 servers? How much technical knowledge do they really have of this kind of issue? My impression is that not many do.

 

Jakubowski: Customers certainly know that memory capacity is a potential bottleneck in a virtualization deployment, so it’s an easy conversation to have with them. It’s especially the case when they don’t want to pay a premium for 8 or 16 gigabyte DIMMs. If they want to stick with 2 gigabyte or 4 gigabyte DIMMs, we can provide them with 32 extra DIMM slots. We can provide a solution that avoids them having to go to 8 gigabyte DIMMs at all. So that soon becomes a total-cost-of-ownership discussion.

 

It becomes a question of “Can I lower my cost per VM if I go with this memory expansion route?”
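That cost-per-VM question has a simple shape, which the sketch below illustrates. All prices, slot counts and capacities are invented placeholders, not real quotes or IBM figures.

```python
# Sketch of the cost-per-VM comparison above: premium large DIMMs in fewer
# slots versus cheap small DIMMs in more slots (e.g. via an expansion
# drawer). All prices and capacities are invented placeholders.

VM_MEM_GB = 4  # example memory footprint per VM

def cost_per_vm(slots, dimm_gb, dimm_price, chassis_price):
    """Total hardware cost divided by how many VMs the memory can hold."""
    capacity_gb = slots * dimm_gb
    total_cost = chassis_price + slots * dimm_price
    return total_cost / (capacity_gb // VM_MEM_GB)

# Option A: 32 slots of premium 8 GB DIMMs (hypothetical $400 each)
a = cost_per_vm(slots=32, dimm_gb=8, dimm_price=400, chassis_price=15_000)
# Option B: 64 slots (server + drawer) of cheap 4 GB DIMMs ($120 each)
b = cost_per_vm(slots=64, dimm_gb=4, dimm_price=120, chassis_price=18_000)

print(f"option A: ${a:.2f} per VM, option B: ${b:.2f} per VM")
```

With these placeholder numbers both options reach the same total capacity, but the extra-slots option wins on cost per VM because small DIMMs are cheaper per gigabyte; real pricing would of course shift the crossover point.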

 

Bloor: Is this memory expansion capability only on rack servers or does it apply to all of the x86 line of servers?

 

Jakubowski: It’s on all our Nehalem EX-based solutions. So our 2-socket x3690 X5 and 4-socket x3850 X5, and also a blade called the HX5, though the memory expansion there is 24 DIMM slots, not 32.

 

Bloor: OK. Thanks for your time and for increasing my understanding of so-called commodity servers.

 

 

Links to more information

About the speakers:

 

Joe Jakubowski is the Virtualization Performance Lead Engineer for System x Performance Analysis and Benchmarking in IBM’s Systems & Technology Group, Systems Hardware Development. (LinkedIn)

Robin Bloor Ph.D. Chief Analyst & President, The Bloor Group and Founder, Bloor Research

Web: http://www.thevirtualcircle.com

Web: http://www.wordsyoudontknow.com

Blog: http://www.HaveMacWillBlog.com

Bio: http://havemacwillblog.com/site-help/about-the-blogger/

Author: Words You Don’t Know, The Electronic Bazaar.

Co-Author: Service Oriented Architecture for Dummies, Service Management for Dummies, Cloud Computing for Dummies
