Many software vendors, seeking ways to extract maximum revenue from both large and small customers, have adopted per-CPU pricing models for their server-based software. This is a fundamentally flawed model, and the emergence in the past year or two of multi-core CPUs highlights the problems even more, yet the industry seems to still be clinging to it.
While some vendors have announced that they will charge for only one CPU on dual-core processors, Oracle has announced that it will charge 75% of the per-CPU price for each core of a multi-core CPU. In other words, someone with a dual-core server will pay for one and a half licenses, and someone with a dual-processor server with dual cores will pay for three licenses. C|Net quoted Forrester analyst Julie Giera about this:
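That arithmetic reduces to a one-line formula. A minimal sketch, assuming the 75%-per-core rule exactly as described above (the function name is illustrative, and no rounding is applied since none is mentioned):

```python
# Oracle's announced multi-core pricing, as described above: each core
# of a multi-core CPU counts as 0.75 of a per-CPU license.
# Function name and the absence of rounding are illustrative assumptions.

def oracle_licenses(sockets: int, cores_per_socket: int) -> float:
    """Licenses required for a server under the 75%-per-core rule."""
    return sockets * cores_per_socket * 0.75

# A dual-core server needs 1.5 licenses; a dual-processor
# server with dual-core chips needs 3.
print(oracle_licenses(1, 2))  # 1.5
print(oracle_licenses(2, 2))  # 3.0
```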
"Oracle had to (change) competitively," Giera said. "They've been taking advantage of customers, frankly, by charging full price for these cores even though in dual and multicore chips you're not getting full capacity."
What's the sense of this? You've never gotten full capacity from multi-CPU servers, either, as is nicely explained by Friedman and Pentakalos in O'Reilly's Windows 2000 Performance Guide:
Shared-memory multiprocessors have some well-known scalability limitations. They seldom provide perfect linear scalability. Each time you add a processor to the system, you do get a corresponding boost in overall performance, but with each additional processor the boost you get tends to diminish. It is even possible to reach a point of diminishing returns, where adding another processor actually reduces overall capacity.
Yet vendors have been charging per-CPU for years without ever accounting for diminishing returns! Multi-core doesn't really change this substantially, so if Oracle's move is motivated by fairness to customers, it's quite late in the game! Fairness would dictate that someone running Oracle software on a five year old server with two 1 GHz CPUs should pay less than someone running it on a brand new server with one dual core CPU running at 3 GHz, but Oracle's new policy dictates the reverse. Oracle had the chance to move to a more sensible pricing model, but they chose not to because they want to preserve a model with a built-in benefit for Oracle: customers frequently end up paying for more than they need.
With per-CPU licensing, software vendors create a choice for a lot of customers: pay for new hardware with just a single CPU, or pay for the extra licenses to run the software on an older (and slower!) dual-CPU machine. That model is great for IBM, which is alone amongst the major server software vendors in also being a major hardware vendor, because they can get the revenue either way, but it sure doesn't make sense for the customers. IBM's position on multi-core, by the way, takes the industry's inconsistency on this to yet another level. IBM considers dual-core x86 chips and dual-core OpenPower 710 and 720 chips to count as a single processor, but all other dual-core chips on the market count as two CPUs. Of course, it's no coincidence that IBM will sell customers multi-core x86 and OpenPower chips, but not other dual-core chips! I should mention, by the way, that this policy applies only to customers who actually pay for their server software, as IBM has several pricing programs that depart completely from the per-CPU model and just use per-user licensing. I have no idea what percentage of IBM customers pay under each model.
This situation isn't going to get any simpler. In addition to multi-core CPUs, there are also hyper-threading CPUs that share some resources of a single core across multiple execution units. Further variations in architecture are inevitable, and the industry will be forced to deal with them. I can imagine chips that can operate as either quad-core or dual hyper-threaded core, dynamically re-wiring themselves to adjust to loads that favor one approach or the other. How on earth will anyone come up with a rational pricing strategy for something like that? It just makes no sense to keep trying to come up with fine distinctions, formulas, and arbitrary lists of which processors count for how many licenses. It's time for the software industry to give up a model that has been deeply flawed all along.
1. Alan Bell, 07/19/2005 12:24:08 PM
I was trying to think of a sensible alternative, e.g. adding up the GHz of each separate processor, but that doesn't take account of speed scaling and other weird things; then I thought of something related to MIPS or gigaflops, but that is pretty stupid too. I don't think a pricing model based on platform performance is particularly workable or necessary. Perhaps an application written for this kind of model should include a counter to measure how much processor time it is consuming: on a fast machine which is using the application very heavily the counter will run fast, but on a slower machine or a less utilized fast machine it would be proportionately slower. Basically this is similar to per-transaction pricing. Overall I agree with you that the model is flawed.
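The consumption counter described in this comment could be sketched as follows. This is a minimal illustration, not anyone's actual licensing mechanism; the class name and billing rate are invented, and `time.process_time` is just one way an application could meter its own CPU consumption:

```python
import time

# Sketch of the metering idea above: the application tracks how much
# CPU time it actually consumes, so a heavily used fast machine accrues
# charges faster than an idle or slow one. Rate and names are
# illustrative assumptions.

class CpuMeter:
    def __init__(self, rate_per_cpu_second: float):
        self.rate = rate_per_cpu_second
        self.start = time.process_time()  # CPU time, not wall-clock time

    def charge(self) -> float:
        """Amount owed for CPU time consumed since the meter started."""
        used = time.process_time() - self.start
        return used * self.rate

meter = CpuMeter(rate_per_cpu_second=0.01)
sum(i * i for i in range(100_000))  # some work that burns CPU time
print(f"owed so far: ${meter.charge():.6f}")
```

Because it bills for cycles actually consumed rather than cycles theoretically available, this behaves much like per-transaction pricing, as the comment notes.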
2. Carl, 07/20/2005 01:27:20 PM
I'm in agreement. The reasons for charging more per CPU could also apply to memory size, but vendors don't charge that way, so the per-CPU model argument doesn't fly.