Isaac Newton, SPC-1, and The Real World

Definitions

The SPC-1 is an industry-recognized storage performance benchmark developed by the Storage Performance Council to objectively and consistently measure storage system performance under real-world high-intensity workloads (principally, OLTP database workloads).

Introduction

Over the last four years, the storage industry has transformed at an amazing rate. It seemed that almost every other week another software-defined storage startup emerged. On the surface this appears great, right? Lots of competition, lots of choice, etc. However, with all of this also comes lots of confusion and disappointment. What is actually new with all of these developments? Are there truly pioneers out there taking us into new and uncharted territory? Let’s go exploring!

Isaac Newton and the SPC-1

Wait, what? How in the world does Isaac Newton relate to the SPC-1? As you may know, Newton was a co-inventor of calculus and discovered the laws of motion and gravity. He is unquestionably one of the most notable scientists in history. Without him, we wouldn’t have much of the modern world that we enjoy today. While some appreciate what he accomplished technically speaking, most people do not go around citing the intricacies of Newtonian mechanics. However, we do appreciate the results of his discoveries: cars, planes, space shuttles, satellites sent to other planets, and many other amazing things. So, while the underlying mechanics are necessary to operate in the modern world, their details are generally reserved for academia. Such is the case with the SPC-1.

This article has one simple objective: to draw a parallel between what the SPC-1 demonstrates and its implications in the real world. As with Newtonian mechanics, most do not walk around citing SPC-1 results. However, just as with Newton, the results have real-world implications, specifically for the information technology world; a world to which we are all deeply connected in one way or another.

What Does The SPC-1 Show Us and… “So What?”

The SPC-1 analyzes all-out performance and price/performance for a given storage configuration. While not showcased, latency analysis is also included within the full disclosure report for each benchmark run. The importance of latency will become apparent later in this article. But in the end, who doesn’t want performance, right?

One question that usually jumps out after reviewing the SPC-1 results is, “So what?” Well, as it turns out, that is precisely what I am trying to answer here. On the surface there is basic vendor performance comparison. The higher the IOs per second, the better the all-out performance. The lower the $/IO, the more cost-efficient the system. What happens when a vendor is able to achieve top performance numbers and top price/performance numbers on the same benchmark run? Now that would be interesting.

Generally speaking, you will not find the same vendor system in the top 10 for both categories simultaneously mainly because the two categories fall at opposite ends of the spectrum. Typically, the higher the IOps produced, the more expensive the system and conversely, the lower the $/IO, the lower the total overall performance.

So hypothetically speaking, what would it mean if a vendor was to construct an individual storage system that landed in both categories? First off, it would mean that the system is both really fast and really efficient (one could argue that it is really fast because it is really efficient). Second, it would raise certain questions about how storage systems are constructed. In other words, it would be like having a Bugatti Veyron with a top speed of 268 mph for the price of a Toyota Camry. It wouldn’t just be interesting; it would change the entire industry.

If your next response is, “But I don’t need millions of IOps,” you would be missing the point completely. OK, so you don’t need millions of IOps, but you get them anyway. What you need to realize is that you don’t need as many systems to achieve your infrastructure goals. In other words, why buy 10 of something when 2 will do the job?

What I am driving toward here is this: imagine how much more performance you could get for every dollar spent; how much more application and storage consolidation you could achieve while simultaneously reducing the number of systems; how much you could save on operational expenses with less hardware; imagine running hundreds of enterprise virtual machines with true data and service high availability in an N+1 configuration while simultaneously serving enterprise storage services to the rest of the network. Oh, the possibilities.

Below are examples of one type of convergence you can achieve with a system such as this. The server models shown below are used for illustration purposes, but it could be Lenovo, Dell, Cisco, or any multi-core x86-based system available in the market today. While traditional SAN, converged, and hyper-converged models are also easily achievable and have been available for many years, the model shown below represents a hybrid-converged model. It provides the highest level of internal application consolidation while simultaneously presenting enterprise storage services externally to the rest of the infrastructure. Without DataCore SANsymphony-V, this level of workload consolidation wouldn’t be possible.

Hybrid_Converged_HyperV

Hybrid_Converged_VMware

So, Does This System Actually Exist?

As it turns out, this isn’t theoretical; it is very real, and has been for many years now. DataCore’s SANsymphony-V software is what makes this possible. DataCore’s approach to performance begins and ends with software. This is the complete opposite of other vendors, who try to solve the performance problem by throwing more expensive hardware at it. And this is precisely why, for the first time (from what I can tell), a vendor (specifically DataCore) landed in both top-10 categories (performance and price/performance) simultaneously with the same test system.

And What About This Matter of Latency?

There still tends to be a lot of talk about IOps. As I have been saying for years now, IOps is a meaningless number unless you have other pieces of information regarding the test conditions such as % read, % write, % random, % sequential, and block size. Then, even with this information, it only becomes useful when comparing systems that have been tested with the same set of conditions. In the marketing world, this is never the case. Every storage vendor touts some sort of performance achievement, but the numbers are incomparable to other systems because the test conditions are different. This is why the SPC-1 is so significant. It is a consistent application of test conditions for all systems making objective comparison possible.
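To make the point concrete, here is a minimal sketch (my own illustration, not from any SPC-1 report; the vendor figures are hypothetical) of why a raw IOps number means little without the block size used in the test. Two systems claiming the same headline IOps can be moving vastly different amounts of data:

```python
def throughput_mb_s(iops, block_size_kb):
    """Approximate data throughput implied by an IOps figure at a given block size."""
    return iops * block_size_kb / 1024.0

# Same headline number, very different amounts of work:
vendor_a = throughput_mb_s(1_000_000, 4)   # 1M IOps at 4 KB blocks
vendor_b = throughput_mb_s(1_000_000, 64)  # 1M IOps at 64 KB blocks

print(vendor_a)  # 3906.25 MB/s
print(vendor_b)  # 62500.0 MB/s
```

The read/write and random/sequential mix skews comparisons in the same way, which is exactly why a fixed workload definition like the SPC-1’s is needed before any two numbers can be compared.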

One thing that is not talked about enough, however, is latency, and specifically the latency across the entire workload range. Latency is what will define the application performance and user experience in the end.

In general, when comparing systems, IOps are inversely proportional to latency (response time). In other words, the higher the IOps, the lower the latency tends to be, and vice versa. Note, this is not always the case, because there are some systems that deliver decent IOps but terrible latency (primarily due to large queue depths and/or queuing issues).
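The relationship above is governed by Little’s Law (L = λ·W): for a fixed number of outstanding IOs, throughput and latency are inversely proportional. A quick sketch (my own illustration with made-up numbers) also shows how a system can post respectable IOps with terrible latency simply by piling on queue depth:

```python
def iops(outstanding_ios, latency_s):
    """Little's Law: throughput = concurrency / per-IO response time."""
    return outstanding_ios / latency_s

# Two very different systems, identical headline IOps:
fast   = iops(32, 100e-6)     # 32 outstanding IOs at 100 microseconds
queued = iops(1024, 3.2e-3)   # 1024 outstanding IOs at 3.2 milliseconds

print(fast)    # 320000.0 IOps at 100 us
print(queued)  # 320000.0 IOps at 3.2 ms -- 32x worse response time
```

Both post 320,000 IOps, but the second makes every application wait 32 times longer per IO, which is why latency across the workload range tells you far more than the throughput number alone.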

DataCore SANsymphony-V not only set the world record for the lowest price/performance number and landed in both top-10 categories with the same test system, but also set a new world record for the lowest latency ever recorded by the SPC-1… sub-100 microseconds! Interestingly, the most impressive part, which you could miss if you are not paying attention, is that this world-record latency was achieved at 100% workload. This is simply staggering! Granted, you may not run at an all-out 100% workload intensity, but that just means your latency will be that much lower under normal conditions. The analogy here is the same Bugatti Veyron mentioned earlier running at top speed while towing 10 tractor-trailers behind it.

Below is a throughput-latency curve comparing DataCore SANsymphony-V to the previous fastest response time on the SPC-1 benchmark (the fastest one I could find in the top 10, at least). Notice how flat the latency curve is for DataCore. This is indicative of how efficient DataCore’s engine is. Not only did DataCore SANsymphony-V post latency numbers more than 7x better than Hitachi (at 100% workload), it also drove an additional 900,000 SPC-1 IOs per second. And finally, it achieved this result at 1/13th the cost of the previous record holder!

LatencyCurve

How was this accomplished? Simply put, it is baked into the foundation of how DataCore moves IO through the system: in a non-interrupt, real-time, parallel fashion. In other words, DataCore doesn’t just “not get in the way”; it actually removes the barriers that normally exist.

Conclusion

Hopefully by now you can see the answer to the “so what” question. These SPC-1 results go well beyond just a storage discussion. This directly impacts the way applications are delivered. You can now achieve what was once impossible. Is it virtual desktops you are after? Imagine running 10x more with less hardware without sacrificing performance. Is it mailboxes you are after? Imagine running 20x more with less hardware without sacrificing performance. Is it database performance you are after? Imagine running on the fastest storage system on the planet (not my words, the SPC-1’s findings) with the lowest latency and doing it at a cost that is untouchable by other solutions (hardware and software-defined alike). So while the SPC-1 is rooted in storage performance, the effect this has on the rest of the ecosystem is beyond just interesting… it is revolutionary!

References

Storage Performance Council Website
SPC-1 Top Ten List
DataCore Parallel IO Website

VMworld 2014 Wrap-Up and Key Takeaways

VMworld 2014 has come and gone. It was a great show with a massive attendance exceeding 22,000 from 85 countries around the globe. This year the theme was “No Limits”, which was very appropriate since the common message across the board was about leveraging software to maximize hardware investments. I couldn’t agree more. VMworld 2014 confirmed that the industry appears to be ready for broad adoption of the software-defined storage architecture that DataCore introduced over 16 years ago and continues to innovate upon. DataCore, having released its 10th generation software-defined storage offering earlier this year, is in the industry “front seat”, leading the charge with its any storage, any server, any hypervisor product offering; a statement aligning perfectly with VMworld’s theme this year: No Limits, or in other words, Unleashed and Unbound.

vmworld2014_nolimits

Not surprisingly, Virtual SANs monopolized the topic of conversation this year. But the message was fragmented since there were many feature limitations coupled with the inability to integrate and co-exist with other storage and server components in the stack. This is what you would expect considering the infancy of the Virtual SAN concept. But this is where DataCore takes the lead yet again. As with traditional central SANs, the heart of DataCore’s Virtual SAN is SANsymphony-V. This means whether you are running traditional central SANs, Virtual SANs, or both simultaneously, DataCore offers the same enterprise-grade feature set and a single common management interface across the entire architecture. This is what you would expect from a 10th generation product release.

As a brief overview, DataCore™ Virtual SAN introduces the next evolution in software-defined storage whereby SANsymphony™-V is used to create high-performance and highly-available shared storage pools using the disks and flash storage in your application servers. It addresses the requirements for fast and reliable access to storage across a cluster of servers without the need for a separate external SAN infrastructure.

A DataCore Virtual SAN is comprised of two or more physical x86-64 servers with local storage, running SANsymphony-V. It can leverage any combination of flash and magnetic disks (although flash is not required) to provide persistent storage services as close to the application as possible without having to go out over the wire (network or fabric). Virtual disks provisioned from the virtual SAN can also be shared across the cluster to support the dynamic migration and failover of applications between hosts.

DataCore’s Virtual SAN addresses the challenges that exist today within many IT organizations such as poor application performance (particularly within virtualized environments), single points of failure, low storage efficiency and utilization, and high infrastructure costs.

DataCore’s Virtual SAN opens up many new possibilities within the infrastructure. Below are some of the most common use cases:

    • Latency-sensitive Applications – Speed up application response and improve end-user experience by leveraging high-speed flash as persistent storage closest to the applications and caching reads and writes from even faster server DRAM.
    • Compact Server Clusters at Remote Sites and Branch Offices – Put the internal storage capacity of your application servers to work as a shared resource while protecting your data against outages.
    • Virtual Desktop (VDI) Deployments – Run more virtual desktops on each hypervisor host and scale them out across more servers without the complexity or expense of an external SAN.
    • Highly-available Applications – When you are running applications that cannot suffer downtime, then you need synchronous mirroring. Synchronous mirroring provides real-time synchronized copies of all data across multiple hosts and/or regional sites, ensuring the highest levels of data and application availability.

Request a Free Virtual SAN: Virtual SAN

As the industry heads full-speed down this road, you can expect very exciting advancements to develop. I know that it will be an exciting time for DataCore’s customers and partners as DataCore continues, as it always has, to invent new ways of raising the bar in the software-defined storage arena.

DataCore Virtual SAN, VMworld 2014, VVols, and more…

DataCore Virtual SAN, a stepping stone to a ‘Data Anywhere’ architecture
Earlier this year, DataCore Software released SANsymphony-V10, the 10th generation of its enterprise storage virtualization solution. SANsymphony-V10 not only continues to push storage performance, scalability, and flexibility to new levels, but it also includes DataCore’s Virtual SAN capabilities and introduces new use cases for what the industry also calls Server-Side-SAN or converged storage.

DataCore’s Virtual SAN software transforms any locally-attached server storage (flash and disk-based) into a “Virtual SAN” that works with all the major hypervisors (i.e., VMware vSphere, Microsoft Hyper-V) and runs on any industry standard server or virtual machine. The DataCore Virtual SAN eliminates the added hassles, costs and complexity required to manage and operate external SAN infrastructures.

DataCore contrasts its enterprise-class virtual SAN offering with competing converged storage and virtual SAN products which are:

  • Immature, whereas DataCore’s Virtual SAN is based on a 10th generation release of SANsymphony-V which has been deployed at over 10,000 customer sites worldwide.
  • Incapable of sustaining serious workloads and providing a growth path to extend capabilities to physical SAN assets when necessary.
  • Inextricably tied to a specific server hypervisor, rendering them unusable in all but the smallest branch office environments or non-critical test and development scenarios.
  • Unable to extend beyond local ‘data islands’ of converged, local storage (internal flash and HDDs) and unable to provide a pathway to unify centralized, external SANs and cloud storage.
  • Lacking the ability to scale performance and capacity to meet enterprise-level needs. By contrast, DataCore’s Virtual SAN scales to more than 50 million IOPS and supports 32 petabytes of capacity across a cluster of 32 servers, yet you can start with as few as 2 nodes.

DataCore’s Virtual SAN is used to create high-performance and highly-available shared storage pools using the disks and flash storage that reside within your application servers. Virtual SAN is a stepping stone in DataCore’s strategy for the ‘Data Anywhere’ architecture. It allows organizations to manage, virtualize, and leverage server disk and flash-based storage along with the ability to virtualize external storage arrays spanning different departments, data centers, and remote locations.

VMware Integration and Future Development
Along with these pioneering advancements, DataCore also was one of the first storage virtualization solutions to interoperate with VMware and it continues to develop and maintain tight integration with VMware’s server hypervisor, ensuring a coordinated approach to realizing fully virtualized environments. DataCore SANsymphony-V10 and Virtual SAN solutions include support for:

  • VMware vSphere interoperability and HCL certifications.
  • VMware Site Recovery Manager (SRM) integration which leverages the SANsymphony-V SRA to replicate virtual machines and associated virtual disks between remote locations, making it possible to realize fully automated site recovery and cross-site migrations.
  • VMware’s vStorage APIs for Array Integration (VAAI) to offload certain low-level storage operations from the hosts to the storage virtualization layer.
  • VMware vCenter console plug-ins to simplify and enable administration of storage. The console communicates directly with vCenter Servers to automatically register vSphere hosts, clusters, virtual machines (VMs), and datastores.
  • VMware VVol provider (underway, on 2015 roadmap) to enable per-VM data services without dealing with the limitations on the number of LUNs that can be addressed and allows VMs to scale to a much larger number of virtual volumes.

DataCore continues to track key VMware product enhancements and is committed to maintaining essential certifications and integrations while working together on evolving future technologies. For example, DataCore is working with VMware on VVols which aims to increase the level of storage management granularity down to the virtual machine level. This will serve to expose DataCore’s rich feature set to the individual virtual machine, further extending the flexibility offered by DataCore into the infrastructure.

For an overview of VVol, please see Hu Yoshida’s blog. Essentially, VVol is one of the major innovations in storage technology that VMware is driving. VVol is designed to provide VM-level storage granularity to IT administrators by providing a storage API and abstraction layer between the hypervisor and the storage system. It makes it easier to automate and manage data without dealing with the details of disks and LUNs. When a VM generates a workload, it is directed to the appropriate virtual volumes (VMDKs) on behalf of the ESXi hypervisor. This eliminates the limitation on the number of LUNs that can be addressed and allows VMs to scale to a much larger number of virtual volumes.

VMware VVols can essentially replace VMFS as a storage unit and possibly do away with the concept of datastores in general. The key benefit for DataCore customers is that VVol enables per-VM data services, such as replication, snapshots, and caching. Customers can then leverage DataCore’s comprehensive feature set and advanced capabilities, including auto-tiering, thin provisioning, and synchronous mirroring over metro distances, to achieve maximum performance, optimal utilization, and the highest availability from their storage infrastructure.

DataCore software-defined storage solutions are well poised to add value to customers’ data centers by integrating VVol with its customer-proven SANsymphony-V10 and Virtual SAN platforms. DataCore has VVol support underway, has incorporated VVols as part of its strategic roadmap and ‘data anywhere’ plans, and is committed to simplifying storage management in a virtual, software-defined world.

DataCore ‘Data Anywhere’ Architecture – Any Hypervisor, Any Storage, Any Location

Learn more about DataCore’s Virtual SAN at VMworld 2014 (Booth #1445).

DataCore Announces Enterprise-Class Virtual SANs and Flash-Optimizing Stack in its Next Generation SANsymphony-V10 Software-Defined Storage Platform

Scales Virtual SANs to More Than 50 Million IOPS and to 32 Petabytes of Pooled Capacity, Surpassing Leading Competitors; End-to-End Storage Services Keep Virtual SANs, Converged Appliances, Flash Devices, Physical SANs, Networked and Cloud Storage From Becoming ‘Isolated Storage Islands’

FORT LAUDERDALE, Fla.–(BUSINESS WIRE)–Amidst the pent-up demand for enterprise-grade virtual SANs and the need for cost-effective utilization of Flash technology, DataCore, a leader in software-defined storage, today revealed new virtual SAN functionality and significant enhancements to its SANsymphony™-V10 software – the 10th generation release of its comprehensive storage services platform. The new release significantly advances virtual SAN capabilities designed to achieve the fastest performance, highest availability and optimal use from Flash and disk storage directly attached to application hosts and clustered servers in virtual (server-side) SAN use cases.

DataCore’s new Virtual SAN is a software-only solution that automates and simplifies storage management and provisioning while delivering enterprise-class functionality, automated recovery and significantly faster performance. It is easy to set up and runs on new or existing x86 servers where it creates a shared storage pool out of the internal Flash and disk storage resources available to that server. This means the DataCore™ Virtual SAN can be cost-effectively deployed as an overlay, without the need to make major investments in new hardware or complex SAN gear.

DataCore contrasts its enterprise-class virtual SAN offering with competing products which are:

• Incapable of sustaining serious workloads and providing a growth path to physical SAN assets.
• Inextricably tied to a specific server hypervisor, rendering them unusable in all but the smallest branch office environments or non-critical test and development scenarios.

The Ultimate Virtual SAN: Inexhaustible Performance, Continuous Availability, Large Scale

There is no compromise on performance, availability and scaling with DataCore. The new SANsymphony-V10 virtual SAN software scales performance to more than 50 Million IOPS and to 32 Petabytes of capacity across a cluster of 32 servers, making it one of the most powerful and scalable systems in the marketplace.

Enterprise-class availability comes standard with a DataCore virtual SAN; the software includes automated failover and failback recovery, and is able to span an N+1 grid (up to 32 nodes) stretching over metro-wide distances. With a DataCore virtual SAN, business continuity, remote site replication and data protection are simple and no hassle to implement, and best of all, once set, it is automatic thereafter.

DataCore SANsymphony-V10 also resolves mixed combinations of virtual and physical SANs and accounts for the likelihood that a virtual SAN may extend out into an external SAN – as centralized storage services and hardware consolidation efficiencies are required initially or considered in later stages of the project. DataCore stands apart from the competition in that it can run on the server side as a virtual SAN, it can run and manage physical SANs, and it can operate and federate across both. SANsymphony-V10 essentially provides a comprehensive growth path that amplifies the scope of the virtual SAN to non-disruptively incorporate external storage as part of an overall architecture.

A Compelling Solution for Expanding Enterprises

While larger environments will be drawn by SANsymphony-V10’s impressive specs, many customers have relatively modest requirements for their first virtual SAN. Typically they are looking to cost-effectively deploy fast ‘in memory’ technologies to speed up critical business applications, add resiliency and grow to integrate multiple systems over multiple sites, but have to live within limited commodity equipment budgets.

“We enable clients to get started with a high performance, stretchable and scalable virtual SAN at an appealing price, that takes full advantage of inexpensive servers and their internal drives,” said Paul Murphy, vice president of worldwide marketing at DataCore. “Competing alternatives mandate many clustered servers and require add-on flash cards to achieve a fraction of what DataCore delivers.”

DataCore virtual SANs are ideal solutions for clustered servers, VDI desktop deployments, remote disaster recovery and multi-site virtual server projects, as well as those demanding database and business application workloads running on server platforms. The software enables companies to create large scale and modular ‘Google-like’ infrastructures that leverage heterogeneous and commodity storage, servers and low-cost networking to transform them into enterprise-grade production architectures.

Virtual SANs and Flash: Comprehensive Software Stack is a ‘Must Have’ for Any Flash Deployment

SANsymphony-V10 delivers the industry’s most comprehensive set of features and services to manage, integrate and optimize Flash-based technology as part of your virtual SAN deployment or within an overall storage infrastructure. For example, SANsymphony-V10 self-tunes Flash, minimizes Flash wear, and enables Flash to be mirrored for high availability, even to non-Flash-based devices for cost reduction. The software employs adaptive ‘in-memory’ caching technologies to speed up application workloads and optimize write traffic performance to complement Flash read performance. DataCore’s powerful auto-tiering feature works across different vendor platforms, optimizing the use of new and existing investments in Flash and storage devices (up to 15 tiers). Other features such as metro-wide mirroring, snapshots and auto-recovery apply to the mix of Flash and disk devices equally well, enabling greater productivity, flexibility and cost-efficiency.

DataCore’s Universal End-to-End Services Platform Unifies ‘Isolated Storage Islands’

SANsymphony-V10 also continues to advance larger scale storage infrastructure management capabilities, cross-device automation and the capability to unify and federate ‘isolated storage islands.’

“It’s easy to see how IT organizations responding to specific projects could find themselves with several disjointed software stacks – one for virtual SANs for each server hypervisor and another set of stacks from each of their flash suppliers, which further complicates the handful of embedded stacks in each of their SAN arrays,” said IDC’s consulting director for storage, Nick Sundby. “DataCore treats each of these scenarios as use cases under its one, unifying software-defined storage platform, aiming to drive management and functional convergence across the enterprise.”

Additional Highlighted Features

The spotlight on SANsymphony-V10 is clearly on the new virtual SAN capabilities, and the new licensing and pricing choices. However, a number of other major performance and scalability enhancements appear in this version as well:

• Scalability has doubled from 16 to 32 nodes; Enables Metro-wide N+1 grid data protection
• Supports high-speed 40/56 GigE iSCSI; 16Gbps Fibre Channel; iSCSI Target NIC teaming
• Performance visualization/Heat Map tools add insight into the behavior of Flash and disks
• New auto-tiering settings optimize expensive resources (e.g., flash cards) in a pool
• Intelligent disk rebalancing, dynamically redistributes load across available devices within a tier
• Automated CPU load leveling and Flash optimizations to increase performance
• Disk pool optimization and self-healing storage; Disk contents are automatically restored across the remaining storage in the pool; Enhancements to easily select and prioritize order of recovery
• New self-tuning caching algorithms and optimizations for flash cards and SSDs
• ‘Click-simple’ configuration wizards to rapidly set up different use cases (Virtual SAN; High-Availability SANs; NAS File Shares; etc.)

Pricing and Availability

Typical multi-node SANsymphony-V10 software licenses start in the $10,000 to $25,000 range. The new Virtual SAN pricing starts at $4,000 per server. The virtual SAN price includes auto-tiering, adaptive read/write caching from DRAM, storage pooling, metro-wide synchronous mirroring, thin provisioning and snapshots. The software supports all the popular operating systems hosted on VMware ESX and Microsoft Hyper-V environments. Simple ‘plug-ins’ for both VMware vSphere and Microsoft System Center are included to enable simplified hypervisor-based administration. SANsymphony-V10 and its virtual SAN variations may be deployed in a virtual machine or running natively on Windows Server 2012, using standard physical x86-64 servers.

General availability for SANsymphony-V10 is scheduled for May 30, 2014.

About DataCore

DataCore is a leader in software-defined storage. The company’s storage virtualization software empowers organizations to seamlessly manage and scale their data storage architectures, delivering massive performance gains at a fraction of the cost of solutions offered by legacy storage hardware vendors. Backed by 10,000 customer sites around the world, DataCore’s adaptive, self-learning and self-healing technology takes the pain out of manual processes and helps deliver on the promise of the new software-defined data center through its hardware-agnostic architecture. Visit http://www.datacore.com or call (877) 780-5111 for more information.

DataCore, the DataCore logo and SANsymphony are trademarks or registered trademarks of DataCore Software Corporation. Other DataCore product or service names or logos referenced herein are trademarks of DataCore Software Corporation. All other products, services and company names mentioned herein may be trademarks of their respective owners.

Contacts

Media & PR:
Horn Group
Joe Ferrary, 646-202-9785
datacoreteam@horngroup.com

VMware plans to abandon vRAM Licensing

The long-awaited day has finally arrived. The infamous VMware vRAM licensing program appears to be going to its grave. Of course, this is the third or fourth iteration of their licensing program since 2010, so hopefully VMware has finally decided to listen to the customer community… after all, the customer community is who pays the bills.

Check out the article at: Virtualization.info link

For those interested in reading about the pain that the VMware community has felt with the vRAM (vTax as it has been dubbed) licensing model, check out the forum here: VMware Forum on vRAM

VMware vSphere 5.0 GA Release Date

Now that the official VMware announcement has been made about vSphere 5 and its new capabilities (see the “Raising the Bar, Part V” webcast), the biggest question is: when?

Well, according to VMware, the official GA (general availability) date for vSphere 5.0 is August 22, 2011.

Many more details to follow on this topic. Stay tuned.

VMware vSphere 5.0: Mac support yes, but the devil is in the details

Here is the long and short of this discussion:

Is Mac OS X supported on vSphere 5: Yes
Is Mac OS X supported on vSphere 5 using any x86 based platform: No

Again, Apple threw the EULA book at VMware, stating that while Mac OS X is supported using vSphere 5.0, it is only supported running on Apple hardware (i.e., Xserve). Interestingly, the Xserve line was discontinued earlier this year. Also just as interesting, VMware has yet to place it on its HCL (Hardware Compatibility List).

Keep in mind that this is not a technical issue of support but rather a political one. Mac OS X has deep FreeBSD roots, and VMware has supported FreeBSD as a virtual machine operating system for many years now. Apple will allow you to virtualize other operating systems on Apple hardware, but not the other way around.

At least Apple is consistent with its stance on creating technology barriers rather than overcoming them.