Tuesday, February 19, 2013

Proprietary Storage Systems Are Not Keeping Pace with Open Source

By Rolf Versluis


The mass storage world is constantly changing. Every month there are new technologies, new companies, new products. One thing is certain: there is always more stuff to store. And it is a virtuous cycle. As storage gets bigger and faster, new applications take advantage of the added capability. Big Data in the form of Hadoop and Virtual Desktop Infrastructure are just two of the modern applications with an insatiable appetite for more storage capacity and speed.

When organizations decide how to acquire storage, a number of decision factors come into play. Fundamentally, the applications that depend on the stored data must be able to run nearly all the time. Ideally the data would be accessible 100% of the time, but building a system with that much uptime is very expensive. So every organization settles on an arrangement where the data lives on a fast, reliable storage array, is replicated to another location in some fashion, and the vendors supplying the storage hardware and software offer responsive, effective support when something goes wrong.

There are trade-offs among these decision factors, and that is what creates market opportunity for the various storage vendors. It is a rapidly shifting landscape, with every element improving continuously:

* Storage media get denser - magnetic disk, SSD, and future technologies.

* Data transfer speeds advance in steps, at different rates - SAS, Fibre Channel, Ethernet.

* Processors improve - Intel Architecture gains more cores and faster clocks.

* Memory - DRAM gets larger, denser, and faster.

* Software - new features like deduplication and thin provisioning add efficiency (a short sketch of the deduplication idea follows this list).

* Vendor stability - support capability, mergers, acquisitions.
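
To make the deduplication item above a little more concrete, here is a minimal sketch, in Python, of content-addressed block deduplication. The class name, block size, and file names are made up for illustration and are not drawn from any particular storage product.

```python
import hashlib
import os

class DedupStore:
    """Minimal sketch of block-level deduplication (illustrative only,
    not any vendor's implementation): each unique block is stored once,
    keyed by its content hash, and files are just ordered lists of
    block hashes. Real arrays layer chunking, compression, and
    reference counting on top of this idea."""

    BLOCK_SIZE = 4096  # fixed block size assumed for the sketch

    def __init__(self):
        self.blocks = {}  # content hash -> block bytes (stored once)
        self.files = {}   # file name -> ordered list of block hashes

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), self.BLOCK_SIZE):
            block = data[i:i + self.BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # duplicate blocks are not stored again
            hashes.append(digest)
        self.files[name] = hashes

    def read(self, name):
        return b"".join(self.blocks[h] for h in self.files[name])

    def physical_bytes(self):
        return sum(len(b) for b in self.blocks.values())


# Writing the same 1 MiB image under two names consumes the space of one.
store = DedupStore()
payload = os.urandom(1024 * 1024)
store.write("vm1.img", payload)
store.write("vm2.img", payload)
assert store.read("vm2.img") == payload
print(store.physical_bytes())  # roughly 1 MiB of unique blocks for 2 MiB of logical data
```

Thin provisioning is the complementary trick: capacity is presented to hosts up front, but physical space is only consumed as data is actually written.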

When I worked in the semiconductor industry, I had the opportunity to work for one of Intel's manufacturers' representatives in an area where a great deal of storage hardware was designed and built. I watched the transition from i960 processors to Intel Architecture, and saw how most storage appliances came to be built around the same fundamental technology as servers.

The interesting thing about X86 architecture is that certain processors and chipsets are designated for the Intel Embedded roadmap, meaning they will be manufactured and supported for many years. This is very different from the X86 Datacenter roadmap, which is a continuous cycle of new processors and chipsets built on the latest lithography in Intel's newest fabs. Most people who work with servers know the Datacenter products, but fewer are aware of the Embedded processors and chipsets.

For companies that build dedicated storage appliances, which go through multi-year design, testing, and manufacturing cycles, it makes a great deal of sense to design around the Intel Embedded roadmap, because the same products can be built and supported for years at a time. That simplifies component sparing, as well as support, software maintenance, and bug fixes. Even so, every few years storage appliance vendors must make a major design change when the underlying processors and chipsets move to the next Embedded generation. That is why forklift upgrades still happen every few years in the storage world, and it will stay that way as long as hardware and software are bundled into a dedicated appliance running proprietary operating systems and software.

Servers used to be delivered as a bundle of hardware and software too - remember mainframes? I meet with customers who are still running AS/400 systems because, decades ago, a custom application was built on that highly reliable platform and they still depend on it. They would rather be running a custom or off-the-shelf application on Linux, virtualized with VMware, on X86 hardware attached to shared storage, just like all their other applications. But because getting unbundled from a proprietary appliance is not easy, they miss out on the performance and reliability advantages of modern computing.

Will the same thing happen in the storage world? Look at the hardware that storage devices are built around and you find a great deal in common: standard interfaces, processors, chipsets, hard drives, chassis. The only differences are the software, the support, and the business behind them. Linux and Apache provided the alternative to big-company software and support, and delivered a more reliable, better-performing product than either Solaris or Microsoft could come up with. Will the same thing happen in storage?



