In this two-part series we share some of our predictions for the year ahead. In part one we focus on technology; in part two, on how these technology changes affect the storage industry and the cloud market. Our focus is storage centric, looking at how hyper-convergence, SDS and smart storage arrays integrating “Docker”-type functionality will provide innovative solutions in the data analytics space.
Software defined storage model to evolve.
The introduction of software defined networking (SDN) in OpenStack (Neutron) and in VMware (Nicira) left the storage industry scrambling to re-cast their products as “Software Defined Storage” offerings. SDS offerings today are mainly automation tools sitting under an OpenStack Cinder API. The SNIA initiative to develop an SDS model illustrates the gap between where Software Defined Storage is and where it needs to evolve to. The storage industry can look to SDN as the model for SDS. At MPSTOR we believe a major gap in existing SDS products is that storage array volumes are terminated on compute nodes and consumed directly; contrast this with SDN, where the physical network device is virtualised in an OVS switch by the managed SDN controller. As in SDN, there should be a layer in SDS that sits between the storage array volume and the consumer; this layer should virtualise the storage array volumes and provide a range of storage services to the consumer. There will of course be major resistance to developing this layer, as it sits in-band like the OVS switch layer: no storage vendor wants to be masked by open middleware sitting between the storage array provider and the consumer of storage.
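To make the OVS analogy concrete, here is a minimal, purely hypothetical sketch of such an in-band virtualisation layer: a controller pools raw array volumes and exports virtual volumes, with storage services layered on top, so the consumer never touches the backend devices directly. All class and method names are illustrative assumptions, not any vendor's or standard's API.

```python
# Hypothetical sketch of an in-band SDS virtualisation layer, analogous
# to an OVS switch virtualising a physical network device. All names
# here are illustrative, not a real product API.

class ArrayVolume:
    """A raw volume exported by a physical storage array."""
    def __init__(self, array_id, lun, size_gb):
        self.array_id = array_id
        self.lun = lun
        self.size_gb = size_gb

class VirtualVolume:
    """What the consumer actually attaches: a virtualised volume with
    storage services layered over one or more backend array volumes."""
    def __init__(self, name, backends, services):
        self.name = name
        self.backends = backends      # underlying ArrayVolume objects
        self.services = services      # e.g. {"snapshots", "qos", "thin"}
        self.size_gb = sum(b.size_gb for b in backends)

class SDSController:
    """Managed control plane: the SDN-controller analogue for storage."""
    def __init__(self):
        self.pool = []       # unconsumed backend array volumes
        self.exported = {}   # name -> VirtualVolume

    def register_array_volume(self, vol):
        self.pool.append(vol)

    def export(self, name, size_gb, services):
        """Carve a virtual volume out of the pool; the consumer never
        sees or terminates the backend array volumes directly."""
        picked, total = [], 0
        for v in list(self.pool):
            if total >= size_gb:
                break
            picked.append(v)
            total += v.size_gb
        if total < size_gb:
            raise RuntimeError("insufficient backend capacity")
        for v in picked:
            self.pool.remove(v)
        vv = VirtualVolume(name, picked, set(services))
        self.exported[name] = vv
        return vv
```

The design point is the indirection itself: because the consumer binds to the `VirtualVolume`, the controller is free to add services, migrate, or re-stripe the backends without the consumer noticing.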
Hyper-converged, automated, scale-out stacks: the new IT workhorse.
Proprietary independent equipment silos of storage, compute and networking cannot meet the price point of utility cloud computing. Utility cloud price points require solutions built with standard x86 hardware. Standard x86 hardware reduces CAPEX; however, the major OPEX costs involve:
· Provisioning costs as subscriber numbers scale
· Scaling service capacity and performance with demand
· Creating differentiated services
Hyper-converged technologies (HCT) integrate the three core components of any IT system: storage, compute and networking. When HCT integrates scale-out technologies and automation, it reduces the OPEX costs of scaling cloud infrastructure.
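The OPEX argument above can be sketched in a few lines. This is an illustrative model only (no vendor API is implied): one scale-out operation grows all three resource pools together, and automated placement means provisioning cost does not rise with subscriber numbers.

```python
# Illustrative sketch of the hyper-converged scale-out idea. All names
# are assumptions for the example, not a real orchestration API.

class Node:
    """One commodity x86 node contributing compute AND storage."""
    def __init__(self, vcpus=32, storage_tb=10):
        self.vcpus_free = vcpus
        self.storage_free_tb = storage_tb

class HyperConvergedCluster:
    def __init__(self):
        self.nodes = []

    def scale_out(self, count=1):
        """One operation grows storage, compute and network fabric
        capacity together: no separate silo upgrades."""
        for _ in range(count):
            self.nodes.append(Node())

    def provision(self, vcpus, storage_tb):
        """Automated placement: the first node with room hosts the
        workload, so per-subscriber provisioning effort stays flat."""
        for n in self.nodes:
            if n.vcpus_free >= vcpus and n.storage_free_tb >= storage_tb:
                n.vcpus_free -= vcpus
                n.storage_free_tb -= storage_tb
                return n
        # Out of capacity: scale out automatically and retry.
        self.scale_out()
        return self.provision(vcpus, storage_tb)
```

Provisioning a workload that exceeds current capacity simply triggers another scale-out step, which is the automation-driven OPEX saving the paragraph above describes.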
OpenStack integrated with Red Hat and object storage is a step in the direction of an open source hyper-converged stack; however, object storage suits only a limited number of workloads. The HCT feature set should include scale-out and SDDC automation, with a resilient, no-single-point-of-failure architecture supporting block, file and object storage. HCT appliances from Nutanix and SimpliVity are available; what is missing is an OpenStack HCT stack for open platforms.
Storage to get smarter.
Storage will get smarter. Connecting storage to compute over a high speed fabric has many advantages, but the paradigm breaks down as storage scales to the PB capacities common in today’s clouds. The data analytics use case includes an ETL phase (Extract, Transform and Load) which makes more sense to run on the storage arrays that contain the data. Data analytics requires very tight coupling between compute and storage. In a disaggregated virtualised cloud this is not always possible; splitting compute into compute-centric and storage-centric operations can regain much of the coupling lost in the disaggregation of compute and storage. Technologies such as Docker can provide the containers to run such storage-centric operations directly on the storage array. Watch out for Stocker Storage in 2015: storage with Docker-type capabilities.
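The compute-split idea can be illustrated with a small sketch. In a Docker-capable array, the storage-centric half below would run inside a container on the array itself; here it is simulated in-process, and all names and the sample data are invented for the example. The point is that extract and transform run where the data lives, so only a reduced result crosses the fabric.

```python
# Sketch of splitting analytics into storage-centric and compute-centric
# halves. Sample records and function names are illustrative only.

RAW_RECORDS = [
    {"sensor": "s1", "reading": 21.5, "valid": True},
    {"sensor": "s1", "reading": -999.0, "valid": False},  # sentinel junk
    {"sensor": "s2", "reading": 19.0, "valid": True},
    {"sensor": "s2", "reading": 23.0, "valid": True},
]

def storage_side_etl(records):
    """Extract + transform on the array: drop invalid rows and
    pre-aggregate, so kilobytes cross the fabric instead of the
    full dataset."""
    sums, counts = {}, {}
    for r in records:
        if not r["valid"]:
            continue
        sums[r["sensor"]] = sums.get(r["sensor"], 0.0) + r["reading"]
        counts[r["sensor"]] = counts.get(r["sensor"], 0) + 1
    return {s: sums[s] / counts[s] for s in sums}

def compute_side_analysis(aggregates):
    """The compute tier receives only the pre-reduced aggregates,
    e.g. to find the sensor with the highest average reading."""
    return max(aggregates, key=aggregates.get)
```

In a real deployment the interesting engineering is in scheduling these storage-side containers next to the right volumes; the sketch only shows why the split pays off at PB scale.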
The vertical cloud appliance.
There is almost a perfect technology storm brewing to allow hardware vendors to deliver OpenStack-based appliances for vertical markets. Several commercial OpenStack distributions are now available, providing an integration point for hardware vendors to deliver appliances with a rich set of virtual machine images. Installing many applications into a single, highly integrated OS can pose significant development and integration challenges. The same end result can be achieved by running pre-built applications on lightweight Linux distributions (see turnkeylinux.org): standard tower motherboards with enough memory can run a range of applications as virtual machines, delivering video surveillance, digital signage, CRM, file servers, email servers and a range of vertical applications for point of sale (POS), medical and educational software. The era of the data-secured, multi-function, on-premises cloud appliance is now a practical and cost-effective option for the SMB organisation.
The golden rule: he who has the gold makes the rules.
Increasingly, the value of data will dictate how cloud architectures evolve. Today, cloud architectures tend to be driven by network and compute considerations. Pragmatic solutions that account for the fact that data has mass, and cannot be moved easily or at all, tend to create heretical discussion about breaking existing models. Data mass and pragmatism need to inform how cloud architecture models evolve.