
Navigating your hybrid multicloud vision with IBM Power Systems

Life in a hybrid multicloud world

Cloud computing has undoubtedly changed how enterprise IT is delivered. It has opened the door to compute and storage resources without limits, as well as a wealth of cloud services (e.g., artificial intelligence, weather data, etc.) for IT administrators to leverage and create the next wave of enterprise innovation. This paper provides a practical guide for IBM Power Systems™ users to gain an understanding of the POWER® cloud portfolio and how to map out a journey to a secure and reliable hybrid multicloud infrastructure.

Navigating a complex IT infrastructure

Today, cloud computing provides many opportunities to run your enterprise infrastructure more effectively including on-demand access to compute resources, disaster recovery solutions, invisible infrastructure maintenance, security patches and more. Whether you’re creating an on-premises private cloud, leveraging one or more off-premises public clouds (i.e., multicloud) or taking a hybrid cloud approach, cloud infrastructure capabilities can expand your business opportunities.

Given this broad range of technologies, how can IBM Power Systems users, running IBM AIX®, IBM i and Linux® enterprise apps, understand these capabilities and create a technology roadmap in an approachable and methodical manner?

A clear vision

A recent Gartner survey showed that 81% of organizations utilizing public cloud services are using more than one public cloud provider. And, according to the RightScale 2019 State of the Cloud Report, “Enterprises are prioritizing a balance of public and private clouds.”

Hybrid multicloud has become a reality for enterprise and technology leaders. Yet, there is a need for a clear vision of how to navigate and operate in this environment.

What is hybrid multicloud?

A hybrid cloud is a computing environment that combines a private cloud and a public cloud by allowing applications and data to be shared between them. A multicloud refers to a cloud environment made of more than one cloud service, from more than one cloud vendor. Thus, a hybrid multicloud combines a private cloud, a public cloud and more than one cloud service, from more than one cloud vendor.

A hybrid multicloud strategy can unlock tremendous organizational value because it combines the best of both private cloud and public cloud. It allows organizations to run mission-critical applications and host sensitive data on premises. It offers the flexibility of public cloud. And it enables the movement of information between the private and public services.

Hybrid multicloud motivators and use cases

There are several motivators driving enterprises to construct a hybrid multicloud platform. Let’s explore some of the more prevalent scenarios for POWER customers (several of them are often pursued in parallel):

Deliver streamlined deployment of enterprise resources, including AIX, IBM i and Linux virtual machines (LPARs) and containerized apps

Users have grown to expect easy and on-demand access to IT resources through a cloud experience. Developers, QA engineers and line-of-business users want simplified access to infrastructure and applications. IT administrators want trusted enterprise-grade security and simplified operations. Streamlining all of these processes is made possible by adopting Power Systems hybrid multicloud technologies and processes within the data center.

Increase operational and budgetary flexibility by leveraging IBM Power Systems in a public cloud

One of public cloud’s major advantages is that it provides effectively limitless access to compute capacity billed as an operational expense. With a few clicks of the mouse, users get immediate access to new virtual machines or containers — where they want, when they want. IBM Cloud is the perfect place to spin up QA, production or high availability (HA) and disaster recovery (DR) environments for your Power Systems estate.

Modernize existing applications to adopt cloud-native software development principles (e.g., containers and microservices)

Containers, Kubernetes and Red Hat® OpenShift® have unquestionably transformed how software is packaged, installed and operated — paving the way for new software delivery models. To that end, enterprises worldwide are exploring container technology and developing plans on how to integrate them into their technology stacks, while delicately balancing the ongoing business need to deploy, manage, operate and integrate with today’s virtual machine-based applications.

Integrate IBM Power Systems with the broader cloud strategy

As the industry shifts towards hybrid multicloud, a comprehensive cloud management strategy has become increasingly important. According to the RightScale 2019 State of the Cloud Report, “Among enterprises, optimizing cloud costs (84 percent in 2019 vs. 80 percent in 2018) and cloud governance (84 percent in 2019 vs. 77 percent in 2018) are growing challenges.” Long gone are the days of building siloed infrastructures. Enterprises are striving towards a model of interconnectedness so that the collective strength of their platforms and cloud providers can be leveraged to create the next wave of innovation.

High-level reference architecture

Shown in Figure 1 is a hybrid multicloud reference architecture inclusive of the major industry hardware platforms — IBM Power Systems™, IBM Z® and x86. Power Systems is architected to economically scale mission-critical, data-intensive apps, whether virtual-machine-based or containerized — delivering industry-leading reliability to run them and reducing the cost of operations with built-in virtualization that optimizes capacity utilization. It also provides flexibility and choice to deploy apps in the cloud of your choice.

From a cloud deployment perspective, the on-premises private cloud solution includes PowerVC, which provides the infrastructure-as-a-service (IaaS) layer, and Shared Utility Capacity (previously Enterprise Pools 2.0), which delivers a pay-per-use consumption model with permanent activation of installed capacity. These solutions deliver the agility and economics of cloud in an on-premises environment while enabling organizations to rapidly respond to shifts in workload demand.

Power Systems servers are also available in the IBM Cloud and other public clouds, providing flexibility and choice to deploy HA/DR, DevTest and more. Sitting atop the infrastructure layer is Red Hat OpenShift, which provides the enterprise Kubernetes platform-as-a-service (PaaS) layer. OpenShift users can run their software of choice, including IBM enterprise software delivered via IBM Cloud Paks™, ISV software, open source software and custom enterprise software. To manage and operate everything from a centralized location, IBM Cloud Pak for Multicloud Management can be used to connect these historically separate cloud infrastructures. Lastly, the Red Hat Ansible® Automation Platform can be leveraged across the entire landscape to provide a consistent approach to managing all of your operating systems and cloud infrastructures, regardless of the platforms you’re running.

Reference journey to the hybrid multicloud

While each organization will have its own unique characteristics, Figure 2 serves as a general blueprint to guide POWER users through the myriad cloud technologies and remove the mystery from the journey. The path to hybrid multicloud begins with a solid foundation of infrastructure and hardware management capabilities. From there, users are directed towards establishing a cloud experience within their own data center (i.e., a private cloud), offering simplified virtualization management and operations, advanced automation and a platform to start building innovative cloud-native applications leveraging Red Hat OpenShift, Kubernetes and containers. As a parallel track to establishing a private cloud, it is also recommended to explore the public cloud to spin up QA, production or high availability (HA) and disaster recovery (DR) environments without the need to procure and administer the infrastructure in your data center.

Lastly, users need to establish robust connectivity between their on-premises and off-premises infrastructures so that applications and data can flow seamlessly between the two.

The IBM hybrid multicloud solutions

Deploy hybrid cloud on IBM Power Systems

Ensure business continuity and lower IT acquisition costs with on-premises private cloud

The IBM Private Cloud Solution with Shared Utility Capacity lowers IT acquisition costs and delivers a by-the-minute, pay-per-use consumption model in an on-premises environment. The base capacity a user has to purchase is as low as one core and 256 GB of memory. Users can buy capacity credits for resource usage above the base capacity, and they can add multiple systems to the pool. When resource usage exceeds the aggregated base of the pool, capacity credits are debited in real time, based on by-the-minute resource consumption.

In addition, Shared Utility Capacity delivers cloud-like economics on both enterprise and scale-out POWER9-based Power Systems, providing multiple benefits to users. The low base capacity reduces IT acquisition costs by up to 58%. With fully activated, pay-per-use capacity, customers have additional capacity that is only charged when it is consumed, ensuring business continuity during demand spikes. With multisystem resource sharing in the pool, customers have the flexibility to balance workloads across systems and optimize resource utilization. By-the-minute metering also ensures users pay only for the precise capacity they consume.
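The by-the-minute metering model described above can be sketched as a simple function: sum the per-minute consumption above the pool's aggregated base and convert the overage into capacity-credit debits. The base size, usage samples and per-core-minute rate below are illustrative assumptions, not IBM's actual pricing.

```python
def credits_debited(per_minute_usage, pool_base_cores, rate_per_core_minute):
    """Meter pay-per-use consumption above a pool's prepaid base capacity.

    per_minute_usage: cores consumed across the whole pool, sampled per minute.
    Only consumption above the aggregated base is charged; usage at or below
    the base is covered by the permanently activated capacity.
    """
    overage_core_minutes = sum(
        max(0.0, used - pool_base_cores) for used in per_minute_usage
    )
    return overage_core_minutes * rate_per_core_minute


# Illustrative pool: 16 base cores; four one-minute samples of pool-wide usage.
usage = [12, 18, 20, 16]                # cores in use, minute by minute
print(credits_debited(usage, 16, 0.5))  # 6 core-minutes over base -> 3.0 credits
```

Actual Shared Utility Capacity metering also covers memory and other resources; this sketch shows only the core-overage idea at minute granularity.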

IBM Power Virtualization Center (PowerVC) provides on-premises enterprise virtualization management for Power Systems, inclusive of AIX, IBM i and Linux guests. Built on OpenStack, it provides a multi-tenant IaaS layer in your data center, allowing administrators to quickly provision new virtual machines in minutes.

It also provides numerous operational benefits, such as one-click system evacuation for simplified server maintenance, dynamic resource optimization (DRO) to balance server usage during peak times, automated virtual machine restart to recover from failures, importing and exporting virtual machine images for cloud mobility and more. It also enables DevOps capabilities such as “infrastructure as code” by way of Ansible or HashiCorp Terraform; Terraform can provision Power resources through PowerVC by leveraging the out-of-box OpenStack provider. PowerVC provides the foundational technology on top of which the rest of the on-premises POWER cloud stack is built.
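Because PowerVC exposes standard OpenStack APIs, the stock Terraform OpenStack provider can target it directly. The fragment below is a minimal sketch of that "infrastructure as code" flow; the endpoint URL, image, flavor (compute template) and network names are hypothetical placeholders, not defaults shipped with PowerVC.

```hcl
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}

provider "openstack" {
  # Hypothetical PowerVC Keystone endpoint; credentials are typically
  # supplied via the usual OS_* environment variables.
  auth_url = "https://powervc.example.com:5000/v3"
}

resource "openstack_compute_instance_v2" "aix_lpar" {
  name        = "aix-dev-01"
  image_name  = "aix72-base"  # an image captured in PowerVC (assumed name)
  flavor_name = "medium-2x16" # a PowerVC compute template (assumed name)

  network {
    name = "prod-vlan"        # an assumed PowerVC-managed network
  }
}
```

A `terraform apply` against this configuration asks PowerVC, through its OpenStack compute API, to deploy the LPAR just as it would any OpenStack instance.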

Reduce data center footprint and get cloud agility with public cloud

IBM Power Systems Virtual Server integrates AIX, IBM i and Linux capabilities into the IBM Cloud experience and is available on POWER9-based Power Systems. Users receive fast, self-service provisioning, flexible management and access to a stack of enterprise IBM Cloud services with pay-per-use billing. Users can easily export virtual machine images in the standard OVA format from PowerVC and upload them into the IBM Cloud for easy back-and-forth image mobility. With this public cloud solution, POWER users can grow at their own pace and run enterprise workloads when and where they choose with a variety of flexible operating system, compute, storage and networking configurations.

Simplify hybrid cloud management

IBM Cloud Pak for Multicloud Management runs on Red Hat OpenShift and provides a single control point for managing a hybrid IT environment. This provides consistent visibility, governance and automation across the entire hybrid multicloud landscape, bridging traditional virtual machine apps with new cloud-native container apps. Offered as part of this Cloud Pak are three management applications critically important to hybrid multicloud:

  • Infrastructure Management (formerly known as CloudForms) provides centralized management and dashboarding of the virtual infrastructure components (e.g., virtual machines, volumes, networks, etc.) across hybrid cloud. It also offers connectivity to major public cloud infrastructures, including PowerVC for on-premises management of your Power Systems environment.
  • IBM Cloud Automation Manager (CAM) provides advanced multicloud orchestration capabilities. Using HashiCorp Terraform as its underlying engine, CAM enables connectivity to numerous cloud infrastructures, including PowerVC (OpenStack), IBM Cloud, AWS®, Azure®, Google® and several others. CAM can provision virtual machines, including LPARs via PowerVC, as well as containers. This allows users to create software catalog entries that build complex multi-tier applications with a single click. And, because CAM is delivered as part of an IBM Cloud Pak, it runs on Red Hat OpenShift, creating a centralized service catalog from which you can deploy all your applications.
  • IBM Multicloud Manager (MCM) provides a single multicloud dashboard that enables organizations to oversee multiple cloud endpoints on public or private cloud infrastructures. MCM provides consistent visibility, governance and automation across a hybrid multicloud environment.

Achieve consistent enterprise IT automation with Ansible

Red Hat Ansible Automation Platform is enabled for IBM Power Systems across AIX and IBM i environments¹ and runs on Power Systems private and public cloud infrastructures. Red Hat Ansible Certified Content for IBM Power Systems helps you include workloads on Power Systems as part of your wider enterprise automation strategy through the Red Hat Ansible Automation Platform ecosystem. Enterprises already using Ansible for other IT infrastructure, such as x86 or IBM Z servers, can seamlessly integrate Power servers as well. The Ansible content helps enable DevOps automation through unified workflow orchestration with configuration management, provisioning and application deployment in one easy-to-use platform. This is an important step in delivering a comprehensive enterprise-grade solution for building and operating IT automation at scale.

¹ Some Ansible content is only available in open source form from Ansible Galaxy.
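As a minimal sketch of what this looks like in practice, the playbook below targets an inventory group of AIX partitions using only built-in Ansible modules; the group name, daemon and file contents are illustrative assumptions, and tasks from the certified IBM Power collections would slot into the same task list.

```yaml
# Illustrative playbook: the "aix_lpars" inventory group and the
# specific service/file managed here are assumptions for this sketch.
- name: Baseline configuration for AIX partitions
  hosts: aix_lpars
  gather_facts: true
  tasks:
    - name: Ensure the AIX NTP daemon is running
      ansible.builtin.service:
        name: xntpd
        state: started

    - name: Drop a standard message of the day
      ansible.builtin.copy:
        content: "Managed by Ansible -- do not edit by hand.\n"
        dest: /etc/motd
```

The same playbook structure works whether the LPARs live in an on-premises PowerVC cloud or in a Power Systems public cloud offering, which is what makes Ansible useful as the consistent automation layer across the landscape.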

Build modern cloud-native apps with IBM Cloud Paks and Red Hat OpenShift

IBM Cloud Paks are enterprise-ready, containerized software solutions that provide an open, fast and secure way to move core business applications to any cloud. They are lightweight, easy to run, and certified by IBM and Red Hat. Each Cloud Pak sits atop Red Hat OpenShift and can run anywhere: on premises, in the cloud or at the edge.

Cloud Paks are composed of a set of containerized IBM middleware and common software services. IBM offers six Cloud Paks: IBM Cloud Pak for Applications, IBM Cloud Pak for Data, IBM Cloud Pak for Integration, IBM Cloud Pak for Automation, IBM Cloud Pak for Multicloud Management, and IBM Cloud Pak for Security. Each offering provides a broad set of capabilities for a particular domain.

Red Hat OpenShift is the industry-leading platform-as-a-service (PaaS) technology built on Kubernetes, fully enabled and supported on IBM Power Systems. Red Hat OpenShift provides an infrastructure-independent common operating environment that serves as a common foundation across both private and public cloud, making it the de facto standard fabric for hybrid cloud infrastructures. Red Hat OpenShift provides a trusted platform from which to build new cloud-native, container-based applications. It also provides access to a broad set of software, including open source software, IBM enterprise middleware (via IBM Cloud Paks) and ISV software.
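Because OpenShift presents the same Kubernetes API on Power as on x86, an application manifest is portable across architectures as long as the container image is built multi-architecture (including ppc64le). The manifest below is a minimal sketch; the application and image names are hypothetical placeholders.

```yaml
# Illustrative Deployment: "inventory-api" and its image are assumed names.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: inventory-api
  template:
    metadata:
      labels:
        app: inventory-api
    spec:
      containers:
        - name: inventory-api
          # Assumed multi-arch image manifest covering ppc64le and amd64.
          image: registry.example.com/inventory-api:1.0
          ports:
            - containerPort: 8080
```

Applying it with `oc apply -f deployment.yaml` works identically against an on-premises Power cluster or a public cloud cluster, which is the portability that makes OpenShift the common fabric in this architecture.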

Integrations with other cloud orchestrators

  • VMware vRealize Automation™ (vRA) speeds up the delivery of infrastructure and application resources through a policy-based self-service portal, on premises and in public cloud. In addition to x86 VMware-based virtual machines, vRA is able to provision Power virtual machines (including AIX, IBM i and Linux) with PowerVC, providing the ability to orchestrate deployments across hybrid cloud.
  • VMware vRealize Operations for IBM Power Systems brings together all management functions including performance management, capacity, cost analytics, planning, topology analysis and troubleshooting in one integrated, highly intuitive, scalable and extensible platform. It also provides deep insights and key performance indicators for enterprise applications, including SAP HANA, Db2, Oracle and several others. This comprehensive monitoring solution is a perfect complement to a cloud management software stack as it provides a broad and deep perspective into what is happening in the cloud.

We hope our commitment to delivering open and flexible solutions for your hybrid multicloud journey will help you leverage partner cloud technologies and seamlessly integrate Power Systems with the rest of your data center.

The Key To Enterprise Hybrid Multicloud Strategy

Executive Summary

As enterprise IT readily embraces public cloud technologies, on-premises and private cloud usage continues to grow. On-premises is not going away as a critical part of IT infrastructure strategy; instead, organizations are meshing together various types of IT infrastructure to meet their needs. Organizations that can bring together on-premises with public cloud strategically will be best positioned for operational excellence.

In August 2019, IBM commissioned Forrester Consulting to evaluate how organizations develop and implement their IT infrastructure strategies. Forrester conducted an online survey of 350 global enterprise IT decision makers across industries to explore this topic. We found that organizations are mixing and matching technologies across public cloud, hosted private cloud, and on-premises infrastructure based on business requirements.


  • On-premises infrastructure is key to enterprise hybrid cloud strategy. Enterprises are making strategic decisions about what types of IT infrastructure to use for which purposes — and on-premises continues to play a key role, with 90% of IT decision makers agreeing that on-premises infrastructure is a critical part of their hybrid cloud strategies.
  • IT decision makers select the right IT infrastructure strategy according to the job to be done. Technology professionals consider workload, security needs, and time-to-value when designing IT infrastructure strategies. When it comes to workloads, IT decision makers anticipate that more than half of mission-critical workloads and 47% of data-intensive workloads will be run either on-premises or in an internal private cloud in two years.
  • The push to public cloud doesn’t mean organizations have stopped investing in on-premises. The majority of IT decision makers surveyed expect their companies’ funding for public cloud to grow over the next 24 months. At the same time, more than eight out of 10 respondents predict their organizations will increase investment in IT infrastructure outside of public cloud.
  • Tapping the brakes on refreshes and upgrades can come at a cost. Delays in IT infrastructure refreshes and upgrades expose enterprises to expensive vulnerabilities and can negatively impact customer experience. Security vulnerabilities, software compatibility issues, and an inability to meet customer expectations as a result of delays in infrastructure refreshes are top concerns for IT decision makers.

On-Premises And Private Cloud Investments Grow At Parity With Public Cloud

Public cloud trends have garnered growing coverage over the last several years, but the increased attention on transitioning to cloud and expanding outside the data center doesn’t tell the whole story about organizations’ IT infrastructure strategies. In addition to grappling with how and what to shift to public cloud, enterprise IT organizations are also struggling with increasing demands on existing IT infrastructure, with the end result being that on-premises and private cloud spending and usage also continue to grow. In surveying 350 IT decision makers, we found that organizations are simultaneously:

  • Growing public cloud footprints. Sixty-two percent of organizations already have some form of public cloud, and 82% of tech professionals expect to increase funding for public cloud over the next two years (see Figures 1 and 2). This finding is not surprising, as cloud has become mainstream.
  • Providing for heightened demand on existing infrastructure. One of the top three IT priorities is providing for growing demands on existing IT infrastructure. However, in the cloud era, there is pressure to extend infrastructure without needed updates and upgrades. In fact, 61% of respondents say their organizations have delayed an infrastructure refresh at least a few times in the last five years (see Figure 3). IT is grappling with how to get more out of their existing technology stacks without exposing themselves to risk.
  • Increasing on-premises and other nonpublic cloud investment. Funding for infrastructure outside of public cloud is roughly at parity with expected cloud growth: 85% are increasing funding for infrastructure (not including public cloud). Meanwhile, more than half of IT decision makers plan to update existing infrastructure or purchase new infrastructure within the next 12 months.

Lack Of Reinvestment Can Leave Organizations Vulnerable

As organizations continue to transition to hybrid multicloud environments, those that do not take a holistic view of their IT infrastructure, including on-premises, open themselves to security vulnerabilities, breakage, and, ultimately, loss of customer confidence and loyalty. Even when individuals recognize a need for a holistic approach, the road to implementing an all-inclusive infrastructure strategy is not easy. Seventy-five percent of survey respondents reported that they received significant pushback while advocating for strategies outside of cloud environments (see Figure 4). As a result, IT decision makers struggle with a variety of cost and strategy challenges following a delay in infrastructure refreshes and upgrades, including:

  • Security vulnerabilities. When organizations prioritize other IT initiatives over infrastructure refreshes, they leave themselves exposed to security risks. Our survey findings reveal that the highest ranked repercussion is security vulnerabilities at 44%.
  • Inability to meet increased customer and employee expectations. By delaying infrastructure refreshes, organizations hinder the process for improving customer and employee experience. Forty-three percent of respondents cited the inability to meet increasing expectations of customers and employees as one of the top five consequences of delaying an infrastructure refresh. Technology innovation has powerfully changed how customers experience and value products, and in this era of hyperadoption and hyperabandonment, investing in customer experience is more critical than ever before.
  • Compatibility restrictions. Forty-three percent of respondents ranked restrictions for compatible apps, software, services, and integration as a top five challenge following a delay in infrastructure refresh.
  • Decreased market competitiveness. Based on our study, 39% of respondents have felt a loss of competitive edge as an IT organization. As a result of putting infrastructure refreshes on the backburner, organizations have not only opened themselves to internal vulnerabilities, but they have also left themselves at risk to fall behind their competition.
  • Diminished performance. In addition to organizations losing their competitive edge, delays in refreshes are also reducing organizations’ performance. Thirty-eight percent of respondents stated that their organizations have experienced a decrease in performance post-delay.

Crafting A Comprehensive IT Infrastructure Strategy: One Size Does Not Fit All

Organizations supplement cloud strategy with on-premises infrastructure to use the right tool for the job. On-premises infrastructure continues to be foundational, with 90% of respondents agreeing that it is a critical part of a hybrid cloud strategy (see Figure 6). Our survey revealed that key considerations for infrastructure decisions include:

  • Type of workload. Organizations are increasing the percentage of mission-critical workloads that are run in public cloud and internal private cloud at comparable rates. At the same time, they expect to increase data-intensive workloads that are run in hosted private cloud environments. Organizations also leverage on-premises for improved application or infrastructure performance, which lands in the top three reasons organizations leverage on-premises resources for some workloads.
  • Compliance and security. Greater assurance for compliance is the No. 1 reason for using on-premises resources for select workloads. According to respondents, failure to meet security needs is the top reason for maintaining infrastructure outside of a public cloud platform. Hosted private cloud offers the benefits of traditional on-premises infrastructure in a secure, private setting, while also allowing organizations to take advantage of cost savings and flexibility.
  • Cost and time-to-value. Organizations ranked avoiding time-intensive budget approvals and realizing faster productivity with less process as top reasons to leverage on-premises resources. This need is particularly driving private cloud investment, with most viewing internal private cloud as a developer environment. These findings suggest that organizations use on-premises and private cloud to side-step bureaucratic processes and kick-start development efforts.

As organizations grow both their public cloud and nonpublic cloud footprints, continued investment in on-premises remains key. This theme is evident as a majority of organizations craft infrastructure strategies that account for increased workload demands, security compliance, and growth.

Key Recommendations

In a world where the focus centers on the cloud, it is easy to make the mistake of moving application workloads without a clear rationale for what benefits migration will achieve. Our survey uncovered evidence of this pressure to shift to cloud, as well as the reality that many organizations are intentionally and strategically leveraging a hybrid cloud strategy driven by diverse business and technology requirements. Forrester’s in-depth survey of 350 global IT decision makers about IT infrastructure yielded several important recommendations:

  • Invest in cloud using a strategy that aligns to your context. First, determine whether you are seeking gains at the application level or the data center level. Then, create your own sourcing framework with factors that may include cloud readiness, location challenges, compliance requirements, data types, need for additional support, and expected lifetime, among other factors. Hedge against cloud vendor lock-in by designing for multicloud deployment and architectures wherever possible.
  • Don’t let cloud obsession stop other infrastructure investments. The perception that infrastructure investment outside the public cloud has stopped is false. Yet as an infrastructure professional, it feels like budgets are under attack. The majority of IT leaders continue to invest.
  • Beware of delaying investment. Those that have delayed or stopped investment have experienced security vulnerabilities, software compatibility issues, and an inability to meet customer expectations. Learn from your peers and advocate for updates and upgrades.
  • Build an irrefutable business case. Our survey found that organizations are most likely to use higher performance as a proof point to justify new investment (see Figure 8). Performance is especially critical since it has a significant impact on customer experience (CX) and brand perception. Executives who can’t commit to complete refreshes can leverage subscription-based infrastructure refresh options to provide a more flexible future if their strategy changes.
  • Explore alternative environments for data-intensive workloads. Public cloud serves many workload types, but some use cases are extraordinarily expensive or introduce too great a risk surface. Data-intensive workloads are a prime example; hybrid cloud strategies look to optimize across all IT infrastructure options and ensure cost efficiency.

Learn the benefits and best practices of bringing a hybrid cloud strategy to life within your organization

Hybrid cloud for IT transformation

In a world of complex security, workload and data hosting needs, enterprise leaders may find that a “one-cloud-fits-all” strategy does not effectively address the needs of their organization. Instead, a more tailored approach is needed to truly transform their digital landscape and provide them with the ability to deploy applications and data in a secure, integrated, flexible and simple-to-manage way.

For a majority of enterprises, a hybrid cloud strategy has become the preferred model for deploying applications and data. According to 451 Research, more than two-thirds of companies (68%) are choosing the default approach of making strategic IT investments in hybrid IT and integrated on-premises/off-premises cloud environments. Among the top IT spending priorities for these organizations in 2019 are new IT projects for digital transformation (35%), upgrading/refreshing existing IT (30%) and customer experience/engagement improvements (29%).

This shift to hybrid cloud offers IT leadership a unique blend of security for mission-critical workloads, flexibility for dynamic delivery and performance to meet the need for continuous and effective innovation. Adoption of a hybrid cloud strategy enables a large organization to customize their framework and deploy a model that best serves their business objectives, critical workloads and future initiatives to better serve their customers.

Understanding cloud environments and multicloud management

A hybrid approach may be the best move for an enterprise looking to keep their data protected and private while meeting the demand for business agility. The truth is, many of the critical workloads of enterprise businesses cannot or should not be moved to the public cloud. Such a move could compromise the security of mission-critical data for core business applications. Major financial, health, government and other large enterprises cannot take the risk with their business and customer data.

Understanding cloud environments and making decisions about multicloud management is complex. Many questions arise, such as, what resides on-premises? What lives in a private cloud vs. public cloud? Which public clouds should be used? What data or applications should be on-premises rather than off-premises? Why did your IT team deploy some applications in those respective environments and was it the right decision? It’s important to have a solid understanding of your current IT infrastructure and the alignment of workloads with this type of deployment. With that in mind, let’s take the time to explore the various cloud deployments.

Private cloud

A private cloud refers to a cloud solution where the infrastructure is provisioned for the exclusive use of a single organization, either on premises or off premises. The organization often acts as a cloud service provider to internal business units that obtain all the benefits of a cloud without having to provision their own infrastructure. By consolidating and centralizing services into a private cloud, the organization benefits from centralized service management and economies of scale.

An on-premises private cloud provides some advantages over an off-premises private cloud. For example, an organization gains greater control over the resources and data that make up the cloud. In addition, on-premises private clouds are ideal when the type of work being done is not practical for an off-premises private cloud because of network latency, security or regulatory concerns.

Public cloud

A public cloud infrastructure is made available to the general public or a large industry over the Internet. The infrastructure is not owned by any single user, but by an organization that provides cloud services to a variety of businesses. Public cloud services can be provided at no up-front cost, as a subscription or as a pay-as-you-go model, and resources can be shared across multiple businesses to reduce costs.

Hybrid cloud

A hybrid cloud deployment typically describes a situation in which a company is operating a mixture of private cloud, public cloud and traditional environments — regardless of whether they are located on premises or off premises. In a hybrid cloud environment, private and public cloud services are integrated with one another.

Hybrid cloud enables a business to take advantage of the agility and cost-effectiveness of off-premises, third-party resources without exposing all applications and data beyond the corporate intranet. A well-constructed hybrid cloud can service secure, mission-critical processes, such as receiving customer payments (a private cloud service) and secondary processes, such as employee payroll processing (a public cloud service).

The challenges for a hybrid cloud are the difficulty of effective creation and governance, the need to ensure portability of data and applications in the cloud, and the management of complexity. Services from various sources must be obtained and provisioned as though they originated from a single location, and interactions between private cloud and public cloud components make the implementation even more complicated.

Hybrid multicloud architecture

Hybrid multicloud refers to an organization that uses multiple public clouds from several vendors to deliver its IT services, in addition to private cloud and traditional on-premises IT. A hybrid multicloud environment consists of a combination of private, public and hybrid infrastructure-as-a-service (IaaS) environments, all of which are interconnected and work together to avoid data silos.

Many enterprise companies are failing to make their various data repositories and systems ‘talk to each other’ effectively and efficiently, if at all. The result: more data silos that hinder or prevent data movement and sharing.

With a modern hybrid multicloud architecture in place, you gain access to a single source of truth as it relates to your data. If optimized properly, you can quickly access data that is reliable and accurate. Moreover, data that is unified in one location is accessible whether it resides on-premises or off-premises.

Benefits of a hybrid cloud strategy

Get the best of all environments

Hybrid multicloud is the new normal for enterprises investing in IT modernization, and with it you can get the best of all environments: public cloud is prized for delivering customer-facing applications, while on-premises private cloud is valued for securing data and for providing quick access to on-site data and applications. Optimizing for both agility and essential business needs can also yield cost efficiencies, because keeping critical workloads on-premises can substantially reduce the cost of working with frequently used data. Let’s explore the benefits of a hybrid cloud environment.

Five benefits of a hybrid cloud environment

1. Security

In this era of frequently reported data breaches, securing all of an organization’s data is essential to maintaining customer confidence and protecting critical business data. Just as important is being able to prove to regulators that customer data is fully protected. Storing secured data on-premises and enabling fast access from cloud applications is a good start; extending the protection of data into both private and public cloud enables flexibility. A hybrid cloud environment gives you a choice of how and where your data is housed within your organization and it is important to keep it protected wherever it resides.

2. Agility

A hybrid cloud environment will enable you to rapidly deploy applications to satisfy customer demand and exploit business opportunities. It makes applications and data more easily accessible to a wide variety of users. And it gives you the ability to integrate your on-premises applications and data with the public cloud, securely making all of your data and applications available.

3. Cloud-native development

Develop new cloud-native applications using containers so they can be hosted on private and public cloud. This enables you to run applications on the right platform and take advantage of available resources. Deploying these applications using Kubernetes can help you manage cloud complexity while minimizing cost. Central to all of this is the flexibility of open source and an infrastructure-independent common operating environment that runs anywhere, from on-premises private clouds to public clouds, across your entire value chain.

4. Insights

Remove data silos so that your core business data and applications can fuel new development and surface new insights across your business. Co-locate applications close to the data to enable faster processing and insights — from corporate data or data generated by Internet of Things (IoT) devices — while ensuring critical data remains in the most secure environment.

5. Cost efficiency

The hybrid cloud enables optimized placement of workloads and sharing of resources, which can help minimize both predictable costs, such as data center, software purchase and licensing costs, and the cost of supporting spikes in demand. A hybrid approach is flexible enough for the life of your organization.

Insider advice for CIOs building hybrid clouds

Shifting to a hybrid cloud approach means listening and adjusting to each business unit (BU) within your organization. One BU may favor a specific public cloud service for its work, while another BU may have established a critical and efficient system with a different cloud service. A hybrid approach accommodates each BU’s needs and dependencies, so you can select the right service for their workloads and your customers.

According to 451 Research1, “hybrid is the preferred (or, in effect, default) approach for a greater proportion of large enterprises with more than 10,000 employees (69%) and government/education organizations (73%).” Moving forward with building your hybrid cloud environment means addressing a variety of organizational issues and demands.

Securing data

There can be no compromising when it comes to the security and privacy of your data and your customers. To prepare for data growth and future regulations, you need a secure hybrid cloud that protects you from all IT threats. But not all vendors use a secure-by-design approach. Your secure hybrid cloud should do the following:

  • Encrypt 100% of data, both at rest and in-flight – using on-chip hardware crypto accelerators wherever possible to minimize encryption overhead
  • Protect and store encryption keys using FIPS-certified Hardware Security Modules rated at the highest NIST certification level
  • Localize data on-premises in a private cloud to meet data privacy regulations
  • Secure application environments to run trusted workloads, designed for protection from internal and external threats
  • Extend data privacy beyond the host server and across the hybrid cloud
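To make the first bullet concrete, the sketch below is a minimal, purely illustrative Python example (standard library only, and not tied to any IBM product) of enforcing encryption of data in flight: it builds a TLS client context that always verifies server certificates and refuses legacy protocol versions.

```python
import ssl

def make_strict_tls_context() -> ssl.SSLContext:
    """Client-side TLS context for encrypting data in flight.

    Illustrative only: certificate verification stays on and
    legacy protocol versions (TLS 1.0/1.1) are rejected.
    """
    ctx = ssl.create_default_context()            # CERT_REQUIRED + hostname checking
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
    return ctx

ctx = make_strict_tls_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```

A context like this would then be passed to, for example, `http.client.HTTPSConnection`; encryption at rest and HSM-backed key storage are separate concerns handled by platform and hardware services.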

Enabling collaboration

Collaborating across your organization requires a cultural and technological investment. It can be challenging but it’s something many organizations are pursuing to lower cost and raise availability for their critical and experimental work. To enable collaboration across your organization, consider investing in:

  • Adopting infrastructure-independent common operating environments that run anywhere — from the data center to multiple clouds to the edge
  • Building cloud-native applications using multi-architecture containers and deploying them across the hybrid cloud using Kubernetes
  • Integrating new applications with existing data and systems to maximize value
  • Leveraging multicloud management to ensure the best use of resources

Building your hybrid cloud

Hybrid cloud with the right technology

In order to be an agent of change in your organization, you’ll need to have the right technology in place to support your every move. So, we put together a list of hybrid cloud technologies worth looking into as you begin or continue on your hybrid cloud journey. As you plan out your environment, here’s what you’ll need.

  • Open-source software to avoid vendor lock-in and enable innovation.
  • Lightweight virtualization and orchestration software to package applications with their software dependencies, and to accelerate development and deployment.
  • Infrastructure-independent common operating environment to enable the portability of applications across hybrid cloud environments.
  • Database and middleware software integration to help move and integrate core business applications to the hybrid cloud securely.

A short technology deep-dive

Linux

Linux has established itself as the leading operating system, both for traditional IT and in the cloud. It has been ported to multiple architectures and systems, from embedded IoT devices to supercomputers. Although there are many Linux distributions available, three have emerged as the leaders for enterprise Linux: Red Hat Enterprise Linux, SUSE Linux Enterprise Server and Ubuntu from Canonical.

Containers

Containers are a feature of Linux and other operating systems that packages application code together with all the software dependencies it needs in order to run. This ensures that the application has everything it needs to run out of the box, independent of the operating environment in which the container runs.

Containers make life easier for both developers and administrators. They are lightweight and extremely quick to start, which shortens deployment times and improves responsiveness. Administrators can run many of them at once to create a highly scalable environment. Their cloud-friendly nature makes it easy to deploy them automatically, and because containers carry the files on which they depend, they can run in many different operating environments. Multi-architecture containers are now possible as well, enabling container development on one architecture and deployment on another.
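As a concrete sketch of this packaging idea, a minimal container image definition might look like the following Dockerfile. Everything here (the base image, file names and entry point) is a hypothetical illustration, not something taken from this paper; note that official base images such as python:3.11-slim are published for multiple architectures, which is what makes the build-once, deploy-elsewhere multi-architecture workflow possible.

```dockerfile
# Hypothetical example: package the application with its dependencies
# so it runs the same way in any operating environment.

# Base image supplies the OS libraries and the language runtime
FROM python:3.11-slim

WORKDIR /app

# Bake the application's software dependencies into the image
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code itself
COPY . .

# Same entry point whether the container runs on a laptop,
# in a private cloud, or in a public cloud
CMD ["python", "main.py"]
```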

Kubernetes

Containers have been widely adopted, which means there can be lots of them, making them difficult to manage. This requires a new way of managing application deployment. Containers need to be created, provisioned, run, and deleted very quickly, and so require powerful orchestration software to manage them at scale.

Kubernetes, another open-source project, has emerged as the most popular container orchestration tool. It is declarative rather than procedural, which means the systems administrator specifies the desired end state of deployment and Kubernetes works out how to achieve it.
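To illustrate the declarative model, consider this minimal, hypothetical Kubernetes Deployment manifest (all names and the image reference are invented for the example). The administrator declares only the desired end state, three running replicas of a containerized application, and Kubernetes works out how to reach and maintain it.

```yaml
# Hypothetical manifest: declares a desired end state, which Kubernetes
# continuously reconciles (names and image are illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                # desired state: keep three copies running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Applying the file (for example with kubectl apply) and later editing the replica count is all that is needed to scale; there is no imperative sequence of start and stop commands.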

Red Hat OpenShift

Red Hat OpenShift Container Platform provides an infrastructure-independent common operating environment that runs anywhere — from any data center to multiple clouds to the edge. It includes support for containers and Kubernetes, as well as additional services and management capabilities.

IBM Cloud Paks

IBM Cloud Paks are enterprise-ready, containerized software solutions that offer an open, faster and more secure way to move core business applications to any cloud. Built on Red Hat OpenShift, each IBM Cloud Pak includes a container platform, containerized IBM middleware and open source components, and common software services for development and management.

Building your hybrid cloud on IBM LinuxONE

IBM LinuxONE is an enterprise platform designed to deliver high availability, security and scalability, with the agility to develop next-generation applications. As such, it can provide an ideal platform for building each element of the hybrid cloud – whether public cloud, private cloud, or traditional on-premises IT.

Here are some of the benefits of building your hybrid cloud on LinuxONE:

  • Supports Linux, containers and Kubernetes for cloud-native application development, deployment and management — with future support for Red Hat OpenShift and IBM Cloud Paks announced in a recent Statement of Direction
  • Engineered to deliver a highly scalable, secure, reliable and cost-effective platform for building and deploying containers — whether on private or public cloud
  • Scales both vertically and horizontally, supporting big containers (for applications that have been containerized but not yet refactored into microservices) and many parallel containers (for new cloud-native applications using containers and microservices)
  • Protects data and applications from both internal and external threats through pervasive encryption, key protection, and a highly secure environment for running applications
  • Designed for 99.999% availability to meet consumer and business expectations, LinuxONE is able to quickly recover from disaster scenarios
  • Improves quality of service compared with purely public or private clouds. There are limits to public and private clouds’ ability to deliver high quality of service to your partners and customers, and this is another area where the hybrid cloud model shines. A hybrid approach gives you the power to integrate new cloud workloads with your existing IT infrastructure. This can lead to faster service for your customers. And, by having a fuller view into all your workloads, you can leverage big data for new insights that can lead to future application improvements.
  • Reduces total cost of ownership by sharing resources, consolidating licensed software onto fewer cores, and simplifying IT management. With an open approach, you will be able to take on more advanced servers built with the highest levels of security, scalability and reliability and apply those advantages across all your workloads. The added scale won’t come at the cost of security either: hybrid cloud lets you containerize existing and future applications.

Plastic Bank

With scientists predicting more plastic than fish in the ocean by 2050, the Plastic Bank founders asked what they could do to protect the natural world. Working with IBM and service provider Cognition Foundry, Plastic Bank is mobilizing recycling entrepreneurs from amongst the world’s poorest communities to clean up plastic waste in return for life-changing goods. To support its expansion, Plastic Bank selected IBM Blockchain technology delivered on a private cloud by managed service provider Cognition Foundry, powered by IBM LinuxONE. The application front end was designed and developed by Cognition Foundry and is hosted in Cognition Foundry’s data center and the IBM Cloud, creating a hybrid multicloud architecture. Blockchain is used to track the entire cycle of recycled plastic from collection, credit and compensation through delivery to companies for re-use.

Digital Asset Custody Services (DACS)

Smart contracts and crypto-asset technologies are set to transform the way enterprises across industries do business. Existing solutions tend to force people to choose between either security or convenience. For example, cold storage options generate and store assets in an offline environment. While this approach protects assets from cyber attackers, it slows down transactions. On the other hand, relying on exchanges or third-party wallets to manage digital assets means trusting that they will safeguard them adequately, and that there won’t be any interruptions to their services.

To enable companies to protect and use their digital assets freely, Digital Asset Custody Services (DACS), a subsidiary of Shuttle Holdings, is working with IBM to create a first-of-its-kind servicing platform based on IBM LinuxONE™ servers and IBM Secure Service Container for IBM Cloud Private. Customers will have the choice to deploy the solution on-premises as part of a private cloud environment or as a service.

ICU IT Services

ICU IT Services, a Dutch IT infrastructure service provider, built a solution to capture new clients by merging the best of open-source and enterprise technology. Recognizing the growing popularity of open-source technology, the company saw an opportunity to tap into a new part of the marketplace. As an example of the innovation enabled by the IBM solution, ICU IT has created its own multi-architecture cloud environment using OpenStack solutions and IBM Cloud™ Private. This sophisticated cloud infrastructure incorporates both Intel and LinuxONE nodes and is integrated with the IBM z/OS environment.

HCL Technologies

HCL Technologies, a Sweden-based IT services company, leverages its hybrid cloud environment to satisfy the needs of its customers. This is especially important since HCL’s customers expect their applications and private cloud services to support increasing demands for performance, manageability and security. With no two customers alike, HCL is able to provide the scalable, consistent, predictable and secure cloud services its customers demand.

Four steps to hybrid cloud readiness

1. Align your IT with C-suite business priorities and goals

Understand C-suite business goals and align your IT plans with strategic initiatives. Go into the meeting with accurate information, and speak directly to the executives’ needs.

This could include:

  • Technology priorities: Modernizing technology and building agility between teams. Be able to speak to how DevOps connects to cloud, data analytics to AI, and data protection to security and resiliency.
  • Business priorities: Delivering better customer experience, creating a digital business model, building AI training models, or implementing thorough security mechanisms to remain compliant with current regulations.

2. Choose an infrastructure mix of private cloud, public cloud, and on-premises traditional IT that fits your hybrid cloud plan

  • Look at the workloads, data placement, and agility needs
  • Match workload requirements to platforms
  • Choose an infrastructure-independent common operating environment that runs anywhere
  • Leverage multi-architecture containers and interpreted languages in application development and deployment to achieve true portability of applications across the hybrid cloud

3. Share your plan with your leadership team

  • Be direct and concise. State the key takeaways from your research efforts:
      ◦ Key differences between public and private clouds
      ◦ What an optimized hybrid cloud environment offers
      ◦ Your hybrid cloud plan and next steps
  • Prepare for the C-suite Q&A. This is your research and meeting, so make sure you are ready for any question that may come your way.
  • Press for investment and clarify the timeline. Time is of the essence, so this is a great opportunity to encourage investment urgency.

4. Conclude and reiterate the business value

Restate the business benefits as a result of implementing a mature hybrid cloud solution.

  • Unify data to gain a single source of truth
  • Ensure applications are delivering accurate insights
  • Derive greater value from unstructured data to enable better business outcomes
  • Ensure greater business resilience
  • Deploy modern applications
  • Drive business satisfaction
  • Enable data scalability as business grows

With the meeting over, be sure to have follow-up action items and encourage any and all feedback from stakeholders.

Embracing hybrid cloud

A hybrid cloud strategy is a huge advantage for any data-driven enterprise up to the challenge. Yet, a project of this scale demands more than a will to lead on digital transformation. It requires the tools to support your every move. With the right team, goals and solutions in place, your data-driven enterprise can benefit from the following:

  • Cost reductions
  • Added reliability
  • Simpler data management
  • More rapid provisioning
  • Faster time to market for your products and services

How to Simplify Your Hybrid Multicloud Strategy


As the following Forrester report (Assess The Pain-Gain Tradeoff of Multicloud Strategies) shows, the world of hybrid multicloud is a complex business. Sure, with a hybrid multicloud platform you get the data security and uptime reliability of on-premises architecture combined with the agility and on-demand growth of the cloud, but that combination comes in many different forms.

First of all, “multicloud” can mean various things to various people, including:

  • Multiple clouds hosting different apps based on app characteristics
  • Multiple clouds hosting parts of an app ecosystem
  • Multiple deployment options, with common APIs, on the same cloud
  • Multiple clouds being used simultaneously for a single app

Furthermore, a multicloud strategy appeals to different organizations and stakeholders for unique reasons, ranging from improved performance and disaster recovery through to the unique strengths of outside platforms and the desire to diversify risk.

For organizations looking to grow their infrastructure, it may seem that a hybrid multicloud environment is the most obvious cloud strategy. However, this isn’t always the case. Sometimes—as when you are faced with problematic latency and bandwidth, or double the work for half the productivity—a hybrid multicloud approach just isn’t feasible.

Over-complicating your hybrid multicloud strategy therefore risks increasing your costs while still yielding unimpressive results.


The best approach to establishing a simplified multicloud strategy, as Forrester explains, is to drill down into your needs: “Get specific about your multicloud plan and goals before jumping into decisions about public or private cloud and complex sourcing algorithms for determining the fate of your application portfolio. What greater purpose does your cloud strategy serve? What specific efficiencies — process and resource utilization — do you seek for your company, and why are those important?”

The following suggestions can help you ensure that your strategy is actually strategic, and not just ineffectively complex:

  1. Make sure that you’re getting more out of your hybrid multicloud than you’re putting into it, meaning that the effort you’re expending doesn’t exceed the value of the end result.
  2. Don’t overly complicate your cloud strategy. The easiest, most simplified approach is often the best one.
  3. Avoid altering too much at one time unless you’re at a point of significant change in your organization, allowing you to make more radical adjustments all at once.

By focusing on simplification, you can find the right hybrid multicloud solution for your organization and ensure that you’re getting far more “gain” with far less “pain.”

The Pain Is Usually Worth The Gain (But Not Always)

Hybrid cloud and/or multicloud is your obvious cloud strategy — or is it? Forrester outlined the basics of hybrid and multicloud in its report “Top 10 Facts Every Tech Leader Should Know About Hybrid Cloud.” But a larger story is emerging that questions the very nature of using multiple platforms — cloud or noncloud. Factors favoring variety are freedom of choice, heightened resiliency, and application-specific sourcing optimization. In opposition is the inefficiency associated with managing multiple versions, adding complexity and redundancy as enterprises veer toward cloud strategy pragmatism. As enterprises question the very nature of multiple platforms, they come to obvious conclusions:

  • At times, vendor variety is worth it. Multicloud is popular for good reason. Different platforms have different strengths. By leveraging multiple cloud platforms, organizations see freedom of choice and application-specific sourcing optimization. This can meet a variety of app or end user demands, whereas forcing all workloads to fit on a single platform can be detrimental to regulation, cost, performance, or user experience. Secondarily, using multiple platforms reinforces the concept of heightened resiliency by not putting all one’s eggs in a single basket.
  • On the other hand, strategic partnership creates great value. Some companies believe that maintaining multiple platforms can be expensive, whether for a portfolio of workloads or, even more, for a single application, and thus decide to focus on a single strategic partnership.1 The streamlined productivity, unified native management tooling, single data location, reduced redundancy, and ability to leverage unique and maintained services outweigh the benefits of added freedom.2 Strategic partnership can also reduce the risk that too much complexity can introduce.


When Forrester asked North American and European infrastructure technology decision makers employed at enterprises why they leverage multiple cloud platforms, the three most common responses embraced the concept of strategic rightsourcing: to improve performance of latency-sensitive apps (31%); because different apps require different cloud services (28%); and for disaster recovery (26%).3 Most cloud-savvy companies are familiar with these efficiency principles. However, via inquiries and briefings, Forrester has uncovered other common reasons that enterprises turn to multicloud:

  • Unique strengths outside a primary provider. Some enterprises reluctantly leverage clouds outside their primary cloud platform due to unique value-adds that they can’t find on their primary platforms. Examples include adtech and video streaming companies drawn to bare metal services or favorable pricing for high I/O configurations, while leveraging a megacloud provider for workloads without these characteristics.
  • Compelling discounts. Microsoft has long used discounting on Office 365 and Skype as a way of incentivizing the selection of Azure as a primary platform.4 Microsoft can clearly hold its own in the public cloud platform market — it’s a Leader in “The Forrester Wave™: Full-Stack Public Cloud Development Platforms, North America, Q2 2018” — but many enterprises note that its discounting program lured them to Azure and resulted in a multicloud strategy. These accounts were often early Amazon Web Services (AWS) users, heavily leveraging the platform for net-new development and ease of portal use. Such users have more recently moved some of their traditional enterprise applications to Microsoft Azure to use up the credit hours granted to them via negotiation.5 Google Cloud has been similarly leveraging Google Ads discounts as a way of drawing more enterprise workloads onto its platform.
  • Users. In highly distributed IT environments, developers and business users often make their own sourcing decisions. In many situations, preference alone leads to a multicloud strategy. At times, a centralized group takes over some or all environments to provide more cohesion. Rarely do such multicloud strategies transform into a single cloud story. Initial selection may be the result of early testing or of the role of the individual doing the early testing. Factors could include fit to a particular use case, a specific cloud provider’s geographic presence, or word of mouth. Less distributed groups may select multiple clouds to satisfy the demands of users they serve and gain credibility across business groups for delivering the desired capabilities.
  • Customers. The everyday B2C customer doesn’t care much about where you host your website or free software-as-a-service (SaaS) product. But B2B companies serving other businesses through digital platforms or a SaaS solution find that sourcing can make or break a deal. Not only is initial selection important, but in some cases, hosting that same workload on multiple clouds is critical (e.g., simultaneous multicloud). Customer opinion in these situations can be a result of: 1) industry stances, such as brick-and-mortar retailers avoiding AWS, given Amazon’s dominance in eCommerce, or 2) proximity to their primary platform to minimize latency and egress costs to their other apps.7
  • Partners that pick for you. Just as B2B clients enforce their choices, powerful partners may insist that you connect and work with them in a specific cloud platform that may differ from your primary cloud platform. The most cost-effective solution in this case is typically branching out into the preferred platform rather than switching primary providers — creating a multicloud scenario regardless of original intent.
  • Diversification of risk (in theory). Regulators and internal auditors push for single-vendor risk mitigation in the event of cost escalation or complete failure. The fear typically isn’t about a short-term outage but rather a massive pivot that puts a provider out of business or drastically changes its prices. Financial regulators have started to ask for this next-level redundancy, but the reality behind the ask seems to miss the mark. These “doomsday” scenarios are fear-mongering at its worst, given that there are no examples of cloud providers increasing pricing and that the major providers are arguably more financially stable than the financial institutions themselves.8
  • Strengthening the power of negotiation (also in theory). Overconfident enterprises believe that having presence and experience on two platforms makes them more powerful in contract negotiation. Unfortunately, most enterprises don’t spend enough on a given platform to unlock favorable discounting, and splitting between multiple platforms further decreases their spend size. And to truly claim portability, you’d need to build for it. This driver makes sense for very large cloud consumers but is often not relevant for the average enterprise unless it’s considering heavily investing in a smaller player.

Get Clarity On The Types Of Multicloud

Before exploring the pains and remedies for various multicloud circumstances, first clarify the specific multicloud variations. At the most basic level, some scenarios leverage multiple platforms across a portfolio of workloads, while others do so for a single application or app ecosystem. Most assume portfolio-wide coverage or a large subset of applications. Top scenarios include:

  • Multiple clouds hosting different apps based on app characteristics. This model involves leveraging multiple cloud platforms for parts of your application portfolio due to different strengths in each platform or preferences of your developer or business users. Sourcing is decided on an app-by-app basis, looking at the organization’s users or characteristics as they map to the various services available. To speed up the process, enterprises create rules regarding characteristics of an application that make one choice suitable over another.
  • Multiple clouds hosting parts of an app ecosystem (hybrid app architecture). Some applications or app ecosystems are designed to leverage multiple platforms — both cloud and noncloud. This is often due to availability of a service from a specific cloud provider, to avoid cost escalation, to meet required regulation, or to satisfy preferences of those accessing the workloads. This can be architecturally challenging. Common examples include customer- or partner-facing websites with elements subject to regulation, or heavily opinionated customers with deep pockets, edge scenarios, and medical research. Although a hybrid architecture may help mitigate latency where it matters, e.g., local decisions for the edge, internet-of-things (IoT) devices, or the shopping experience on a retail website, it isn’t easy designing a network architecture that considers cost and acceptable latency for each connected element in the ecosystem.
  • One cloud, multiple deployment options, and similar operations using common APIs. For some, multicloud doesn’t mean multiple cloud platforms from different vendors but multiple deployment environments with the same APIs. The ability to choose where the environment lives is a bigger concern than vendor flexibility. Today, Azure paired with Azure Stack tells this story, and to a more limited extent, so does VMware Cloud Foundation paired with VMware on AWS.9
  • Multiple clouds being used simultaneously for a single app. Simultaneous multicloud is multicloud in its most extreme form. It entails running the same application simultaneously on multiple cloud platforms. This requires building the application identically on two or more cloud platforms. Due to complexity and cost, it’s very uncommon. Consideration is often limited to independent software vendors (ISVs) delivering SaaS solutions to highly opinionated clients with deep pockets and inflexible public cloud vendor preferences. Regulators in the financial services industry are trying to push for this to mitigate risk of “complete business failure on part of a cloud provider” and to avoid overdependence on a cloud provider. Regulators may ultimately find that it has the opposite effect. But, in practice, you’ll find few examples.


The options outlined above describe infrastructure decisions for hosting an application, but for some, multicloud isn’t about today but rather about choice down the road; they want the flexibility of vendor choice in the future. If circumstances change, they learn more about the platform, or the platform capabilities improve radically, will they have the portability to move their applications to other cloud platforms? This simple question has widespread implications. Cloud platforms aren’t the same, and portability between them will always require significant work.10 Use of application or developer services unique to the provider makes it even harder to move. Enterprises face a big decision about whether the loss of value and speed to market is worth this portability. Even those that choose portability identify areas for exceptions where they’re willing to accept vendor or platform lock-in, all in the name of value and speed. Those that choose portability use these approaches:

  • Leverage Kubernetes (K8s) for abstraction. Developer platforms, like Cloud Foundry and OpenShift, segregate app and infrastructure layers, often through K8s, while discouraging the use of services specific to a cloud platform. It’s then up to your own consumption and vendor selection to ensure that this decoupling from the platform continues. Amazon, Google, Oracle, and Rackspace all have their own managed K8s flavors on their platforms, which provide ongoing management (e.g., day 2) support, but to remain unattached, you’ll need to continue to limit the use of easily accessible services unique to the platform.
  • Leverage a management tool for abstraction. Through their portals and the use of proprietary template patterns, hybrid cloud management tools (e.g., RightScale, Scalr, and VMware vRealize Suite) also decouple the app and infrastructure layers while discouraging the use of services specific to a cloud platform. This allows for changes in sourcing decisions later. Using other templates or orchestrating elsewhere requires discovery and conversion to gain full management control over those launched resources.
  • Write for the “least common denominator” or build “if-then” logic for each platform. These alternatives to decoupling are ill-advised. If you choose this path, you must either limit yourself to solutions consistent across providers or maintain separate logic for each platform. Early attempts at multicloud did just this. Limiting to the basics enabled conversion tools to easily convert to an alternative offering. Building for each platform, with logic for each, translates to bulky, inefficient code that’s time-intensive to create and maintain.
  • Seek vendor-neutral app and dev services (that don’t currently exist). There’s a belief that a market of third-party services will emerge to deliver consistent app and developer services on all the major cloud providers. The vision is giving value regardless of location, with a responsible party taking on the development, updates, and stack management. Cloud Foundry’s ecosystem approach showed early promise but little momentum. The market is anticipating the release of multicloud marketplaces, but it’s still just a concept.11
  • Minimize the impact of the lock-in. Some enterprises willingly opt in on lock-in if they can minimize the required operational time through a managed version of an offering. Smart practitioners follow two simple rules: 1) Limit yourself to services that deliver a managed version of a technology that’s highly common elsewhere (e.g., Kubernetes or SQL) and 2) limit yourself to services that don’t impact the normal running of the application (e.g., DBaaS or monitoring).12 With these guidelines, they get the value they seek without a painful rework.
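The trade-off between per-platform “if-then” logic and a decoupled abstraction layer can be sketched in a few lines. This is a hypothetical illustration of the pattern, not any vendor’s SDK; all class and function names are invented for the example.

```python
class ObjectStore:
    """Least-common-denominator interface: only operations every
    provider supports (here, a bare put), no provider-specific extras."""

    def put(self, bucket: str, key: str, data: bytes) -> None:
        raise NotImplementedError


class AwsStore(ObjectStore):
    def put(self, bucket, key, data):
        print(f"PUT s3://{bucket}/{key} ({len(data)} bytes)")


class AzureStore(ObjectStore):
    def put(self, bucket, key, data):
        print(f"PUT https://{bucket}.blob.core.windows.net/{key}")


def upload_if_then(provider: str, bucket: str, key: str, data: bytes) -> None:
    # The "if-then" approach the text warns about: every new provider
    # adds another branch that must be written, tested, and maintained.
    if provider == "aws":
        AwsStore().put(bucket, key, data)
    elif provider == "azure":
        AzureStore().put(bucket, key, data)
    else:
        raise ValueError(f"unsupported provider: {provider}")


def upload_abstracted(store: ObjectStore, bucket: str, key: str, data: bytes) -> None:
    # The decoupled approach: callers depend only on the interface,
    # so swapping providers becomes a sourcing decision, not a rewrite.
    store.put(bucket, key, data)
```

The same decoupling is what Kubernetes-based platforms and hybrid cloud management tools provide at the infrastructure level: the caller never names the provider, so the lock-in lives in one small adapter rather than throughout the codebase.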

When Multicloud Isn’t The Answer


Being multicloud in some form is the obvious solution for most companies, but there are exceptions.13 When they evaluate the economics, logistics, or partner relationships, multicloud simply doesn’t make sense for some situations. Some fringe cases consider this on a macro level, disregarding the value of micro-level efficiencies of app-by-app sourcing. The US Department of Defense’s JEDI proposal is one such example.14 These companies intentionally limit platform types, accepting high levels of lock-in in the name of simplicity. For other companies, the micro-level efficiency is missing. When they look at the logistics and specifics of an app, app ecosystem, or relationship, multicloud escalates costs for them. When you believe your multicloud options are limited:

  • Determine whether this is opinion or fact. Theory is great, but if the numbers don’t work out, it’s useless. Before diving too far into the decision about your multicloud or single-platform approach, do testing to determine if performance or resource investments play out favorably in that model.
  • Define the scope of the limitation. Most limitations aren’t total. Does the decision apply to a single app hosting decision, on a team level, or to apps that connect to a certain data set? Or does it represent a corporate-wide need? Sometimes this is easy to discern; for example, when your .NET users want to use Azure and a separate team is already heavily using AWS and its many app and dev services. But at times, it’s harder to define, especially if there are cost, latency, or compliance implications to a multicloud approach.
  • Push the limitations to see if you can overcome the barriers. Creativity and problem solving are key in the cloud era. If you’re experiencing very real pain with your desired multicloud strategy, explore whether the obstacles are truly barriers or simply hurdles that require flexibility. Forrester has seen many “barriers” break away — steadfast capital expenditure budget preferences may dissolve; patient records might get nonidentifying codes; inspirational leaders may be able to turn around hard-headed, long-time employees; or bursting could happen, with extensive changes, at the right moment.


Creating two isolated cloud environments across two or more cloud platforms has few barriers other than the increased burden of more vendors and the requirement of more skills. But as these environments connect with a data set, an app, or an app ecosystem splitting across these platforms, challenges can arise. At times, these challenges make multicloud implausible:

  • Painful networking charges. Each of the cloud providers has different ways of charging for networking usage. For example, AWS doesn’t charge to move data onto its cloud or within an availability zone (AZ), but any traffic that moves from inside an AZ to other AZs, regions, platforms, or public areas incurs a charge of $0.01 per gigabyte. Many companies don’t realize how much traffic moves between the different tiers of an application and have been shocked to find that network costs are higher than any other part of the bill.
  • Problematic latency and bandwidth. Application services and data are no longer separated by feet but possibly by thousands of miles. Communication times can grow by orders of magnitude and make application experiences painfully slow.15 In addition, 10, 40, and 100 GbE connections found in data centers don’t exist as readily outside of them. The bandwidth is throttled by competing traffic from other customers in the data center or across WAN connections. Network traffic can be held in buffers or slowed down to accommodate the limited bandwidth, which adds more latency to application communication.
  • Different types of tools. Public cloud providers allow customers to run their own networking and security services on their compute platforms, but that comes with a cost. The providers also offer free native versions in the hope that customers will choose them and weave them into the application through proprietary API calls. Writing code against a particular cloud service makes it difficult to develop for, or move an application to, a different public cloud platform.
  • Double the work (or more) with half the productivity. Manufacturing in the airline industry strives toward a lean supply chain that streamlines processes by eliminating waste and non-value-added activities such as vendor maintenance. Similarly, organizations setting up new instances on new platforms will repeat the same activities — such as IP address management, security profiles, and account management — for little added value beyond business continuity or redundancy.
  • Cost-prohibitive disaster recovery. Enterprises may leverage multiple cloud data centers; availability zones; or, in theory, multiple cloud platforms to provide business continuity for all or a portion of their applications. In practice, achieving active-active, even within a single cloud provider’s availability zones or regions, is too expensive for most workloads, let alone creating and maintaining versions on multiple cloud platforms. Although the theory allows a range of disaster recovery approaches, cost and time create real limitations that make a disaster recovery plan less tangible if it stretches across clouds.
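The networking-charge problem above is easy to underestimate until it is put into numbers. The following back-of-the-envelope sketch uses the $0.01-per-gigabyte inter-AZ rate cited in the text; real pricing varies by provider, region, and traffic direction, so treat the figures as illustrative only.

```python
# Illustrative inter-AZ egress rate from the text; actual provider
# pricing differs by region, direction, and tier.
INTER_AZ_RATE_PER_GB = 0.01  # USD, charged on traffic leaving an AZ


def monthly_cross_az_cost(gb_per_day: float, days: int = 30) -> float:
    """Estimate the monthly bill for chatty app tiers split across AZs."""
    return gb_per_day * days * INTER_AZ_RATE_PER_GB


# A three-tier app whose web and database tiers sit in different AZs
# and exchange 500 GB/day quietly accumulates a real line item:
cost = monthly_cross_az_cost(500)
print(f"${cost:,.2f}/month")  # prints $150.00/month
```

Split the same tiers across two cloud platforms, where per-gigabyte internet egress rates are typically several times higher than inter-AZ rates, and the same traffic pattern multiplies accordingly — which is exactly why tier placement, not just instance pricing, drives multicloud economics.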

Your Strategy Is Multicloud — Now What?

Knowing that their strategy will leverage multiple clouds doesn’t answer many questions for I&O professionals. Get specific about your multicloud plan and goals before jumping into decisions about public or private cloud and complex sourcing algorithms for determining the fate of your application portfolio. What greater purpose does your cloud strategy serve? What specific efficiencies — process and resource utilization — do you seek for your company, and why are those important? This report should help you decide which multicloud approach you’re targeting and which you deem too painful to implement. Here are some of the early questions you’ll be asking:

  • Is the pain worth the gain? For most companies, a basic multicloud strategy is the clear answer — the value outweighs the cost without need for significant cost analysis. If you’re considering more involved multicloud strategies or a radical single-platform environment to serve all apps, you have some significant work ahead. Rarely is this an easy yes-or-no question. You’ll need due diligence to creatively overcome the barriers or cost escalators.
  • Are we over-architecting our multicloud strategy? You may be overly complicating your cloud strategy, building in significant inefficiency without reason. There’s no problem with the most simplified approach to multicloud. Those contemplating a more complex multicloud strategy are doing so because they have no other choice. An executive, a regulation, latency, or customer demand is forcing their hand toward a harder version of multicloud. If that’s not your situation, go with the easiest path.
  • Is this the moment for change? Too much change at once can spiral costs, and your business, out of control. However, there are moments when you should evaluate more radical change. These “change moments” typically occur when contracts end, massive refreshes are pending, new leadership enters an organization, or staffing radically changes. Change moments occur when it’s more tolerable to pivot direction due to compelling cost avoidance figures, higher appetite for change, or dire need. If your organization is planning to build a new data center, refresh a colocation contract, reduce tech organization staff size, hire cloud skill sets, replace C-level leadership, or undergo a massive infrastructure refresh, it’s worth evaluating a more radical approach. Determining whether this is your moment of change may determine whether you should consider a more involved multicloud or an even more radical idea — a single-platform strategy.


Executive Summary

The enthusiasm for hybrid cloud as an ideal structure for IT environments belies a complicated decision-making process around locations for various types of compute workloads and data stores. Though it may seem that today’s enterprises have more choices than ever for where to host their applications, some workloads must remain on-premises for reasons related to data control, security, compliance and performance. At the same time, competitive pressures are pushing businesses to be more customer-responsive by taking advantage of the perceived scalability, flexibility and agility afforded by off-premises IT architectures. Enterprises must focus on business outcomes while deploying workloads and data in a way, and in a location, that ensures security and integration across increasingly distributed environments.

Key Findings

  • Workload placement is a critical factor in maximizing the value of IT environments. As markets and technologies have matured, the choice of where to deploy data and applications has evolved as well. More than two-thirds (68%) of companies making strategic IT investments view hybrid IT and integrated on-premises/off-premises cloud environments as their default approach.
  • Public cloud is not a panacea. Many enterprises have already migrated the applications and business functions that are ‘low-hanging fruit’ for off-premises deployment: productivity suites and customer relationship management systems, for example. But factors that may prevent migration include security and data protection, performance and cost.
  • Hybrid cloud is the new normal. Data from 451 Research’s Voice of the Enterprise service shows that hybrid cloud environments encompassing both on-prem and off-prem venues are the direction of travel for most organizations. Cloud transformation is occurring both within and outside the datacenter, and IT decision-makers plan to increase their use of both in the coming years.
  • Security must be baked in to IT evolution plans. Maintaining security is paramount when migrating and refactoring IT systems to take advantage of the wealth of destinations available. Federation of identity and access management across public and private clouds is critical, and encryption of data is necessary to ensure that increasingly distributed systems remain tamper-proof while IT estates are upgraded and modernized to seize the possibilities of the next 20 years.

Cloud Adds Value When Data and Applications are Placed Where They Make the Most Sense

For most organizations, moving at least some applications and data to the cloud is not a matter of if, but when and why. The perceived benefits of lower cost, easier infrastructure management, and faster and more flexible provisioning ushered in a wave of business and IT transformation not seen since x86 virtualization made its appearance more than 20 years ago. As the market and technology have matured, however, businesses are changing their strategies.

In the past several years, cloud adoption has moved from being the province of early adopters into the mainstream. In many cases it began as a bottom-up phenomenon, with individual business units implementing ‘shadow IT’ – applications developed on platforms provisioned with the swipe of a credit card – to effect outcomes that made other departments (and IT management) take notice.

But the initial rush to cloud was not without complications and risks. Deployments that were impressive at small scale and in isolation created unacceptable exposure when moved into production, and establishing connections with on-premises data stores – in many cases the most valuable and differentiating IT assets in the organization – opened businesses to significant risk. Companies that were initially happy to lift and shift applications and data to the cloud soon learned that this approach, if applied indiscriminately, could be costly, complex and disruptive. This did not in itself make the organization more agile and flexible, nor did it necessarily make the applications more resilient or available.

The fact is, many workloads simply cannot or should not make the transition to cloud. Custom-built applications with core business dependencies are often mission-critical, especially in industries such as banking and insurance. These on-premises systems may be foundational, and abstracting away the underlying infrastructure would compromise the business itself. Workloads that require low-latency access to on-site data, such as financial services systems that need to process transaction details to and from customer accounts, are too sensitive for off-premises deployment; the business will rarely accept the increased risk of moving these apps and data off-premises. In all these cases, compliance demands – whether regulations restricting the geographic distribution of data, or industry- or company-specific rules to ensure consumer information is protected – must be met to preserve access to lucrative markets.

The combination of these pressures – increasing business agility with cloud while maintaining on-premises control of sensitive data and regulated workloads – has led to the dominance of hybrid cloud as a key enabler of modern IT systems. Enterprises have accepted the idea of incorporating as-a-service infrastructure, platforms and software into their IT estates, but they need to do so in a selective, disciplined and secure way. This is reflected in IT spending priorities; digital transformation is the top spending focus for 2019, and cloud is a key enabler of this transformation.

Enterprise buyers are also looking to improve customer engagement and automate business processes to become more responsive to markets and opportunities. These initiatives tend to be part of cloud transformation efforts in a bid to migrate applications that support the business but are not critical to the core. These are also the areas where software-as-a-service offerings are selected. New app development and proofs of concept are also likely to start in cloud environments.

However, note that the second spending priority in the figure above is to upgrade or refresh existing IT, much of which is likely on-premises and will remain there for the foreseeable future.

Among digital leaders – companies that are already executing on or strategizing their IT investments based on digital transformation – 42% are allocating more than half of their budgets on IT initiatives to grow or transform the business itself, and 68% view hybrid IT and integrated on-premises/off-premises cloud environments as their default strategic IT approach.

Challenges with an Exclusive Public Cloud

Although public cloud providers highlight customers that are going ‘all in’ on their platforms, these deployments are exceptions to the rule. Providers may position public cloud as a route to business agility, but the experience of large enterprises migrating applications and data to cloud justifies caution.

Many companies have already targeted applications for cloud migration: top candidates include email and document creation apps and systems of engagement such as customer relationship management and marketing platforms. Once these workloads have moved off-premises, however, continuing transformation becomes much more difficult.

IT decision-makers cited several high-stakes factors that prevent them from moving workloads to the public cloud, including security and data protection (including privacy), performance and cost.

Security and data protection. Public cloud SLAs may guarantee the security of the infrastructure, but it is up to the customer to secure applications and data. If a public cloud security breach does occur, any compensation from the provider will likely pale in comparison to the customer’s lost revenue, damaged reputation and regulatory fines. Enterprise stakeholders responsible for protecting a business’s valuable intellectual property want to maintain strict visibility and control of the data, and in fact, restricting the physical movement of data is a top requirement of government and industry privacy standards.

Performance. Public cloud providers tout the high availability of their services, but performance and latency issues continue to crop up. Few enterprises are willing to stake mission-critical operations on best-effort internet connections, and while high-speed direct connections can be provisioned, they come at additional expense. Customers have come to expect instantaneous access to their applications and data, but ‘cloudifying’ workloads in a way that increases the distance between source data and processing power can introduce unacceptable latency. Similar hang-ups can occur when application integrations need to be improvised as workloads are relocated, or when choke points develop due to inadequate provisioning or misconfigured policy engines.

Cost. Ironically, cost has been both a top driver and a top inhibitor to cloud adoption. In the early stages, easy access to cloud technology and lower costs caused users to consume more. Although unit prices remained low, total spending increased. The convenience of consuming public cloud infrastructure exclusively encourages sprawl and waste; orphaned resources and overprovisioning can add up to unexpectedly high bills. Storing data in the cloud looks like a bargain until customers need to access, move or remove it, when bandwidth charges come into play.

These factors can’t be considered in isolation, and in fact, they should be adjusted in relation to each other for the sake of price and performance engineering. Enterprises are willing to pay more for more resilient and secure workloads that make up critical applications while building in flexibility for systems that can tolerate occasional downtime. Such decisions require assessment of the entire IT estate, service interdependencies, and regulatory and policy needs. IT and business decision-makers require different hosting environments for different workloads, but at the same time, they need to be able to secure, manage, integrate, govern, scale, deploy and update across multiple environments, and do so seamlessly and with confidence. There is no single solution that works across the board for all businesses.

Hybrid Cloud is the New Normal

451 Research’s Voice of the Enterprise data underscores the prevalence of hybrid IT – meaning an integrated combination of on- and off-premises resources – as the direction for strategic IT (Figure 4). Behind this aggregate view is a more nuanced story. Not surprisingly, hybrid is the preferred (or in effect, default) approach for a greater proportion of large enterprises with more than 10,000 employees (69%) and government/education organizations (73%), while those going ‘all in’ on public cloud are more likely to be small organizations with fewer than 250 employees (27%).

The challenge of creating a secure, integrated hybrid environment is considerable, yet companies are pursuing it as a way to get the best of both worlds: the control and performance of on-premises IT with the pay-as-you-go offerings of public cloud. Large, multibillion-dollar enterprises are looking to modernize their IT estates and deliver services globally, complying with various regulations without having to maintain datacenters in each location. This requires security to be baked into the environment rather than applying it via perimeter hardening.

Motivations for using multiple infrastructure environments highlight the benefits of on-premises and off-premises deployments (Figure 5). The primary factor – improving performance and availability – cuts both ways: popular use cases for public cloud include backup and disaster recovery to ensure availability, but performance concerns may necessitate keeping applications on-premises for quick access to on-site data. The same dual justification goes for the second reason: optimizing for cost. Keeping frequently accessed data stores on-site can save money in the long run, but moving batch workloads to cloud offers the financial advantage of being able to scale up and scale down costs as needed.

Other factors point more directly to either on- or off-premises environments. Isolating sensitive business data and meeting data sovereignty requirements are common justifications for keeping data and applications on-premises, whereas adding new functions and adding geographic diversity (using content delivery networks) are common benefits of public cloud.


One size does not fit all when it comes to workloads and data hosting. Digital transformation requires a flexible approach to deploying workloads and data in a way, and in a location, that optimizes security, integration, flexibility, management, and agility, whether on- or off-premises or both.

Hybrid cloud environments encompassing both on-prem and off-prem deployments are clearly the direction enterprises are taking. Cloud transformation is occurring both in the datacenter and off-premises, and IT decision-makers plan to increase their use of both in the coming years.


Analyzing Outcomes Delivered by Modern Multicloud Storage Environments Optimized for Next-generation Workloads

Executive Summary

Many of the problems organizations face today are related to data. Most organizations have too much data, which is growing too quickly and is siloed and difficult to consolidate. These challenges create an “insight gap,” where organizations are unable to adequately analyze their data and thus capitalize on its value. Traditional methods of data analysis are not sufficient for many petabyte-scale organizations. The promise of self-optimizing analytics powered by artificial intelligence and machine learning offers a path forward, but many organizations don’t know where to start.

While unlocking intelligence is one common data problem, another is enabling innovation. An organization’s data is the perfect testbed for application developers and database administrators. By working with actual data in a development environment, they can better debug errors, predict production performance, and identify optimizations earlier in the development lifecycle. However, many organizations can only provide developers with dummy data sets which vary significantly from a company’s production data. This creates issues and uncertainty in the development lifecycle and slows down innovation.

The topic of data residency brings up still more issues. Organizations today have numerous options available to them when it comes to storing their data. There are a host of on- and off-premises solutions and services, all with different and shifting cost-benefit profiles. However, many organizations are unable to migrate data in an agile manner to ensure it is located for optimal performance at the lowest possible costs, and that if requirements change, the organization is not prohibitively locked in to the platform choice.

Many organizations face a combination of many or all of these data problems. Whatever the range and extent of such problems, they will invariably combine to diminish the value of an organization’s data. There is an imperative to implement data solutions to these data problems. To help, IBM has developed a vision for organizations: to implement hybrid multicloud-enabled storage infrastructure that modernizes traditional workloads and is optimized to run next-generation workloads, enabling them to operate as dynamic ‘data-driven’ enterprises. The collection of characteristics that determine whether an organization has achieved this vision are collectively referred to as Storage Maturity in this report. Research conducted by ESG strongly validates the premise that organizations that have taken the steps prescribed by this view of Storage Maturity are better positioned to harness the power of their data and to enjoy a competitive advantage over their peers.

Defining a Vision for Storage Maturity

Storage Maturity can have different meanings to different organizations, but to apply a consistent, data-driven model, ESG had to formulate concrete characteristics against which organizations could be assessed. Ultimately, ESG developed a three-pillar model for assessing Storage Maturity that we believe objectively considers organizational characteristics that are both unbiased and broadly applicable to organizations today:

  • Data-ready infrastructure—This pillar relates to the ability of the organization’s infrastructure to store, manage, and perform at a level required by a modern, data-centric organization. Within this pillar, ESG assessed an organization’s propensity to utilize high-performing flash storage to power on-premises workloads and deploy software-defined solutions that pool storage resources and abstract management capabilities into a single view. Organizations with both characteristics have data infrastructures that combine ease of management at scale with high performance, a suitable foundation for Storage Maturity.
  • Strategic reuse of secondary data—This pillar relates to the ability of the organization’s storage to support analytics and application development initiatives. Within this pillar, ESG assessed whether the organization can supply near-production copies of company data for analysts and application developers to work with. Organizations supporting these constituencies enable innovation by using storage infrastructure for more than just data retention, an appropriate aspiration of a mature storage environment.
  • Workload and data portability—This pillar relates to the ability of the organization to migrate data and workloads to a variety of platforms based on the requirements and the organization’s goals. Within this pillar, ESG assessed whether the organization has containerized legacy applications and/or developed cloud-native applications from the ground up. Going one step further, ESG measured the frequency with which organizations are migrating workloads to different on- and off-premises environments to capitalize on temporary advantages or satisfy a changing requirement. Organizations with a high degree of data and workload portability are likely to be operating a highly flexible, cost-optimized, multicloud environment.

The Current State of Storage Maturity

ESG’s three-pillar model segmented survey respondents into four different levels of Storage Maturity based on their responses to survey questions related to their infrastructure’s data readiness, enablement of data-intensive workloads, and data portability.

Respondents earned between 0 and 100 maturity points based on their responses to these questions. ESG rated respondents scoring in the bottom quartile (0-25 points) as Level 1 or Laggards, respondents scoring in the second quartile (25.5-50 points) as Level 2 or Followers, respondents scoring in the third quartile (50.5-75 points) as Level 3 or Explorers, and respondents scoring in the top quartile (75.5-100 points) as Level 4 or Leaders. See Appendix II: Criteria for Evaluating Respondent Organizations’ Storage Maturity to review the full list of dimensions of Storage Maturity on which ESG evaluated respondents.
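The quartile mapping just described can be expressed as a small function. This is a minimal sketch following the boundary values stated in the report; the function name and structure are our own.

```python
def maturity_level(points: float) -> str:
    """Map a 0-100 Storage Maturity score to ESG's four named levels,
    using the report's quartile boundaries (0-25, 25.5-50, 50.5-75,
    75.5-100)."""
    if points <= 25:
        return "Level 1: Laggard"
    elif points <= 50:
        return "Level 2: Follower"
    elif points <= 75:
        return "Level 3: Explorer"
    else:
        return "Level 4: Leader"


print(maturity_level(13))    # prints Level 1: Laggard
print(maturity_level(25.5))  # prints Level 2: Follower
print(maturity_level(80))    # prints Level 4: Leader
```

Because scores are awarded in half-point increments, the half-point gaps between quartiles (e.g., 25 vs. 25.5) leave no ambiguous scores at the boundaries.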

ESG’s analysis found that very few IT organizations have achieved enough progress across enough criteria to be classified as Leaders, as defined by this maturity model. Just 13% of respondents surveyed provided answers about their organizations that resulted in a score in the top quartile. The vast majority of respondents’ organizations fell into either the Follower (42%) or Explorer (32%) categorizations, showing progress in some Storage Maturity characteristics, but with additional advancement needed. Mirroring Leaders, ESG rated just 13% of respondent organizations Laggards in terms of Storage Maturity, falling short on many—if not all—of the criteria included in ESG’s model.

The Importance of Storage Maturity

Why does Storage Maturity matter? Simply put, ESG found that organizations earning a Leader designation reported the best results across many key performance indicators (KPIs) and characteristics, including: business success, IT operations effectiveness, achievement of multicloud agility, and advancement of artificial intelligence initiatives.

Moreover, the upward trend observed across maturity levels was extremely consistent across the broad spectrum of KPIs included in the research. While the differences noted in KPIs are the greatest when comparing Laggard and Leader organizations, ESG observed that KPIs incrementally improved across each level in the spectrum.

Improved Business Outcomes

Ultimately, IT exists to support the business. If there are activities that IT can undertake to improve business outcomes, then those activities are worthwhile. In ESG’s research, organizations in alignment with the principles laid out in the Storage Maturity model—those designated as Leaders—consistently reported the highest degree of business performance across all metrics included in the research. In short, there is a strong correlation between organizations that have achieved storage Leader status and the most successful organizations in the market.

Increased Maturity Leads to More Actionable Business Strategy

Organizations are awash in data: sales activity, customer support records, employee engagement metrics, market research, among others. Each of these data sources contains practical information about the state of a company: sales activity can show which sales region or product is performing best, customer support data can provide a glimpse into customer satisfaction, and employee engagement metrics can show which teams and managers are operating most effectively. However, one of the most strategic use cases for a business’s data is to accurately predict changing market dynamics like which new markets will develop, what new product launches will be successful, or which new features will be broadly adopted by users. When ESG asked respondents how successful they felt their companies were at using data to predict these market dynamics, Leaders once again reported the strongest results.

Increased Maturity Means More Digital Enablement

For many organizations, there is no bigger business imperative than increasing digitization. Digital initiatives can vary widely. For one organization, a digital initiative may mean transitioning from a predominantly brick and mortar customer experience to a greater reliance on ecommerce. For another, it may mean enhancing digital marketing and lead nurture capabilities. Still others may be focused on entirely new digital services and subscriptions supporting net-new business models. Regardless of how ambitious these initiatives are, it often falls to IT to support and enable them.

ESG’s research shows a distinct correlation between Storage Maturity and organizational digitization. Respondents from Leaders were over four times as likely as Laggards to report that more than 10% of their organization’s revenue was driven from newly developed digital channels that did not exist two years prior (81% versus 17%). Moreover, Leaders anticipate digital revenue to grow at over three times the rate as Laggards year-over-year over the next three to five years (41% versus 13%).

Increased Maturity Means IT-fueled Profitability

IT organizations and executives are often frustrated by the perception that IT is a cost center for their organizations. IT is a critical component of business operations, with revenue-generating employees relying on IT systems and services to be productive. For many organizations, IT often functions as the innovation engine of the organization, supporting new services and finding new ways to deliver offerings to employees and other end-users. In ESG’s research, Storage Maturity was positively correlated with the IT organization’s ability to make a more dramatic impact on the business. In fact, IT organizations at Leaders were eight times more likely than Laggards to operate with a very positive return on investment.

Given that the business benefits delivered by IT at Leaders are much more likely to significantly outweigh costs, it is not surprising that Leaders were three times more likely than Laggards to expect their organization to beat its annual profitability goals in 2018 (62% versus 19%). ESG believes the positive business impact made by IT organizations at Leaders is a major contributing factor to their overall bottom-line success.

Enhanced IT Effectiveness

While the correlations that exist between Storage Maturity and positive business outcomes are consistent and numerous, ESG also observed many ways in which Storage Maturity and IT capabilities trend in the same direction. In many ways, these correlations are even more noteworthy, as it is more likely that they are the result of a causal relationship—that the maturity of the organization’s infrastructure, workload capabilities, and workload portability directly causes positive IT performance.

Increased Maturity Helps Organizations Lead on Innovation

Complex, legacy IT environments are difficult and costly to maintain. More importantly, the amount of time, effort, and budget they require can preclude IT from other more strategic projects like cloud migrations, data center consolidations, and application modernization efforts. However, scalable, software-defined, highly virtualized infrastructure—all managed through a single pane of glass—can free up both staff and budget resources to advance these other initiatives. By freeing up staff and dollars from infrastructure management, organizations can sharpen their focus on innovation.

Leaders represent this mature type of environment well, with a high rate of adoption of simple, scalable flash storage and a large portion of their storage infrastructures virtualized—allowing a complex heterogeneous storage footprint to be managed as a single pool of resources from a single console. Thus, it is not surprising that Leaders are able to allocate an incremental 10% of their annual IT budget to next-generation workloads compared with Laggards, which spend 60% of their budget maintaining legacy applications.

Moreover, Leaders as a group agree they are getting value from their ability to allocate more of their budget to innovation. ESG asked all respondents to describe how much progress they’ve made leveraging IT resources to speed product innovation and time to market. Respondents at Leader organizations were more than four and a half times as likely as those at Laggard organizations to describe their progress as “excellent.”

Innovation Is the Precursor to Private-cloud-driven Efficiency

As noted, Leaders can allocate significantly more of their budgets to supporting next-generation workloads. Part of that means they can spend more on application development and modernization, but it also means they can spend more on the infrastructure that sits underneath those modernized applications. Given this increased level of investment in next-generation infrastructure, it would be logical to assume Leaders have made greater advancements in private cloud adoption; that is, that more of their on-premises infrastructure is highly virtualized, scalable, and elastic, and that end-users are able to provision resources in a self-service manner with usage-based tracking. ESG was able to test this assumption in the research, asking each respondent what percentage of on-premises workloads they run on physical servers, on virtual servers managed in a traditional manner, or on true private cloud infrastructure that mirrors public cloud service offerings. Respondents at Leaders reported running more than twice as many of their on-premises workloads on scalable, elastic, and dynamic private cloud infrastructure as respondents at Laggard organizations.

ESG believes the fundamentally different, more agile infrastructure environments present at Leader organizations in turn play a significant role in those organizations’ ability to launch workloads to their production environments ahead of schedule. ESG asked respondents what percentage of all production workload launches in the past two years had been completed ahead of, on, or behind schedule. Leaders, thanks in no small part to their private cloud investments, reported that 34% of workload launches had been completed ahead of schedule, on average. By contrast, Laggards reported that just 13% of launches had been completed ahead of schedule, on average.

IT’s Innovation and Efficiency Drives Line of Business Satisfaction

Ultimately, IT’s charter is to support the business—to give employees the tools and technology to do their jobs effectively. In many ways, the satisfaction of line of business employees is the true test of how effective IT is. This is a test Leaders pass with flying colors. When ESG asked respondents how satisfied the line of business end-users at their organizations are with the applications and IT services they are provided with to perform day-to-day business tasks, 60% of respondents at Leaders said “extremely satisfied”—fifteen times the frequency observed among Laggard organizations.

Zeroing in on Storage KPIs

ESG’s research into Storage Maturity would be severely lacking if it did not include an assessment of how Storage Maturity is correlated to storage-specific KPIs and attitudes. ESG assessed a broad set of these metrics in its research and observed universally positive correlations between KPI performance and Storage Maturity.

Tactically, Maturity Leads to Productivity and Execution

ESG’s survey included a question on the organization’s total storage capacity. It also asked respondents to report how many full-time equivalents were employed by their organization to administer storage. By looking at the average ratio of these two data points in each level of Storage Maturity, ESG was able to derive a metric for administrator productivity across Laggards, Followers, Explorers, and Leaders: average number of TBs per storage administrator. Not surprisingly, Leaders reported the highest level of productivity, with more than twice as many TBs under management per administrator as their Laggard counterparts.
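The productivity metric described above is a simple ratio of capacity to headcount. A minimal sketch, using purely hypothetical capacity and staffing figures (the numbers below are not from the survey):

```python
def tb_per_admin(total_tb: float, storage_ftes: float) -> float:
    """Average terabytes under management per storage administrator."""
    if storage_ftes <= 0:
        raise ValueError("need at least one full-time equivalent")
    return total_tb / storage_ftes

# Hypothetical example: a Leader managing 4,800 TB with 6 FTEs
# versus a Laggard managing 2,000 TB with 5 FTEs.
leader_ratio = tb_per_admin(4800, 6)   # 800.0 TB per administrator
laggard_ratio = tb_per_admin(2000, 5)  # 400.0 TB per administrator
```

In this illustration the Leader's ratio is exactly twice the Laggard's, mirroring the "more than twice as many TBs per administrator" finding in the report.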

Administrator productivity and efficiency is likely a contributing factor to higher organizational confidence in the storage functional group. ESG asked all respondents how confident they were in their organizations’ ability to execute major storage-related projects like new application deployments, new/extended array deployments, technology refreshes, data migrations, etc. Respondents at Leaders were three and a half times more likely than their counterparts at Laggards to report they were fully confident in the IT organization’s ability to execute these projects.

Storage Maturity Helps Optimize Strategic Initiatives – Analytics, Application Development

For many organizations, storage is no longer just about data retention and protection. Storage is seen as a resource with the potential, if not mandate, to support other strategic initiatives. Respondents at Leaders are much more likely than those at other levels of Storage Maturity to feel their storage resources do a good job supporting these endeavors. Nearly three-quarters of respondents at Leaders (72%) reported that storage and data services support analytics projects “very well,” far outstripping the rate reported by Laggards (18%). Similarly, 67% of respondents at Leaders reported that storage and data services support application development initiatives like DevOps “very well,” compared to 13% of Laggards. For IT organizations and storage stakeholders looking to maximize their relevance to business strategy and strategic imperatives, optimizing Storage Maturity will help highlight the value of storage resources.

The Bigger Truth

Based on ESG’s research, Storage Maturity Leaders are the exception, not the rule—87% of the market has significant work to do to attain a Leader designation. However, this research shows that incremental benefits can be achieved by taking steps up the maturity curve: Followers outperform Laggards, and Explorers outstrip Followers.

If you are interested in improving your organization’s standing against the benchmarks laid out by ESG, it is important to understand the criteria we used to assess Storage Maturity, as well as the actions you can take to improve your rating.

  1. Leaders actively refactor legacy applications and develop cloud-native applications from the ground up. Ninety-five percent of Leaders in this research have containerized one or more legacy applications compared to just 2% of Laggards. Similarly, 97% of Leaders have developed one or more cloud-native applications from the ground up versus 5% of Laggards. By adapting and developing applications that can take advantage of multicloud deployment models, organizations reduce the friction of shifting those workloads from one cloud environment to another. In fact, the majority of Leaders report they very frequently migrate workloads from cloud to cloud to capitalize on a temporary advantage (e.g., lower cost) or to satisfy a temporary requirement (e.g., a traffic spike). Not a single respondent from Laggard organizations reported this level of workload agility.
  2. Storage Leaders have placed strategic bets on next-generation infrastructure like all-flash arrays and storage virtualization. Ninety-eight percent of all Leaders support on-premises applications with flash storage compared to 26% of Laggards. Furthermore, 99% of all Leaders (versus 23% of Laggards) have deployed storage virtualization technology that allows storage management to be abstracted from the infrastructure and the underlying storage to be managed as a single pool of resources.
  3. Storage Leaders have mature DevOps and analytics initiatives underway, and these initiatives are supported by progressive uses of secondary storage. Eighty-four percent of Leaders have analytics initiatives underway that use data to develop and refine business processes over time compared to just 4% of Laggards. Nearly half (49%) of Leaders describe their DevOps adoption as extensive versus 0% of Laggards. Moreover, more than four out of five Leaders report they can use near-production copies of their data to run analytics on and to use in application development/testing compared to less than one-third of Laggards.

Leaders run data-intensive workloads on data-ready infrastructure, and they have a high degree of flexibility enabled by the application portability unlocked by containerization. As a result, they can run IT more effectively, positively impact business outcomes, and even capitalize on early gains delivered by AI-driven analytics. Organizations should take notice of the behaviors and technology solutions allowing Leaders to capture these benefits and take the steps necessary to follow their lead.