
Navigating your hybrid multicloud vision with IBM Power Systems

Life in a hybrid multicloud world

Cloud computing has undoubtedly changed how enterprise IT is delivered. It has opened the door to compute and storage resources without limits, as well as a wealth of cloud services (e.g., artificial intelligence, weather data, etc.) that IT administrators can leverage to create the next wave of enterprise innovation. This paper provides a practical guide for IBM Power Systems™ users to gain an understanding of the POWER® cloud portfolio and how to map out a journey to a secure and reliable hybrid multicloud infrastructure.

Navigating a complex IT infrastructure

Today, cloud computing provides many opportunities to run your enterprise infrastructure more effectively, including on-demand access to compute resources, disaster recovery solutions, invisible infrastructure maintenance, security patches and more. Whether you’re creating an on-premises private cloud, leveraging one or more off-premises public clouds (i.e., multicloud) or taking a hybrid cloud approach, cloud infrastructure capabilities can expand your business opportunities.

Given this broad range of technologies, how can IBM Power Systems users, running IBM AIX®, IBM i and Linux® enterprise apps, understand these capabilities and create a technology roadmap in an approachable and methodical manner?

A clear vision

A recent Gartner survey showed that 81% of organizations utilizing public cloud services are using more than one public cloud provider. And, according to the RightScale 2019 State of the Cloud Report, "Enterprises are prioritizing a balance of public and private clouds."

Hybrid multicloud has become a reality for enterprise and technology leaders. Yet, there is a need for a clear vision of how to navigate and operate in this environment.

What is hybrid multicloud?

A hybrid cloud is a computing environment that combines a private cloud and a public cloud by allowing applications and data to be shared between them. A multicloud refers to a cloud environment made of more than one cloud service, from more than one cloud vendor. Thus, a hybrid multicloud combines a private cloud, a public cloud and more than one cloud service, from more than one cloud vendor.

A hybrid multicloud strategy can unlock tremendous organizational value because it combines the best of both private cloud and public cloud. It allows organizations to run mission-critical applications and host sensitive data on-premises. It offers the flexibility of public cloud. And, it enables the movement of information between the private and public services.

Hybrid multicloud motivators and use cases

There are several motivators driving enterprises to construct a hybrid multicloud platform. Let’s explore some of the more prevalent scenarios for POWER customers (several of them are often pursued in parallel):

Deliver streamlined deployment of enterprise resources, including AIX, IBM i and Linux virtual machines (LPARs) and containerized apps

Users have grown to expect easy and on-demand access to IT resources through a cloud experience. Developers, QA engineers and line-of-business users want simplified access to infrastructure and applications. IT administrators want trusted enterprise-grade security and simplified operations. Streamlining all of these processes is made possible by adopting Power Systems hybrid multicloud technologies and processes within the data center.

Increase operational and budgetary flexibility by leveraging IBM Power Systems in a public cloud

One of public cloud’s major advantages is that it provides effectively limitless access to compute capacity billed as an operational expense. With a few clicks of the mouse on cloud.ibm.com, users get immediate access to new virtual machines or containers — where they want, when they want. IBM Cloud is the perfect place to spin up QA, production or high availability (HA) and disaster recovery (DR) environments for your Power Systems estate.

Modernize existing applications to adopt cloud-native software development principles (e.g., containers and microservices)

Containers, Kubernetes and Red Hat® OpenShift® have unquestionably transformed how software is packaged, installed and operated — paving the way for new software delivery models. To that end, enterprises worldwide are exploring container technology and developing plans on how to integrate them into their technology stacks, while delicately balancing the ongoing business need to deploy, manage, operate and integrate with today’s virtual machine-based applications.

Integrate IBM Power Systems with the broader cloud strategy

As the industry shifts towards hybrid multicloud, a comprehensive cloud management strategy has become increasingly important. According to the RightScale 2019 State of the Cloud Report, for enterprises "optimizing cloud costs (84 percent in 2019 vs. 80 percent in 2018) and cloud governance (84 percent in 2019 vs. 77 percent in 2018) are growing challenges." Long gone are the days of building siloed infrastructures. Enterprises are striving towards a model of interconnectedness so that the collective strength of their platforms and cloud providers can be leveraged to create the next wave of innovation.

High-level reference architecture

Shown here in Figure 1 is a hybrid multicloud reference architecture inclusive of the major industry hardware platforms — IBM Power Systems™, IBM Z® and x86. Power Systems is architected to economically scale mission-critical, data-intensive apps, whether virtual machine-based or containerized — delivering industry-leading reliability and reducing the cost of operations with built-in virtualization that optimizes capacity utilization. It also provides flexibility and choice to deploy apps in the cloud of your choice.

From a cloud deployment perspective, the on-premises private cloud solution includes PowerVC, which provides the infrastructure-as-a-service (IaaS) layer, and Shared Utility Capacity (previously Enterprise Pools 2.0) to deliver a pay-per-use consumption model with permanent activation of installed capacity. These solutions deliver the agility and economics of cloud in an on-premises environment while enabling organizations to rapidly respond to shifts in workload demand.

Power Systems servers are also available in the IBM Cloud and other public clouds, providing flexibility and choice to deploy HA/DR, DevTest and more. Sitting atop the infrastructure layer is Red Hat OpenShift, which provides the enterprise Kubernetes platform-as-a-service (PaaS) layer. OpenShift users can run their software of choice, including IBM’s enterprise software delivered via IBM Cloud Paks™, ISV software, open source software and custom enterprise software. To manage and operate everything from a centralized location, the IBM Cloud Pak for Multicloud Management can be used to connect the historically separate cloud infrastructures. Lastly, the Red Hat Ansible® Automation Platform can be leveraged across the entire landscape to provide a consistent approach to managing all of your operating systems and cloud infrastructures, regardless of the platforms you’re running.
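As a minimal sketch of what that consistent approach can look like in practice, the hypothetical Ansible inventory below groups AIX, IBM i and Linux hosts on Power and x86 into one estate that a single playbook run can span (host names, group names and the interpreter path are illustrative assumptions, not taken from any product documentation):

```yaml
# Hypothetical inventory spanning a mixed Power/x86 estate.
# Host and group names are placeholders.
all:
  children:
    aix:
      hosts:
        aixprod01.example.com:
    ibmi:
      hosts:
        ibmi01.example.com:
      vars:
        # IBM i hosts typically run Ansible modules under PASE;
        # this is the commonly used interpreter path.
        ansible_python_interpreter: /QOpenSys/pkgs/bin/python3
    linux_power:
      hosts:
        rhel-power01.example.com:
    linux_x86:
      hosts:
        rhel-x86-01.example.com:
```

A play targeting `all` (or any subset of these groups) can then apply the same configuration policy across every platform in the landscape.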

Reference journey to the hybrid multicloud

While each organization will have its own unique characteristics, Figure 2 serves as a general blueprint to guide POWER users through the myriad cloud technologies and remove the mystery from the journey. The path to hybrid multicloud begins with a solid foundation of infrastructure and hardware management capabilities. From there, users are directed towards establishing a cloud experience within their own data center (i.e., a private cloud), offering simplified virtualization management and operations, advanced automation and a platform to start building innovative cloud-native applications leveraging Red Hat OpenShift, Kubernetes and containers. As a parallel track to establishing a private cloud, it is also recommended to explore the public cloud to spin up QA, production or high availability (HA) and disaster recovery (DR) environments without the need to procure and administer the infrastructure in your data center.

Lastly, users need to establish robust connectivity between their on-premises and off-premises infrastructures so that applications and data can flow seamlessly between the two.

The IBM hybrid multicloud solutions

Deploy hybrid cloud on IBM Power Systems

Ensure business continuity and lower IT acquisition costs with on-premises private cloud

The IBM Private Cloud Solution with Shared Utility Capacity lowers IT acquisition costs and delivers a by-the-minute, pay-per-use consumption model in an on-premises environment. The base capacity that a user has to purchase is as low as one core and 256 GB of memory. Users can buy capacity credits for resource usage above the base capacity, and they can add multiple systems to the pool. When resource usage exceeds the aggregated base of the pool, capacity credits are debited in real time based on by-the-minute resource consumption.
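As a purely illustrative sketch of the metering arithmetic (the variables and numbers below are assumptions for illustration, not IBM pricing or contract terms):

credits debited per interval = max(0, cores consumed - aggregated base cores) × minutes × per-core-minute rate

For example, a pool with an aggregated base of 8 cores that runs at 10 cores for 120 minutes would accrue max(0, 10 - 8) × 120 = 240 excess core-minutes, debited against purchased capacity credits at the applicable rate.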

In addition, Shared Utility Capacity delivers cloud-like economics in both enterprise and scale-out POWER9-based Power Systems. It provides multiple benefits to users. The low base capacity reduces IT acquisition costs by up to 58%. With the fully activated pay-per-use capacity, customers have additional capacity that is only charged when it is consumed. This additional capacity ensures business continuity during demand spikes. With multisystem resource sharing in the pool, customers have the flexibility to balance workloads across systems and optimize resource utilization. The by-the-minute metering also ensures users pay for only the precise capacity they consume.

IBM Power Virtualization Center (PowerVC) provides on-premises enterprise virtualization management for Power Systems, inclusive of AIX, IBM i and Linux guests. Built on OpenStack, it provides a multi-tenant IaaS layer in your data center, allowing administrators to quickly provision new virtual machines in minutes.

It also provides numerous operational benefits such as one-click system evacuation for simplified server maintenance, dynamic resource optimization (DRO) to balance server usage during peak times, automated virtual machine restart to recover from failures, importing and exporting virtual machine images for cloud mobility and more. It also enables DevOps capabilities such as “infrastructure as code” by way of Ansible or HashiCorp Terraform. Terraform can provision Power resources through PowerVC by leveraging the out-of-box OpenStack provider. PowerVC provides the foundational technology on top of which the rest of the on-premises POWER cloud stack is built.
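Because PowerVC exposes standard OpenStack APIs, generic OpenStack tooling can drive it. As one hedged sketch (the Terraform route via the out-of-box OpenStack provider mentioned above is equally valid; here the same idea is expressed with Ansible’s openstack.cloud collection, and the cloud, image, flavor and network names are placeholders):

```yaml
# Sketch: provision an AIX LPAR through PowerVC's OpenStack API using
# the openstack.cloud collection. Authentication comes from a clouds.yaml
# entry named "powervc"; all resource names below are placeholders.
- name: Provision an AIX LPAR via PowerVC
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create the virtual machine
      openstack.cloud.server:
        cloud: powervc          # clouds.yaml entry pointing at PowerVC
        name: aix-dev-01
        image: aix72-base       # placeholder PowerVC image name
        flavor: small           # placeholder compute template
        network: prod-vlan      # placeholder network name
        state: present
```

Running the playbook a second time is a no-op if the server already exists, which is what makes this declarative style suitable for treating infrastructure as code.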

Reduce data center footprint and get cloud agility with public cloud

IBM Power Systems Virtual Server integrates AIX, IBM i and Linux capabilities into the IBM Cloud experience and is available on POWER9-based Power Systems. Users receive fast, self-service provisioning, flexible management and access to a stack of enterprise IBM Cloud services with pay-per-use billing. Users can easily export virtual machine images in the standard OVA format from PowerVC and upload them into the IBM Cloud for back-and-forth image mobility. With this public cloud solution, POWER users can grow at their own pace and run enterprise workloads when and where they choose with a variety of flexible operating system, compute, storage and networking configurations.

Simplify hybrid cloud management

IBM Cloud Pak for Multicloud Management runs on Red Hat OpenShift and provides a single control point to manage a hybrid IT environment. This delivers consistent visibility, governance and automation across the entire hybrid multicloud landscape, bridging traditional virtual machine apps with new cloud-native container apps. Offered as part of this Cloud Pak are three management applications critically important to hybrid multicloud:

  • Infrastructure Management (formerly known as CloudForms) provides centralized management and dashboarding of the virtual infrastructure components (e.g., virtual machines, volumes, networks, etc.) across hybrid cloud. It also offers connectivity to major public cloud infrastructures, including PowerVC for on-premises management of your Power Systems environment.
  • IBM Cloud Automation Manager (CAM) provides advanced multicloud orchestration capabilities. Using HashiCorp Terraform as its underlying engine, CAM enables connectivity to numerous cloud infrastructures, including PowerVC (OpenStack), IBM Cloud, AWS®, Azure®, Google® and several others. CAM can provision virtual machines, including LPARs via PowerVC, as well as containers. This allows users to create software catalog entries that build complex multi-tier applications with a single click. And, because CAM is delivered as part of an IBM Cloud Pak, it runs on Red Hat OpenShift, creating a centralized service catalog from which you can deploy all your applications.
  • IBM Multicloud Manager (MCM) provides a single multicloud dashboard that enables organizations to oversee multiple cloud endpoints on public or private cloud infrastructures. MCM provides consistent visibility, governance and automation across a hybrid multicloud environment.

Achieve consistent enterprise IT automation with Ansible

Red Hat Ansible Automation Platform is enabled for IBM Power Systems across AIX and IBM i environments and runs on Power Systems private and public cloud infrastructures.1 Red Hat Ansible Certified Content for IBM Power Systems helps you include workloads on Power Systems as part of your wider enterprise automation strategy through the Red Hat Ansible Automation Platform ecosystem. Enterprises already using Ansible for other IT infrastructure, such as x86 or IBM Z servers, can seamlessly integrate Power servers as well. The Ansible content helps enable DevOps automation through unified workflow orchestration with configuration management, provisioning and application deployment in one easy-to-use platform. This is an important step in delivering a comprehensive enterprise-grade solution for building and operating IT automation at scale.

1 Some Ansible content is only available in open source form from Ansible Galaxy.
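As a minimal, hedged illustration of folding AIX hosts into that wider automation strategy, the playbook below uses only built-in Ansible modules, so it runs against any reachable AIX host with Python installed; the certified ibm.power_aix and ibm.power_ibmi collections layer purpose-built modules (patching, LVM management and so on) on top of this same pattern:

```yaml
# Sketch: audit AIX service-pack levels with built-in modules only.
# The "aix" inventory group is an assumption from the example inventory.
- name: Audit AIX service-pack levels
  hosts: aix
  gather_facts: false
  tasks:
    - name: Query the current OS level
      ansible.builtin.command: oslevel -s
      register: oslevel_out
      changed_when: false   # a read-only query never changes the host

    - name: Report the result
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }} is at {{ oslevel_out.stdout }}"
```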

Build modern cloud-native apps with IBM Cloud Paks and Red Hat OpenShift

IBM Cloud Paks are enterprise-ready containerized software solutions that provide an open, fast and secure way to move core business applications to any cloud. They are lightweight and easy to run, certified by IBM and Red Hat. Each Cloud Pak sits atop Red Hat OpenShift and can run anywhere: on premises, in the cloud or at the edge.

Cloud Paks are composed of a set of containerized IBM middleware and common software services. IBM offers six Cloud Paks: IBM Cloud Pak for Applications, IBM Cloud Pak for Data, IBM Cloud Pak for Integration, IBM Cloud Pak for Automation, IBM Cloud Pak for Multicloud Management, and IBM Cloud Pak for Security. Each offering provides a broad set of capabilities for a particular domain.

Red Hat OpenShift is the industry-leading platform-as-a-service (PaaS) technology built on Kubernetes, fully enabled and supported on IBM Power Systems. Red Hat OpenShift provides an infrastructure-independent common operating environment that serves as a common foundation across both private and public cloud, making it the de facto standard fabric for hybrid cloud infrastructures. Red Hat OpenShift provides a trusted platform from which to build new cloud-native, container-based applications. It also provides access to a broad software ecosystem, including open source software, IBM enterprise middleware (via IBM Cloud Paks) and ISV software.

Integrations with other cloud orchestrators

  • VMware vRealize Automation™ (vRA) speeds up the delivery of infrastructure and application resources through a policy-based self-service portal, on-premises and in public cloud. In addition to x86 VMware-based virtual machines, vRA is able to provision Power virtual machines (including AIX, IBM i and Linux) with PowerVC, providing the ability to orchestrate deployments across hybrid cloud.
  • VMware vRealize Operations for IBM Power Systems brings together all management functions including performance management, capacity, cost analytics, planning, topology analysis and troubleshooting in one integrated, highly intuitive, scalable and extensible platform. It also provides deep insights and key performance indicators for enterprise applications, including SAP HANA, Db2, Oracle and several others. This comprehensive monitoring solution is a perfect complement to a cloud management software stack as it provides a broad and deep perspective into what is happening in the cloud.

We hope our commitment to delivering open and flexible solutions for your hybrid multicloud journey will help you leverage partner cloud technologies and seamlessly integrate Power Systems with the rest of your data center.

The Key To Enterprise Hybrid Multicloud Strategy

Executive Summary

As enterprise IT readily embraces public cloud technologies, on-premises and private cloud usage continues to grow. On-premises is not going away as a critical part of IT infrastructure strategy; instead, organizations are meshing together various types of IT infrastructure to meet their needs. Organizations that can bring together on-premises with public cloud strategically will be best positioned for operational excellence.

In August 2019, IBM commissioned Forrester Consulting to evaluate how organizations develop and implement their IT infrastructure strategies. Forrester conducted an online survey of 350 global enterprise IT decision makers across industries to explore this topic. We found that organizations are mixing and matching technologies across public cloud, hosted private cloud, and on-premises infrastructure based on business requirements.

KEY FINDINGS

  • On-premises infrastructure is key to enterprise hybrid cloud strategy. Enterprises are making strategic decisions about what types of IT infrastructure to use for which purposes — and on-premises continues to play a key role, with 90% of IT decision makers agreeing that on-premises infrastructure is a critical part of their hybrid cloud strategies.
  • IT decision makers select the right IT infrastructure strategy according to the job to be done. Technology professionals consider workload, security needs, and time-to-value when designing IT infrastructure strategies. When it comes to workloads, IT decision makers anticipate that more than half of mission-critical workloads and 47% of data-intensive workloads will be run either on-premises or in an internal private cloud in two years.
  • The push to public cloud doesn’t mean organizations have stopped investing in on-premises. The majority of IT decision makers surveyed expect their companies’ funding for public cloud to grow over the next 24 months. At the same time, more than eight out of 10 respondents predict their organizations will increase investment in IT infrastructure outside of public cloud.
  • Tapping the brakes on refreshes and upgrades can come at a cost. Delays in IT infrastructure refreshes and upgrades expose enterprises to expensive vulnerabilities and can negatively impact customer experience. Security vulnerabilities, software compatibility issues, and an inability to meet customer expectations as a result of delays in infrastructure refreshes are top concerns for IT decision makers.

On-Premises And Private Cloud Investments Grow At Parity With Public Cloud

Public cloud trends have garnered growing coverage over the last several years, but the increased attention on transitioning to cloud and expanding outside the data center doesn’t tell the whole story about organizations’ IT infrastructure strategies. In addition to grappling with how and what to shift to public cloud, enterprise IT organizations are also struggling with increasing demands on existing IT infrastructure, with the end result being that on-premises and private cloud spending and usage also continue to grow. In surveying 350 IT decision makers, we found that organizations are simultaneously:

  • Growing public cloud footprints. Sixty-two percent of organizations already have some form of public cloud, and 82% of tech professionals expect to increase funding for public cloud over the next two years (see Figures 1 and 2). This finding is not surprising, as cloud has become mainstream.
  • Providing for heightened demand on existing infrastructure. One of the top three IT priorities is providing for growing demands on existing IT infrastructure. However, in the cloud era, there is pressure to extend infrastructure without needed updates and upgrades. In fact, 61% of respondents say their organizations have delayed an infrastructure refresh at least a few times in the last five years (see Figure 3). IT is grappling with how to get more out of their existing technology stacks without exposing themselves to risk.
  • Increasing on-premises and other nonpublic cloud investment. Funding for infrastructure outside of public cloud is roughly at parity with expected cloud growth: 85% are increasing funding for infrastructure (not including public cloud). Meanwhile, more than half of IT decision makers plan to update existing infrastructure or purchase new infrastructure within the next 12 months.

Lack Of Reinvestment Can Leave Organizations Vulnerable

As organizations continue to transition to hybrid multicloud environments, those that do not take a holistic view of their IT infrastructure, including on-premises, open themselves to security vulnerabilities, breakage and, ultimately, loss of customer confidence and loyalty. Even when individuals recognize a need for a holistic approach, the road to implementing an all-inclusive infrastructure strategy is not easy. Seventy-five percent of survey respondents reported that they received significant pushback while advocating for strategies outside of cloud environments (see Figure 4). As a result, IT decision makers struggle with a variety of cost and strategy challenges following a delay in infrastructure refreshes and upgrades, including:

  • Security vulnerabilities. When organizations prioritize other IT initiatives over infrastructure refreshes, they leave themselves exposed to security risks. Our survey findings reveal that the highest-ranked repercussion is security vulnerabilities, at 44%.
  • Inability to meet increased customer and employee expectations. By delaying infrastructure refreshes, organizations hinder the process for improving customer and employee experience. Forty-three percent of respondents cited the inability to meet increasing expectations of customers and employees as one of the top five consequences of delaying an infrastructure refresh. Technology innovation has powerfully changed how customers experience and value products, and in this era of hyperadoption and hyperabandonment, investing in customer experience is more critical than ever before.
  • Compatibility restrictions. Forty-three percent of respondents ranked restrictions for compatible apps, software, services, and integration as a top five challenge following a delay in infrastructure refresh.
  • Decreased market competitiveness. Based on our study, 39% of respondents have felt a loss of competitive edge as an IT organization. As a result of putting infrastructure refreshes on the back burner, organizations have not only opened themselves to internal vulnerabilities, but have also put themselves at risk of falling behind their competition.
  • Diminished performance. In addition to organizations losing their competitive edge, delays in refreshes are also reducing organizations’ performance. Thirty-eight percent of respondents stated that their organizations have experienced a decrease in performance post-delay.

Crafting A Comprehensive IT Infrastructure Strategy: One Size Does Not Fit All

Organizations supplement cloud strategy with on-premises infrastructure to use the right tool for the job. On-premises infrastructure continues to be foundational, with 90% of respondents agreeing that it is a critical part of a hybrid cloud strategy (see Figure 6).2 Our survey revealed that key considerations for infrastructure decisions include:

  • Type of workload. Organizations are increasing the percentage of mission-critical workloads that are run in public cloud and internal private cloud at comparable rates. At the same time, they expect to increase data-intensive workloads that are run in hosted private cloud environments. Organizations also leverage on-premises for improved application or infrastructure performance, which lands in the top three reasons organizations leverage on-premises resources for some workloads.
  • Compliance and security. Greater assurance for compliance is the No. 1 reason for using on-premises resources for select workloads. According to respondents, failure to meet security needs is the top reason for maintaining infrastructure outside of a public cloud platform. Hosted private cloud offers the benefits of traditional on-premises infrastructure in a secure, private setting, while also allowing organizations to take advantage of cost savings and flexibility.
  • Cost and time-to-value. Organizations ranked avoiding time-intensive budget approvals and realizing faster productivity with less process as top reasons to leverage on-premises resources. This need is particularly driving private cloud investment, with most viewing internal private cloud as a developer environment. These findings suggest that organizations use on-premises and private cloud to side-step bureaucratic processes and kick-start development efforts.

As organizations grow both their public cloud and nonpublic cloud footprints, continued investment in on-premises remains key. This theme is evident as a majority of organizations craft infrastructure strategies that account for increased workload demands, security compliance, and growth.

Key Recommendations

In a world where the focus centers on the cloud, it is easy to make the mistake of moving application workloads without a clear rationale for what benefits migration will achieve. Our survey uncovered evidence of this pressure to shift to cloud, as well as the reality that many organizations are intentionally and strategically leveraging a hybrid cloud strategy driven by diverse business and technology requirements. Forrester’s in-depth survey of 350 global IT decision makers about IT infrastructure yielded several important recommendations:

  • Invest in cloud using a strategy that aligns to your context. First, determine whether you are seeking gains at the application level or the data center level. Then, create your own sourcing framework with factors that may include cloud readiness, location challenges, compliance requirements, data types, need for additional support, and expected lifetime, among other factors.4 Hedge against cloud vendor lock-in by designing for multicloud deployment and architectures wherever possible.
  • Don’t let cloud obsession stop other infrastructure investments. The perception that infrastructure investment outside the public cloud has stopped is false. Yet to an infrastructure professional, it can feel like budgets are under attack. The majority of IT leaders continue to invest.
  • Beware of delaying investment. Those that have delayed or stopped investment have experienced security vulnerabilities, software compatibility issues, and an inability to meet customer expectations. Learn from your peers and advocate for updates and upgrades.
  • Build an irrefutable business case. Our survey found that organizations are most likely to use higher performance as a proof point to justify new investment (see Figure 8). Performance is especially critical since it has a significant impact on customer experience (CX) and brand perception. Executives who can’t commit to complete refreshes can leverage subscription-based infrastructure refresh options to provide a more flexible future if their strategy changes.
  • Explore alternative environments for data-intensive workloads. Public cloud serves many workload types, but some use cases are extraordinarily expensive or introduce too great a risk surface. Data-intensive workloads are a prime example where hybrid cloud strategies look to optimize across all IT infrastructure options and ensure cost efficiency.

Learn the benefits and best practices of bringing a hybrid cloud strategy to life within your organization

Hybrid cloud for IT transformation

In a world of complex security, workload and data hosting needs, enterprise leaders may find that a “one-cloud-fits-all” strategy does not effectively address the needs of their organization. Instead, a more tailored approach is needed to truly transform their digital landscape and provide them with the ability to deploy applications and data in a secure, integrated, flexible and simple-to-manage way.

For a majority of enterprises, a hybrid cloud strategy has become the preferred model for deploying applications and data. According to 451 Research, more than two-thirds of companies (68%) are choosing the default approach of making strategic IT investments in hybrid IT and integrated on-premises/off-premises cloud environments.1 And among the top IT spending priorities for these organizations in 2019 are new IT projects for digital transformation (35%), upgrading/refreshing existing IT (30%) and customer experience/engagement improvements (29%).

This shift to hybrid cloud offers IT leadership a unique blend of security for mission-critical workloads, flexibility for dynamic delivery and performance to meet the need for continuous and effective innovation. Adoption of a hybrid cloud strategy enables a large organization to customize its framework and deploy a model that best serves its business objectives, critical workloads and future initiatives to better serve its customers.

Understanding cloud environments and multicloud management

A hybrid approach may be the best move for an enterprise looking to keep their data protected and private while meeting the demand for business agility. The truth is, many of the critical workloads of enterprise businesses cannot or should not be moved to the public cloud. Such a move could compromise the security of mission-critical data for core business applications. Major financial, health, government and other large enterprises cannot take the risk with their business and customer data.

Understanding cloud environments and making decisions about multicloud management is complex. Many questions arise: What resides on-premises? What lives in a private cloud vs. public cloud? Which public clouds should be used? What data or applications should be on-premises rather than off-premises? Why did your IT team deploy some applications in those respective environments, and was it the right decision? It’s important to have a solid understanding of your current IT infrastructure and of how your workloads align with each deployment model. With that in mind, let’s take the time to explore the various cloud deployments.

Private cloud

A private cloud refers to a cloud solution where the infrastructure is provisioned for the exclusive use of a single organization, either on premises or off premises. The organization often acts as a cloud service provider to internal business units that obtain all the benefits of a cloud without having to provision their own infrastructure. By consolidating and centralizing services into a private cloud, the organization benefits from centralized service management and economies of scale.

An on-premises private cloud provides some advantages over an off-premises private cloud. For example, an organization gains greater control over the resources and data that make up the cloud. In addition, on-premises private clouds are ideal when the type of work being done is not practical for an off-premises private cloud because of network latency, security or regulatory concerns.

Public cloud

A public cloud infrastructure is made available to the general public or a large industry over the Internet. The infrastructure is not owned by any single user, but by an organization that provides cloud services to a variety of businesses. Public cloud services can be provided at no up-front cost, as a subscription or as a pay-as-you-go model, and resources can be shared across multiple businesses to reduce costs.

Hybrid cloud

A hybrid cloud deployment typically describes a situation in which a company is operating a mixture of private cloud, public cloud and traditional environments — regardless of whether they are located on premises or off premises. In a hybrid cloud environment, private and public cloud services are integrated with one another.

Hybrid cloud enables a business to take advantage of the agility and cost-effectiveness of off-premises, third-party resources without exposing all applications and data beyond the corporate intranet. A well-constructed hybrid cloud can service secure, mission-critical processes, such as receiving customer payments (a private cloud service) and secondary processes, such as employee payroll processing (a public cloud service).

The challenges for a hybrid cloud are the difficulty of effective creation and governance, the need to ensure portability of data and applications in the cloud, and the management of complexity. Services from various sources must be obtained and provisioned as though they originated from a single location, and interactions between private cloud and public cloud components make the implementation even more complicated.

Hybrid multicloud architecture

Hybrid multicloud refers to an organization that uses multiple public clouds from several vendors to deliver its IT services, in addition to private cloud and traditional on-premises IT. A hybrid multicloud environment consists of a combination of private, public and hybrid infrastructure-as-a-service (IaaS) environments, all of which are interconnected and work together to avoid data silos.

Many enterprise companies are failing to make their various data repositories and systems ‘talk to each other’ effectively and efficiently, if at all. The result: more data silos that hinder or prevent data movement and sharing.

With a modern hybrid multicloud architecture in place, you gain access to a single source of truth as it relates to your data. If optimized properly, you can quickly access data that is reliable and accurate. Moreover, data that is unified in one location is accessible whether it resides on-premises or off-premises.

Benefits of a hybrid cloud strategy

Get the best of all environments

Hybrid multicloud is the new normal for enterprises investing in IT modernization. And with it you can get the best of all environments — while public cloud is prized for delivering customer-facing applications, on-premises private cloud is valued for securing data and prized for quick access to on-site data and applications. Optimizing for both agility and essential business needs can lead to cost efficiencies as well, because keeping critical workloads on-premises can yield big savings on frequently used data. Let’s explore the benefits of a hybrid cloud environment.

Five benefits of a hybrid cloud environment

Security

In this era of frequently reported data breaches, securing all of an organization’s data is essential to maintaining customer confidence and protecting critical business data. Just as important is being able to prove to regulators that customer data is fully protected. Storing secured data on-premises and enabling fast access from cloud applications is a good start; extending the protection of data into both private and public cloud enables flexibility. A hybrid cloud environment gives you a choice of how and where your data is housed within your organization, and it is important to keep it protected wherever it resides.

Agility

A hybrid cloud environment will enable you to rapidly deploy applications to satisfy customer demand and exploit business opportunities. It makes applications and data more easily accessible to a wide variety of users. And, it gives the ability to integrate your on-premises applications and data with the public cloud to securely make all of your data and applications available.

Mobility

Develop new cloud-native applications using containers so they can be hosted on private and public cloud. This enables you to run applications on the right platform and take advantage of available resources. Deploying these applications using Kubernetes can help you manage cloud complexity while minimizing cost. Central to all of this is the flexibility of open source software and an infrastructure-independent common operating environment that runs anywhere — from on-premises private clouds to public clouds, across your entire value chain.

Integration

Remove data silos so that your core business data and applications can fuel new development and surface new insights across your business. Co-locate applications close to the data to enable faster processing and insights — from corporate data or data generated by Internet of Things (IoT) devices — while ensuring critical data remains in the most secure environment.

Cost

The hybrid cloud enables optimized placement of workloads and sharing of resources, which can help minimize both predictable costs, such as data center, software purchase and licensing costs, and the cost of supporting spikes in demand. A hybrid approach is flexible enough to adapt over the life of your organization.

Insider advice for CIOs building hybrid clouds

Shifting to a hybrid cloud approach means listening and adjusting to each business unit (BU) within your organization. One BU may favor a specific public cloud service for its work, while another BU may have established a critical and efficient system with a different cloud service. A hybrid approach accommodates each BU’s dependencies and needs, so you can select the right service for their workloads and your customers.

According to 451 Research1, “hybrid is the preferred (or in effect, default) approach for a greater proportion of large enterprises, more than 10,000 employees (69%) and government/education organizations (73%).” Moving forward with building your hybrid cloud environment means addressing a variety of organizational issues and demands.

Securing your data

There can be no compromising when it comes to the security and privacy of your data and your customers. To prepare for data growth and future regulations, you need a secure hybrid cloud that protects you from IT threats. But not all vendors use a secure-by-design approach. Your secure hybrid cloud should do the following:

  • Encrypt 100% of data, both at rest and in-flight – using on-chip hardware crypto accelerators wherever possible to minimize encryption overhead
  • Protect and store encryption keys using Hardware Security Modules certified to the highest NIST FIPS levels
  • Localize data on-premises in a private cloud to meet data privacy regulations
  • Secure application environments to run trusted workloads, designed for protection from internal and external threats
  • Extend data privacy beyond the host server and across the hybrid cloud

Managing complexity

Collaborating across your organization requires a cultural and technological investment. It can be challenging but it’s something many organizations are pursuing to lower cost and raise availability for their critical and experimental work. To enable collaboration across your organization, consider investing in:

  • Infrastructure-independent common operating environments that run anywhere — from the data center to multiple clouds to the edge
  • Building cloud-native applications using multi-architecture containers and deploy across the hybrid cloud using Kubernetes
  • Integrating new applications with existing data and systems to maximize value
  • Leveraging multicloud management to ensure the best use of resources

Building your hybrid cloud

Hybrid cloud with the right technology

In order to be an agent of change in your organization, you’ll need to have the right technology in place to support your every move. So, we put together a list of hybrid cloud technologies worth looking into as you begin or continue on your hybrid cloud journey. As you plan out your environment, here’s what you’ll need.

  • Open-source software to avoid vendor lock-in and enable innovation.
  • Lightweight virtualization and orchestration software to package applications with their software dependencies, and to accelerate development and deployment.
  • Infrastructure-independent common operating environment to enable the portability of applications across hybrid cloud environments.
  • Database and middleware software integration to help move and integrate core business applications to the hybrid cloud securely.

A short technology deep-dive

Linux

Linux has established itself as the leading operating system, both for traditional IT and in the cloud. It has been ported to multiple architectures and systems, from embedded IoT devices to supercomputers. Although there are many Linux distributions available, three have emerged as the leaders for enterprise Linux: Red Hat Enterprise Linux, SUSE Linux Enterprise Server and Ubuntu from Canonical.

Containers

Containers are a feature of Linux and other operating systems that packages application code together with all the software dependencies it needs in order to run. This ensures that the application has everything it needs to run out of the box, independent of the operating environment in which the container runs.

Containers make life easier for both developers and administrators. They are lightweight to run and extremely quick to start, which improves responsiveness. Administrators can run many of them at once to create a highly scalable environment. Their cloud-friendly nature makes it easier to deploy them automatically, and containers can run in many different operating environments because they carry the files on which they depend. Multi-architecture containers are now possible as well, enabling container development on one architecture and deployment on another.

Kubernetes

Containers have been widely adopted, which means there can be lots of them, making them difficult to manage. Managing application deployment therefore requires a new approach: containers need to be created, provisioned, run and deleted very quickly, and so require powerful orchestration software to manage them at scale.

Kubernetes, another open-source project, has emerged as the most popular container orchestration tool. It is declarative rather than procedural, which means the systems administrator specifies the desired end state of deployment and Kubernetes works out how to achieve it.
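To make “declarative rather than procedural” concrete, the manifest below states only the desired end state (three replicas of a given image, pinned to Power worker nodes via the standard kubernetes.io/arch node label, tying back to the multi-architecture containers mentioned earlier), and Kubernetes continuously reconciles the cluster toward that state, rescheduling or restarting containers as needed. The image name is a placeholder:

```yaml
# Declarative desired state: Kubernetes keeps three replicas running,
# recreating pods after failures without procedural steps from the operator.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeSelector:
        kubernetes.io/arch: ppc64le   # schedule onto Power nodes (amd64/s390x also valid)
      containers:
        - name: web
          image: registry.example.com/team/web:1.0   # placeholder multi-arch image
          ports:
            - containerPort: 8080
```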

Red Hat OpenShift

Red Hat OpenShift Container Platform provides an infrastructure-independent common operating environment that runs anywhere — from any data center to multiple clouds to the edge. It includes support for containers and Kubernetes, as well as additional services and management capabilities.

IBM Cloud Paks

IBM Cloud Paks are enterprise-ready, containerized software solutions that offer an open, faster and more secure way to move core business applications to any cloud. Built on Red Hat OpenShift, each IBM Cloud Pak includes a container platform, containerized IBM middleware and open source components, and common software services for development and management.

Building your hybrid cloud on IBM LinuxONE

IBM LinuxONE is an enterprise platform designed to deliver high availability, security and scalability, with the agility to develop next-generation applications. As such, it can provide an ideal platform for building each element of the hybrid cloud – whether public cloud, private cloud or traditional on-premises IT.

Here are some of the benefits of building your hybrid cloud on LinuxONE:

  • Supports Linux, containers and Kubernetes for cloud-native application development, deployment and management — with future support for Red Hat OpenShift and IBM Cloud Paks announced in a recent Statement of Direction
  • Engineered to deliver a highly scalable, secure, reliable and cost-effective platform for building and deploying containers — whether on private or public cloud
  • Scales both vertically and horizontally, supporting big containers (for applications that have been containerized but not yet refactored into microservices) as well as many parallel containers (for new cloud-native applications using containers and microservices)
  • Protects data and applications from both internal and external threats through pervasive encryption, key protection, and a highly secure environment for running applications
  • Designed for 99.999% availability to meet consumer and business expectations, LinuxONE is able to quickly recover from disaster scenarios
  • Improves quality of service compared with purely public or private clouds. There are limits to public and private clouds’ ability to deliver high quality of service to your partners and customers, and this is another area where the hybrid cloud model shines. A hybrid approach gives you the power to integrate new cloud workloads with your existing IT infrastructure, which can lead to faster service for your customers. And, by having a fuller view into all your workloads, you can leverage big data for new insights that can lead to future application improvements.
  • Reduces total cost of ownership by sharing resources, consolidating licensed software onto fewer cores and simplifying IT management. With an open approach, you will be able to take on more advanced servers built with the highest levels of security, scalability and reliability and apply those advantages across all your workloads. The added scale won’t come at the cost of security, either: hybrid cloud lets you containerize existing and future applications.

Plastic Bank

With scientists predicting more plastic than fish in the ocean by 2050, the Plastic Bank founders asked what they could do to protect the natural world. Working with IBM and service provider Cognition Foundry, Plastic Bank is mobilizing recycling entrepreneurs from amongst the world’s poorest communities to clean up plastic waste in return for life-changing goods. To support its expansion, Plastic Bank selected IBM Blockchain technology delivered on a private cloud by managed service provider Cognition Foundry, powered by IBM LinuxONE. The application front end was designed and developed by Cognition Foundry and is hosted in Cognition Foundry’s datacenter and the IBM Cloud, creating a hybrid multicloud architecture. Blockchain is used to track the entire cycle of recycled plastic from collection, credit and compensation through delivery to companies for re-use.

Digital Asset Custody Services (DACS)

Smart contracts and crypto-asset technologies are set to transform the way enterprises across industries do business. Existing solutions tend to force people to choose between either security or convenience. For example, cold storage options generate and store assets in an offline environment. While this approach protects assets from cyber attackers, it slows down transactions. On the other hand, relying on exchanges or third-party wallets to manage digital assets means trusting that they will safeguard them adequately, and that there won’t be any interruptions to their services.

To enable companies to protect and use their digital assets freely, Digital Asset Custody Services (DACS), a subsidiary of Shuttle Holdings, is working with IBM to create a first-of-its-kind servicing platform based on IBM LinuxONE™ servers and IBM Secure Service Container for IBM Cloud Private. Customers will have the choice to deploy the solution on-premises as part of a private cloud environment or as a service.

ICU IT Services

ICU IT Services, a Dutch IT infrastructure service provider, built a solution to capture new clients by merging the best of open-source and enterprise technology. Recognizing the growing popularity of open-source technology, the company saw an opportunity to tap into a new part of the marketplace. As an example of the innovation enabled by the IBM solution, ICU IT has created its own multi-architecture cloud environment using OpenStack solutions and IBM Cloud™ Private. This sophisticated cloud infrastructure incorporates both Intel and LinuxONE nodes and is integrated with the IBM z/OS environment.

HCL

HCL Technologies, a Sweden-based IT services company, leverages its hybrid cloud environment to satisfy the needs of its customers. This is especially important since HCL’s customers expect that their applications and private cloud services will support their increasing demands for performance, manageability and security. With no two customers alike, HCL Services is able to provide the scalable, consistent, predictable and secure cloud services that its customers demand.

Four steps to hybrid cloud readiness

1. Align your IT with C-suite business priorities and goals

Understand C-suite business goals and align with strategic initiatives. You don’t want to go into the meeting with inaccurate information; you’ll want to align with and speak directly to their needs.

This could include:

  • Technology priorities: Modernizing technology and building agility between teams. Be able to speak to how DevOps connects to cloud, data analytics to AI and data protection to security and resiliency.
  • Business priorities: Delivering better customer experience, creating a digital business model, building AI training models, or implementing thorough security mechanisms to remain compliant with current regulations.

2. Choose an infrastructure mix of private cloud, public cloud, and on-premises traditional IT that fits your hybrid cloud plan

  • Look at the workloads, data placement and agility needs
  • Match workload requirements to platforms
  • Choose an infrastructure-independent common operating environment that runs anywhere
  • Leverage multi-architecture containers and interpreted languages in application development and deployment to achieve true portability of applications across the hybrid cloud

3. Share your plan with your leadership team

  • Be direct and concise. State the key takeaways from your research efforts:
  • Key differences between public and private clouds
  • What an optimized hybrid cloud environment offers
  • Your hybrid cloud plan and next steps
  • Prepare for the C-suite Q&A. This is your research and meeting, so make sure you are ready for any question that may come your way.
  • Press for investment/clarify timeline. Time is of the essence, so this is a great opportunity to encourage investment urgency.

4. Conclude and reiterate the business value

Restate the business benefits as a result of implementing a mature hybrid cloud solution.

  • Unify data to gain a single source of truth
  • Ensure applications are delivering accurate insights
  • Derive greater value from unstructured data to enable better business outcomes
  • Ensure greater business resilience
  • Deploy modern applications
  • Drive business satisfaction
  • Enable data scalability as business grows

With the meeting over, be sure to have follow-up action items and encourage any and all feedback from stakeholders.

Embracing hybrid cloud

A hybrid cloud strategy is a huge advantage for any data-driven enterprise up to the challenge. Yet, a project of this scale demands more than a will to lead on digital transformation. It requires the tools to support your every move. With the right team, goals and solutions in place, your data-driven enterprise can benefit from the following:

  • Cost reductions
  • Added reliability
  • Simpler data management
  • More rapid provisioning
  • Faster time to market for your products and services

How to Simplify Your Hybrid Multicloud Strategy

THE COMPLEXITIES OF MULTICLOUD

As the following Forrester report (Assess The Pain-Gain Tradeoff of Multicloud Strategies) shows, the world of hybrid multicloud is a complex business. Sure, with a hybrid multicloud platform you get the data security and uptime reliability of on-premises architecture combined with the agility and on-demand growth of the cloud, but that combination comes in many different forms.

First of all, “multicloud” can mean various things to various people, including:

  • Multiple clouds hosting different apps based on app characteristics
  • Multiple clouds hosting parts of an app ecosystem
  • Multiple deployment options, with common APIs, on the same cloud
  • Multiple clouds being used simultaneously for a single app

Furthermore, a multicloud strategy appeals to different organizations and stakeholders for unique reasons, ranging from improved performance and disaster recovery through to the unique strengths of outside platforms and the desire to diversify risk.

For organizations looking to grow their infrastructure, it may seem that a hybrid multicloud environment is the most obvious cloud strategy. However, this isn’t always the case. Sometimes—as when you face problematic latency and bandwidth, or double the work for half the productivity—a hybrid multicloud approach just isn’t feasible.

That’s why over-complicating your hybrid multicloud strategy risks increasing your costs while still delivering unimpressive results.

SIMPLIFYING YOUR STRATEGY

The best approach to establishing a simplified multicloud strategy, as Forrester explains, is to drill down into your needs: “Get specific about your multicloud plan and goals before jumping into decisions about public or private cloud and complex sourcing algorithms for determining the fate of your application portfolio. What greater purpose does your cloud strategy serve? What specific efficiencies — process and resource utilization — do you seek for your company, and why are those important?”

The following suggestions can help you ensure that your strategy is actually strategic, and not just ineffectively complex:

  1. Make sure that you’re getting more out of your hybrid multicloud than you’re putting into it, meaning that the effort you’re expending doesn’t exceed the value of the end result.
  2. Don’t overly complicate your cloud strategy. The easiest, most simplified approach is often the best one.
  3. Avoid altering too much at one time unless you’re at a point of significant change in your organization, allowing you to make more radical adjustments all at once.

By focusing on simplification, you can find the right hybrid multicloud solution for your organization and ensure that you’re getting far more “gain” and far less pain.

The Pain Is Usually Worth The Gain (But Not Always)

Hybrid cloud and/or multicloud is your obvious cloud strategy — or is it? Forrester outlined the basics of hybrid and multicloud in our report “Top 10 Facts Every Tech Leader Should Know About Hybrid Cloud.” But a larger story is emerging that questions the very nature of using multiple platforms — cloud or noncloud. Factors favoring variety are freedom of choice, heightened resiliency, and application-specific sourcing optimization. In opposition is the inefficiency associated with managing multiple versions, which adds complexity and redundancy as enterprises veer toward cloud strategy pragmatism. As enterprises weigh these tradeoffs, they come to obvious conclusions:

  • At times, vendor variety is worth it. Multicloud is popular for good reason. Different platforms have different strengths. By leveraging multiple cloud platforms, organizations gain freedom of choice and application-specific sourcing optimization. This can meet a variety of app or end user demands, whereas forcing all workloads to fit on a single platform can be detrimental to regulatory compliance, cost, performance, or user experience. Secondarily, using multiple platforms reinforces the concept of heightened resiliency by not putting all one’s eggs in a single basket.
  • On the other hand, strategic partnership creates great value. Some companies believe that maintaining multiple platforms can be expensive, whether for a portfolio of workloads or, even more, for a single application, and thus decide to focus on a single strategic partnership.1 The streamlined productivity, unified native management tooling, single data location, reduced redundancy, and ability to leverage unique and maintained services outweigh the benefits of added freedom.2 Strategic partnership can also reduce the risk that too much complexity can introduce.

KEY DRIVERS FOR MULTICLOUD

When Forrester asked North American and European infrastructure technology decision makers employed at enterprises why they leverage multiple cloud platforms, the three most common responses embraced the concept of strategic rightsourcing: to improve performance of latency-sensitive apps (31%); because different apps require different cloud services (28%); and for disaster recovery (26%).3 Most cloud-savvy companies are familiar with these efficiency principles. However, via inquiries and briefings, Forrester has uncovered other common reasons that enterprises turn to multicloud:

  • Unique strengths outside a primary provider. Some enterprises reluctantly leverage clouds outside their primary cloud platform due to unique value-adds that they can’t find on their primary platforms. Examples include adtech and video streaming companies drawn to bare metal services, or favorable pricing for high I/O configurations while leveraging a megacloud provider for workloads without these characteristics.
  • Compelling discounts. Microsoft has long used discounting on Office 365 and Skype as a way of incentivizing the selection of Azure as a primary platform.4 Microsoft can clearly hold its own in the public cloud platform market — it’s a Leader in “The Forrester Wave™: Full-Stack Public Cloud Development Platforms, North America, Q2 2018” — but many enterprises note that its discounting program lured them to Azure and resulted in a multicloud strategy. These accounts were often early Amazon Web Services (AWS) users, heavily leveraging the platform for net-new development and ease of portal use. Such users have more recently moved some of their traditional enterprise applications to Microsoft Azure to use up the credit hours granted to them via negotiation.5 Google Cloud has been similarly leveraging Google Ads discounts as a way of drawing more enterprise workloads onto its platform.
  • Users. In highly distributed IT environments, developers and business users often make their own sourcing decisions. In many situations, preference alone leads to a multicloud strategy. At times, a centralized group takes over some or all environments to provide more cohesion. Rarely do such multicloud strategies transform into a single cloud story. Initial selection may be the result of early testing or of the role of the individual doing the early testing. Factors could include fit to a particular use case, a specific cloud provider’s geographic presence, or word of mouth. Less distributed groups may select multiple clouds to satisfy the demands of users they serve and gain credibility across business groups for delivering the desired capabilities.
  • Customers. The everyday B2C customer doesn’t care much about where you host your website or free software-as-a-service (SaaS) product. But B2B companies serving other businesses through digital platforms or a SaaS solution find that sourcing can make or break a deal. Not only is initial selection important, but in some cases, hosting that same workload on multiple clouds is critical (e.g., simultaneous multicloud). Customer opinion in these situations can be a result of: 1) industry stances, such as brick-and-mortar retailers avoiding AWS, given Amazon’s dominance in eCommerce, or 2) proximity to their primary platform to minimize latency and egress costs to their other apps.7
  • Partners that pick for you. Just as B2B clients enforce their choices, powerful partners may insist that you connect and work with them in a specific cloud platform that may differ from your primary cloud platform. The most cost-effective solution in this case is typically branching out into the preferred platform rather than switching primary providers — creating a multicloud scenario regardless of original intent.
  • Diversification of risk (in theory). Regulators and internal auditors push for single-vendor risk mitigation in the event of cost escalation or complete failure. The fear typically isn’t about a short-term outage but rather a massive pivot that puts a provider out of business or drastically changes its prices. Financial regulators have started to ask for this next-level redundancy, but the reality behind the ask seems to miss the mark. These “doomsday” scenarios are fear mongering at its worst, given that there are no examples of cloud providers increasing pricing and that the major providers are arguably more financially stable than the financial institutions themselves.8
  • Strengthening the power of negotiation (also in theory). Overconfident enterprises believe that having presence and experience on two platforms makes them more powerful in contract negotiation. Unfortunately, most enterprises don’t spend enough on a given platform to unlock favorable discounting. Splitting between multiple platforms further decreases their spend size. And to truly claim portability, you’d need to build for it. This driver makes sense for very large cloud consumers but is often not relevant for the average enterprise unless it’s considering heavily investing in a smaller player.

Get Clarity On The Types Of Multicloud

Before exploring the pains and remedies for various multicloud circumstances, first clarify the specific multicloud variations. At the most basic level, some scenarios leverage multiple platforms across a portfolio of workloads, and others do so for a single application or app ecosystem. Most assume portfolio-wide coverage or a large subset of applications. Top scenarios include:

  • Multiple clouds hosting different apps based on app characteristics. This model involves leveraging multiple cloud platforms for parts of your application portfolio due to different strengths in each platform or preferences of your developer or business users. Sourcing is decided on an app-by-app basis, looking at the organization’s users or characteristics as they map to the various services available. To speed up the process, enterprises create rules regarding characteristics of an application that make one choice suitable over another.
  • Multiple clouds hosting parts of an app ecosystem (hybrid app architecture). Some applications or app ecosystems are designed to leverage multiple platforms — both cloud and noncloud. This is often due to availability of a service from a specific cloud provider, to avoid cost escalation, to meet required regulation, or to satisfy preferences of those accessing the workloads. This can be architecturally challenging. Common examples include customer- or partner-facing websites with elements subject to regulation, or heavily opinionated customers with deep pockets, edge scenarios, and medical research. Although a hybrid architecture may help mitigate latency where it matters, e.g., local decisions for the edge, internet-of-things (IoT) devices, or the shopping experience on a retail website, it isn’t easy designing a network architecture that considers cost and acceptable latency for each connected element in the ecosystem.
  • One cloud, multiple deployment options, and similar operations using common APIs. For some, multicloud doesn’t mean multiple cloud platforms from multiple vendors but multiple deployment environments with the same APIs. The ability to choose where the environment lives is a bigger concern than vendor flexibility. Today, Azure paired with Azure Stack tells this story, and to a more limited extent, so does VMware Cloud Foundation paired with VMware on AWS.9
  • Multiple clouds being used simultaneously for a single app. Simultaneous multicloud is multicloud in its most extreme form. It entails running the same application simultaneously on multiple cloud platforms. This requires building the application identically on two or more cloud platforms. Due to complexity and cost, it’s very uncommon. Consideration is often limited to independent software vendors (ISVs) delivering SaaS solutions to highly opinionated clients with deep pockets and inflexible public cloud vendor preferences. Regulators in the financial services industry are trying to push for this to mitigate risk of “complete business failure on part of a cloud provider” and to avoid overdependence on a cloud provider. Regulators may ultimately find that it has the opposite effect. But, in practice, you’ll find few examples.

SOME ORGANIZATIONS FOCUS THEIR MULTICLOUD STRATEGIES ON BUILDING FOR PORTABILITY

The options outlined above describe infrastructure decisions for hosting an application, but for some, multicloud isn’t about today but rather about choice later down the road; they want flexibility of vendor choice in the future. If the circumstances change, they learn more about the platform, or the platform capabilities improve radically, will they have the portability to move their applications to other cloud platforms? This simple question has widespread implications. Cloud platforms aren’t the same, and portability between them will always require significant work.10 Use of application or developer services unique to the provider makes it even harder to move. Enterprises face a big decision about whether the loss of value and speed to market is worth this portability. Even those that choose portability identify areas for exceptions where they’re willing to accept vendor or platform lock-in, all in the name of value and speed. Those that choose portability use these approaches:

  • Leverage Kubernetes (K8s) for abstraction. Developer platforms, like Cloud Foundry and OpenShift, segregate app and infrastructure layers, often through K8s, while discouraging the use of services specific to a cloud platform. It’s then up to your own consumption and vendor selection to ensure that this decoupling from the platform continues. Amazon, Google, Oracle, and Rackspace all have their own managed K8s flavors on their platforms, which provide ongoing management (e.g., day 2) support, but to remain unattached, you’ll need to continue to limit the use of easily accessible services unique to the platform.
  • Leverage a management tool for abstraction. Through their portals and the use of proprietary template patterns, hybrid cloud management tools (e.g., RightScale, Scalr, and VMware vRealize Suite) also decouple the app and infrastructure layers while discouraging the use of services specific to a cloud platform. This allows for changes in sourcing decisions later. Using other templates or orchestrating elsewhere requires discovery and conversion to gain full management control over those launched resources.
  • Write for the “least common denominator” or “if-then” logic for each platform. Taking this alternative approach to decoupling is ill advised. If you choose to take that path, you must limit yourself to solutions consistent between providers or plan logic for multiple platforms. Early attempts at multicloud did just this. Limiting to the basics enabled conversion tools to easily convert to an alternative offering. Building for each platform, with logic for each, translates to bulky, inefficient code that’s time-intensive to create and maintain (a brief sketch contrasting the two styles follows this list).
  • Seek vendor-neutral app and dev services (that don’t currently exist). There’s a belief that a market of third-party services will emerge to deliver consistent app and developer services on all the major cloud providers. The vision is giving value regardless of location, with a responsible party taking on the development, updates, and stack management. Cloud Foundry’s ecosystem approach showed early promise but little momentum. The market is anticipating the release of multicloud marketplaces, but it’s still just a concept.11
  • Minimize the impact of the lock-in. Some enterprises willingly opt in on lock-in if they can minimize the required operational time through a managed version of an offering. Smart practitioners follow two simple rules: 1) Limit yourself to services that deliver a managed version of a technology that’s highly common elsewhere (e.g., Kubernetes or SQL) and 2) limit yourself to services that don’t impact the normal running of the application (e.g., DBaaS or monitoring).12 With these guidelines, they get the value they seek without a painful rework.
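
To make the trade-off above concrete, here is a minimal, hypothetical Python sketch contrasting per-platform “if-then” branching with a thin driver abstraction. Every provider name, size map, and command string below is an illustrative assumption, not a real SDK or CLI call.

```python
# Illustrative sketch only: per-platform "if-then" logic vs. a thin abstraction.
# Provider names, size maps, and command strings are hypothetical placeholders.
from abc import ABC, abstractmethod

SIZE_MAP = {
    "aws":   {"small": "t3.small",     "large": "m5.xlarge"},
    "azure": {"small": "Standard_B1s", "large": "Standard_D4s_v3"},
    "gcp":   {"small": "e2-small",     "large": "n2-standard-4"},
}

def create_vm_branching(provider: str, name: str, size: str) -> str:
    """The "if-then" approach: every platform gets its own branch.

    Each new provider or API change multiplies the maintenance burden,
    which is why this style turns bulky and time-intensive."""
    if provider == "aws":
        return f"aws run-instances --name {name} --type {SIZE_MAP['aws'][size]}"
    elif provider == "azure":
        return f"az vm create --name {name} --size {SIZE_MAP['azure'][size]}"
    elif provider == "gcp":
        return f"gcloud instances create {name} --type {SIZE_MAP['gcp'][size]}"
    raise ValueError(f"unsupported provider: {provider}")

# The decoupled alternative: application code targets one interface, and
# provider specifics live behind a driver, mirroring (at toy scale) what
# K8s or a hybrid cloud management tool does for real workloads.
class CloudDriver(ABC):
    @abstractmethod
    def create_vm(self, name: str, size: str) -> str: ...

class AwsDriver(CloudDriver):
    def create_vm(self, name: str, size: str) -> str:
        return create_vm_branching("aws", name, size)

def provision(driver: CloudDriver, name: str) -> str:
    # The caller never sees provider details, so sourcing can change later.
    return driver.create_vm(name, "small")

print(provision(AwsDriver(), "demo-vm"))
```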

When Multicloud Isn’t The Answer

Being multicloud in some form is the obvious solution for most companies, but there are exceptions.13 When they evaluate the economics, logistics, or partner relationships, multicloud simply doesn’t make sense for some situations. Some fringe cases consider this on a macro level, disregarding the value of micro-level efficiencies of app-by-app sourcing. The US Department of Defense’s JEDI proposal is one such example.14 These companies intentionally limit platform types, accepting high levels of lock-in in the name of simplicity. For other companies, the micro-level efficiency is missing. When they look at the logistics and specifics of an app, app ecosystem, or relationship, multicloud escalates costs for them. When you believe your multicloud options are limited:

  • Determine whether this is opinion or fact. Theory is great, but if the numbers don’t work out, it’s useless. Before diving too far into the decision about your multicloud or single-platform approach, do testing to determine if performance or resource investments play out favorably in that model.
  • Define the scope of the limitation. Most limitations aren’t total. Does the limitation apply to a single app’s hosting decision, to a single team, or to apps that connect to a certain data set? Or does it represent a corporate-wide need? Sometimes this is easy to discern; for example, when your .NET users want to use Azure and a separate team is already heavily using AWS and its many app and dev services. But at times, it’s harder to define, especially if there are cost, latency, or compliance implications to a multicloud approach.
  • Push the limitations to see if you can overcome the barriers. Creativity and problem solving are key in the cloud era. If you’re experiencing a very real pain point blocking the desired multicloud strategy, explore whether it is truly a barrier or simply a hurdle that requires flexibility. Forrester has seen many “barriers” break away — steadfast capital expenditure budget preferences may dissolve; patient records might get nonidentifying codes; inspirational leaders may be able to turn around hard-headed, long-time employees; or bursting could happen, with extensive changes, at the right moment.

MULTICLOUD MAY NOT BE PLAUSIBLE

Creating two isolated cloud environments across two or more cloud platforms has few barriers other than the increased burden of more vendors and the requirement of more skills. But as these environments connect with a data set, an app, or an app ecosystem splitting across these platforms, challenges can arise. At times, these challenges make multicloud implausible:

  • Painful networking charges. Each of the cloud providers has a different way of charging for networking usage. For example, AWS doesn’t charge to move data onto its cloud or inside an availability zone (AZ), but any traffic that moves from an AZ to other AZs, regions, platforms, or public areas incurs a cost of $0.01 per gigabyte. Many companies don’t realize the amount of traffic that moves between different tiers of an application and have been shocked to find that network costs are higher than any other part of the bill (a back-of-envelope sketch of how these charges accumulate follows this list).
  • Problematic latency and bandwidth. Application services and data are no longer separated by feet but possibly by thousands of miles. Communication times can grow by orders of magnitude and make application experiences painfully slow.15 In addition, 10, 40, and 100 GbE connections found in data centers don’t exist as readily outside of them. The bandwidth is throttled by competing traffic from other customers in the data center or across WAN connections. Network traffic can be held in buffers or slowed down to accommodate the limited bandwidth, which adds more latency to application communication.
  • Different types of tools. The public cloud providers allow customers to bring their own networking and security services onto their compute platforms, but that comes at a cost. The providers offer free native versions in the hope that customers will choose them and weave them into the application, for example through proprietary API calls. Writing code against a particular cloud service makes it difficult to develop or move an application to a different public cloud platform.
  • Double the work (or more) with half the productivity. Manufacturing in the airline industry strives toward a lean supply chain to streamline processes by eliminating waste and non-value-added activities such as vendor maintenance. Similarly, organizations setting up new instances on new platforms will repeat the same activities — such as IP address management, security profiles, and account management — for little added value through business continuity or redundancy.
  • Cost-prohibitive disaster recovery. Enterprises may leverage multiple cloud data centers; availability zones; or, in theory, multiple cloud platforms to provide business continuity for all or a portion of their applications. In practice, achieving active-active, even in a single cloud provider’s availability zones or regions, is too expensive for most workloads, let alone creating versions and maintaining them on multiple cloud platforms. Although the theory allows a range of disaster recovery approaches, cost and time create real limitations that make a disaster recovery plan less tangible if it stretches across clouds.
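
As a rough illustration of the networking-charge point above, the sketch below multiplies an assumed cross-AZ traffic volume by the $0.01-per-gigabyte rate quoted in the list. The daily traffic figure is an invented example, not a benchmark.

```python
# Back-of-envelope estimate of inter-AZ transfer charges for an app whose
# tiers are split across availability zones. The per-GB rate comes from the
# text above; the daily volume is an assumed example.
RATE_PER_GB = 0.01  # USD per GB crossing an AZ boundary

def monthly_transfer_cost(gb_per_day: float, rate: float = RATE_PER_GB) -> float:
    """Estimate a 30-day bill for cross-AZ application traffic."""
    return gb_per_day * 30 * rate

# Web and database tiers exchanging 2 TB (2,048 GB) per day across AZs:
print(f"${monthly_transfer_cost(2048):,.2f} per month")  # -> $614.40 per month
```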

Your Strategy Is Multicloud — Now What?

Knowing that their strategy will leverage multiple clouds doesn’t answer many questions for I&O professionals. Get specific about your multicloud plan and goals before jumping into decisions about public or private cloud and complex sourcing algorithms for determining the fate of your application portfolio. What greater purpose does your cloud strategy serve? What specific efficiencies — process and resource utilization — do you seek for your company, and why are those important? This report should help you decide which multicloud approach you’re targeting and which you deem too painful to implement. Here are some of the early questions you’ll be asking:

  • Is the pain worth the gain? For most companies, a basic multicloud strategy is the clear answer — the value outweighs the cost without need for significant cost analysis. If you’re considering more involved multicloud strategies or a radical single-platform environment to serve all apps, you have some significant work ahead. Rarely is this an easy yes-or-no question. You’ll need due diligence to creatively overcome the barriers or cost escalators.
  • Are we over-architecting our multicloud strategy? You may be overly complicating your cloud strategy, building in significant inefficiency without reason. There’s no problem with the most simplified approach to multicloud. Those contemplating a more complex multicloud strategy are doing so because they have no other choice. An executive, a regulation, latency, or customer demand is forcing their hand toward a harder version of multicloud. If that’s not your situation, go with the easiest path.
  • Is this the moment for change? Too much change at once can spiral costs, and your business, out of control. However, there are moments when you should evaluate more radical change. These “change moments” typically occur when contracts end, massive refreshes are pending, new leadership enters an organization, or staffing radically changes. Change moments occur when it’s more tolerable to pivot direction due to compelling cost avoidance figures, higher appetite for change, or dire need. If your organization is planning to build a new data center, refresh a colocation contract, reduce tech organization staff size, hire cloud skill sets, replace C-level leadership, or undergo a massive infrastructure refresh, it’s worth evaluating a more radical approach. Determining whether this is your moment of change may determine whether you should consider a more involved multicloud or an even more radical idea — a single-platform strategy.

SECURE HYBRID CLOUD: THE STRATEGIC APPROACH TO ENTERPRISE IT

Executive Summary

The enthusiasm for hybrid cloud as an ideal structure for IT environments belies a complicated decision-making process around locations for various types of compute workloads and data stores. Though it may seem that today’s enterprises have more choices than ever for where to host their applications, some workloads must remain on-premises for reasons related to data control, security, compliance and performance. At the same time, competitive pressures are pushing businesses to be more customer-responsive by taking advantage of the perceived scalability, flexibility and agility afforded by off-premises IT architectures. Enterprises must focus on business outcomes while deploying workloads and data in a way, and in a location, that ensures security and integration across increasingly distributed environments.

Key Findings

  • Workload placement is a critical factor in maximizing the value of IT environments. As markets and technologies have matured, the choice of where to deploy data and applications has evolved as well. More than two-thirds (68%) of companies making strategic IT investments view hybrid IT and integrated on-premises/off-premises cloud environments as their default approach.
  • Public cloud is not a panacea. Many enterprises have already migrated the applications and business functions that are ‘low-hanging fruit’ for off-premises deployment: productivity suites and customer relationship management systems, for example. But factors that may prevent migration include security and data protection, performance and cost.
  • Hybrid cloud is the new normal. Data from 451 Research’s Voice of the Enterprise service shows that hybrid cloud environments encompassing both on-prem and off-prem venues are the direction of travel for most organizations. Cloud transformation is occurring both within and outside the datacenter, and IT decision-makers plan to increase their use of both in the coming years.
  • Security must be baked into IT evolution plans. Maintaining security is paramount when migrating and refactoring IT systems to take advantage of the wealth of destinations available. Federation of identity and access management across public and private clouds is critical, and encryption of data is necessary to ensure that increasingly distributed systems remain tamper-proof while IT estates are upgraded and modernized to seize the possibilities of the next 20 years.

Cloud Adds Value When Data and Applications are Placed Where They Make the Most Sense

For most organizations, moving at least some applications and data to the cloud is not a matter of if, but when and why. The perceived benefits of lower cost, easier infrastructure management, and faster and more flexible provisioning ushered in a wave of business and IT transformation not seen since x86 virtualization made its appearance more than 20 years ago. As the market and technology have matured, however, businesses are changing their strategies.

In the past several years, cloud adoption has moved from being the province of early adopters into the mainstream. In many cases it began as a bottom-up phenomenon, with individual business units implementing ‘shadow IT’ – applications developed on platforms provisioned with the swipe of a credit card – to effect outcomes that made other departments (and IT management) take notice.

But the initial rush to cloud was not without complications and risks. Deployments that were impressive at small scale and in isolation created unacceptable exposure when moved into production, and establishing connections with on-premises data stores – in many cases the most valuable and differentiating IT assets in the organization – opened businesses to significant risk. Companies that were initially happy to lift and shift applications and data to the cloud soon learned that this approach, if applied indiscriminately, could be costly, complex and disruptive. This did not in itself make the organization more agile and flexible, nor did it necessarily make the applications more resilient or available.

The fact is, many workloads simply cannot or should not make the transition to cloud. Custom-built applications with core business dependencies are often mission-critical, especially in industries such as banking and insurance. These on-premises systems may be foundational, and abstracting away the underlying infrastructure would compromise the business itself. Workloads that require low-latency access to on-site data, such as financial services systems that need to process transaction details to and from customer accounts, are too sensitive for off-premises deployment; the business will rarely accept the increased risk in moving these apps and data off-premises. In all these cases, compliance demands – whether regulations restricting the geographic distribution of data, or industry- or company-specific rules to ensure consumer information is protected – must be met to preserve access to lucrative markets.

The combination of these pressures – increasing business agility with cloud while maintaining on-premises control of sensitive data and regulated workloads – has led to the dominance of hybrid cloud as a key enabler of modern IT systems. Enterprises have accepted the idea of incorporating as-a-service infrastructure, platforms and software into their IT estates, but they need to do so in a selective, disciplined and secure way. This is reflected in IT spending priorities; digital transformation is the top spending focus for 2019, and cloud is a key enabler of this transformation.

Enterprise buyers are also looking to improve customer engagement and automate business processes to become more responsive to markets and opportunities. These initiatives tend to be part of cloud transformation efforts in a bid to migrate applications that support the business but are not critical to the core. These are also the areas where software-as-a-service offerings are selected. New app development and proofs of concept are also likely to start in cloud environments.

However, note that the second spending priority in the figure above is to upgrade or refresh existing IT, much of which is likely on-premises and will remain there for the foreseeable future.

Among digital leaders – companies that are already executing on or strategizing their IT investments based on digital transformation – 42% are allocating more than half of their budgets on IT initiatives to grow or transform the business itself, and 68% view hybrid IT and integrated on-premises/off-premises cloud environments as their default strategic IT approach.

Challenges with an Exclusive Public Cloud

Although public cloud providers highlight customers that are going ‘all in’ on their platforms, these deployments are exceptions to the rule. Providers may position public cloud as a route to business agility, but the experience of large enterprises migrating applications and data to cloud justifies caution.

Many companies have already targeted applications for cloud migration: top candidates include email and document creation apps and systems of engagement such as customer relationship management and marketing platforms. Once these workloads have moved off-premises, however, continuing transformation becomes much more difficult.

IT decision-makers cited several high-stakes factors that prevent them from moving workloads to the public cloud, including security and data protection (including privacy), performance and cost.

Security and data protection. Public cloud SLAs may guarantee the security of the infrastructure, but it is up to the customer to secure applications and data. If a public cloud security breach does occur, any compensation from the provider will likely pale in comparison to the customer’s lost revenue, damaged reputation and regulatory fines. Enterprise stakeholders responsible for protecting a business’s valuable intellectual property want to maintain strict visibility and control of the data, and in fact, restricting the physical movement of data is a top requirement of government and industry privacy standards.

Performance. Public cloud providers tout the high availability of their services, but performance and latency issues continue to crop up. Few enterprises are willing to stake mission-critical operations on best-effort internet connections, and while high-speed direct connections can be provisioned, they come at additional expense. Customers have come to expect instantaneous access to their applications and data, but ‘cloudifying’ workloads in a way that increases the distance between source data and processing power can introduce unacceptable latency. Similar hang-ups can occur when application integrations need to be improvised as workloads are relocated, or when choke points develop due to inadequate provisioning or misconfigured policy engines.
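
A simple back-of-envelope calculation shows how quickly distance-induced latency compounds. The sketch below assumes light travels through fiber at roughly 200 km per millisecond and that a transaction makes several sequential round trips; both figures are illustrative assumptions, not measurements.

```python
# Rough lower bound on latency added by moving data away from compute.
# Light in fiber covers roughly 200 km per millisecond (~2/3 of c); real
# networks add routing, buffering, and congestion on top of this floor.
SPEED_IN_FIBER_KM_PER_MS = 200

def added_latency_ms(distance_km: float, round_trips: int) -> float:
    """Minimum extra delay when data sits `distance_km` from the app."""
    one_round_trip_ms = 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS
    return one_round_trip_ms * round_trips

# App and data 3,000 km apart, 20 sequential calls per user transaction:
print(f"{added_latency_ms(3000, 20):.0f} ms added")  # -> 600 ms added
```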

Cost. Ironically, cost has been both a top driver and a top inhibitor to cloud adoption. In the early stages, easy access to cloud technology and lower costs caused users to consume more. Although unit prices remained low, total spending increased. The convenience of consuming public cloud infrastructure exclusively encourages sprawl and waste; orphaned resources and overprovisioning can add up to unexpectedly high bills. Storing data in the cloud looks like a bargain until customers need to access, move or remove it, when bandwidth charges come into play.

These factors can’t be considered in isolation, and in fact, they should be adjusted in relation to each other for the sake of price and performance engineering. Enterprises are willing to pay more for more resilient and secure workloads that make up critical applications while building in flexibility for systems that can tolerate occasional downtime. Such decisions require assessment of the entire IT estate, service interdependencies, and regulatory and policy needs. IT and business decision-makers require different hosting environments for different workloads, but at the same time, they need to be able to secure, manage, integrate, govern, scale, deploy and update across multiple environments, and do so seamlessly and with confidence. There is no single solution that works across the board for all businesses.

Hybrid Cloud is the New Normal

451 Research’s Voice of the Enterprise data underscores the prevalence of hybrid IT – meaning an integrated combination of on- and off-premises resources – as the direction for strategic IT (Figure 4). Behind this aggregate view is a more nuanced story. Not surprisingly, hybrid is the preferred (or in effect, default) approach for a greater proportion of large enterprises with more than 10,000 employees (69%) and government/education organizations (73%), while those going ‘all in’ on public cloud are more likely to be small organizations with fewer than 250 employees (27%).

The challenge of creating a secure, integrated hybrid environment is considerable, yet companies are pursuing it as a way to get the best of both worlds: the control and performance of on-premises IT with the pay-as-you-go offerings of public cloud. Large, multibillion-dollar enterprises are looking to modernize their IT estates and deliver services globally, complying with various regulations without having to maintain datacenters in each location. This requires security to be baked into the environment rather than applying it via perimeter hardening.

Motivations for using multiple infrastructure environments highlight the benefits of on-premises and off-premises deployments (Figure 5). The primary factor – improving performance and availability – cuts both ways: popular use cases for public cloud include backup and disaster recovery to ensure availability, but performance concerns may necessitate keeping applications on-premises for quick access to on-site data. The same dual justification goes for the second reason: optimizing for cost. Keeping frequently accessed data stores on-site can save money in the long run, but moving batch workloads to cloud offers the financial advantage of being able to scale up and scale down costs as needed.

Other factors point more directly to either on- or off-premises environments. Isolating sensitive business data and meeting data sovereignty requirements are common justifications for keeping data and applications on-premises, whereas adding new functions and adding geographic diversity (using content delivery networks) are common benefits of public cloud.

Conclusions

One size does not fit all when it comes to workloads and data hosting. Digital transformation requires a flexible approach to deploying workloads and data in a way, and in a location, that optimizes security, integration, flexibility, management, and agility, whether on- or off-premises or both.

Hybrid cloud environments encompassing both on-prem and off-prem deployments are clearly the direction enterprises are taking. Cloud transformation is occurring both in the datacenter and off-premises, and IT decision-makers plan to increase their use of both in the coming years.

Analyzing Outcomes Delivered by Modern Multicloud Storage Environments Optimized for Next-generation Workloads

Executive Summary

Many of the problems organizations face today are related to data. Most organizations have too much data, which is growing too quickly and is siloed and difficult to consolidate. These challenges create an “insight gap,” where organizations are unable to adequately analyze their data and thus capitalize on its value. Traditional methods of data analysis are not sufficient for many petabyte-scale organizations. The promise of self-optimizing analytics powered by artificial intelligence and machine learning offers a path forward, but many organizations don’t know where to start.

While unlocking intelligence is one common data problem, another is enabling innovation. An organization’s data is the perfect testbed for application developers and database administrators. By working with actual data in a development environment, they can better debug errors, predict production performance, and identify optimizations earlier in the development lifecycle. However, many organizations can only provide developers with dummy data sets which vary significantly from a company’s production data. This creates issues and uncertainty in the development lifecycle and slows down innovation.

The topic of data residency brings up still more issues. Organizations today have numerous options available to them when it comes to storing their data. There are a host of on- and off-premises solutions and services, all with different and shifting cost-benefit profiles. However, many organizations are unable to migrate data in an agile manner to ensure it is located for optimal performance at the lowest possible costs, and that if requirements change, the organization is not prohibitively locked in to the platform choice.

Many organizations face a combination of several or all of these data problems. Whatever their range and extent, such problems will invariably combine to diminish the value of an organization’s data. There is an imperative to implement solutions to these data problems. To help, IBM has developed a vision for organizations: to implement hybrid multicloud-enabled storage infrastructure that modernizes traditional workloads and is optimized to run next-generation workloads, enabling them to operate as dynamic ‘data-driven’ enterprises. The collection of characteristics that determine whether an organization has achieved this vision are collectively referred to as Storage Maturity in this report. Research conducted by ESG strongly validates the premise that organizations that have taken the steps prescribed by this view of Storage Maturity are better positioned to harness the power of their data and to enjoy a competitive advantage over their peers.

Defining a Vision for Storage Maturity

Storage Maturity can have different meanings to different organizations, but to apply a consistent, data-driven model, ESG had to formulate concrete characteristics against which organizations could be assessed. Ultimately, ESG developed a three-pillar model for assessing Storage Maturity that we believe objectively considers organizational characteristics that are both unbiased and broadly applicable to organizations today:

  • Data-ready infrastructure—This pillar relates to the ability of the organization’s infrastructure to store, manage, and perform at a level required by a modern, data-centric organization. Within this pillar, ESG assessed an organization’s propensity to utilize high-performing flash storage to power on-premises workloads and deploy software-defined solutions that pool storage resources and abstract management capabilities into a single view. Organizations with both characteristics have data infrastructures that combine ease of management at scale with high performance, a suitable foundation for Storage Maturity.
  • Strategic reuse of secondary data—This pillar relates to the ability of the organization’s storage to support analytics and application development initiatives. Within this pillar, ESG assessed whether the organization can supply near-production copies of company data for analysts and application developers to work with. Organizations supporting these constituencies enable innovation by using storage infrastructure for more than just data retention, an appropriate aspiration of a mature storage environment.
  • Workload and data portability—This pillar relates to the ability of the organization to migrate data and workloads to a variety of platforms based on the requirements and the organization’s goals. Within this pillar, ESG assessed whether the organization has containerized legacy applications and/or developed cloud-native applications from the ground up. Going one step further, ESG measured the frequency with which organizations are migrating workloads to different on- and off-premises environments to capitalize on temporary advantages or satisfy a changing requirement. Organizations with a high degree of data and workload portability are likely to be operating a highly flexible, cost-optimized, multicloud environment.

The Current State of Storage Maturity

ESG’s three-pillar model segmented survey respondents into four different levels of Storage Maturity based on their responses to survey questions related to their infrastructure’s data readiness, enablement of data-intensive workloads, and data portability.

Respondents earned between 0 and 100 maturity points based on their responses to these questions. ESG rated respondents scoring in the bottom quartile (0-25 points) as Level 1 or Laggards, respondents scoring in the second quartile (25.5-50 points) as Level 2 or Followers, respondents scoring in the third quartile (50.5-75 points) as Level 3 or Explorers, and respondents scoring in the top quartile (75.5-100 points) as Level 4 or Leaders. See Appendix II: Criteria for Evaluating Respondent Organizations’ Storage Maturity to review the full list of dimensions of Storage Maturity on which ESG evaluated respondents.
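
As a minimal illustration of the bucketing just described, the sketch below maps a 0-100 maturity score onto ESG’s four named levels using the quartile boundaries quoted above. It is a reconstruction for clarity, not ESG’s actual scoring code.

```python
# Map a 0-100 Storage Maturity score onto ESG's four levels, using the
# quartile boundaries quoted in the text (0-25, 25.5-50, 50.5-75, 75.5-100).
def maturity_level(points: float) -> str:
    if points <= 25:
        return "Level 1: Laggard"
    elif points <= 50:
        return "Level 2: Follower"
    elif points <= 75:
        return "Level 3: Explorer"
    return "Level 4: Leader"

for score in (12, 40, 66, 88):
    print(score, "->", maturity_level(score))
```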

ESG’s analysis found that very few IT organizations have achieved enough progress across enough criteria to be classified as Leaders, as defined by this maturity model. Just 13% of respondents surveyed provided answers about their organizations that resulted in a score in the top quartile. The vast majority of respondents’ organizations fell into either the Follower (42%) or Explorer (32%) categorizations, showing progress in some Storage Maturity characteristics, but with additional advancement needed. Mirroring Leaders, ESG rated just 13% of respondent organizations Laggards in terms of Storage Maturity, falling short on many—if not all—of the criteria included in ESG’s model.

The Importance of Storage Maturity

Why does Storage Maturity matter? Simply put, ESG found that organizations earning a Leader designation reported the best results across many key performance indicators (KPIs) and characteristics, including: business success, IT operations effectiveness, achievement of multicloud agility, and advancement of artificial intelligence initiatives.

Moreover, the upward trend observed across maturity levels was extremely consistent across the broad spectrum of KPIs included in the research. While the differences noted in KPIs are the greatest when comparing Laggard and Leader organizations, ESG observed that KPIs incrementally improved across each level in the spectrum.

Improved Business Outcomes

Ultimately, IT exists to support the business. If there are activities that IT can undertake to improve business outcomes, then those activities are worthwhile. In ESG’s research, organizations in alignment with the principles laid out in the Storage Maturity model— those designated as Leaders—consistently reported the highest degree of business performance across all metrics included in the research. In short, there is a strong correlation between organizations that have achieved storage Leader status and the most successful organizations in the market.

Increased Maturity Leads to More Actionable Business Strategy

Organizations are awash in data: sales activity, customer support records, employee engagement metrics, market research, among others. Each of these data sources contains practical information about the state of a company: sales activity can show which sales region or product is performing best, customer support data can provide a glimpse into customer satisfaction, and employee engagement metrics can show which teams and managers are operating most effectively. However, one of the most strategic use cases for a business’s data is to accurately predict changing market dynamics like which new markets will develop, what new product launches will be successful, or which new features will be broadly adopted by users. When ESG asked respondents how successful they felt their companies were at using data to predict these changing market dynamics, the now-familiar pattern held, with reported success rising at each level of Storage Maturity.

Increased Maturity Means More Digital Enablement

For many organizations, there is no bigger business imperative than increasing digitization. Digital initiatives can vary widely. For one organization, a digital initiative may mean transitioning from a predominantly brick and mortar customer experience to a greater reliance on ecommerce. For another, it may mean enhancing digital marketing and lead nurture capabilities. Still others may be focused on entirely new digital services and subscriptions supporting net-new business models. Regardless of how ambitious these initiatives are, it often falls to IT to support and enable them.

ESG’s research shows a distinct correlation between Storage Maturity and organizational digitization. Respondents from Leaders were over four times as likely as Laggards to report that more than 10% of their organization’s revenue was driven from newly developed digital channels that did not exist two years prior (81% versus 17%). Moreover, Leaders anticipate digital revenue to grow at over three times the rate of Laggards year-over-year over the next three to five years (41% versus 13%).

Increased Maturity Means IT-fueled Profitability

IT organizations and executives are often frustrated by the perception that IT is a cost center for their organizations. IT is a critical component of business operations, with revenue-generating employees relying on IT systems and services to be productive. For many organizations, IT often functions as the innovation engine of the organization, supporting new services and finding new ways to deliver offerings to employees and other end-users. In ESG’s research, Storage Maturity was positively correlated with the IT organization’s ability to make a more dramatic impact on the business. In fact, IT organizations at Leaders were eight times more likely than Laggards to operate with a very positive return on investment.

Given the fact that business benefits delivered by IT at Leaders are much more likely to significantly outweigh costs, it is not surprising to observe that Leaders were three times more likely to expect their organization to beat their annual profitability goals in 2018 than Laggards (62% versus 19%). ESG believes the positive business impact by IT organizations at Leaders is a major contributing factor to their overall bottom-line success.

Enhanced IT Effectiveness

While the correlations that exist between Storage Maturity and positive business outcomes are consistent and numerous, ESG also observed many ways in which Storage Maturity and IT capabilities trend in the same direction. In many ways, these correlations are even more noteworthy as it is more likely that they are the result of a causal relationship—that the maturity of the organization’s infrastructure, workload capabilities, and workload portability directly cause positive IT performance.

Increased Maturity Helps Organizations Lead on Innovation

Complex, legacy IT environments are difficult and costly to maintain. More importantly, the amount of time, effort, and budget they require can preclude IT from other more strategic projects like cloud migrations, data center consolidations, and application modernization efforts. However, scalable, software-defined, highly virtualized infrastructure—all managed through a single pane of glass—can free up both staff and budget resources to advance these other initiatives. By freeing up staff and dollars from infrastructure management, organizations can sharpen their focus on innovation.

Leaders represent this mature type of environment well, with a high rate of adoption of simple, scalable flash storage and a large portion of their storage infrastructures virtualized—allowing a complex heterogeneous storage footprint to be managed as a single pool of resources from a single console. Thus, it is not surprising to note that Leaders are able to allocate an incremental 10% of their annual IT budget to next-generation workloads compared to Laggards, which spend 60% of their budget maintaining legacy applications.

Moreover, Leaders as a group agree they are getting value from their ability to allocate more of their budget to innovation. ESG asked all respondents to describe how much progress they’ve made leveraging IT resources to speed product innovation and time to market. Respondents at Leader organizations were more than four and a half times as likely as those at Laggard organizations to describe progress as “excellent.”

Innovation Is the Precursor to Private-cloud-driven Efficiency

As noted, Leaders can allocate significantly more of their budgets to supporting next-generation workloads. Part of that means they can spend more on application development and modernization. But it also means they can spend more on the infrastructure that sits underneath those modernized applications. Given this increased level of investment in next-generation infrastructure, it would be logical to assume Leaders have made greater advancements in private cloud adoption. That is, it would be logical to assume that more of their on-premises infrastructure is highly virtualized, scalable, and elastic and that end-users are able to provision resources in a self-service manner with usage-based tracking. ESG was able to test this assumption in the research. ESG asked each respondent what percentage of on-premises workloads they run on physical servers, on virtual servers managed in a traditional manner, or on true private cloud infrastructure that mirrors public cloud service offerings. Respondents at Leaders reported running more than twice as many of their on-premises workloads on scalable, elastic, and dynamic private cloud infrastructure as respondents at Laggard organizations.

ESG believes the fundamentally different, more agile infrastructure environments present at Leader organizations in turn play a significant role in those organizations’ ability to launch workloads to their production environments ahead of schedule. ESG asked respondents what percentage of all production workload launches in the past two years had been completed ahead of, on, or behind schedule. Leaders, thanks in no small part to their private cloud investments, reported that 34% of workload launches had been completed ahead of schedule, on average. By contrast, Laggards reported that just 13% of launches had been completed ahead of schedule, on average.

IT’s Innovation and Efficiency Drives Line of Business Satisfaction

Ultimately, IT’s charter is to support the business, to give employees the tools and technology to do their jobs effectively. In many ways, the satisfaction of line of business employees is the true test of how effective IT is. This is a test Leaders pass with flying colors. When ESG asked respondents how satisfied the line of business end-users at their organizations are with the applications and IT services they are provided with to perform day-to-day business tasks, 60% of respondents at Leaders said, “extremely satisfied.” That represents a fifteen-times multiple over the frequency observed among Laggard organizations.

Zeroing in on Storage KPIs

ESG’s research into Storage Maturity would be severely lacking if it did not include an assessment of how Storage Maturity is correlated to storage-specific KPIs and attitudes. ESG assessed a broad set of these metrics in its research and observed universally positive correlations between KPI performance and Storage Maturity.

Tactically, Maturity Leads to Productivity and Execution

ESG’s survey included a question on the organization’s total storage capacity. It also asked respondents to report how many full-time equivalents were employed by their organization to administer storage. By looking at the average ratio of these two data points in each level of Storage Maturity, ESG was able to derive a metric for administrator productivity across Laggards, Followers, Explorers, and Leaders: average number of TBs per storage administrator. Not surprisingly, Leaders reported the highest level of productivity, reporting more than twice as many TBs under management per administrator as their Laggard counterparts.
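
The productivity metric is simply capacity divided by headcount. The sketch below recomputes it with invented sample numbers, since the report does not publish the underlying figures; only the more-than-2x gap between Leaders and Laggards comes from the text above.

```python
# TBs under management per storage administrator: total capacity / FTEs.
# Sample inputs are invented for illustration; only the >2x Leader-vs-Laggard
# gap is taken from the research above.
def tb_per_admin(total_capacity_tb: float, storage_ftes: float) -> float:
    return total_capacity_tb / storage_ftes

print(tb_per_admin(12_000, 10))  # Leader-like ratio:  1200.0 TB per admin
print(tb_per_admin(4_500, 9))    # Laggard-like ratio:  500.0 TB per admin
```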

Administrator productivity and efficiency is likely a contributing factor to higher organizational confidence in the storage functional group. ESG asked all respondents how confident they were in their organizations’ ability to execute major storage-related projects like new application deployments, new/extended array deployments, technology refreshes, data migrations, etc. Respondents at Leaders were three and a half times more likely than their counterparts at Laggards to report they were fully confident in the IT organization’s ability to execute.

Storage Maturity Helps Optimize Strategic Initiatives – Analytics, Application Development

For many organizations, storage is no longer just about data retention and protection. Storage is seen as a resource with the potential, if not mandate, to support other strategic initiatives. Respondents at Leaders are much more likely than those at other levels of Storage Maturity to feel their storage resources do a good job supporting these endeavors. Nearly three-quarters of respondents at Leaders (72%) reported that storage and data services support analytics projects “very well,” far outstripping the rate reported by Laggards (18%). Similarly, 67% of respondents at Leaders reported that storage and data services support application development initiatives like DevOps “very well,” compared to 13% of Laggards. For IT organizations and storage stakeholders looking to maximize their relevance to business strategy and strategic imperatives, optimizing Storage Maturity will help highlight the value of storage resources.

The Bigger Truth

Based on ESG’s research, Storage Maturity Leaders are the exception, not the rule—87% of the market has significant work to do to attain a Leader designation. However, this research shows that incremental benefits can be achieved by taking steps to move up the maturity curve: Followers outperform Laggards and Explorers outstrip Followers.

If you are interested in improving your organization’s standing against the benchmarks laid out by ESG, it is important to understand the criteria we used to assess Storage Maturity, as well as the actions you can take to improve your rating.

  1. Leaders actively refactor legacy applications and develop cloud-native applications from the ground up. Ninety-five percent of Leaders in this research have containerized one or more legacy applications compared to just 2% of Laggards. Similarly, 97% of Leaders have developed one or more cloud-native applications from the ground up versus 5% of Laggards. By adapting and developing applications that can take advantage of multicloud deployment models, organizations reduce the friction of shifting those workloads from one cloud environment to another. In fact, the majority of Leaders report they very frequently migrate workloads from cloud to cloud to capture a temporary advantage (e.g., lower cost) or to satisfy a temporary requirement (e.g., a traffic spike). Not a single respondent from Laggard organizations reported this level of workload agility.
  2. Storage Leaders have placed strategic bets on next-generation infrastructure like all-flash arrays and storage virtualization. Ninety-eight percent of all Leaders support on-premises applications with flash storage compared to 26% of Laggards. Furthermore, 99% of all Leaders (versus 23% of Laggards) have deployed storage virtualization technology that allows storage management to be abstracted from the infrastructure and the underlying storage to be managed as a single pool of resources.
  3. Storage Leaders have mature DevOps and analytics initiatives underway, and these initiatives are supported by progressive uses of secondary storage. Eighty-four percent of Leaders have analytics initiatives underway that use data to develop and refine business processes over time compared to just 4% of Laggards. Nearly half (49%) of Leaders describe their DevOps adoption as extensive versus 0% of Laggards. Moreover, more than four out of five Leaders report they can use near-production copies of their data to run analytics on and to use in application development/testing compared to less than one-third of Laggards.

Leaders run data-intensive workloads on data-ready infrastructure, and they have a high degree of flexibility enabled by the application portability unlocked by containerization. As a result, they can run IT more effectively, positively impact business outcomes, and even capitalize on early gains delivered by AI-driven analytics. Organizations should take notice of the behaviors and technology solutions allowing Leaders to capture these benefits and take the steps necessary to follow their lead.

IBM Cloud Pak for Security

IBM Cloud Pak for Security is an innovative solution, able to run in a variety of deployment models, that supports security analytics and incident response for today’s complex, hybrid and multi-cloud environments. It provides a consolidated view of security and threat information across a range of sources from IBM and other vendors. It supports federated search across that data, plus consolidated workflows for incident response spanning multiple systems. With these capabilities, it is a tool that can deliver significant efficiency benefits to every SOC.

Introduction

Over the past several years, cybersecurity has evolved from a technical challenge for a business’s IT security division into a major concern for business leaders. Cybersecurity incidents cause massive damage to organizations, from small businesses to global leaders. Understanding the current status of attacks across a business’s entire IT landscape, and being able to identify and respond to them rapidly, is essential to mitigating the potential damage they can cause.

Meanwhile, the evolution of IT infrastructures from central, on-premises data centers to hybrid IT environments running both on premises and in multi-cloud environments increases the complexity of gathering and processing the relevant data. DevOps environments add a new element of volatility to IT infrastructures. In addition, containerized environments – especially when run in multi-cloud and hybrid scenarios – add to the complexity, with even critical business workloads run in a very agile manner.

Adding to the complexity, there is no single tool for monitoring and analyzing data, or for automating the response to incidents. Most businesses have several such tools, one or more for each of the multiple environments in which applications run. There is a wide range of sources for security-relevant data in this hybrid world, and many tools consuming that data. Both the many sources of data for security and threat analytics and the many systems consuming and processing that data to help businesses respond create challenges.

It has become extremely difficult to create and staff processes, and to build infrastructures, that support this complex environment. One such example is the SOC (Security Operations Center), which collects all relevant data from hybrid, distributed, and volatile IT environments. As a consequence, there is a risk that relevant data will be missed and incidents will not be identified in a timely manner, leading to a failure to respond. Furthermore, with such a variety of tools in place, it is also difficult to respond in a consolidated and efficient manner. Incident response, from both an organizational and a technical perspective, becomes extremely complex.

Cybersecurity must deal with the reality and complexity of today’s IT environments. Point-to-point integrations of data sources to analytical solutions and incident response solutions fail – they are too complex, too costly, and too slow. There is a need for visibility across all the relevant source data, so that systems can build on that data to detect, identify and respond effectively to cyber incidents.

There is, as yet, no defined category for such solutions because, until now, no such solutions were available. While some vendors have good integration within their own technology or provide interfaces to their analytical applications, a comprehensive integration framework with a broad range of out-of-the-box integrations to relevant sources and analytical tools has been lacking.

IBM Cloud Pak for Security is now the first open platform that supports the integration of existing security tools for generating insights into cyber events across hybrid, multi-cloud environments. It is one component of a series of such enterprise-ready, containerized software solutions, named Cloud Paks, that IBM has started to bring to the market.

Product Description

IBM Cloud Pak for Security is a platform intended to connect security-related data sources from different tools such as SIEMs, EDRs, data lakes, and more. It can access data from a broad variety of sources and provide homogeneous access across all of them. Based on that, it can deliver consolidated information back to security applications on the platform. Furthermore, it can orchestrate workflows for incident response and automate manual and repetitive tasks. This helps security teams work and respond faster and with better coordination, working together on the basis of all available data. IBM Cloud Pak for Security is intended to deliver the foundation for an integrated SOC and integrated security teams, moving from uncoordinated processes using disparate solutions to a coordinated and integrated response.

With its focus on fostering interoperability, IBM Cloud Pak for Security is not a “super tool” that replaces existing tools; it is an integration platform that enhances their value. Rather than providing a central data store, it is a data federation platform providing consolidated access across multiple tools. This preserves existing investments and enables security teams to deal with the complexity of the heterogeneous IT landscape, as well as the range of heterogeneous IT security tools deployed. It enables a better coordinated approach to tackling ever-increasing cyber attacks.

IBM Cloud Pak for Security runs in hybrid environments – on-premises, private cloud or public cloud. It can access data from a variety of environments and source systems, and it is an open environment to which multiple security tools can easily connect. It is focused on federating data investigations, as well as orchestrating processes and workflows across various security tools.

With this hybrid, multicloud approach, IBM Cloud Pak for Security aligns with other recently announced IBM Cloud Pak solutions. All these solutions are built on Red Hat OpenShift for the container platform and operational services, and thus represent some of the first concrete integrations IBM has delivered since acquiring Red Hat. Based on that platform, Cloud Paks are microservice-based, containerized solutions that build on open source components wherever applicable, but extend and combine them into comprehensive, packaged solutions.

IBM Cloud Pak for Security will connect to a large number of tools, covering many of the relevant vendors in the cybersecurity tools market, such as Splunk, Tenable, Carbon Black, Elastic, BigFix, AWS, and Microsoft Azure, to name just a few. All of these third-party solutions can connect to IBM Cloud Pak for Security for access from the platform’s unified interface. Security data is accessed via the platform’s universal data services and open source technology, and relevant findings can be further analyzed from one place.

Beyond integrating data sources, IBM Cloud Pak for Security also delivers unified access to that information, both via APIs and UIs. For API access, IBM Cloud Pak for Security provides its own SDK, with which businesses can also more easily build their own integrations and apps. The main focus of what IBM delivers out of the box is on security workflows, orchestrating multiple existing solutions into integrated workflows, and supporting automation. These are intended to enable better and more efficient incident response, which is a key requirement for today’s businesses and their SOCs.

Another key capability of IBM Cloud Pak for Security is the federated search, which is a natural consequence of unified access to security-related information. Based on this federated search, information can easily be extracted and analyzed across multiple tools. Again, IBM Cloud Pak for Security does not move data to a central store, but federates access to information. However, investigations across the complex IT landscapes of today’s businesses are massively simplified when queries can be run across a variety of tools from different providers (and multiple instances of such tools), across all data centers and cloud services.

The broad support by other vendors from the very start of IBM Cloud Pak for Security is proof of the validity of this approach and the fact that this is a well-thought-out integration platform, not a replacement of existing investments.

IBM Cloud Pak for Security builds on open standards wherever feasible, in line with the open source foundation of the new solution. The solution can run on various platforms, including on-premises environments, private clouds and public IaaS infrastructures such as AWS, Microsoft Azure, Google Cloud Platform and, of course, IBM’s own cloud.

Strengths and Challenges

With IBM Cloud Pak for Security, IBM delivers a major innovation to the Cybersecurity market, addressing three of the major issues:

  • The increasing volatility of today’s IT environments;
  • The need to support complex, heterogeneous IT operating environments that are hybrid and span multiple clouds;
  • The multitude of cybersecurity tools that commonly exist in today’s businesses, but lack integration of data and processes.

With the approach IBM has chosen, businesses can better integrate both their existing tools and data, in a way that easily builds on and extends their incident response processes. Existing investments in cybersecurity solutions are preserved, while additional value is added.

We expect the network of partners supporting IBM Cloud Pak for Security to grow beyond the already impressive initial list of partners. From a competitive perspective, the biggest competition to IBM Cloud Pak for Security will come from vendors delivering incident response solutions. However, even those solutions can build on the integration and federated search capabilities provided by IBM Cloud Pak for Security.

In sum, IBM Cloud Pak for Security is a highly interesting solution for many businesses, particularly those running their own SOCs. It also appears to be of high interest to MSSPs (Managed Security Service Providers) that need to integrate a range of solutions. We strongly recommend that customers evaluate IBM Cloud Pak for Security for use in their cybersecurity initiatives.

Strengths

  • Unique offering that allows for consolidated access to security and threat information across a wide range of systems;
  • Strong partner ecosystem, with support from the majority of leading security vendors;
  • No movement of data; data federation avoids the creation of new data silos;
  • SDK and other options for developing additional apps and for creating flexible incident response workflows;
  • Runs in various cloud environments, supports multi-cloud and hybrid requirements;
  • Modern architecture, based on microservices and containerization.

Challenges

  • Potential confusion with existing incident response solutions, although it is built as a broader platform designed to work with any third-party solution;
  • Successful federated search depends on availability of data sources.

Complexity In Cybersecurity Report 2019

Executive Summary

A rapidly changing threat landscape has made organizational security more crucial and challenging than ever. Organizations have responded by investing in an enormous number of disconnected point solutions. However, a combination of disjointed products that all operate independently and generate a large amount of data has culminated in a crisis of complexity. As a result, security teams are unable to get the most out of their investments and must spend even more to properly secure their environments. The need to reduce complexity has never been clearer.

IBM commissioned Forrester Consulting to evaluate the state of security complexity and the effect it is having on security efficiency and effectiveness. To explore this topic, Forrester conducted a survey with 200 global security professionals with responsibility for security strategy and/or security technology purchases. We found that nearly all respondents report concerns over complexity. However, organizations that have taken steps to simplify their security ecosystems, including consolidating solutions onto a single management platform, have seen meaningful benefits.

KEY FINDINGS

  • Security environments are increasingly complex. Security pros tend to operate in siloed teams, so it is rare — if not impossible — to get a full picture of data and processes across the entire security discipline, much less the entire company. Making matters worse, data volumes across locations, and particularly in the cloud, have skyrocketed in the past few years, and that trend is likely to continue.
  • Organizations are spending more but not necessarily wisely. Increases in security budgets and organizational pressure to avoid a damaging data breach have led organizations to adopt a plethora of disconnected point solutions. Our study found that, on average, 52% of security products and 77% of vendors have been added within the last two years. This buying frenzy has added to organizations’ security complexity, but it has not necessarily added to the overall maturity of their security programs.
  • Complexity erodes ROI. Security complexity has become a problem that organizations can no longer ignore. Our study found that 91% of organizations are concerned with complexity, and those with very complex environments are more likely to cite cost challenges and inefficiencies with technology and staff.
  • Simplification can unlock security value. Organizations that are effective at simplifying their environments make the most out of existing security investments. They are connecting data and processes and integrating solutions into consolidated management platforms. They’re also reaping several benefits, including improved ability to detect, respond to, and recover from threats.

Reactive Tactics Have Spun A Tangled Web Of Security Solutions

Highly publicized data breaches have moved security into the minds of executive teams. This has made it easier for security leaders to make the case for budget and get executive buy-in to fund security projects. In fact, security spending as a percentage of IT budgets is on the rise.1 At the same time, the industry has responded with a flood of intriguing solutions to protect against new threats.2 The result? Reactive security spending and widespread inefficiency.

Our research of 200 security decision makers who are prioritizing optimization of security assets and resources over the next year reinforces these trends: “Improving return on security investments” is one of their top priorities, behind only “improving advanced threat capabilities.” In addition, many are focused on increasing the productivity of their staff, simplifying their environments, and improving operational efficiency (see Figure 1). However, they face an uphill battle in these efforts as they now need to secure:

  • A soaring number of point solutions. Security pros, particularly those at companies that have suffered a breach, have dipped into their growing budgets to pay for new security solutions. However, many are solving for short-term needs without giving enough thought to how each addition contributes to the long-term maturity of their security programs. As a result, teams are overladen with a multitude of disparate and disconnected point solutions. Our respondents’ organizations are managing an average of 25 different security products/services from 13 vendors, and many have even more. In a sign of the buying frenzy of recent years, 52% of the security products and 77% of the new vendors were added within just the last 24 months.
  • Skyrocketing data volumes. Over the past two years, data – on-premises, in endpoints, in virtual servers, and especially in the cloud – has increased substantially. In every location we tested, respondents report at least a 55% increase in data stored, on average, and many have seen data double, triple, or more in the same time period (see Figure 2). Yet unlike the increase in security products, security teams have little to no control over data increases that will likely persist in the years to come.

On average, 77% of the security vendors at respondents’ organizations were added in the last 24 months.

  • Data living across a heterogeneous environment. Increasingly, data is moving out of endpoints and on-premises servers and is proliferating across the enterprise. Given that many organizations have embraced cloud-first strategies, it’s not surprising that much of organizations’ data is moving to the cloud, and their security assets and processes have followed. In fact, respondents predict that by 2020, the percent of security assets and processes their organizations have in the cloud will increase by more than 200% over 2016 levels. Data dispersed across heterogeneous architectures threatens security teams’ visibility: They cannot protect valuable data assets they cannot see.

Despite the broad range of security defenses to which organizations have flocked, most security pros struggle to maximize the value of their investments and protect their organizations.3 In fact, fewer than a quarter say they’re completely satisfied with their security portfolios’ ability to support them in developing advanced threat intelligence capabilities; increasing the productivity of security staff; extracting insight from data; and driving efficiencies. Moreover, just 50% or fewer respondents report they are using all or most of the available functionality in any of the 11 security technology categories in our study. Notably, fewer than 25% say their technologies are fully optimized in internet-of-things (IoT) security; identity and access management; security automation and orchestration; and security information and event management (SIEM).

Complexity Threatens Cyber Security Effectiveness

As today’s security leaders struggle to manage the complexity of their security environments, they are learning the tough lesson that adding more point solutions doesn’t simplify anything. The lengthy deployment cycles, difficult integrations, and user training involved with managing an influx of solutions present risks that make technology investments fail.4 Respondents recognize that this poses a very real threat: 91% express some level of concern over their organizations’ security complexity (see Figure 3). It ranks second highest among their top concerns, only slightly behind the changing and evolving nature of threats.

While nearly every respondent indicated some concern over complexity in their environment, the results from those who responded with the highest levels of concern made it clear just how complex organizations have become (see Figure 4). Predictably, the greater the concern over complexity, the more products and data organizations had. The respondents who indicated a higher concern for complexity also, on average, have 45% more security products and 36% more vendors than respondents who were less concerned. In addition, they are managing more data across locations. As a result, they’re twice as likely as other organizations to describe integrating disparate security technologies and data sources as challenging and to struggle with gaining visibility into security-related data and insights (see Figure 5). And any insight they do glean is difficult to build on: Over half of them cite collaborating with peers inside and outside of the organization on security insights as a barrier, making it more difficult for them to develop their threat intelligence capabilities and to uncover patterns of vulnerability.

  • Complexity erodes ROI. Security complexity exacerbates an already challenging issue: an inability to make the most of security resources. Those with greater complexity concern are more likely to say that the complexity of their security environment has led to high costs. They also are more likely to cite inefficiencies in the use of security technology and security staff time and to find it difficult to train staff on new security products.
  • Complexity inhibits innovation. Market uncertainty stemming from government agencies, competitors, and customers requires companies to constantly change. Only those that are fast, connected, and innovative will be able to thrive in a shifting landscape. Unfortunately, those with security complexity struggle to evolve with the agility required: 50% report that their complexity has made it difficult to replace outdated security technology and 37% say that it has caused them to defer purchases in fear of adding further complexity. Making matters worse, 29% feel locked in on specific vendors. While companies with highly complex security environments could benefit greatly from a more streamlined ecosystem, they face an uphill battle in their efforts to modernize relative to organizations with less complexity.

SECURITY SIMPLIFICATION UNLOCKS INVESTMENT VALUE

Despite the challenges that stand in their way, organizations with the greatest levels of complexity concern see simplification as a worthwhile effort. They associate several benefits with a more simplified environment – from an improvement in their ability to extract insight from data, to threat intelligence, to internal collaboration and user experience. Notably, respondents believe simplification would deliver a “moderate” or “significant” improvement in operational efficiency (72%), security staff productivity (68%), and security investment return (58%) – addressing their highest priorities.

Organizations believe a simplified environment would allow them to improve operational efficiency, security staff productivity, and security investment return.

Simplified Cybersecurity Portfolios Are The Way Forward

Recognizing the challenges that come with security complexity and the benefits of simplification, the question becomes: What can organizations do to reduce security complexity? While all respondents report taking at least some steps to reduce complexity, fewer than half (44%) describe their efforts as effective. For the purposes of this study, we refer to these organizations as “Champions,” and all others (i.e., those who cite their efforts as “somewhat,” “slightly,” or “not at all” effective) as “Challengers.”

Although Champions are more effective in their simplification efforts, their simplification journeys are not complete. In fact, many of them still cite concerns with complexity. They have, however, started to make significant inroads in streamlining their security and have lessons to teach organizations that are still struggling. In particular, Champions:

  • Prioritize simplification. While it may seem obvious, one of the most distinct differences between Champions and Challengers is the level of priority they’re placing on simplification. Not only are Champions significantly more likely to make simplification a priority, they’re also more likely to dedicate specific resources to the effort (see Figure 8). Seventy-five percent of Champions have dedicated resources relative to just 56% of Challengers. Additionally, 63% or more of Champions have employed each of the simplification tactics we tested.
  • Maximize existing investments. Chasing shiny new point solutions instead of optimizing technology that already exists can lead to multiple disconnected tools for similar needs. A more efficient approach is to look for opportunities to reinvent and reinvest in a smaller set of existing tools, maximizing their utility.5 Champions are doing just that: 63% have worked to reduce the number of point solutions or vendors in their security portfolios, relative to just 36% of Challengers. In addition, Champions are more likely to have reined in repetitive spending (66% versus 52%). Finally, Champions squeeze more value out of existing security tools: they enjoy a much higher utilization rate across a range of security investments.
  • Consolidate management to a single platform. Champions are more likely to be consolidating management software to a single platform or vendor (63% vs. 45%). By managing their security assets in a consolidated platform, they can transform disparate solutions into a cohesive and connected security suite. Consolidated offerings give security teams more visibility and control into their environments; they also reduce the operational complexity and cost of managing individual point products and lay the foundation for automation and orchestration of security defenses.

ADDRESSING COMPLEXITY MAKES ORGANIZATIONS MORE RESILIENT

A particularly fascinating finding of this research was that Champions are not only benefiting from efficiency gains, they’re also more successful at protecting their companies from cybersecurity threats.

Champions are more likely than their less effective peers to say they’re satisfied with their security portfolio’s ability to detect threats across their ecosystem, and they’re significantly more likely to be satisfied with its ability to respond to threats and recover from security incidents, with margins ranging from 33 to 35 points. Even though Champions still have more work to do to overcome complexity, their approach to the issue – prioritizing the effort, maximizing existing investments, and consolidating management to a single platform – makes them far more prepared to protect their organizations from security disruptions.

SECURITY VENDORS PLAY AN IMPORTANT ROLE IN SIMPLIFICATION

For their part, many organizations have made some progress in their efforts to simplify their security ecosystems. However, the benefits they’ve seen will be short-lived if security vendors don’t make changes that support these efforts. Organizations must look past vendors that perpetuate the cycle of inefficiency. In fact, 98% of surveyed decision makers want help from their security vendors to reduce complexity. They want vendors to offer solutions that:

  • Are easy to use, integrate, and buy. Forrester’s research has found that security leaders face major challenges with staff and skill deficits.7 Our research reinforces this trend: 44% of security leaders in our study cite a lack of staff as a concern in protecting their companies. Too many poorly integrated technologies only worsen the human capital problem. They also make it more difficult for organizations to address the issue: 40% say skill shortages are a barrier in their efforts to simplify their environments. Many security vendors are developing new platforms that consider ease of use and simplified controls.8 Security professionals in our research express an appetite for these types of tools, as well as ones that are easy to integrate and buy.
  • Can optimize and connect to solutions already in place. Security decision makers want their vendors to understand their existing security landscapes. They want vendors to extend the value of existing security investments and integrate only those capabilities that contribute to long-term maturity of their cybersecurity programs. This includes being able to seamlessly integrate with products from other vendors, not only ones within that vendor’s portfolio.
  • Activate and connect data regardless of where it’s stored. With data growing and spreading to every corner of the enterprise, organizations cannot reasonably consolidate all data in a centralized location for insight and analysis — at least not without incurring significant costs. Security teams see value in vendors that can help them activate and connect data no matter where it’s stored, reducing their need for pricey, time-consuming, and complex data migration projects.

Key Recommendations

Complexity is becoming an increasingly urgent issue in today’s security landscape and will continue to grow if not addressed. Security teams that wish to avoid this pitfall should make reducing security complexity a priority and focus within their organizations. Take these three key actions to do so:

  • Consolidate capabilities to focus on business objectives. Limiting the number of individual solutions reduces the amount of management and maintenance required to keep the security ecosystem running smoothly. Finding ways to reinvest and reinvent current solutions helps organizations keep staff increases in check and helps increase ROI.
  • Decrease data silos to limit friction for security teams. Firms that fail to integrate security, information technology, and application data together will not possess the necessary information to make quick, accurate decisions about the potential ramifications of security events. The more concerned firms were with complexity, the more isolated data came up as a symptom. Tools and technology that allow security teams to receive and analyze disparate data sources will help security teams act decisively.
  • Simplify your ecosystem to enhance response and recovery. While detecting threats is reasonably improved by a simplified security portfolio, massive gains were identified in responding to, and recovering from, incidents, no matter where those events came from in the customer’s ecosystem. If the adage “it’s not if, but when” holds true for security leaders, then response and recovery must take center stage as areas of emphasis. Simplifying security is one clear way to make that happen.

Cloud Paks: An open, faster, more secure way to move core business applications to any cloud

Introduction

Enterprises employ cloud technologies to deliver innovation at scale and at lower cost. New services are often built natively on cloud, but this can come with the risk of “vendor lock-in” and escalating cost. Existing applications can be rewritten, but rewriting thousands (if not tens of thousands) of applications from the ground up is both cost and time prohibitive, so taking steps to modernize existing applications can be an attractive approach with faster time to value. Both strategies – building new cloud-native applications and modernizing existing applications to support cloud environments – need to be pursued in an open, portable manner that helps clients improve time to value while avoiding lock-in. Containers and Kubernetes enable this by providing portability and consistency in development and operations, but developers and administrators are still required to continuously connect component layers and verify interoperability.

In addition, collecting, integrating and analyzing data enables data engineers and scientists to help application developers infuse AI into applications; the trick is to do this without adding complexity and cost. And once applications are built and connected to data, IT operations need them to run in an environment that is high performing, scalable and reliable. Today, around 80 percent of existing enterprise workloads have not yet moved to the cloud due to these challenges, and enterprises struggle with movement, connectivity and management across clouds.

To help clients move more workloads, faster, to cloud and AI, IBM announces:

A family of Cloud Paks that give developers, data managers and administrators an open environment to quickly build new cloud-native applications, modernize/extend existing applications, and deploy middleware in a consistent manner across multiple clouds. Today, IBM introduces six new Cloud Paks: Cloud Pak for Applications, Cloud Pak for Data, Cloud Pak for Integration, Cloud Pak for Multicloud Management, Cloud Pak for Automation and Cloud Pak for Security that deliver IBM enterprise software and open source components in open and secure solutions that are easily consumable and can run anywhere.

Cloud Paks provide:

  • Containerized IBM middleware and open source components;
  • Consistent added capabilities for deployment, lifecycle management, and production quality of service – logging, monitoring, version upgrade and roll-back, vulnerability assessment and testing;
  • Certification by IBM to run on Red Hat OpenShift, providing full software stack support, and regular security, compliance and version compatibility updates.

For example, Cloud Pak for Applications reduces development time to market by up to 84 percent by reducing the compute required and by accelerating throughput of the continuous integration/continuous delivery (CI/CD) pipeline, and reduces operational expenses by up to 75 percent by increasing IT admin efficiency and reducing related labor costs.

IBM is committed to delivering enterprise software from across its portfolio for modern cloud environments. Cloud Paks provide enterprise container software that is pre-integrated for cloud use cases in production-ready configurations; they can be quickly and easily deployed to Kubernetes-based container orchestration platforms. In addition, these Cloud Paks provide resiliency, scalability, and integration with core platform services, like monitoring or identity management.

Cloud Paks enable you to easily deploy modern enterprise software either on-premises, in the cloud, or with pre-integrated systems and quickly bring workloads to production by seamlessly leveraging Kubernetes as the management framework supporting production-level qualities of service and end-to-end lifecycle management. This gives clients an open, faster, more secure way to move core business applications to any cloud, as shown in Figure 2.

This paper describes Cloud Paks in more detail, highlighting the additional value that this delivery model offers, with some background details on the underlying open technologies, for those who may be unfamiliar.

Cloud Paks Simplify Enterprise-grade Deployment and Management for Software in Containers

Red Hat OpenShift Container Platform (OCP) builds on top of the open source Kubernetes orchestration technology. IBM is committed to delivering enterprise software designed for modern container orchestration platforms such as Red Hat OpenShift Container Platform.

Deploying complex software workloads in optimized and highly-available configurations can involve collecting or creating large numbers of disparate components, including the workload container images, configuration files, and assets for integrating with your chosen platforms or management tools.

Cloud Paks bring together thoroughly-tested enterprise software container images with Helm charts that provide intelligent defaults for simplified configuration and management, and can include additional assets, such as Operators that intelligently manage software during runtime, in a single archive from a trusted source. As a result, you can quickly load software into your catalog, walk through a simple deployment experience guided by logical defaults and helper text, and easily deploy production-ready enterprise software onto IBM’s container platforms, in the cloud or in your own data center.

Core Services

Cloud Paks utilize a common set of operational services by default, such as security and identity services, logging, monitoring, and auditing. For example, workloads can be monitored out of the box using the integrated monitoring service. Similarly, logs generated by each workload container are collected and correlated by a platform-provided logging service that includes collection, search and dashboarding capabilities.

Containers Revisited

Containers give you the ability to run multiple software elements, isolated from each other, within the same operating system instance. Unlike a virtual machine, a container shares the operating system kernel with its underlying host and since system calls can be made directly, a container can be run more efficiently and be instantiated faster, as shown in Figure 3.

While containers are available in many forms and implementations, the Open Container Initiative (OCI) has emerged as the leading standard in the industry, defining open specifications for container images and container runtimes.

The fact that containers are lightweight and start quickly makes them ideal for hosting microservices, which are a key element of cloud-native application architectures. Traditional, more monolithic applications can also be run inside containers, but will benefit less from this technology. As always, keep in mind that a poorly architected and designed application is still a poorly architected and designed application when run in a container.

Building production-ready images

All IBM container images provided in Cloud Paks follow a set of well-defined best practices and guidelines, ensuring support for production use cases, and consistency across the IBM software portfolio. Cloud Paks support deployment to Red Hat OpenShift Container Platform using Red Hat Certified Containers.

One element that is especially important to IBM is support for multiple hardware architectures, including Linux on IBM Power and Linux on IBM LinuxONE, and providing images for the hardware platforms the respective IBM products support.

Management of security vulnerabilities is also critically important. Cloud Paks are scanned regularly for known image vulnerabilities as part of the standard build procedures. As part of full software stack support and ongoing security, compliance and version compatibility, all Cloud Paks must have a documented process for managing newly identified vulnerabilities. Additionally, IBM follows Secure Engineering Practices for development of software and maintains a Security Vulnerability Management process (PSIRT) for commercial software supported by IBM. IBM Software delivered as a Cloud Pak inherently follows those corporate standards. Cloud Paks delivered by partners must have a documented process for addressing security image vulnerabilities.

Kubernetes – a management environment for containers

Up to this point, we have discussed the basics of building, running and maintaining container images, which can be used to run containers in a standalone fashion. But containers alone do not provide a framework for implementing production-grade qualities of service like resilience, scalability or maintenance.

For example, software running inside a container may write data to a file. If the file exists within the container, deleting the container will also delete the file. If the software’s state must be maintained, that state data should be written to a volume outside of the container. If the state needs to be consistent even with the failure of a host, then that volume should exist on storage that is accessible by multiple hosts, most likely over a network. To maintain availability of the application during the failure of a host, you would also need to run multiple instances of the container on multiple hosts and load balance incoming requests across those containers. This would require a reasonable amount of effort to manage manually, especially if you want to be able to seamlessly upgrade to newer versions of an application or build a continuous integration process.
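
To make this concrete, here is a minimal sketch of that pattern as Kubernetes manifests. All names, the image, and the storage size are hypothetical placeholders, and the shared volume assumes a storage class that supports access from multiple hosts (such as NFS-backed storage):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: app-state
  spec:
    accessModes:
      - ReadWriteMany              # shared access from multiple hosts
    resources:
      requests:
        storage: 10Gi
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web
  spec:
    replicas: 3                    # multiple instances for availability
    selector:
      matchLabels:
        app: web
    template:
      metadata:
        labels:
          app: web
      spec:
        containers:
          - name: web
            image: registry.example.com/web:1.0   # placeholder image
            volumeMounts:
              - name: state
                mountPath: /var/data              # state lives outside the container
        volumes:
          - name: state
            persistentVolumeClaim:
              claimName: app-state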

Kubernetes is an open source orchestration platform for containers that solves these administrative challenges by providing a declarative framework for deploying, scaling, and managing container-based workloads. It is a popular choice for managing clusters of containers throughout the industry; Red Hat OpenShift provides a common Kubernetes-based platform for Cloud Paks on premises, on public cloud infrastructure, in pre-integrated systems, and as a managed service via Red Hat OpenShift on IBM Cloud.

The declarative definition of abstract resources that influence how the cluster behaves and manages workloads is a key feature of Kubernetes and will be covered briefly below. Cloud Paks are built for Kubernetes-based environments and include all the configuration artifacts you need to easily customize and deploy an enterprise-grade Kubernetes workload.

Takeaway: Kubernetes is a popular framework for running containers in a scalable, resilient, highly available fashion, supporting production use cases for enterprise applications. IBM has chosen Kubernetes as its container orchestration platform both on-premises and in the cloud, and Cloud Paks are designed specifically for deployment to the Red Hat OpenShift Container Platform.

Kubernetes Resources

Kubernetes provides users with a set of defined resources, including ways to describe how containers should run in the cluster, how the system reacts to events like failures, how containers are made accessible over the network, and how and where data is stored.

You can describe the provisioning and management of your application workload by defining the desired state of these resources using a YAML file and Kubernetes will manage the cluster environment accordingly.

Internally, Kubernetes delegates the management of each resource to its associated controller. A few of the most common Kubernetes resources are described briefly below.

  • Deployment: Describes the desired state of one or more Pods, which are collections of running containers.
  • StatefulSet: Similar to the Deployment resource mentioned above, but describes containers that maintain state.
  • Service: Describes how pods that are part of a deployed workload (Deployment, StatefulSet, etc.) can be accessed over the network, within or, depending on the service type, from outside the Kubernetes cluster. Gives clients a well-defined target address/port combination across multiple pods, including across restarts and recreations of these pods.
  • PersistentVolume / StorageClass: Enables you to define an allocation of storage that persists across the lifetime of the pods that use it. Pods can attach to a suitable volume by using a PersistentVolumeClaim. The StorageClass resource describes different qualities of service that are available for different types of storage that may be offered.
  • ConfigMap: Enables separating configuration data for a pod into a separate object.
  • Secret: Similar to ConfigMaps, Secrets contain sensitive data (for example, passwords or SSH keys) and are stored separately from the containers that use them.

This list barely scratches the surface of the resource types available in Kubernetes, which also supports defining custom resource types. For a more detailed description of Kubernetes resources, see the official documentation.
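
To make a few of these resource types concrete, the following minimal sketch shows a ConfigMap, a Secret, and a Service that could accompany a Deployment whose pods carry the label app: web. All names and values are invented for illustration:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: web-config
  data:
    LOG_LEVEL: "info"              # non-sensitive configuration
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: web-credentials
  type: Opaque
  stringData:
    DB_PASSWORD: "change-me"       # sensitive data, kept separate from the pod spec
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: web
  spec:
    selector:
      app: web                     # routes traffic to matching pods
    ports:
      - port: 80                   # stable port clients connect to
        targetPort: 8080           # port the container actually listens on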

The resource definitions mentioned above contain configuration metadata that is critical in ensuring enterprise-grade qualities of service of the workloads running in Kubernetes. For example, you can define memory and CPU allocations for individual pods, ensuring that sufficient capacity is available when creating containers, while also ensuring that individual workloads cannot use more than their allocated resources, enabling effective sharing of hardware resources. As another example of the control afforded by Kubernetes, you can define affinity and anti-affinity rules that let you control which of your worker nodes certain pods run on.
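
As an illustration of those controls, the hypothetical Deployment below reserves CPU and memory for its pods and uses an anti-affinity rule to keep replicas on different worker nodes; the names, image, and values are placeholders, not recommendations:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: web
    template:
      metadata:
        labels:
          app: web
      spec:
        affinity:
          podAntiAffinity:         # spread replicas across worker nodes
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app: web
                topologyKey: kubernetes.io/hostname
        containers:
          - name: web
            image: registry.example.com/web:1.0   # placeholder image
            resources:
              requests:            # capacity reserved at scheduling time
                cpu: "250m"
                memory: "256Mi"
              limits:              # hard ceiling the container cannot exceed
                cpu: "500m"
                memory: "512Mi"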

Takeaway: Individual workloads, including IBM software content that runs in Red Hat OpenShift, are described using predefined Kubernetes resources. Cloud Paks define Kubernetes resources for your workloads using intelligent defaults, and provide for easy customization during deployment.

Using Helm charts to orchestrate containerized workloads

As mentioned above, Kubernetes uses abstract resources to allow describing the desired target state of a workload, paired with controller implementations that enforce the defined target state.

Each application or service running in Kubernetes is represented by multiple resources, each of which is typically defined in its own YAML file. Each resource also carries several attributes with it, whose values may differ from deployment to deployment based on the specifics of the environment and the supported usage.

The Helm project aims to simplify the deployment and maintenance of complex workloads in Kubernetes environments. It provides a packaging format called a chart, which you can use to group together YAML templates that define related sets of Kubernetes resources. An instance of a Helm chart that has been installed into a target Kubernetes cluster is called a release. Helm not only simplifies orchestration of Kubernetes resources, it also simplifies the ongoing maintenance of your releases. This makes production-level operations like rolling upgrades more manageable and contributes to the overall availability and maintainability of your application.
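
As an illustrative sketch of how a chart parameterizes its resource definitions (the chart layout and value names here are invented, not taken from any IBM Cloud Pak), a values.yaml file might feed a templated Deployment like this:

  # values.yaml – per-release settings
  replicaCount: 2
  image:
    repository: registry.example.com/web     # placeholder registry
    tag: "1.0"

  # templates/deployment.yaml – rendered by Helm for each release
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: {{ .Release.Name }}-web            # unique name per release
  spec:
    replicas: {{ .Values.replicaCount }}
    selector:
      matchLabels:
        app: {{ .Release.Name }}-web
    template:
      metadata:
        labels:
          app: {{ .Release.Name }}-web
      spec:
        containers:
          - name: web
            image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

Installing the chart (for example, helm install my-release ./mychart) creates a release; helm upgrade and helm rollback then operate on that release, which is what makes rolling upgrades and rollbacks manageable in practice.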

Cloud Paks use pre-built configurations that describe runtime environments. These resource definitions can be easily customized during deployment, and upgrades can be easily rolled out or rolled back.

Cloud Paks are certified by both IBM and Red Hat for the OpenShift Container Platform; the container images included in Cloud Paks are required to complete Red Hat container certification, which is complementary to IBM’s certification process.

Kubernetes Operators

Operators are flexible and powerful extensions to Kubernetes, built on custom resource definitions, that can be used for deploying and managing containerized workloads in a Kubernetes environment. They can also be used for packaging applications, in a manner similar to Helm charts, or they can be used together with Helm in a complementary manner.

By building specific knowledge and best practices about deploying and managing a software product directly into an operator, a software provider can capture domain-specific expertise about operating the product, giving end-users powerful automated runtime and lifecycle management capabilities without requiring that same level of expertise from the end user.

For example, Cloud Paks can utilize operators to deliver IBM’s expert knowledge about deploying and managing IBM enterprise software products in modern container orchestration environments as part of the software offering itself, transferring some of IBM’s expertise to the customer automatically.
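
For example, an operator-managed product might be driven entirely by a single custom resource. The sketch below is purely hypothetical (the API group, kind, and fields are invented) but shows the shape such a resource could take:

  apiVersion: example.ibm.com/v1
  kind: MessagingCluster           # hypothetical custom resource type
  metadata:
    name: prod-queue
  spec:
    replicas: 3                    # the operator reconciles the cluster to this size
    version: "2.4"                 # the operator drives the upgrade path itself

The operator watches resources of this kind and continuously reconciles the running product toward the declared state, applying the provider’s operational knowledge automatically.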

Takeaway: Cloud Paks include Helm charts, which assemble all of the Kubernetes resource definitions related to a piece of IBM software and provide for easy customization, deployment, and maintenance using Red Hat OpenShift, on premises or in the cloud. They can also include Operators, which capture product-specific deployment and management expertise.

Cloud Paks

Cloud Pak for Applications

To remain competitive and relevant, enterprises must consistently update their software applications to meet the demands of their customers and users. Meeting this demand requires an application platform that allows for quick building, testing and deployment of applications in a modern, microservice-based architecture. To satisfy this crucial need, IBM is introducing Cloud Pak for Applications.

Cloud Pak for Applications supports your enterprise’s application runtimes and offers instrumental developer tools, modernization toolkits, DevOps, apps/ops management and a self-service portal. Cloud Pak for Applications can accelerate the building of cloud-native apps by leveraging built-in developer tools and processes, including support for microservices functions and serverless computing. Customers can leverage this Cloud Pak to quickly build apps on any cloud, while it also provides the most straightforward modernization path to the cloud for existing IBM WebSphere clients, with security, resiliency and scalability.

Cloud Pak for Automation

Companies in nearly every industry are digitizing and automating their business operations. They’re freeing employees from low-value tasks and assisting them with high-value work to drive a new wave of productivity, and customer and employee experiences. However, it can be challenging to effectively automate work at the pace of customer and internal expectations.

To address these challenges, IBM is introducing Cloud Pak for Automation, a pre-integrated set of essential software that enables you to easily design, build and run intelligent automation applications at scale. With Cloud Pak for Automation, you deploy on your choice of clouds, anywhere Kubernetes is supported, with low-code tools for business users and real-time performance visibility for business managers. It’s one flexible package with simple, consistent licensing. No vendor lock-in. And existing customers can migrate their automation runtimes without application changes or data migration.

Cloud Pak for Data

As companies continue to harness the potential of AI, they need to use data from diverse sources, support best-in-class tools and frameworks, and run models across a variety of environments. However, 81% of business leaders do not understand the data required for AI. And even if they did, 80% of data is either inaccessible, untrusted, or unanalyzed. Simply put, there’s no AI without an information architecture.

IBM recognizes this challenge our clients are facing. As a result, IBM is introducing Cloud Pak for Data with the goal of creating a prescriptive approach to accelerate the journey to AI: the AI Ladder, developed to help clients drive digital transformation in their businesses, no matter where they are on their journey. Cloud Pak for Data brings together all the critical cloud, data and AI capabilities as containerized microservices to deliver the AI Ladder within one unified multicloud platform.

Cloud Pak for Integration

Traditional integration approaches cannot cope with the volume and pace of business innovation. Digital transformation enables organizations to unlock the power of data to create personalized customer experiences, utilize artificial intelligence, and innovate faster to stay ahead of the competition. In order to keep up, businesses need the ability to integrate in hybrid environments outside the data center and drive speed and efficiency in integration development while lowering costs. To facilitate these new, evolving demands, IBM is introducing Cloud Pak for Integration.

Cloud Pak for Integration is designed to support the scale, security and flexibility required to empower your digital transformation. With the Cloud Pak, enterprises can integrate across multiple clouds with a container-based platform that can be deployed in any on-premises or cloud Kubernetes environment, and easily connect applications, services, and data with the right mix of integration styles, spanning API lifecycle management, application integration, enterprise messaging, event streams, and high-speed data transfer.

Enable your business to set up the appropriate organizational models and governance practices to support a modern agile approach to integration with Cloud Pak for Integration.

Cloud Pak for Multicloud Management

As application innovation accelerates, enterprises have increasingly adopted a hybrid, multicloud architecture to build, test and deploy applications. With this new hybrid, multicloud architecture, the volume and complexity of objects and metrics to manage has skyrocketed, making monitoring and securing the enterprise IT ecosystem more difficult. To mitigate some of this complexity, IBM is introducing Cloud Pak for Multicloud Management.

Cloud Pak for Multicloud Management provides consistent visibility, automation, and governance across a range of multicloud management capabilities such as cost and asset management, infrastructure management, application management, multi-cluster management, edge management, and integration with existing tools and processes. Customers can leverage Cloud Pak for Multicloud Management to simplify their IT and application ops management, while increasing flexibility and cost savings with intelligent data analysis driven by predictive signals.

Cloud Pak for Security

As organizations move their business to the cloud, applications and data may be spread across multiple clouds and on-premises environments. Trying to secure this fragmented IT environment can be challenging. Security teams must undertake costly migration projects and complex integrations. In fact, more than half of the security teams surveyed struggle to integrate data with analytics tools and to combine data across their cloud environments to spot security threats. IBM Cloud Pak for Security is a containerized software platform pre-integrated with Red Hat OpenShift. It connects to existing security data sources, enabling teams to search for indicators of compromise (IOCs) across any cloud or on-premises location and uncover new threats. Once threats have been found, Cloud Pak for Security allows teams to quickly orchestrate responses and automate actions from a unified interface.

Summary

Cloud Paks provide an easy and powerful way to run high-quality, container-based enterprise software on a modern Kubernetes-based orchestration platform that enables high availability, scalability, and ongoing maintenance for enterprise applications, from a source you know and trust. They include container images that are built and tested by product teams, capturing product expertise and best practices in a form factor that is easy to consume and deploy in a location of your choice, on-premises, in the cloud, or with pre-integrated systems. Images provided by IBM are regularly scanned for known security vulnerabilities and follow a rigorous process for managing newly identified issues.

Cloud Paks also include pre-configured Helm charts that describe runtime environments for IBM software products based on established best practices and can be easily customized during the deployment process. They may also include Operators that build product-specific deployment and lifecycle management expertise into the software. These capabilities combine to provide a first-class deployment experience, integration with core platform services, and production-ready qualities of service. Certified Cloud Paks built with Red Hat Certified Containers bring the combined expertise of IBM and Red Hat to trusted enterprise software solutions that pair fast, easy deployment with enterprise qualities of service and simplified, flexible pricing.

The new family of Cloud Paks – including Cloud Pak for Applications, Cloud Pak for Data, Cloud Pak for Integration, Cloud Pak for Multicloud Management, Cloud Pak for Automation and Cloud Pak for Security – gives customers the fully modular, easy-to-consume capabilities they need to bring the next 80 percent of their workloads to modern, cloud-based environments.

Trends in Modern Data Protection

Executive Summary

The world of data protection is ever changing. ESG recently completed research that identified and quantified several fundamental trends occurring in the market today, with cloud adoption and data reuse as key highlights. Data protection remains a top priority for IT leaders as they modernize data protection processes while still facing service level and cost challenges. The use of secondary copies in a self-service fashion is low, but early adopters are realizing significant improvements in development quality and speed, and in the efficiency of analysis.

  • 73% of surveyed respondents rate data protection modernization as a top-five IT priority in the next 12 months.
  • Respondents indicated that high storage costs were their number one data protection challenge.
  • Two thirds (66%) of respondents want to consume object storage on-premises, or at least have that option.
  • The percentage of organizations whose primary data protection solution will be a virtual appliance is expected to more than double over the next 24 months.
  • 67% of organizations use cloud data protection today, making up 25.6% of organizations’ protection environments on average.
  • 80% of respondents see cloud as a viable tape replacement.
  • 85% of organizations that allow application developers to use secondary copies of data for test and development in a self-serve fashion feel they have improved the development quality and speed.
  • 86% of organizations that allow business analysts to run analytics on secondary copies of data in a self-serve fashion feel they have increased efficiency of analysis.
  • When respondents were asked about the vendors that are best positioned to help organizations achieve data protection modernization, IBM was most frequently cited.

Introduction

ESG recently conducted quantitative research on behalf of IBM, surveying well-qualified IT leaders in North America and Western Europe to better understand data protection modernization trends and perceptions. Topics of focus in this research include cloud adoption for data protection and secondary uses of data copies (beyond business continuity and disaster recovery). Full respondent demographics and research methodology details are also provided at the end of this report.

Research Findings

Data Protection Landscape

For IT professionals whose responsibilities include data protection, keeping pace with data growth and technology were the most frequently cited challenges. Given these factors, it’s not surprising that storage and operational costs were also frequently mentioned among their greatest challenges (see Figure 1). This trend confirms other ESG research in the space of data protection.

On the operational processes front, data loss ranks among the top concerns of respondents, and mitigating it includes meeting key metrics such as recovery time objectives (RTO) and recovery point objectives (RPO). Data access and compliance challenges are also often mentioned as operational challenges.

Taken in combination, it is interesting to see how intimately these challenges are intertwined; it can be hard to discern where one ends and another begins. These operational challenges also underline how mission-critical data protection has become.

Another set of challenges are more technical and implementation-focused in nature and deserve detailed scrutiny as they have an impact on investment decisions.

Integration of data protection into the “ecosystem” still presents challenges, in particular around storage, databases, cloud-based workloads, and migration to the cloud. (Cloud-related topics are covered in more detail in a separate section.) While virtualization is a mature technology in the data center, management of virtualized workloads is still evolving as customers and vendors alike seek better overall integration with virtual environments.

One notable challenge is the ability for administrators to manage data access and lifecycle. As the technology matures in the organization, data reuse is not only desired, but also needed as a business imperative. (This report delves deeper into data reuse in a separate section.)

This places data protection professionals at a crossroads: current practices and challenges remain firmly anchored in traditional backup and recovery objectives, while competing demands for technology integration and a clear focus on operational and business efficiency are reflected in changing requirements such as data reuse and deeper integration of cloud technology.

In this context, we sought to gain insight into what these professionals determine to be “enterprise-grade” data protection. This term is often used in the industry, but what does it really mean, particularly in light of the challenges highlighted in the survey results?

Figure 1. Key Data Protection Challenges

Comprehensive and enterprise-grade data protection can be characterized by a set of factors or capabilities. When ESG asked respondents what characteristics are most indicative of an enterprise-grade solution, centralization, speed of issue resolution, ease of integration, and task automation were most frequently mentioned (see Figure 2).

The ability to centralize protection across a broad range of workloads tops the list and confirms trends from other ESG research in the space of data protection.2 It also more generally confirms a need for less complexity in IT, which is a major challenge.3

The ability to quickly recognize and respond to potential issues is, not surprisingly, also top of mind. Operational efficiency can easily be affected by many internal and external events, with consequences ranging from degraded service levels to business interruptions.

Integration into the IT “ecosystem” is a challenge that clearly affects the “enterprise-class” perception. Ease of integration into the IT fabric and automation of data protection tasks are not trivial, particularly in larger environments.

Data reduction capabilities are typically needed to reduce the storage footprint and its associated operational cost. Data reduction is a common feature, but its presence (or absence) clearly shapes how IT decision makers perceive a solution.

The survey also showed that more than four in five respondents use or are interested in object storage. Object storage doesn't necessarily reside in a public cloud such as AWS, IBM Cloud, or Azure; often companies want to utilize on-premises object storage. In fact, the research shows that two-thirds want to consume object storage on-premises, or at least have that option.
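
As a sketch of what on-premises consumption can look like, the example below writes a backup archive to an S3-compatible object store through its endpoint URL. The endpoint, bucket, credentials, and file paths are hypothetical placeholders; any S3-compatible on-premises store would be configured along these lines:

    # Minimal sketch: sending a backup archive to an S3-compatible
    # on-premises object store. Endpoint, bucket, and credentials are
    # hypothetical placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://objectstore.internal.example.com",
        aws_access_key_id="BACKUP_KEY_ID",
        aws_secret_access_key="BACKUP_SECRET",
    )

    s3.upload_file(
        Filename="/backups/nightly-2019-01-04.tar.gz",
        Bucket="backup-archive",
        Key="nightly/2019-01-04.tar.gz",
    )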

Looking at these factors in combination, it is clear that operational scale is at the heart of the respondents’ answers. This should come as no surprise, considering the deluge of data organizations must protect, combined with the stringent SLAs they must adhere to. Solutions that are both broad and deep feature-wise are needed.

Figure 2. Defining “Enterprise-grade” Data Protection

Data Protection Modernization

One objective of ESG’s research was to assess data protection modernization perceptions. It is therefore critical to define what data protection modernization actually means for organizations, and why it matters to them through the manifestation of expected benefits.

Figure 3 provides a detailed picture of the key attributes of modernization in the minds of respondents. Most often, respondents reported that data protection modernization means meeting recovery and data security objectives, which can be achieved via storage- and software-based snapshot technology, air-gapped tape storage, or object storage with immutable data features.
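
To make “immutable data features” concrete, the sketch below uses the S3 Object Lock API (offered by AWS S3 and a number of S3-compatible object stores) to write a backup object that cannot be overwritten or deleted before its retention date. The bucket, key, and file names are hypothetical placeholders, and the bucket is assumed to have been created with Object Lock enabled:

    # Minimal sketch: writing an immutable backup copy with S3 Object Lock.
    # Bucket/key names are hypothetical; the bucket must have been created
    # with Object Lock enabled.
    from datetime import datetime, timedelta

    import boto3

    s3 = boto3.client("s3")

    with open("/backups/nightly-2019-01-04.tar.gz", "rb") as backup:
        s3.put_object(
            Bucket="immutable-backups",
            Key="nightly/2019-01-04.tar.gz",
            Body=backup,
            ObjectLockMode="COMPLIANCE",  # retention cannot be weakened or removed
            ObjectLockRetainUntilDate=datetime.utcnow() + timedelta(days=90),
        )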

It is important to highlight that beyond the protection efficacy and security, a material portion of respondents cited increasing public cloud service use for backup and recovery, optimizing archival data technology, data self-service, consolidating data protection solutions, and increasing automation as attributes they consider important to the meaning of data protection modernization. Many parallels exist between an organization’s “modernization” perceptions and their definitions of “enterprise-class.”

Modernization of data protection really matters to IT professionals: 73% of all organizations surveyed place data protection modernization among their organizations’ top five IT priorities for the next 12-18 months (see Figure 4). As a matter of fact, it is the most important priority for 20% of all organizations surveyed and 25% of enterprises (defined as organizations with 1,000 or more employees).

Figure 4. The Importance of Data Protection Modernization

The areas of data protection modernization that organizations reported wanting to improve echo the key challenges they reported facing: better economics, greater data security/compliance, and improved operational backup/recovery/replication (see Figure 5). Data reuse and self-service are also to be noted because they are intertwined to a large extent. In summary, IT leaders expect that data protection modernization projects, which are top IT priorities, will alleviate key data protection challenges they identified.

Figure 5. Desired Benefits of Modernization

Beyond asking respondents about the importance of data protection modernization, what the concept means to them, and the benefits they desire, ESG also asked them to identify the vendors they believe can best help them achieve data protection modernization over the next 12-24 months. IBM was the vendor most frequently mentioned, showing enterprise customers' ongoing confidence in IBM as a strategic technology partner.

Cloud Trends

Cloud technologies and services are maturing and becoming pervasive in every aspect of IT infrastructure, across many different uses. Backup and recovery of on-premises workloads was one of the first use cases to become “cloudified” a few years ago, followed by protection of cloud-based workloads, but tight integration and instrumentation have been lacking. The situation has evolved significantly as solutions have matured and organizations have gained experience and confidence.

Cloud is a hot topic in data protection and is continuing to gain momentum. Current cloud service usage for data protection is significant and poised to grow: 67% of organizations surveyed currently use public cloud services in their data protection environment (see Figure 7). Among those users, on average 26% of their protection environments (measured by amount of data) are housed in the cloud, a figure expected to grow to 35% within 24 months. It should also be noted that 80% of respondents see cloud as a potential tape replacement.

Figure 7. Public Cloud Use for Data Protection

Leveraging cloud-based data protection is expected to yield significant benefits. Improved performance, scalability, lower costs, and data portability are the cloud benefits that the largest percentages of organizations have achieved or expect to achieve.

From an RPO/RTO perspective, while cloud capabilities have evolved greatly in recent years, a number of environmental conditions must be met, such as sufficient network bandwidth, and results may vary with the type and scale of the recovery being undertaken. With this proverbial “grain of salt” in mind, organizations still strongly believe the cloud can deliver improved SLAs.

Scalability has always been a challenge in IT, and one of particular magnitude in data protection. Gaining access to seemingly unlimited tiered storage capacity for data safekeeping, archiving, or active recovery/high availability is undeniably a strong advantage of cloud infrastructures.

Lowering costs is a perennial pursuit for IT leaders, and the utility consumption model offered by the cloud is clearly perceived as a great way to lower the bill for cloud-based data protection services. However, organizations may see varying results here as well, since there are many different pricing schemas and ingress, compute, and egress fees to consider. The fact remains that surveyed organizations expect a cost benefit when migrating data protection to the cloud.
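
A back-of-the-envelope model illustrates why results vary. The per-GB rates below are hypothetical placeholders, not actual provider pricing; the point is simply that storage and egress fees pull the total in different directions:

    # Minimal sketch: back-of-the-envelope monthly cost of cloud backup.
    # All rates are hypothetical placeholders, not real provider prices.
    protected_tb      = 50       # data held in cloud backup storage
    restored_tb_month = 2        # data pulled back (egress) per month

    storage_per_gb = 0.010       # $/GB-month, backup-tier storage
    egress_per_gb  = 0.090       # $/GB, restore (egress) traffic

    storage_cost = protected_tb * 1024 * storage_per_gb
    egress_cost  = restored_tb_month * 1024 * egress_per_gb

    print(f"Storage: ${storage_cost:,.2f}/month; restores: ${egress_cost:,.2f}/month")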

Improving the portability of data in order to make it accessible to other parts of the organization for reuse, such as in analytics or test and development, is also a frequently cited benefit.

Figure 8. Benefits of Cloud-based Data Protection

Cloud and data protection have a great future together, and we expect to see more capabilities emerge to keep fulfilling the cloud promises. While benefit perceptions may appear inflated in some areas today, we expect further instrumentation and better portability will continue to accelerate cloud adoption.

Data Reuse

Reusing data is not a new concept in IT or business. Most organizations reuse some secondary data today, yet they report roadblocks to broader adoption, among them security concerns and compliance exposures as well as the operational cost and complexity of deployment. Data reuse means thinking of data as an asset that can be leveraged beyond its original use. Examples include running analytics to detect customer trends and using “real” production data to test a new application or feature without interfering with actual production workloads.

ESG categorized data reuse in three ways: technical use cases, such as disaster recovery testing or ransomware sandboxing; business use cases such as analytics, collaboration, and reporting; and data compliance determination use cases (GDPR, for example, is a recent regulation creating significant pressure points across businesses and IT). There are also hybrid use cases such as the aforementioned application development and testing, which combines both technical and business prerogatives. Figure 9 provides more details on these use cases’ popularity.

Figure 9. Most Organizations Reuse Secondary Data Today

It should be noted that many constraints are associated with sharing data, such as portability, format, compliance/security, and of course, volume. In other words, sharing data is not as easy as it sounds.

Zooming in on a couple of these data reuse cases, we see that IT becomes a business enabler that improves the efficacy of data reuse processes. This is significant because it places data and IT at the heart of business operational efficiency and makes IT supporters of business innovation. The modalities of data delivery will vary, with self-service being an attractive approach that allows the data consumer to work with data without having to depend on IT for provisioning or access.
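
As a minimal sketch of the pattern behind many self-service offerings (the LVM volume group, volume names, snapshot size, and mount point here are all hypothetical), a copy-on-write snapshot can hand a data consumer a writable copy of production data without touching the production volume:

    # Minimal sketch: provisioning a writable copy-on-write snapshot of a
    # production volume for self-service test/dev. Volume group, volume
    # names, and mount point are hypothetical; requires LVM and root.
    import subprocess

    def clone_for_devtest(vg="vg_data", prod_lv="prod_db", clone="devtest_db"):
        # Create a copy-on-write snapshot; production keeps running untouched.
        subprocess.run(
            ["lvcreate", "--snapshot", "--size", "20G",
             "--name", clone, f"/dev/{vg}/{prod_lv}"],
            check=True,
        )
        # Mount the snapshot where the dev/test environment expects it.
        subprocess.run(["mount", f"/dev/{vg}/{clone}", "/mnt/devtest"], check=True)

    clone_for_devtest()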

Application development is a critical function in many organizations, as it provides key business capabilities that are often competitive differentiators. However, good development requires extensive testing, with the flexibility and real data needed to reproduce production conditions. Our research shows that while only about 19% of organizations surveyed reuse data for application development in a self-service fashion, those that do are realizing great business value (see Figure 11). For this use case, 85% of these organizations feel they have improved development quality and speed.

Figure 11. Self-service Secondary Data Improves Application Development Outcomes

  • Application developers have increased the speed and quality of development thanks to self-service dev and test on secondary storage.
  • Application developers have an increased appreciation of IT as an enabler of self-service dev and test on secondary storage.

Conclusion

The market is changing rapidly: evolving demands such as data reuse are increasing the value of data backups, while rapid adoption of cloud-based technologies is extending traditional on-premises environments. In fact, cloud has become one of the hottest topics in data protection and continues to gain momentum.

Yet organizations continue to struggle with data protection challenges related to storage costs and operational efficiency. Modernization is needed to tackle these challenges, and IT professionals are making it a top priority.

In this new world of hybrid data protection, data is not truly portable across solutions and is not easily reusable. The requirement for context and content about the data is also becoming more visible as organizations seek to reuse data to support digital transformation and produce more business benefits.

Organizations that reuse data can reap benefits with a direct and positive impact on business. At ESG, we call this evolution “data intelligence.” As organizations evolve to manage data beyond traditional data protection, the ability to further leverage data assets will enable more parts of the business to deliver on their mission, produce better applications, help companies better understand customers, and support compliance efforts. This concept is summarized in Figure 12: As organizations evolve, they will cross a data management chasm, which will allow them to leverage data more intelligently and autonomously.

Figure 12. Backup Data Transformation Model

Research Methodology and Demographics

To gather data for this report, ESG conducted a comprehensive online survey of IT decision makers from private- and public-sector organizations.

These IT professionals were qualified on the basis of their knowledge about their organization’s data protection practices and requirements. Additionally, all respondents were required to be employed at organizations with 100 or more employees.

Responses were collected between December 19, 2018, and January 4, 2019, with respondents based in North America (US and Canada) representing 73% of completed responses and Western Europe (UK) representing 27%.

All respondents were provided an incentive to complete the survey in the form of cash awards and/or cash equivalents. After filtering out unqualified respondents, removing duplicate responses, and screening the remaining completed responses (on several criteria) for data integrity, a final sample of 275 respondents remained.

The figures that follow detail the demographics of the respondent base, including individual respondents’ current job title as well as organization size and primary industry. Note: Totals in figures and tables throughout this report may not add up to 100% due to rounding.