
Deliver a competitive customer experience with automated workflows

Your employees perform tasks, make decisions and take action in a variety of workflows every day

Whether you’re looking to improve customer-facing workflows or internal processes, each one requires different steps to execute or guide different activities. 

These varied activities present three challenges:

  1. How do you optimize and scale activities in the most accurate, consistent and responsive way to satisfy customers? 
  2. How do you find a solution that can meet diverse needs to handle less-structured, case-centric activities?
  3. How do you measure performance and determine what needs improvement?

IBM Business Automation Workflow simplifies workflows for virtually any business style

IBM® Business Automation Workflow helps you easily and collaboratively discover new ways to automate and scale work by combining business process and case management capabilities. With these combined automation capabilities, you can:

  • Create and manage workflows from process models. 
  • Simplify complex tasks to reduce costs and execution time. 
  • Create content-initiated workflows, so an event triggers a workflow. 
  • Escape paper-heavy or spreadsheet-based workflow organization. 
  • Reconfigure workflows with minimal IT involvement for flexibility. 
  • Reuse workflow components when building parallel processes. 
  • Document actions, content and data to help prepare for audits. 
  • Use built-in reporting, auditing and governance to monitor real-time performance and compliance. 
  • Meet changing business needs with a component-based solution.
  • Gain access from the public cloud.
  • Quickly provision with immediate access.

Success story

Financial firm consolidates content for efficiency

A large fund administrator had content related to various accounts scattered across its data storage and administrative interfaces rather than readily accessible within each account. With the help of IBM Case Manager, now part of IBM Business Automation Workflow, the content was consolidated, and each account was simplified with the creation of a unified electronic record.

Your benefits

IBM Business Automation Workflow

IBM Business Automation Workflow helps you: 

  • Deliver better outcomes. 
  • Create and monitor competitive front- or back-office workflows.
  • Automate operations at scale. 
  • Create unified data records.
  • Improve knowledge-worker interactions. 
  • Improve the overall customer experience.
  • Better handle governance requirements. 
  • Improve the governance of workflows.

Deployment options

Choosing an environment 

Choose one or more of the following environments that best meet your needs:

  • Your cloud. Deploy and run the platform in the cloud of your choice with virtualized containers using Kubernetes or Terraform with IBM Business Automation for Multicloud (see the deployment sketch after this list).
  • IBM Cloud™. Get started quickly with the IBM Business Automation Workflow on Cloud SaaS offering, fully managed by IBM with flexible licenses. 
  • On premises. Gain access to on-premises workflow capabilities with Business Automation Workflow.
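
As a rough illustration of the "your cloud" option, the sketch below uses the Kubernetes Python client to stand up a containerized workflow component on an existing cluster. The image name, namespace, labels and resource sizes are placeholders for illustration only; an actual IBM Business Automation for Multicloud installation would use the product's own published images, charts and operators.

```python
# Hypothetical sketch: deploying a containerized workflow component on an
# existing Kubernetes cluster with the official Python client. The image,
# namespace and resource sizes are placeholders, not IBM-published values.
from kubernetes import client, config

def deploy_workflow_container() -> None:
    config.load_kube_config()          # read cluster credentials from ~/.kube/config
    apps = client.AppsV1Api()

    container = client.V1Container(
        name="workflow-server",                            # hypothetical name
        image="registry.example.com/baw/workflow:latest",  # placeholder image
        ports=[client.V1ContainerPort(container_port=9443)],
        resources=client.V1ResourceRequirements(requests={"cpu": "2", "memory": "4Gi"}),
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="workflow-server"),
        spec=client.V1DeploymentSpec(
            replicas=2,                                    # scale by changing the replica count
            selector=client.V1LabelSelector(match_labels={"app": "workflow"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "workflow"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="automation", body=deployment)

if __name__ == "__main__":
    deploy_workflow_container()
```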

The journey to automation

The scale you need to compete

The IBM intelligent automation platform extends human work with digital labor using one or more automation capabilities.

Start small with one capability, then mix and match capabilities as business needs evolve.

IBM Business Automation Workflow helps you: 

  • Reduce errors and activity time in workflows using robotic process automation (RPA) bots. 
  • Reduce workflow business-logic complexity and change rules-based decisions faster.
  • Improve case work productivity by increasing knowledge-worker understanding of unstructured content.

Eight great reasons to adopt hybrid cloud with IBM Power Systems

The advantages of adopting hybrid cloud with IBM Power Systems

IBM Power Systems plus IBM Cloud technology offers users a host of valuable business benefits, from scaling out rapidly to full transparency of costs and the ability to test and develop new projects without financial or operational risk 

There is no one-cloud-fits-all option. While the possibilities are endless, the cloud journey can be daunting for enterprises which have unique regulatory and data requirements, extensive IT investments in their on-premise infrastructure, and are currently running anywhere from five to 15 different mission-critical workloads. This is why businesses need to consider a hybrid cloud approach, which helps them to build, deploy and manage applications and data running on-premise, in private clouds and in public clouds. 

With a combination of innovative technology and industry expertise – underpinned with security and a focus on open solutions – IBM Cloud is already helping to move some of the world’s largest enterprises into the next chapter of their cloud journey. Now, users of IBM Power Systems can more easily take part in their own hybrid cloud journey with IBM Power Systems Virtual Servers on IBM Cloud. 

IBM Power Systems Virtual Servers on IBM Cloud deliver IBM Power9 virtual machines, with IBM AIX and IBM i, on the IBM Cloud public infrastructure-as-a-service platform. It’s the best of IBM Power and the best of IBM Cloud in one convenient, economical, self-managed, pay-as-you-use environment. 

There are significant business benefits driving this IBM Power Systems on IBM Cloud hybrid approach, which address the different challenges organisations face as they expand their IT infrastructure beyond on-premise to meet the demands of the digital economy.

1. A pathway to hybrid cloud

Users of IBM Power Systems have historically relied on in-house infrastructure for the raw performance of Power-based processors. But they have been held back from accessing higher levels of flexibility, agility and efficiency because of obstacles to on-premise growth, such as enormous capital outlay, management hassles and risk. IBM Power Systems Virtual Servers on IBM Cloud now offer an opportunity to realise those benefits, and to help ensure a more seamless and smoother path to cloud based on a hybrid cloud decision framework.

IBM Power Systems users can enjoy fast, self-service provisioning, flexible management both on-premise and off-premise, with access to IBM Cloud services. A pricing model based on pay-as-you-use billing provides full transparency of costs and ensures that organisations know exactly what they are paying for. 

Hybrid cloud offers many ancillary benefits – described below – but one that features heavily in any business case is the ability to scale up and out to meet demand quickly and economically.

“Organisations can turn on provisioning and get capacity instantly with faster time to value. It is about making IT proactive. It makes sense for organisations that want to modernise their applications to be better equipped for a hybrid multicloud environment,” says Meryl Veramonti, portfolio marketing manager for IBM Cloud. 

“IBM Power Systems users have relied on their on-premise infrastructure and their own datacentres and IT setups. They want to modernise and expand their cloud capabilities, but do not want to pay a huge upfront cost for full migration. Now they don’t have to, because they have a direct path.”

IBM continues to innovate with capabilities that lend themselves towards not only a hybrid cloud model but a hybrid multicloud environment model. With the latest tooling around IBM’s multicloud management, users are able to leverage this offering to develop apps once, and run them anywhere on an open platform architected for their choice of private and public clouds.

2. Modernising infrastructure while maintaining expertise

A proactive and modern IT infrastructure is within easy reach. The migration path to hybrid cloud offers IBM Power Systems users a cost-effective and efficient on-ramp to the cloud in a way that mitigates risk, because they don’t have to change the operating system or the environment with which they are familiar.

“They can keep the operating system and the operating environment and dip their toes in the cloud without a huge upfront cost because there is a pay-as-you-go model,” says Veramonti. “There is a head of steam for hybrid cloud because organisations are aware of the benefits, but they want to know exactly how it will work for them and whether they can still do what they do on-premise.” 

An organisation that does not want to shift its entire environment to the cloud, but does want to explore how cloud can benefit its business, is ideally served by IBM Cloud’s migration path to hybrid cloud.

This level of assurance that what works on-premise will work the same way in the cloud is key for IBM’s AIX and IBM i client base. 

“IBM Power Systems have a very specific infrastructure: the build is unique and moving or extending some of their environment to the IBM Cloud is just a way to build out from on-premise. It presents a similar environment to their home environment,” says José Rafael Paez, worldwide offering manager for IBM Systems.

3. Cost-effective, low-risk capacity

Organisations that have run into obstacles regarding capacity and wish to use cloud to expand without any costly upfront investment in more equipment or a huge upgrade programme can now reap the rewards. 

Choosing IBM Cloud makes sense for IBM Power Systems users that understandably want to avoid unnecessary risk in migrating critical IT infrastructure. 

“Historically, moving on-premise infrastructure to the cloud is not an easy switch and is a big learning curve,” says Paez. “The ease of transformation and the knowledge that the migration will work compared with choosing a competitor’s cloud platform, which can introduce risk, is a top business benefit for choosing IBM Cloud for risk-averse people.” 

Veramonti cites the example of a manufacturing company that didn’t want to spend more money on outdated on-premise equipment. “They wanted their infrastructure to work with the cloud to gain more power and memory without risking porting everything over to a new environment,” she says.

4. Effective, lower-cost maintenance

Another attractive proposition for the manufacturing firm in moving to hybrid cloud was the reduction in maintenance costs for workloads running in the IBM Cloud.

“The manufacturer was in charge of maintaining everything on-premise, but in the cloud IBM takes care of maintenance because it is all off-premise,” says Veramonti.

The ease of management, as well as the reduction in associated costs, means that an organisation can rechannel IT resources to focus on innovation rather than keeping the lights on. 

“Organisations that are completely deployed on-premise have to spend a lot of money on hardware, electricity, cooling and operations teams to keep information running with enterprise uptime. Clients using IBM Cloud’s tier-one datacentre will have access to IBM capability and backbone and a high-end infrastructure,” adds Paez.

As well as the flexibility that comes with the guaranteed high performance of IBM Cloud without the maintenance headache, organisations are assured that migration offers an opportunity to make their data more secure.

5. Security and business continuity

Many organisations are rightly concerned about security and business continuity in the digital age, when data, the lifeblood of any organisation, must be made available to the business 24/7 and be protected from outages, cyber attacks and compromises. There are clear business benefits from strengthening disaster recovery by moving to hybrid cloud, and IBM Power Systems users are keen to capitalise on the IBM Cloud for this reason.

“A cloud strategy for disaster recovery has minimal risk by ensuring two locations – one on-premise and one as backup in the cloud,” says Veramonti.

This is an important business advantage for all IBM Power Systems users, big and small. They can enhance business continuity planning and de-risk their on-premise environment.

“Organisations want geo-diversity and by deploying in IBM Cloud datacentres they can gain that diversity,” says Michael Daubman, worldwide offering manager for IBM Cloud Infrastructure Services, IBM Cloud and Cognitive Software.

Daubman points out that IBM Cloud has datacentres with the IBM Power Systems Virtual Server offering in the US, Germany, and soon in many other countries (including the UK in early 2020), which provides organisations with high availability and the opportunity to capitalise on a cloud-based disaster recovery strategy.

6. Development and testing on the latest technology

Another business benefit is that organisations gain access to up-to-date hardware technology, such as the latest IBM Power9 servers. If they want to develop and test software, this is an attractive proposition.

Developing and testing applications is fundamental for the future of any organisation and its ability to innovate. Understandably, many are hesitant about devoting finite on-premise resources to projects that have an inherent risk. A hybrid cloud strategy therefore makes economic sense. The business can use IBM Cloud to develop and test new projects without committing large-scale resources on something that is yet to be proven.

Organisations can get access to new hardware and can develop and test in a flexible cloud model. “Power10 will come quickly and when it does we’ll leverage it in the cloud,” says Daubman.

From a skills perspective, organisations can also benefit from the best minds behind IBM Cloud. “Who knows Power10 better than the people who build the hardware platform?” he adds.

Operational risk is reduced and organisations get access to best-practice architecture and the flexibility provided by IBM Cloud.

“Flexibility is assured because provisioning is managed through a set of application programming interfaces and there is no need to buy and drop in new hardware,” explains Daubman.

The choice of cloud provider is critical to the success of any business pursuing a hybrid cloud strategy; IBM Power Systems users can be reassured that by using their existing relationship with IBM, they have a quality cloud provider with a global reach of datacentres, skills and network.

7. Transparent pay-as-you-go pricing

Risk mitigation in IBM Cloud extends to an openness regarding price. 

“IBM Power Systems on IBM Cloud has transparent pricing. There is no risk or upfront cost. It is 100% owned and operated by IBM Cloud. We have global datacentres across the cloud and data never leaves our hands,” says Veramonti.

Pricing models are not only transparent, but can be customised for individual organisations to suit their specific needs.

“Organisations are charged hourly and billed monthly. They can turn on or turn off cloud resources depending on their needs – for example, Black Friday for a retailer or a seasonal spike in demand for Christmas where there is an influx of data that requires backup. The applications can then be turned off after the holiday season,” says Veramonti.

In the digital economy, this level of flexibility is an especially attractive business benefit. “You do not have to pre-buy capacity for your peak, which makes sense from a cost, management and operational perspective,” says Daubman.
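
To make the pay-as-you-use arithmetic concrete, the short calculation below compares a month containing a two-week seasonal spike against pre-buying peak capacity for the whole month. The core counts and the hourly rate are invented for illustration and are not IBM Cloud list prices.

```python
# Illustrative arithmetic only: the hourly rate below is an invented placeholder,
# not an IBM Cloud list price. The point is the shape of pay-as-you-use billing,
# where extra cores are paid for only while they are switched on.
BASELINE_CORES = 8
PEAK_CORES = 16              # doubled for a Black Friday / holiday spike
HOURLY_RATE_PER_CORE = 0.75  # hypothetical rate, USD per core-hour
HOURS_IN_MONTH = 730
PEAK_HOURS = 14 * 24         # two-week seasonal spike

baseline_cost = BASELINE_CORES * HOURLY_RATE_PER_CORE * HOURS_IN_MONTH
spike_cost = (PEAK_CORES - BASELINE_CORES) * HOURLY_RATE_PER_CORE * PEAK_HOURS
pay_as_you_use_total = baseline_cost + spike_cost

# Pre-buying for the peak means paying for the extra cores all month long.
pre_bought_total = PEAK_CORES * HOURLY_RATE_PER_CORE * HOURS_IN_MONTH

print(f"Pay-as-you-use month:     ${pay_as_you_use_total:,.2f}")
print(f"Pre-bought peak capacity: ${pre_bought_total:,.2f}")
```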

8. Support from IBM Cloud

Demand for skilled IT professionals is increasing as business becomes more data driven. By adopting a hybrid cloud strategy, IBM Power Systems users can access IBM Cloud’s skilled teams around the globe, as well as state-of-the-art technology to meet all their needs – from scaling out quickly to managing costs, and being able to test and develop without exposure to financial and operational risk.

These benefits make a compelling case for IBM Power Systems users to adopt a hybrid cloud strategy to future-proof their business. Increasingly, keeping all systems on-premise is becoming a business risk. 

Migrating to IBM Cloud makes sense for IBM clients that want to:

  • Grow their business;
  • Deploy workloads where and when they want them in an IBM Cloud datacentre; 
  • Deploy a resilient, cloud-based disaster recovery strategy;
  • Choose a deployment that is fully customisable;
  • Adopt cloud services to gain access to all the skills, services and added value that the global IBM Cloud network can provide.

Three core scenarios for migrating IBM Power Systems workloads to the IBM Cloud

Core scenarios driving the migration of workloads to the IBM Cloud

There are strong business cases for users of IBM Power Systems to make the move to the cloud, especially regarding business continuity and disaster recovery provision, testing and development, and application modernisation

There is an increasingly compelling business case for organisations to leverage the public cloud for a hybrid environment. For IBM Power Systems users, that path has become even more attractive since the launch of IBM Power Systems Virtual Servers on IBM Cloud, offering a route to run IBM AIX and IBM i workloads easily in the cloud that is cost effective, efficient and low risk.

When considering why and how to migrate, organisations must look at the opportunities and the practicalities of implementation.

Why migrate to IBM Cloud?

For IBM Power Systems clients that have typically relied on a wholly on-premise infrastructure, IBM Power Systems Virtual Servers on IBM Cloud provides a fast and reliable method for spinning up resources in the public cloud. With a pricing model that avoids capital expenditure, it is easy to scale out rapidly, while paying only for what you use is an attractive proposition for organisations that want to test, develop and flexibly grow infrastructure utilisation without having to buy new equipment.

IBM Power Systems Virtual Servers deliver IBM AIX or IBM i with IBM Power9 processor-based virtual machines on IBM Cloud. It is a multi-tenant, self-managed Power-as-a-service offering in IBM Cloud with consumption-based operational expenditure pricing.

IBM Cloud Virtual Server environments deliver full infrastructure-as-a-service capabilities. For IBM Power Systems on IBM Cloud instances, organisations are billed for hourly metering in a pay-as-you-use subscription model. Clients receive self-service virtual server lifecycle management with a pool of compute, memory, storage and network infrastructure. Organisations access the cloud through client-owned IBM Cloud resources and bring their own operating system (OS) images or leverage available OS images.
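
A minimal sketch of what that self-service lifecycle looks like is shown below: one call creates an instance and starts hourly metering, another deletes it and stops the charges. The endpoint path, payload fields and token handling are hypothetical placeholders rather than the documented IBM Power Systems Virtual Server API.

```python
# Hypothetical sketch of self-service provisioning through a REST API.
# The base URL, endpoint path and payload fields below are illustrative
# placeholders, not the documented IBM Power Systems Virtual Server API.
import requests

API_BASE = "https://cloud.example.com/power-virtual-servers/v1"  # placeholder URL
API_TOKEN = "..."  # obtained from your cloud identity service

def create_instance(name: str, cores: float, memory_gb: int, image_id: str) -> dict:
    """Request a new AIX or IBM i virtual server instance (illustrative only)."""
    payload = {
        "serverName": name,
        "processors": cores,     # fractional cores, metered per hour of use
        "memory": memory_gb,
        "imageID": image_id,     # a stock image or a client-supplied OS image
    }
    resp = requests.post(
        f"{API_BASE}/instances",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def delete_instance(instance_id: str) -> None:
    """De-provision the instance so hourly metering stops (illustrative only)."""
    requests.delete(
        f"{API_BASE}/instances/{instance_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    ).raise_for_status()
```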

A further advantage comes for organisations with limited internal skills and resources looking to explore a top-tier hybrid cloud. IBM Cloud manages and supports all the state-of-the-art infrastructure layers up to the operating system, which gives clients the peace of mind that their data and business continuity are in safe hands. 

By examining three of the main use cases driving migration – disaster recovery, software development and testing, and production application hosting – organisations can work with IBM Cloud to employ the latest best practices for a successful project.

IBM Cloud for disaster recovery

One IBM client, a furniture retailer based in Florida, decided to migrate to IBM Cloud to boost its business continuity and disaster recovery capability.

“The company was being hit more and more by weather events and needed to strengthen disaster recovery,” says Michael Daubman, worldwide offering manager for IBM Cloud Infrastructure Services, IBM Cloud and Cognitive Software.

By choosing IBM Power Systems Virtual Servers on IBM Cloud, the retailer did not have to purchase an additional data centre to supplement its on-premise deployment, and gained the agility of a public cloud within the controlled and secure environment of a private cloud.

“This offering was designed in the cloud exactly to the best-practice standards of our clients’ on-premise infrastructure,” says Daubman.

The cloud architecture solution was set up with fibre-attached storage, a dual virtual I/O server (VIOS) system for virtual storage redundancy, PowerVM as the hypervisor, and DB2 data management products.

“It was super important to have an enterprise solution,” says Daubman. 

Provisioning out onto the cloud meant the retailer could scale up and grow an OS image, paying only for what it needs as it grows. The architecture natively leverages Live Partition Mobility to avoid outages, moving AIX and IBM i workloads from one system to another as required, maintaining a highly available solution.

Daubman highlights how, by taking the required best-practice on-premise architecture and replicating it in the cloud, the retailer was given peace of mind, and the knowledge that all its enterprise software would remain fully supported. “The solution is a cloud-consumable version of the industry best practice for on-premise systems. It is an architecture for production enterprise applications,” he says.

The two critical components of the implementation are leveraging the PowerVM hypervisor to provide a secure and scalable virtualisation environment for AIX and IBM i workloads, and providing fibre-attached enterprise-scale IBM Cloud storage.

Daubman points out that network-attached storage is very common in cloud deployments, but it introduces latency, so the retailer required a different solution for its enterprise Power workloads. The fact that many enterprise software providers make support for their applications conditional on similar direct-attached storage was a huge positive factor for the furniture retailer’s implementation.

“Being able to run software in a supported capacity in the cloud is critical. Fibre-attached storage improves performance, and for a lot of software vendors, it is a requirement,” he says.

IBM Power Systems users can be assured that mission-critical applications are protected and future-proofed with IBM hybrid cloud. Data is copied to the cloud and can be accessed by users around the globe.

“Data can be secured faster and distributed faster. IBM Cloud offers resilience in the cloud, and organisations no longer have to add another datacentre in their on-premise environment. They can meet or exceed their investment in recovery time objective and recovery point objective for their disaster recovery plan,” says Meryl Veramonti, portfolio marketing manager for IBM Cloud. 

Disaster recovery might be the initial business case for adopting cloud, but according to Daubman, it often leads to greater uptake for other uses. “Disaster recovery is often a first step in the journey for a client,” he says.

He points to the fact that the retailer’s intention is to build out for production deployment in the cloud and to link with disaster recovery, becoming a more cloud-focused business.

IBM Cloud for development and testing

Organisations that want to migrate IBM Power Systems workloads to IBM Cloud for software development and testing now have an easy route to implementation because they can turn on and switch off resources quickly, which provides flexibility and makes economic sense.

They can gain enterprise systems as a service for fast, low-risk development and test on the latest IBM Power Systems platforms.

“Our offering allows development teams to test new workloads in the cloud. They can provision an instance and turn it off without thinking about nuances and worries. They just spin into the cloud and payment is metered by the hour. It is very affordable testing,” says José Rafael Paez, worldwide offering manager for IBM Systems.

According to Paez, the biggest headache for an organisation around testing in an on-premise environment is caused by the limited capacity available. They will need a certain amount of capacity for development and testing, but often cannot share capacity with the mission-critical workloads that run the business and take priority. For this reason, development and testing are often sectioned off, which comes at a cost.

“Internal management of assets often goes back and forth, with teams trying to achieve just enough capacity for testing,” says Paez.

Access to a sandbox environment in the IBM Cloud to test new software takes these worries away, and also provides links to the IBM Cloud marketplace and applications.

“A common trait of IBM Power Systems clients is that they are risk-averse. They won’t upgrade to the latest version of AIX unless they need to because they don’t want to mess up mission-critical applications. By providing a sandbox testing environment, they can test new versions of OS and new IBM Power Systems boxes in a safe place in the cloud,” says Paez. “They have a separate space for something the company considers risky, which offers a roadmap into an upgrade. They can test new versions of the AIX operating system and the hardware and add new applications from the cloud marketplace in a safe place.”

The temporary sandbox environment for testing, and its use as a step towards deploying production applications on IBM Power Systems Virtual Servers on IBM Cloud, meets the needs of risk-averse clients who want a remote environment away from critical workloads to test updates and changes. The flexible consumption model is cost-efficient and a stress-free way to evaluate, plan and test next-generation hardware or a new version of the operating system.

With a dedicated link to on-premise connectivity, and IBM Cloud Object Storage providing optional backup and custom image hosting, organisations can have peace of mind that testing and developing on IBM Power Systems Virtual Servers on IBM Cloud is the right move. As well as being able to test hardware before a major refresh, such as Power9, and test complex architecture changes, it also offers an initial step into an organisation’s hybrid cloud journey.

IBM Cloud for hosting production applications

Using IBM Cloud for AIX and IBM i production application hosting is the third major use case where organisations can leverage the flexibility of the cloud to deploy core business applications.

Organisations can run an enterprise-level workload in the IBM Cloud if they want to modernise their IT estate in a risk-averse manner.

“If they run into obstacles over capacity, it can help without having to invest in an on-premise upgrade,” says Veramonti.

Daubman says IBM Cloud gives IBM Power Systems users access to the latest hardware, such as Power9 processor-based servers, and allows IBM Cloud to take over datacentre management below the operating system, for which many organisations do not have the skills.

The implementation process gives users the ability to have load-balancing capability as part of the architecture in IBM Cloud and to pursue a hybrid approach to IT. Organisations can burst capacity into IBM Cloud and not have to worry about management overheads.

“It gives organisations the flexibility between a concrete on-premise infrastructure and a flexible cloud,” says Paez.

He says the hybrid connection with the on-premise environment gives organisations a new level of management they may not be accustomed to.

“A positive experience of hybrid cloud with production application hosting pushes a lot of clients to pursue cloud,” says Paez.

Organisations can manage applications in whichever environment they want with IBM’s multicloud manager.

“A simple demonstration proves that if you have a cloud and on-premise environment, you can move workloads from one environment into the other,” says Paez.

By gaining experience of how IBM Power Systems Virtual Servers on IBM Cloud works, any preconceptions about blocks and barriers associated with multiple environments are removed, and organisations are encouraged to expand and develop their hybrid cloud use.

The ability to add additional capacity on the fly is particularly appealing to organisations that need to respond to a volatile and competitive landscape.

“They want to be able to cope with an influx of usage caused by seasonal spikes, new products, testing a new application and wanting to play around with that application, without the risk of doing it in a real-world scenario,” says Veramonti.

By increasing their cloud portfolio, clients can modernise legacy workloads and gain the reassurance of being able to access the latest IBM Cloud technologies and skills.

“Many organisations are challenged with skills and resources on-premise, and they are using the cloud more and more,” says Daubman.

IBM Cloud for flexible, transparent pricing

Another business bonus for IBM Power Systems users migrating to IBM Cloud is lower licence payments. Daubman highlights how licensing for the operating system is based on the exact resources you need at the time.

“You are not paying for licences for the whole machine – only what you need at a point in time. Operational expenses are reduced because you are not licensing a machine. It is a virtual machine and you pay based on the processing power you are using,” he says.

Billing transparency allows organisations to budget and plan effectively. In the digital economy, where responsiveness is a prerequisite to success, being able to scale out into the cloud and subsequently de-provision instantly can save significant costs.

“Billing transparency lets organisations look and plan ahead. You don’t have to plan for all the resources you need today. You can double cores in November for Black Friday, and you don’t have to worry about having enough staff on-premise and calling people in during the holiday season,” says Daubman.

A clear path to the IBM Cloud

IBM Power Systems users now have a clear path to the cloud with the introduction of IBM Power Systems Virtual Servers on IBM Cloud. There are strong business cases to make the move, especially for business continuity and disaster recovery provision; testing and development; and application modernisation. These starting points can be used to explore further how an IBM Cloud hybrid focus can strategically help an organisation on its journey to digital transformation.

IBM Cloud’s global geo-diversity and expertise, with a guarantee of security and compliance in an end-to-end approach for the enterprise, are reassuring for IBM Power Systems users. Reliable and continuous security is provided for the client’s environment, and IBM Cloud provides support, management and delivery across the complete cloud environment, using IBM expertise and proven technology. 

Reliability, performance and affordability give peace of mind to enterprises that are considering hybrid cloud. An organisation opting for IBM Power Systems Virtual Servers on IBM Cloud will soon discover how cloud can support its strategic direction towards a digital future.

Ovum Decision Matrix: Selecting a Cloud Platform for Hybrid Integration Vendor

Catalyst 

Digital business is driving a proliferation of applications, services, data stores, and APIs that need to be connected to deliver critical business processes. Integration is the lifeblood of today’s digital economy, and middleware is the software layer connecting different applications, services, devices, data sources, and business entities. This Ovum Decision Matrix (ODM) is a comprehensive evaluation to help enterprise IT leaders, including chief information officers (CIOs), enterprise/integration architects, integration competency center (ICC)/integration center of excellence (CoE) directors, and digital transformation leaders select a cloud platform provider best suited to their specific hybrid integration requirements. 

Ovum view

Ovum’s ICT Enterprise Insights 2018/19 survey results indicate a strong inclination on the part of IT leaders to invest in integration infrastructure modernization, including the adoption of new integration platforms. IT continues to struggle to meet new application and data integration requirements driven by digitalization and changing customer expectations. Line-of-business (LOB) leaders are no longer willing to wait for months for the delivery of integration capabilities that are mission-critical for specific business initiatives. Furthermore, integration competency centers (ICCs) or integration centers of excellence are being pushed hard to look for alternatives that significantly reduce time to value without prolonged procurement cycles.

Against a background of changing digital business requirements, IT leaders need to focus on revamping enterprise integration strategy, which invariably will involve the adoption of cloud platforms for hybrid integration, offering deployment and operational flexibility and greater agility at a lower cost of ownership to meet multifaceted hybrid integration requirements. With this report, Ovum is changing its nomenclature for defining middleware-as-a-service (MWaaS) suites for hybrid integration and, in future, we will be using the term “cloud platforms (or PaaS products) for hybrid integration” to refer to this market.

We follow the specification of the National Institute of Standards and Technology (NIST) for PaaS, according to which PaaS as a cloud service model should meet a range of characteristics, including: 

  • on-demand self-service  
  • broad network access 
  • resource pooling  
  • rapid elasticity 
  • measured service.  

Merely delivering application and/or data integration capabilities via the cloud on a subscription basis does not amount to a PaaS provision for hybrid integration. Some cloud platforms or software components of a cloud platform included in this ODM might not be termed as PaaS according to NIST’s specification, which is why we use the term “cloud platform”.

User productivity tools and deployment flexibility are key characteristics of cloud platforms for hybrid integration that help enterprises respond more quickly to evolving digital business requirements. With DevOps practices, microservices, and containerized applications gaining popularity, IT leaders should evaluate the option of deploying middleware (integration platforms) on software containers as a means to drive operational agility and deployment flexibility. 

Key findings 

  • Integration is still predominantly done by IT practitioners, but IT leaders should consider “ease of use” for both integration practitioners and less-skilled, non-technical users, such as power users, when selecting integration platforms for a range of hybrid integration use cases.
  • The latest Ovum forecast reveals that integration PaaS (iPaaS) and API platform market segments are expected to grow at a compound annual growth rate (CAGR) of 59.7% and 61.7% respectively between 2018 and 2023, clearly the fastest growing middleware/PaaS market segments. 
  • The global iPaaS market is showing signs of saturation (not in terms of growth), and vendor offerings do not differ much in terms of technical capabilities. Key areas for iPaaS product development include support for deployment on containers, improvement in user experience (UX) for less-skilled, non-technical users, and machine learning (ML)-led automation of different stages of integration projects ranging from design and development to deployment and maintenance.  
  • PaaS for hybrid integration will significantly cannibalize the established on-premises middleware market, and by the end of 2019, Ovum expects at least 50% of the new spend (not including upgrades of on-premises middleware or renewal of similar licenses) on middleware to be accounted for by PaaS or cloud-based integration services. 
  • Major middleware and leading iPaaS vendors dominate this market, even though their routes to the development of a cloud platform for hybrid integration can be quite different. 
  • PaaS adoption in enterprises is for both strategic and tactical hybrid integration initiatives. IT leaders realize the significant benefits that cloud platforms for hybrid integration bring to the table in terms of greater agility in responding to business requirements and cost savings. 
  •  iPaaS vendors have invested significantly in developing lightweight PaaS-style products for B2B/electronic data interchange (EDI) integration to support key EDI messaging standards, rapid trading partner onboarding and community management, and governance of B2B processes. 

Vendor solution selection 

Inclusion criteria

Ovum has closely tracked the emerging cloud platforms for hybrid integration vendor landscape over the last four years and we have used these observations as the baseline for inclusion/exclusion in this ODM. The criteria for inclusion of a vendor in this ODM are as follows:

  • The cloud platform(s) should deliver significant capabilities across two of the three technology assessment criteria groups: “cloud integration”; “API platform”; and “B2B and mobile application/backend integration”. 
  • There is substantial evidence that the vendor is interested in pursuing a progressive product strategy that helps ascertain product viability and applicability to a range of hybrid integration use cases. 
  • Middleware products are not “cloud washed” and individual components demonstrate essential cloud services characteristics, such as multitenancy, resource sharing, and rapid scalability, as well as allowing usage tracking and metering and supporting the enforcement of service-level agreements (SLAs).
  • The cloud platform(s) should have been generally available as of March 30, 2019. The vendor must have at least 50 enterprise (paid) customers using various components as of May 31, 2019. We did not want to leave out any vendor because of limitations related to significant revenue realization.
  • It should deliver enterprise-grade developer enablement and API-led integration capabilities, and an appropriate UX for less-skilled users (non-developers). 
  • At least the core middleware product should be architecturally coherent and product/component APIs should be available to support internal integration between different components of the middleware stack.

Exclusion criteria

A vendor is not included in this ODM if: 

  • The core middleware component provided by the vendor is restricted to API management, and the rest of the capabilities are delivered in partnership with other vendors. For this reason, specialized API management vendors that do not offer any substantial capabilities for other hybrid integration use cases were excluded from this ODM. This means that cloud based application and data integration capabilities are critical for inclusion in this ODM. 
  • The vendor is unable to commit required time and resources for the development of research to be included in this ODM. Some vendors, which otherwise would qualify for inclusion in this ODM, opted out of the evaluation exercise and were unable to submit the required level of information in response to the evaluation criteria spreadsheet by the cutoff date. (Jitterbit is the only vendor that qualified for inclusion but opted not to participate without citing any specific reason, and we decided to exclude it from this ODM). 
  • There is not enough evidence that the vendor is interested in expanding the features and capabilities to cater for the requirements of emerging use cases and exploiting new market trends.
  • There are indications that the vendor is struggling to grow its business and has partnered with middleware vendors to defend its position in the market, or the customer base is confined to only specific regions. 
  • The vendor did not feature in any of the analyst enquiries from enterprise IT leaders and users, and there were other indicators for a lack of investment and a dedicated strategy for middleware products. 

Ovum ratings

Market leader

This category represents a leading vendor that Ovum believes is worthy of a place on most technology selection shortlists. The vendor has established a commanding market position with its cloud platform for hybrid integration, demonstrating relatively high maturity, cohesiveness, good innovation and enterprise fit, and the capability to meet the requirements of a wider range of hybrid integration use cases, as well as executing an aggressive product roadmap and commercial strategy to drive enterprise adoption and business growth. In terms of scores, to be a leader in this ODM, a vendor must score 8 out of 10 both on “technology” and “execution and market impact” assessment dimensions.

Market challenger

A cloud platform for hybrid integration vendor in this category has a good market position and offers competitive functionality and a good price/performance proposition and should be considered as part of the technology selection. The vendor has established a significant customer base, with its platform demonstrating substantial maturity, catering for the requirements of a range of hybrid integration use cases, as well as continuing to execute a progressive product and commercial strategy. Some vendors included in this category are “strong performers” in terms of technology assessment but did not achieve consistently high or good scores for the “execution and market impact” dimension, which is an essential requirement for achieving a “market leader” rating.

Market follower

A cloud platform for hybrid integration in this category is typically aimed at specific hybrid integration use cases and/or customer segment(s) and can be explored as part of the technology selection. It can deliver the requisite features and capabilities at a reasonable cost for specific use cases or requirements. This ODM does not feature any vendor in this category. 

Market and solution analysis 

A major market shift has begun and will not slow down 

Hybrid integration platform

Ovum defines a hybrid integration platform as a cohesive set of integration software (middleware) products that enable users to develop, secure, and govern integration flows connecting diverse applications, systems, services, and data stores as well as enabling rapid API creation/composition and lifecycle management to meet the requirements of a range of hybrid integration use cases. A hybrid integration platform is “deployment-model-agnostic” in terms of delivering requisite integration capabilities, be it on-premises and cloud deployments or containerized middleware.

The key characteristics of a hybrid integration platform include: 

  • support for a range of application, service, and data integration use cases, with an API-led, agile approach to integration reducing development effort and costs 
  • uniformity in UX across different integration products/use cases and for a specific user persona  
  • uniformity in underlying infrastructure resources and enabling technologies 
  • flexible integration at a product/component API level 
  • self-service capabilities for enabling less-skilled/non-technical users
  • the flexibility to rapidly provision various combinations of cloud-based integration services based on specific requirements 
  • openness to federation with external, traditional on-premises middleware platforms 
  • support for embedding integration capabilities (via APIs) into a range of applications/solutions  
  • developer productivity tools, such as a “drag-and-drop” approach to integration-flow development and pre-built connectors and templates, and their extension to a broader set of integration capabilities 
  • flexible deployment options such as on-premises deployment, public/private/hybrid cloud deployment, and containerization 
  • centralization of administration and governance capabilities. 

For the purpose of this ODM, we are concerned only with cloud platforms (or PaaS products) for hybrid integration. A comprehensive PaaS suite (see Figure 1) combines iPaaS, apiPaaS, mobile back-end-as-a-service (MBaaS), and other cloud-based integration services such as data-centric PaaS and cloud-based B2B integration services to offer the wide-ranging hybrid integration capabilities required to support digital business.

These individual cloud-based integration services are offered on a subscription basis, with each component having essential cloud characteristics, such as multitenancy, resource sharing, and rapid scalability. The success of iPaaS as an agile approach to hybrid integration has played a key role in the evolution of this market. For enterprises, PaaS products for hybrid integration represent a good opportunity to shift from legacy middleware platforms that require significant upgrades and investment to remain relevant in the current operating environment. Table 1 provides the iPaaS and API platform software market forecast for the period 2018–23.

 

Deployment of middleware on software containers is in the early stages and event-driven integration is gaining ground

Cloud-native integration is a natural fit for hybrid IT environments

It is obvious that hybrid IT environments call for a cloud-native integration paradigm that readily supports DevOps practices and drives operational agility by reducing the burden associated with cluster management, scaling, and availability. In a cloud-native integration paradigm, integration runtimes run on software containers, are continuous integration and continuous delivery and deployment (CI/CD)-ready, and are lightweight and responsive enough to start and stop within a few seconds. Many enterprises have made substantial progress in containerizing applications to benefit from a microservices architecture and portability across public, private, and hybrid cloud environments. Containerized applications and containerized middleware represent a good combination. In cases where an application and a runtime are packaged and deployed together, developers can benefit from container portability and the ease of use offered by the application and middleware combination. 

In other words, it makes sense for applications and middleware to share a common architecture, because DevOps teams can then avoid the overhead and complexity associated with running containerized applications on different hardware and following different processes from those used with traditional middleware. This is true even in cases that do not involve much rearchitecting of the applications, and DevOps teams can still develop and deploy faster using fewer resources.

A lot of data is generated in the form of streams of events, with publishers creating events and subscribers consuming these events in different ways via different means. Event-driven applications can deliver better customer experiences. For example, this could be in the form of adding context to ML models to obtain real-time recommendations that evolve continually to meet the requirements of a specific use case. Embedding real-time intelligence into applications and real-time reactions or responsiveness to events are key capabilities in this regard.

For distributed applications using microservices, developers can opt for asynchronous event-driven integration in addition to the use of synchronous integration and APIs. Apache Kafka, an open source stream-processing platform, is a good option for such use cases requiring high throughput and scalability. Kubernetes can be used as a scalable platform for hosting Apache Kafka applications. Because Apache Kafka reduces the need for point-to-point integration for data sharing, it can reduce latency to only a few milliseconds, enabling faster delivery of data to users. 
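
A minimal sketch of this event-driven pattern, assuming a reachable Kafka broker and using the open source kafka-python client, is shown below; the broker address, topic name and consumer group are illustrative assumptions.

```python
# Minimal event-driven integration sketch with Apache Kafka, using the
# open source kafka-python client. The broker address, topic and consumer
# group are assumptions for illustration; any Kafka-compatible endpoint works.
import json
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"   # assumed broker address
TOPIC = "order-events"      # hypothetical topic name

# Publisher side: an application emits an event once, instead of calling each
# downstream system through point-to-point integrations.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)
producer.send(TOPIC, {"orderId": "A-1001", "status": "CREATED"})
producer.flush()

# Subscriber side: each interested service consumes the same stream at its own
# pace, which is what decouples event producers from event consumers.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    group_id="fulfilment-service",   # hypothetical consumer group
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    print(f"Received event: {message.value}")
    break  # stop after one event in this sketch
```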

Ovum Decision Matrix: Cloud platforms for hybrid integration, 2019–20 

The ODM chart in Figure 2 represents the results of a comprehensive evaluation of 11 cloud platforms for hybrid integration vendors meeting the inclusion criteria. The bubble size representing vendor positioning is determined by the scores achieved for the “market impact” criteria group under the “execution and market impact assessment” dimension. Table 2 provides a list of market leaders and challengers in alphabetical order (not in terms of scores), and subsequent sections also follow this practice.

Vendor analysis

Axway Ovum SWOT assessment

Strengths

Axway AMPLIFY platform offers a good foundation for hybrid integration use cases 

Axway has well-established credentials for API management and B2B integration use cases, as evident from the high scores for the “API platform” and “B2B and mobile app/backend integration” criteria groups under the technology assessment dimension. The acquisition and subsequent integration of Appcelerator enabled Axway to cater for mobile application development and back-end integration use cases.

Axway uses an OEM partnership with Cloud Elements to extend its platform’s existing API-led integration capabilities, or more specifically, “Elements” that provide access to an entire category of applications, such as messaging, customer relationship management (CRM), e-commerce, finance, marketing, and document management, via integration to a single API. Both vendors espouse an API-led approach to integration and so there is synergy here. Axway has executed a progressive product strategy and forged partnerships with several ISVs, such as Cloud Elements, Stoplight.io, Entrust, and RestLet (acquired by Talend) to drive adoption.

Transformation in product strategy came at just the right time

With the AMPLIFY platform, Axway transformed its product strategy and directed investment to offer a unified platform that enables users to develop new digital business applications/services and to subsequently integrate them with other applications/services and data stores. This enables users to rapidly connect and share data with trading partners, derive actionable insights to optimize corresponding engagements, and monetize enterprise data assets. The AMPLIFY platform marked Axway’s shift from a vendor providing a suite of integration, security, and operational intelligence and analytics products to a vendor offering a cohesive, cloud-based hybrid integration platform, which can now support key hybrid integration use cases. This shift is starting to show good results for Axway, which claims that about 24,000 active organizations are using the AMPLIFY platform, a smaller share of which are paid customers. If it was only about technology assessment, Axway would qualify as a leader. However, it narrowly missed out on scoring the required 8 out of 10 for the “execution and market impact” assessment dimension, a key criterion to be rated an overall leader in this ODM.

Weaknesses

Specific gaps exist in its iPaaS capabilities and it needs to improve brand recognition in the cloud-based hybrid integration platform market

Parts of Axway’s iPaaS are currently limited to Europe and US data centers, while the platform’s virtual private cloud (VPC) customers are deployed and available in all key regions (the US, EU, and Asia-Pacific). Native integration with blockchains and key RPA tools is missing. ML-based automation is a work in progress, but Axway plans to offer automation for data mapping. These are important areas for improvement as far as Axway’s iPaaS is concerned, because many of Axway’s key competitors are already offering these features and capabilities. 

Axway featured in a few Ovum conversations with enterprise IT leaders over the last couple of years. Its revenue from and customer base for the AMPLIFY platform are significantly lower (considering Axway’s overall size and the time since the general availability of the AMPLIFY platform) than those of several key vendors in this market. However, the company is part way through transitioning from a licensing to a subscription model. This is affecting Axway’s top-line revenue but is a strategy for the longer term. The vendor expects that a return to top-line growth will be evident by the end of 2020.

Axway must focus on investing in marketing and effective evangelism to increase the visibility and raise the profile of its AMPLIFY platform, although the vendor says it is seeing significant growth quarter-on-quarter. It is worth noting that Axway’s Catalyst team, comprising experts in digital transformation and API-led innovation, can help enterprises realize positive outcomes from digital transformation and integration modernization initiatives. The corresponding business strategy would benefit from a keen focus on winning net new deals involving a range of hybrid integration use cases, and Axway has achieved some recent success in this regard. 

Boomi Ovum SWOT assessment

Strengths

Leading iPaaS vendor with growing hybrid integration capabilities

Boomi, a Dell Technologies Business, achieved the highest score for the “cloud integration/iPaaS” criteria group under the technology assessment dimension. It has well-established credentials in the global iPaaS market, with thousands of large and midsize enterprises as customers. Boomi has expanded the capabilities of its iPaaS to support a range of hybrid integration requirements beyond on-premises and SaaS applications and data integration. Boomi’s iPaaS caters to the requirements of two key user personas: developers/integration practitioners and less-skilled, non-technical users. Boomi recently introduced the Boomi API gateway and developer portal to enable secure and scalable interactions with external parties, enhance API discoverability, and drive engagement across a broader API consumer base. 

There is a faster deployment option for Boomi iPaaS for Pivotal Kubernetes and Pivotal Application Services environments (PKS/PAS) available from the Pivotal Cloud Foundry marketplace. Boomi Enterprise Innovation Services and Architectural Services provide a package of integration services, advice from architecture experts, and support and resources, with the flexibility to customize to specific customer needs. Boomi provides a cloud-managed B2B/EDI integrated service as part of its unified platform. Users can build, deploy, and manage both traditional EDI and newer web services in the cloud. To simplify the configuration of trading partner profiles and B2B processes, Boomi provides a “configuration-based” platform that eliminates the cost and complexity of writing code. It is impressive to see how Boomi’s integration platform has expanded from iPaaS and API-led integration to cover B2B/EDI integration and simple file transfer use cases.

Good feature-price performance and early mover in exploiting ML for automation 

On a comparative basis, Boomi offers good feature-price value for enterprises of all sizes. This is evident from its joint highest score for the “scalability and enterprise fit” criteria group under the execution and market impact assessment dimension.

Boomi uses ML in the form of Boomi Suggest, Boomi Resolve, and Boomi Assure. It was arguably the first mover in the iPaaS market to start delivering ML-based automation. The Boomi Suggest feature uses millions of indexed mappings to offer automatic recommendations on mappings for new integrations based on successful configurations developed by other users in the past. Boomi also uses crowdsourced contributions from its support team and user community to offer resolutions to common errors within the iPaaS UI. Boomi Suggest offers mapping suggestions with “confidence rankings”, data transformation, and error resolutions via correlations to simplify integration-flow development. 

Weaknesses

Needs to address gaps in the features of Boomi API Management 

Boomi API Management was developed as an extension of the Boomi AtomSphere Platform to cater to the needs of existing users. Since then, the product has expanded in terms of key features and capabilities, and 2018 was a year of major advances in the capabilities of Boomi API Management. However, some of Boomi’s nearest competitors in the iPaaS market have more mature and well-established API platform capabilities. 

Over the last couple of years, there has been a slight decoupling (from the core iPaaS product) and a dedicated product roadmap and strategy for Boomi API Management. Areas for improvement include support for GraphQL and gRPC standards, greater coverage in performance monitoring reports on key metrics, automated failover for high availability and reliability, better support for the Node.js framework, and more sophisticated API deprecation and retirement processes. Boomi is capable of filling these gaps and developing this as a leading API platform, and recent announcements indicate that this is a key priority for Boomi’s product and business management. 

IBM Ovum SWOT assessment 

Strengths

Well-rounded offering catering to the requirements of key hybrid integration use cases 

IBM achieved consistently high scores across the various criteria groups under the technology and execution and market impact assessment dimensions. The IBM Cloud Pak for Integration caters to a range of hybrid integration requirements, including on-premises and SaaS application and data integration, rapid API creation/composition and lifecycle management, API security and API monetization, messaging, event streaming, and high-speed transfer. With IBM Cloud Pak for Integration’s container-based architecture, users have the flexibility to deploy in any environment with Kubernetes infrastructure, as well as to use a self-service approach to integration. IBM is extending its integration platform’s API capabilities to provide support for GraphQL management, and this approach decouples GraphQL management from GraphQL server implementation. IBM Sterling B2B Integration Services and IBM Mobile Foundation cater to the requirements of B2B/EDI integration and mobile application/back-end integration respectively.

The only vendor that can function as a true strategic partner for enterprises embarking on integration modernization initiatives 

IBM’s Agile integration methodology focuses on delivering business agility as part of integration modernization initiatives. It espouses the transitioning of integration ownership from centralized integration teams to application teams, as supported by the operational consistency achieved via containerization. On the operational agility side, cloud-native infrastructure offers dynamic scalability and resilience. For large enterprises embarking on integration modernization initiatives, this methodology can cater to people, processes, and technology aspects to provide the necessary advice and guidance to help enterprises achieve faster time to value across diverse deployment environments. Ovum analyzed the competitive services offerings of all vendors in this ODM and found IBM’s agile integration methodology to be the most comprehensive and well thought out.

Weaknesses

The B2B/EDI integration offering is architecturally different, and users need a separate offering for mobile application/back-end integration

IBM Sterling B2B Integration Services for supporting B2B/EDI integration use cases are architecturally different from the products under the IBM Cloud Pak for Integration. IBM is working on filling this gap and is developing a lightweight PaaS product for B2B/EDI integration. IBM Mobile Foundation is not part of the “Connect” set of product portfolios and is an add-on product. Another area for improvement is the use of ML for automating the different stages of integration projects, ranging from design and development to deployment and maintenance, which IBM is capable of providing by using the capabilities of its Watson platform. Some of the ML-related capabilities are part of IBM’s product roadmap for this middleware portfolio.

Frequent branding, rebranding, and renaming create confusion in the market

IBM’s middleware portfolio has undergone various iterations of branding, rebranding, and renaming over the years, and this does create confusion in the market. From the days of IBM WebSphere Cast Iron Live to IBM API Connect, or even IBM WebSphere Cast Iron Cloud Integration to IBM App Connect, IBM has certainly expanded features and capabilities or significantly transformed specific parts of its middleware portfolio. However, avoiding frequent rebranding and renaming exercises would help sustain a strong enterprise mindshare and reduce unnecessary confusion in the market. New and potential customers for IBM Cloud Pak for Integration should ask for customer references and case studies and check that these align with their specific requirements.

MuleSoft Ovum SWOT assessment 

Strengths

Comprehensive and cohesive cloud platform catering to a range of hybrid integration use cases

MuleSoft Anypoint Platform is a cohesive PaaS-style product catering to key hybrid integration use cases, which is evident from MuleSoft’s high scores across the “cloud integration” and “API platform” criteria groups under the technology assessment dimension. MuleSoft has further simplified its UX with the API Community Manager, upgrades to Anypoint Exchange, an improved integrated development environment (IDE) for the Mule 4 runtime (Studio 7), Anypoint Visualizer, and template-driven design and note-based collaboration for non-technical users (Flow Designer). Anypoint Partner Manager, MuleSoft’s lightweight PaaS-style B2B solution, caters to the requirements of B2B/EDI use cases, including partner management and reporting, partner onboarding, B2B transaction configuration, B2B transaction tracking, and audit logging. While it is not an extensive B2B/EDI integration platform, it can be used by MuleSoft’s customers to meet less complex B2B/EDI integration needs.

MuleSoft is one of the very few vendors that can support the requirements of all use cases included in this ODM via an architecturally coherent cloud platform that qualifies as a pure-play PaaS product. A visual API designer, an API Modeling Framework parser, API functional monitoring, and several new connectors for a range of applications and endpoints are some of the capabilities introduced over the last year to drive developer productivity.

MuleSoft has seen rapid growth since its acquisition by Salesforce 

MuleSoft is an integration business of over $500m within the broader Salesforce business. It has grown much faster than some of its established and larger competitors. The Salesforce acquisition has helped drive broader adoption of MuleSoft Anypoint Platform across both existing large and midsize Salesforce customers. Because of this growth, there is a key focus on providing a compelling UX to less skilled, non-technical users. MuleSoft enjoys strong brand recognition in the iPaaS and API platform markets as well as the cloud-based middleware for hybrid integration market. Contrary to the belief of some of its competitors, MuleSoft does not face a major hindrance driven by potential concerns about the neutrality of an integration vendor. MuleSoft’s growth is driven via both direct sales and packaged integration routes.

Weaknesses

ML-based automation could be improved 

Using the application network graph, MuleSoft provides a recommendation engine for suggestions on the next best action. The first application of this engine is the ML-based automapper in Flow Designer, and MuleSoft has dedicated plans to introduce new capabilities to drive ML-based automation. These are steps in the right direction. However, given MuleSoft’s track record of innovation and fast response to emerging market dynamics, it could have exploited ML capabilities by now to automate different stages of integration projects, ranging from design and development to deployment and maintenance. Some of its nearest competitors already have a better set of capabilities driving ML-based automation.

Oracle Ovum SWOT assessment 

Strengths

A well-balanced, comprehensive PaaS for hybrid integration product set

Oracle has a well-rounded PaaS for hybrid integration portfolio and achieved high scores for various criteria groups under the technology assessment dimension. Oracle Integration Cloud, Oracle’s iPaaS solution, has seen rapid revenue growth over the last three years and, along with other PaaS offerings of the portfolio, such as Oracle API Platform, Oracle SOA Cloud Service, and Oracle Mobile Hub, forms a good option for all key hybrid integration use cases. Oracle Self-Service Integration Cloud service, aimed at less skilled, non-technical users, allows them to build and consume simple integration recipes without any need to code. Oracle offers a uniform UX across various products of this middleware portfolio, something which many of its competitors have struggled to offer. The Oracle API Platform offers a range of capabilities for API creation and end-to-end lifecycle management, and has evolved into a fairly competitive offering over the last three to four years. Oracle exploits ML capabilities for providing recommendations at various stages of the design, testing, and deployment cycle, including but not limited to data mapping, business object/API recommendations in context, and the best next action to provide the logical next step in the flow. Insight capability for business integration analytics is a differentiator for Oracle.

Rapid sustained revenue growth over the last three to four years 

Oracle has seen rapid revenue growth for its PaaS for hybrid integration portfolio. This has translated into several thousands of large enterprise customers using multiple PaaS offerings to tackle hybrid integration challenges. Oracle has also had success in cross-selling and upselling PaaS products to existing customers, as well as adding a significant number of new customers and securing one of the leading market shares. Most of this success in driving adoption and revenue growth can be attributed to aggressive execution against ambitious product roadmaps, and of course, Oracle’s financial muscle to invest billions of dollars in new product development and mobilize a large global salesforce is also a key strength.

Weaknesses

Specific gaps in products need to be addressed with a focus on the usability for non-Oracle endpoints and workloads  

In terms of its API platform, Oracle should focus on providing support for GraphQL and gRPC standards and SLA compliance. Built-in predictive analytics and the ability to send alerts and notifications to subscribers when APIs are versioned are other areas for improvement. Containerized middleware deployment is an emerging trend and one that many of Oracle’s competitors are exploiting for revenue growth. While this is not an officially supported topology from Oracle, containerized middleware deployment is planned for the on-premises execution engine.

When it comes to non-Oracle endpoints and workloads, many enterprise IT leaders are unsure about the usability of Oracle PaaS for integration use cases. They perceive that Oracle middleware’s usability is limited to Oracle-to-Oracle and Oracle-to-non-Oracle endpoints. This is definitely not the case with Oracle iPaaS, and Ovum has seen various implementations involving non-Oracle to non-Oracle endpoints/applications. Oracle should focus on changing this perception and should deliver more specific messaging for “non-Oracle only” use cases.

Red Hat Ovum SWOT assessment

Strengths

Open source innovation and growing hybrid integration capabilities 

Red Hat’s acquisition by IBM was recently completed, and IBM has emphasized that Red Hat will continue to operate as a separate unit within IBM and will be reported as part of IBM’s Cloud and Cognitive Software division. Our analysis is based on the assumption that this setup within IBM will continue. Red Hat has a long history of open source prowess and engineering expertise that has enabled IT practitioners to experiment and deliver new functionality with its middleware products. Red Hat Fuse was an early entrant to the hybrid integration market, with a focus on cloud-native integration developers. Red Hat Fuse Online (part of Red Hat Integration), Red Hat’s iPaaS offering, is different in the sense that it was developed with a key focus on providing a better UX to less technical users. The API platform component of Red Hat Integration exploits the capabilities of 3scale API management and Fuse integration, and is a functionally rich solution for API lifecycle management. Red Hat achieved a high score for the “API platform” criteria group under the technology assessment dimension. Red Hat partners with Trace Financial for EDI-based transformations. For mobile app/back-end integration, Mobile Developer Services (included with Red Hat managed integration) provide key mobile app development capabilities optimized for containers, microservice architectures, and hybrid cloud deployments. This component exploits the capabilities of FeedHenry, a mobile application platform vendor acquired by Red Hat in 2014.

Red Hat acquired JBoss in 2006 and grew its middleware business for over a decade. Owing to its business model, it took some time for Red Hat to figure out the emerging opportunities in a market where enterprise service bus (ESB) and service-oriented architecture (SOA) infrastructure adoption was declining and iPaaS and API management market segments were growing at high double-digit rates. Then came the trend of deployment and management of middleware on software containers. Red Hat was able to develop a strategy that did not deviate much from its heritage and still deliver products that could compete with iPaaS and API-led integration platforms. This is applicable for serious buyers that are willing and have the capability to experiment and innovate with open source middleware.  

Red Hat’s PaaS portfolio for hybrid integration is a good option for developers and integration practitioners that appreciate the capabilities and flexibility of open source middleware. The cost of exit in a proprietary middleware context is quite high, and it is not easy to achieve a significant level of interoperability with application infrastructure and middleware platforms offered by other vendors. Red Hat Integration as an open source middleware product offers users the flexibility to try and experiment with small integration projects and see what works best for a particular requirement or integration scenario. In a world where a drag-and-drop approach and pre-built connectors and templates are marketed as nirvana for cloud integration, it is good to see Red Hat making integration technical again. With time, we expect Red Hat’s customer base and revenue for this middleware portfolio to grow to an extent where it is comparable to the other iPaaS vendors that provide API lifecycle management capabilities.

Portable architecture and cost-effectiveness

The ability to keep a fully supported and portable architecture intact across private, public, and managed cloud is a key differentiator for Red Hat in this market segment. Red Hat’s strategy is simple: exploit the best open source technologies in the market and communities and adopt new projects based on the market direction. This enables Red Hat middleware to offer better scalability than proprietary or “open core” competitive offerings. Red Hat Integration as a package subscription includes app integration, data integration, messaging, data streaming, and API management capabilities and is bundled with the Red Hat OpenShift container platform. The cost of a one-year subscription for Red Hat Integration is significantly lower than that provided by some of the vendors included in this ODM. Enterprises with access to developers capable of exploiting open source middleware for tackling complex integration challenges can use Red Hat Integration to reduce the costs for hybrid integration projects. If it was only about technology assessment, Red Hat would qualify as a leader. However, it didn’t achieve consistently high scores for the “execution and market impact” assessment dimension, a key criterion to be rated a leader in this ODM.

Weaknesses

Late to market with an option for less skilled users

ICCs/integration COEs are no longer in the driver’s seat, and LOBs are aggressively moving ahead with the adoption of iPaaS for SaaS integration. Some of these products also provide simpler capabilities for rapid API creation and API-led integration. While Red Hat Fuse Online is quite different from Red Hat Fuse in terms of its UX, it still does not offer the type of “ease of use” in the development of integration flows that is the norm with modern iPaaS solutions. For this reason, Red Hat does not compete head-on for tactical integration projects driven by LOBs. This has more to do with Red Hat’s position in the market and its core customer base. Red Hat offers a range of technical connectors, but there are gaps in terms of the coverage of connectors to the common SaaS applications used in enterprises. This again is a basic characteristic of modern iPaaS solutions and one of the reasons why iPaaS has gained traction among developers, integration practitioners, and less skilled, non-technical users.

Red Hat does not really compete with point solutions, such as the use of iPaaS for SaaS integration in a LOB, or for that matter, standalone API management. It functions better as a middleware stack vendor. We do not see this as a limitation for large enterprises capable of using open source middleware to solve complex integration issues in hybrid IT environments because the rest of the user base was never a sweet spot for Red Hat.

SAP Ovum SWOT assessment

Strengths

Growing hybrid integration capabilities and a progressive product roadmap

SAP supports the various key use cases included in this ODM, including cloud integration, API lifecycle management, B2B/EDI integration, and mobile app and back-end integration. SAP Cloud Platform Integration Suite, SAP’s iPaaS offering, provides an intuitive web interface with pre-built templates. The integration adviser uses ML capabilities and crowd-sourcing to offer a proposal service for message implementation and mapping guidelines. SAP has a dedicated roadmap for the integration adviser, including complex pattern mapping, optimized integration flow templates offering partner discovery, and further improvements in the proposal service. SAP recently introduced new features and capabilities, such as a public trial version, support for Microsoft Azure in a production release, self-service subscription enablement of integration platform tenants, new connectivity options, and trading partner management. SAP Cloud Platform Integration Suite is unique in the sense that it is a vendor-managed multicloud iPaaS available on a pay-as-you-go license model (SAP Cloud Platform Enterprise Agreement). 

SAP Cloud Platform API Management is SAP’s API lifecycle management product that offers standards-based API access to REST/OData or SOAP services, API analytics on consumption and operations, enterprise-grade security, and developer-centric services to enable users to subscribe, use, and manage API consumption. SAP Cloud Platform Integration Suite supports mobile app and back-end integration requirements. SAP has gradually developed a hybrid integration platform that can be consumed as PaaS. SAP achieved a good score for the “cohesiveness and innovation” criteria group under the execution and market impact assessment dimension.

Weaknesses

Gaps in iPaaS and API lifecycle management capabilities 

SAP does not support the deployment of iPaaS and API lifecycle management solutions on software containers. SAP Cloud Platform API Management is available as a fully cloud-managed service. SAP’s hybrid roadmap for the second half of 2020 includes complementing the cloud service with a containerized local gateway runtime that can run in a customer’s private cloud environment. There is also scope for improvement in the UX for less skilled, non-technical users. Gaps in terms of features and capabilities of SAP Cloud Platform API Management include support for GraphQL and gRPC standards and built-in predictive analytics. SAP does not offer an MFT product as a cloud service, and in the B2B/EDI integration context, SAP Cloud Platform Integration Suite offers an API-based trading partner solution. An improved (next-generation) trading partner management capability is planned for next year. These are some of the key areas for improvement that should be addressed soon to respond to emerging market dynamics and customer requirements and to remain competitive with the leading vendors in this market.

Product marketing and execution need to improve 

SAP’s product strategy for this portfolio is driven by the requirements of core SAP ecosystem users, and it focuses on upselling and cross-selling to existing customers using SAP applications, on-premises middleware, and other software products. While this is a good way to capitalize on the low-hanging market opportunity, such a strategy can slow the long-term evolution of a leading PaaS vendor providing a hybrid integration platform. This is reflected in the number of customers and the revenue SAP has realized for this product portfolio, which is lower than that of several vendors included in this ODM. Over the last couple of years, SAP featured sparsely in Ovum’s conversations with enterprise IT leaders embarking on hybrid integration and integration modernization initiatives. It is no different when it comes to conversations on leading iPaaS vendors because SAP does not enjoy substantial brand recognition beyond its core ecosystem. There is significant scope for improvement in SAP’s product marketing, which should focus on improving the visibility and raising the profile of SAP Cloud Platform Integration Suite.

Seeburger Ovum SWOT assessment

Strengths

Seeburger BIS in the cloud offers foundational capabilities for hybrid integration use cases 

Seeburger’s cloud platform for hybrid integration uses the features and capabilities of the underlying Seeburger Business Integration Suite (BIS). Seeburger’s middleware stack is well integrated and includes only home-grown solutions. This ensures interaction between the individual modules, and increases the overall stability and availability of the integration platform. Seeburger’s BIS portal is a unified UI layer for the entire platform, regardless of the deployment model. Seeburger BIS in the cloud provides SaaS integration, B2B/EDI as-a-service, API platform, and MFT as-a-service capabilities. Seeburger BIS can be deployed across various IaaS cloud environments, and there is support for deployment on containers. 

Seeburger concentrates on delivering iPaaS as a partner to its customers, and not only operates the integration platform (iPaaS) on a technical level, but also provides them with specialist personnel on request. At the same time, Seeburger is focusing on extending iPaaS support for different IaaS providers. Seeburger’s middleware product strategy means that cross-selling and upselling to existing customers represents a low-hanging opportunity. On a comparative basis, Seeburger’s cloud platform for hybrid integration offers foundational capabilities for tackling a range of integration issues. API creation is supported by a wizard and the BPMN design tool enables the composition of platform services into a new API via a simple drag-and-drop approach. On the B2B/EDI integration side, for trading partner onboarding, Seeburger’s Community Management Application (CMA) enables the use of web forms that can be designed by users. In addition, tailored forms can be created to collect all the required information to streamline the onboarding process. Seeburger achieved a high score for the “B2B and mobile app/backend integration” criteria group under the technology assessment dimension.

Weaknesses

Gaps in iPaaS and API platform capabilities

In the context of iPaaS capabilities, Seeburger does not provide pre-built, dedicated connectors to common endpoints and applications, such as marketing tools, collaboration applications, financial applications, content management systems, analytics data warehouses, and RPA tools. This is, however, part of the 2020 product roadmap. ML-based automation across different stages of integration projects, ranging from design and development to deployment and maintenance, is not provided, though it is part of the product strategy and roadmap. There is also scope for improvement with a tailored UX for less skilled, non-technical users.

In the context of API platform capabilities, areas for improvement include support for GraphQL and gRPC standards, wider dashboard coverage and performance monitoring reports for tracking key metrics, built-in predictive analytics capability, and better support for the Node.js framework. Seeburger must focus on filling these gaps to compete effectively with its nearest competitors.

Need to improve brand awareness in the cloud platforms for hybrid integration market

While Seeburger has been in the integration software business for a long time, it is a relatively new vendor in the cloud platforms (PaaS) for hybrid integration market. Over the last few years, Seeburger has used the capabilities of its BIS in the cloud to expand coverage of hybrid integration use cases, including SaaS integration and API-led integration (B2B/EDI integration was always a strong area for Seeburger). However, in comparison to leading iPaaS and API platform vendors, Seeburger has lower brand awareness in this market. Seeburger has featured only sparsely in Ovum’s conversations with enterprise IT leaders over the last couple of years. This is also reflected in the relatively small revenue and customer base Seeburger has for this product portfolio. This is not surprising because other vendors included in this ODM entered this market segment well ahead of Seeburger. Seeburger must invest in marketing and evangelism to raise the visibility and profile of its cloud-based hybrid integration platform.

SnapLogic Ovum SWOT assessment 

Strengths

Timely expansion from an iPaaS to a PaaS portfolio aimed at a range of hybrid integration use cases

SnapLogic Enterprise Integration Cloud, SnapLogic’s iPaaS in its previous form, was a good product with strong credentials across both data and application integration use cases. SnapLogic Intelligent Integration Platform is a broader PaaS-style product aimed at a wider range of use cases, and not limited to only iPaaS and API-led integration. SnapLogic achieved the joint second highest score for the “cloud integration/iPaaS” criteria group under the technology assessment dimension. The hybrid integration platform marketed as an “Intelligent Integration Platform” offers AI-enabled workflows and self-service UX to simplify and accelerate time to value for application and data integration initiatives.

Moving beyond partnerships, SnapLogic has extended its integration platform to API lifecycle management, a good move at the right time. The extended integration platform offers a visual paradigm with a low code/no code approach for iPaaS and API lifecycle management use cases. The August 2019 release of the platform introduced a new API developer portal to expose API endpoints to external consumers. SnapLogic B2B solution integrates its Intelligent Integration Platform with a cloud-based B2B gateway to offer trading partner community management, support for a range of EDI standards, EDI message translation, and transaction monitoring with an audit trail. The combined SnapLogic integration product portfolio is functionally rich and compares well with the larger vendors in this market. 

Substantial strengths across application and data integration use cases and early mover in offering ML-based automation

While we have not looked extensively at data integration use cases in this ODM, it is worth highlighting that if application and data integration are considered together, there are very few vendors that can compete with SnapLogic. Until 2017, SnapLogic’s product strategy tilted toward application and data integration use cases, but with the introduction of API lifecycle management and B2B integration capabilities, it has positioned itself as a capable cloud-based hybrid integration platform provider. 

On an overall basis, SnapLogic is one of the few vendors offering ML-based automation capabilities across the integration lifecycle. SnapLogic’s Iris AI uses AI/ML capabilities to automate highly repetitive, low-level development tasks. Its Integration Assistant provides step-level suggestions for developing an integration flow, as well as offering recommendations for pipeline optimization. Moreover, SnapLogic Data Science is offered as a self-service solution to accelerate ML development and deployment with minimal coding. 

Weaknesses

Specific gaps in terms of API platform capabilities need to be addressed without much delay 

Although SnapLogic has an ambitious product roadmap for API management as it pertains to iPaaS use cases, it still has significant ground to cover to successfully compete with some of its iPaaS competitors providing holistic rapid API creation/composition and end-to-end API management capabilities. Areas for improvement include support for GraphQL and gRPC standards, reuse of existing API definitions via Swagger representation import, better support for API deprecation and retirement processes, and support for the Node.js framework. We believe these gaps exist because this is a new capability area for SnapLogic, where the product roadmap is driven by the most important requirements for its existing customer base. We understand that SnapLogic is not focusing on developing a best-of-breed, standalone cloud-based API platform. However, in the long run, it is critical to fill these gaps if SnapLogic wants to improve and retain its competitive positioning because application and data integration disciplines are converging anyway. 

TIBCO Ovum SWOT assessment 

Strengths

Strong credentials, a robust platform, and well thought-out strategy have delivered a strong competitive positioning 

TIBCO has long enjoyed strong credentials as an integration vendor and has a well-established footprint in the large enterprise segment. TIBCO Cloud Integration (TCI) has gradually evolved as a comprehensive iPaaS product for key hybrid integration use cases. TIBCO achieved consistently high scores across the various criteria groups under the “technology” and “execution and market impact” assessment dimensions. TIBCO Cloud Integration is a functionally rich platform, while the TIBCO Cloud Integration Connect capability is for less skilled, non-technical users. The TIBCO Cloud Integration Develop and Integrate capabilities are aimed at developers and integration practitioners. The platform supports REST APIs, GraphQL, and event-driven integration, and when used as an API platform deployed on premises, it uses a cloud-native, container-based architecture. On the B2B/EDI integration side, integration with TIBCO BusinessConnect Trading Community Management enables rapid trading partner onboarding, while TIBCO Foresight BusinessConnect Insight supports B2B transaction monitoring. TIBCO has developed a compelling value proposition aimed at different user personas and across disparate deployment models, and has undertaken significant investment to drive an improved UX. As a result of a well thought-out business strategy and good execution in terms of product innovation and delivery, TIBCO has maintained a leading position in this market. This is in line with its competitive position in the pre-iPaaS middleware market.

Disciplined and focused execution is the hallmark of TIBCO’s strategy 

While TIBCO does not invest as much in marketing as do some of its nearest competitors, over the past four years it has still managed to transition from an on-premises heavy middleware vendor to a leading vendor providing PaaS for hybrid integration. Functioning under the ownership of Vista Equity Partners, TIBCO has demonstrated disciplined and focused execution when it comes to filling gaps in its existing middleware portfolio (for example, overcoming the failure of TIBCO Cloud Bus, TIBCO’s very first iPaaS offering) and driving innovation to emerge as a leading vendor in this market. TIBCO’s revenue from PaaS for hybrid integration is lower than some of its nearest competitors, but we expect this gap to shrink because TIBCO is capable of achieving above-market average growth in the near future. At its core, TIBCO remains an engineering company delivering innovation to successfully compete with vendors already in this market.

Weaknesses

Scope for improvement in ML-based automation; a PaaS-style product for B2B/EDI integration is required to exploit the market opportunity

TIBCO Cloud Integration provides ML-enabled capabilities, such as smart mapping, automated discovery of connection metadata, a visual model of impact analysis, the ability to fix and address issues driven by changes in configuration, and heuristics-based mapping of data elements and event payloads. There are significant gaps in this set of capabilities when it comes to exploiting ML for automating different stages of integration projects, ranging from design and development to deployment and maintenance. Some of its nearest competitors are ahead in terms of ML-based automation capabilities in production environments. However, TIBCO is working on providing recommendation services at various stages of the integration lifecycle.

TIBCO would benefit from a lightweight, PaaS-style product aimed at B2B/EDI integration use cases, and should provide a simplified UX along the lines of TIBCO Cloud Integration. This is more about a PaaS product delivering B2B/EDI integration capabilities and not an extensive set of features and capabilities as provided by traditional, dedicated B2B/EDI integration platforms hosted on the cloud. This is a low-hanging market opportunity, because many enterprises are struggling with legacy EDI platforms that are a burden and expensive to maintain. 

WSO2 Ovum SWOT assessment 

Strengths

Open source integration cloud with significant SaaS integration and API lifecycle management capabilities 

For developers and integration practitioners with the skills to exploit open source middleware, WSO2 provides substantial capabilities for SaaS integration, API-led integration, and API lifecycle management. WSO2 API Cloud is a hosted version of the open source WSO2 API Manager, and is a functionally rich offering. WSO2 API Cloud offers a developer portal, a scalable API gateway, and a powerful transformation engine with built-in security and throttling policies, reporting, and alerts. WSO2 achieved a high score for the “API platform” criteria group under the technology assessment dimension.

WSO2 API lifecycle management and integration platforms are centrally managed through a common UI that supports various concerns, such as user and tenant management. WSO2 offers a drag-and-drop graphical development environment, a graphical data and type mapper, and graphical flow debugging to simplify the development of integrations. WSO2 Integration Cloud offers good feature-price performance. API back-end services hosted on WSO2 Integration Cloud can be exposed to the WSO2 API Cloud. In March 2017, WSO2 also introduced “Ballerina”, a programming language with both textual and graphical syntaxes to enable users to develop integration flows by describing them as sequence diagrams. Ballerina forms the basis for WSO2’s new code-driven integration approach.

Weaknesses

Does not cater to B2B/EDI integration requirements

WSO2 is the only vendor in this ODM that does not provide a minimal set of capabilities for B2B/EDI integration use cases. While the cloud platforms for hybrid integration market is tilted toward iPaaS and API lifecycle management capabilities, several vendors have gradually expanded to provide support for less complex B2B/EDI integration use cases. The WSO2 Integration Cloud does not provide a tailor-made UX and self-service integration capabilities for less skilled, non-technical users. This is an area in the iPaaS market in which almost all other vendors have invested to better support less skilled, non-technical users. WSO2 is, however, planning to offer low-code, graphical integration based on the Ballerina integrator runtime to enable ad hoc integrators to develop integrations.

WSO2 does not offer ML-based automation across different stages of integration projects, ranging from design and development to deployment and maintenance. This is largely due to its preference to focus on capabilities that are critical for developers and integration practitioners. Other areas for improvement include support for different IaaS clouds, the availability of iPaaS via a regional data center, pre-built connectors for blockchain integration, integration with RPA tools, and centralized management via a web-based console (or other suitable means) for creating, deploying, monitoring, and managing integrations.

Significant scope for improvement in product marketing

Compared to some of its competitors, WSO2 engages in relatively few marketing activities, which hinders improvement in its brand recognition and competitive market positioning, particularly in regions where it does not have a significant direct presence. Because it mainly targets enterprise/integration architects and hands-on technologists, WSO2’s product marketing activities have a technology-centric flavor. However, it would benefit from adding a business-centric approach to sales and marketing to target a wider range of users and decision-makers, such as business leaders funding a LOB-led digital business initiative involving hybrid integration.

Appendix

Methodology

An invitation, followed by the ODM evaluation criteria spreadsheet comprising questions across two evaluation dimensions, was sent to all vendors meeting the inclusion criteria, with nine vendors opting to participate. Ovum had thorough briefings with the final nine vendors to discuss and validate their responses to the ODM questionnaire and understand their latest product developments, strategies, and roadmaps.

This ODM includes observations and input from Ovum’s conversations (including those conducted based on customer references) with IT leaders, enterprise architects, digital transformation initiative leaders, and enterprise developers and integration practitioners using cloud platforms for hybrid integration. 

Technology assessment

Ovum identified the features and capabilities that would differentiate the leading cloud platforms for hybrid integration vendors. The criteria groups and associated percentage weightings are as follows. 

  • Cloud integration/iPaaS (weighting assigned = 40%) 
  • API platform (weighting assigned = 45%) 
  • B2B and mobile application/backend integration (weighting assigned = 15%) 

Execution and market impact assessment 

For this dimension, Ovum assessed the capabilities of a cloud platform for hybrid integration and the associated vendor across the following key areas (a short sketch after this list illustrates how weighted criteria-group scores roll up into a dimension score):

  • Cohesiveness and innovation (weighting assigned = 40%)
  • Scalability and enterprise fit (weighting assigned = 45%)
  • Market impact (weighting assigned = 15%)
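
As a simple illustration of how the weighted criteria groups above roll up into a single dimension score, the Python sketch below computes a weighted average. Only the weightings are taken from the lists above; the per-group scores and the 0-10 scale are invented for the example.

```python
# Minimal illustration of rolling up criteria-group scores into a dimension score.
# Weightings come from the lists above; the example scores (0-10 scale) are invented.
def dimension_score(scores: dict, weights: dict) -> float:
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weightings must sum to 100%"
    return sum(scores[group] * weights[group] for group in weights)

technology_weights = {
    "cloud integration/iPaaS": 0.40,
    "API platform": 0.45,
    "B2B and mobile application/backend integration": 0.15,
}
example_scores = {
    "cloud integration/iPaaS": 8.0,
    "API platform": 7.0,
    "B2B and mobile application/backend integration": 6.0,
}
print(round(dimension_score(example_scores, technology_weights), 2))  # 7.25
```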

Leadership Compass: Database and Big Data Security

1 Introduction 

Databases are arguably still the most widespread technology for storing and managing business-critical digital information. Manufacturing process parameters, sensitive financial transactions or confidential customer records – all of this most valuable corporate data must be protected against compromises of its integrity and confidentiality without affecting its availability for business processes. The area of database security covers various security controls for the information itself stored and processed in database systems, for the underlying computing and network infrastructures, as well as for the applications accessing the data.

However, since the last edition of KuppingerCole’s Leadership Compass on Database Security two years ago, a notable change in the direction the market is evolving has become apparent: as the amount and variety of digital information an organization is managing grows, the complexity of the IT infrastructure needed to support this digital transformation grows as well. 

Nowadays, most companies end up using various types of databases and other data stores for structured and unstructured information depending on their business requirements. Recently introduced data protection regulations like the European Union’s GDPR or California’s CCPA make no distinction between relational databases, data lakes or file stores – all data is equally sensitive regardless of the underlying technology stack. 

Because of this, we have decided to expand the scope of this year’s Leadership Compass to incorporate data protection and governance solutions for NoSQL databases and Big Data frameworks in addition to the relational databases we focused on last time.

Among the security risks databases of any kind are potentially exposed to are the following: 

  • Data corruption or loss through human errors, programming mistakes or sabotage;
  • Inappropriate access to sensitive data by administrators or other accounts with excessive privileges;
  • Malware, phishing and other types of cyberattacks that compromise legitimate user accounts;
  • Security vulnerabilities or configuration problems in the database software, which may lead to data loss or availability issues;
  • Denial of service attacks leading to disruption of legitimate access to data.

Consequently, multiple technologies and solutions have been developed to address these risks, as well as provide better activity monitoring and threat detection. Covering all of them in just one product rating would be quite difficult. Furthermore, KuppingerCole has long stressed the importance of a strategic approach to information security. 

Therefore, customers are encouraged to look at database and big data security products not as isolated point solutions, but as a part of an overall corporate security strategy based on a multi-layered architecture and unified by centralized management, governance and analytics. 

1.1 Market Segment

Because of the broad range of technologies involved in ensuring comprehensive data protection, the scope of this market segment isn’t easy to define unambiguously. In fact, only the largest vendors can afford to dedicate enough resources for developing a solution that covers all or at least several functional areas – the majority of products mentioned in this Leadership Compass tend to focus on a single aspect of database security like data encryption, access management or monitoring and audit. 

The obvious consequence of this is that when selecting the best solution for your particular requirements, you should not limit your choice to the overall leaders of our rating – in fact, a smaller vendor with a lean but flexible, scalable and agile solution that can quickly address a specific business problem may be more fitting. On the other hand, one must always consider the balance between a well-integrated suite from a single vendor and a number of best-of-breed individual tools that require additional effort to make them work together. The individual evaluation criteria used in KuppingerCole’s Leadership Compasses will provide you with further guidance in this process.

To make your choice even easier, we are focusing primarily on security solutions for protecting structured data stored in relational or NoSQL databases, as well as in Big Data stores. Second, we are not explicitly covering various general aspects of network or physical server security, identity and access management or other areas of information security not specific to databases, although providing these features or offering integrations with other security products may influence our ratings.

Still, we are putting a strong focus on integration into existing security infrastructures to provide consolidated monitoring, analytics, governance or compliance across multiple types of information stores and applications. Most importantly, this includes integrations with SIEM/SoC solutions, existing identity and access management systems, and information security governance technologies.

Solutions offering support for multiple database types as well as extending their coverage to other types of digital information are expected to receive more favorable ratings as opposed to solutions tightly coupled only to a specific database (although we do recognize various benefits of such tight integration as well). The same applies to products supporting multiple deployment scenarios, especially in cloud-based and hybrid infrastructures. 

Another crucial area to consider is the development of applications based on the Security and Privacy by Design principles, which have recently become a legal obligation under the EU’s General Data Protection Regulation (GDPR) and similar regulations in other geographies. Database and big data security solutions can play an important role in supporting developers in building comprehensive security and privacy-enhancing measures directly into their applications.

Such measures may include transparent data encryption and masking, fine-grained dynamic access management, unified security policies across different environments and so on. We are taking these functions into account when calculating vendor ratings for this report as well.
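
As a brief, hypothetical illustration of one such measure, the Python sketch below applies dynamic data masking to a record before returning it to a caller without a privileged role. Real products enforce such policies transparently at the database, proxy, or application layer; the field names, roles, and masking rule here are invented.

```python
# Minimal illustration of dynamic data masking: sensitive fields are redacted
# for callers without the required role. Hypothetical example, not a product API.
SENSITIVE_FIELDS = {"credit_card", "national_id"}

def mask_value(value: str, visible_suffix: int = 4) -> str:
    # Keep only the last few characters visible, replace the rest with '*'.
    return "*" * max(len(value) - visible_suffix, 0) + value[-visible_suffix:]

def apply_masking(record: dict, caller_roles: set) -> dict:
    if "data_owner" in caller_roles:          # privileged callers see clear text
        return dict(record)
    return {
        key: mask_value(str(value)) if key in SENSITIVE_FIELDS else value
        for key, value in record.items()
    }

row = {"name": "Alice", "credit_card": "4111111111111111"}
print(apply_masking(row, caller_roles={"analyst"}))
# {'name': 'Alice', 'credit_card': '************1111'}
```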

Despite our effort to cover most aspects of database and big data security in this Leadership Compass, we are not covering the following products: 

  •  Solutions that primarily focus on unstructured data protection having limited or no database-related capabilities
  •  Security tools that cover general aspects of information security (such as firewalls or antimalware products) but do not offer functionality specifically tailored for data protection 
  • Compliance or risk management solutions that focus on organizational aspects (checklists, reports, etc.) 

1.2 Delivery models 

Since most of the solutions covered in our rating are designed to offer comprehensive protection and governance for your data regardless of the IT environment in which it is currently located – in an on-premises database, in a cloud-based data lake or in a distributed transactional system – the very notion of the delivery model becomes complicated as well.

Certain components of such solutions, especially the ones dealing with monitoring, analytics, auditing, and compliance can be delivered as managed services or directly from the cloud as SaaS, but the majority of other functional areas require deployment close to the data sources, as software agents or database connectors, as network proxies or monitoring taps and so on. Especially with complex Big Data platforms, a security solution may require multiple integration points within the existing infrastructure. 

In other words, when it comes to data protection, you can safely assume that a hybrid delivery model is the only viable option. 

1.3 Required Capabilities 

When evaluating the products, besides looking at the aspects of 

  • overall functionality 
  • size of the company 
  • number of customers 
  • number of developers 
  • partner ecosystem 
  • licensing models 
  • platform support 

we also considered the following key functional areas of database security solutions:

  • Vulnerability assessment – this includes not just discovering known vulnerabilities in database products, but providing complete visibility into complex database infrastructures, detecting misconfigurations and, last but not least, the means for assessing and mitigating these risks. 
  •  Data discovery and classification – although classification alone does not provide any protection, it serves as a crucial first step in defining proper security policies for different data depending on their criticality and compliance requirements. 
  • Data-centric security – this includes data encryption at rest and in transit, static and dynamic data masking and other technologies for protecting data integrity and confidentiality. 
  • Monitoring and analytics – these include monitoring of database performance characteristics, as well as complete visibility in all access and administrative actions for each instance, including alerting and reporting functions. On top of that, advanced real-time analytics, anomaly detection, and SIEM integration can be provided. 
  • Threat prevention – this includes various methods of protection from cyber-attacks such as denial-of-service or SQL injection (a brief parameterized-query sketch follows this list), mitigation of unpatched vulnerabilities and other infrastructure-specific security measures.
  • Access Management – this includes not just basic access controls to database instances, but more sophisticated dynamic policy-based access management, identifying and removing excessive user privileges, managing shared and service accounts, as well as detection and blocking of suspicious user activities. 
  • Audit and Compliance – these include advanced auditing mechanisms beyond native capabilities, centralized auditing and reporting across multiple database environments, enforcing separation of duties, as well as tools supporting forensic analysis and compliance audits. 
  • Performance and Scalability – although not a security feature per se, it is a crucial requirement for all database security solutions to be able to withstand high loads, minimize performance overhead and to support deployments in high availability configurations. For certain critical applications, passive monitoring may still be the only viable option. 
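
As a brief illustration of the threat-prevention item above, the Python sketch below contrasts an injectable SQL query with a parameterized one using the standard sqlite3 module. It is a generic example of the mitigation technique, not tied to any vendor's product.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'analyst')")

user_input = "' OR '1'='1"   # classic injection payload

# Vulnerable: string concatenation lets the payload change the query logic,
# returning every row instead of none.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query treats the payload as a literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',), ('bob',)] -- injection succeeded
print(safe)        # [] -- payload matched nothing
```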

2 Leadership

Selecting a vendor of a product or service must not be based only on the comparison provided by a KuppingerCole Leadership Compass. The Leadership Compass provides a comparison based on standardized criteria and can help to identify vendors that shall be further evaluated. However, a thorough selection includes a subsequent detailed analysis and a Proof of Concept or pilot phase, based on the specific criteria of the customer.

Based on our rating, we created the various Leadership ratings. The Overall Leadership rating provides a combined view of the ratings for 

  • Product Leadership 
  • Innovation Leadership
  • Market Leadership 

2.1 Overall Leadership 

The Overall Leadership rating is a combined view of the three leadership categories: Product Leadership, Innovation Leadership, and Market Leadership. This consolidated view provides an overall impression of our rating of the vendor’s offerings in the particular market segment. Notably, some vendors that benefit from a strong market presence may slightly drop in other areas such as innovation, while others show their strength in Product Leadership and Innovation Leadership while having a relatively low market share or lacking a global presence. Therefore, we strongly recommend looking at all leadership categories, the individual analysis of the vendors, and their products to get a comprehensive understanding of the players in this market.

In this year’s Overall Leadership rating we observe the same situation as in the previous release: only the two biggest vendors, namely IBM and Oracle, have reached the Leaders segment, which reflects both companies’ global market presence, broad ranges of database security solutions and impressive financial strengths. 

However, while last time we positioned IBM slightly ahead, considering that IBM’s solutions are database-agnostic while half of Oracle’s portfolio focuses only on Oracle databases, this time the situation has changed. During the last year, Oracle has substantially increased its stake in the database security market, primarily with its innovative Autonomous Database technology stack, as well as numerous improvements in its existing products. Thus, we recognize Oracle as this year’s overall leader in Database and Big Data security.

It is worth mentioning that while maintaining database agnosticism, IBM Data Protection has continued to add support for new data sources and has enhanced their capabilities to facilitate secure hybrid multicloud. IBM has also added support for unstructured data protection making Guardium a universal platform for data discovery, classification, and protection wherever this data resides. 

The rest of the vendors are populating the Challengers segment. Lacking the combination of exceptionally strong market and product leadership, they trail somewhat behind the leaders, but still deliver mature solutions excelling in certain functional areas. We have a mix of companies we had recognized previously – Axiomatics, Imperva and Thales (which completed the acquisition of Gemalto in early 2019) – and several newcomers like comforte AG, Delphix and SecuPI, each offering excellent solutions in their respective functional areas.

There are no Followers in this rating, indicating the overall maturity of the vendors representing the market in our Leadership Compass. 

Unfortunately, several vendors we had in the rating last time were unable to participate this time. You can still find them mentioned in the later chapter “Vendors to Watch”. For more technical details about their products, please refer to the previous edition of this Leadership Compass. 

Again, we must stress that leadership does not automatically mean that these vendors are the best fit for a specific customer requirement. A thorough evaluation of these requirements and a mapping of them to the products’ features will be necessary.

Overall Leaders are (in alphabetical order): 

  • IBM
  • Oracle

2.2 Product Leadership 

The first of the three specific Leadership ratings is about Product Leadership. This view is mainly based on the analysis of product/service features and the overall capabilities of the various products/services.  

In the Product Leadership rating, we look specifically for functional strength of the vendors’ solutions. It is worth noting that, with the broad spectrum of functionality we expect from a complete data security solution, it’s not easy to achieve a Leader status for a smaller company. 

The clear leaders are the largest players in the market, offering a wide range of products covering different aspects of database security.

IBM Security Guardium, the company’s data security platform, provides a full range of data discovery, classification, entitlement reporting, near real-time activity monitoring, and data security analytics across different environments, which has led us to recognize IBM as the Product Leader.

Oracle’s impressive database security portfolio includes a comprehensive set of security products and managed services for all aspects of database assessment, protection, and monitoring – landing the company in a close second place.

Following them, we find two newcomers to the rating: comforte AG, with its highly scalable and fault-tolerant data masking and tokenization platform that has grown from the company’s roots in high-performance computing and decade-long experience serving large customers in the financial industry, and SecuPI – a young but ambitious vendor focusing on data-centric protection and GDPR/CCPA compliance for databases, big data and business applications.

Finally, Thales, after the recent acquisition of Gemalto, and Imperva, with a substantial R&D investment from Thoma Bravo, have managed to improve their earlier ratings substantially, making it into the Leaders segment as well.

Other vendors, with their robust but less functionally broad solutions, are populating the Challengers segment. Delphix is a leading provider of data virtualization solutions for cloud migration, application development, and business analytics scenarios, all with a comprehensive set of data desensitization capabilities. Somewhat behind it, we find Axiomatics – a leader in dynamic access control with a specialized ABAC solution for databases and Big Data frameworks.

There are no followers in our product rating. Product Leaders are (in alphabetical order):

  • comforte AG 
  • IBM
  • Imperva
  • Oracle
  • SecuPI
  • Thales

2.3 Innovation Leadership 

Another angle we take when evaluating products/services concerns innovation. Innovation is, from our perspective, a key capability in IT market segments. Innovation is what customers require to keep up with the constant market evolution and the emerging requirements they face.

Innovation is not limited to delivering a constant flow of new releases. It also covers a customer-oriented upgrade approach, ensuring compatibility with earlier versions, especially at the API level, and support for leading-edge new features that address emerging customer requirements. 

In this rating, we again observe IBM and Oracle in the Leaders segment, reflecting both companies’ sheer development resources which allow them to constantly deliver new features based on innovative technologies. 

IBM has continued to expand the focus of the Guardium platform – of note is the added support for unstructured data monitoring in on-prem and cloud stores, as well as the incorporation of the latest technological developments like containerized databases, artificial intelligence and consent management. 

Thanks to their recent breakthrough innovations with the Autonomous Database product family, which offers substantial improvements in the security, compliance, performance and availability of sensitive data by completely removing human interaction from database operations, Oracle has managed to improve their rating compared to the last edition, landing them in first place in our innovation chart. 

Most other vendors can be found in the Challengers segment, reflecting their continued investments in delivering innovative new features in their solutions, which, however, cannot match the pace set by the behemoths among the Leaders. 

The only company in the Followers segment is Axiomatics. This does not imply any negative assessment of their solution; rather, it reflects the maturity of their technology and the lack of major competitors in their narrow segment of the market. 

Innovation Leaders are (in alphabetical order): 

  • IBM
  • Oracle

2.4 Market Leadership 

Here we look at Market Leadership qualities based on certain market criteria including but not limited to the number of customers, the partner ecosystem, the global reach, and the nature of the response to factors affecting the market outlook. Market Leadership, from our point of view, requires global reach as well as consistent sales and service support with the successful execution of marketing strategy.

Unsurprisingly, among the market leaders, we can observe all large and established vendors like Oracle, IBM, Thales, and Imperva. All these companies are veteran players in the IT market with a massive global presence, large partner networks and impressive numbers of customers (including those outside of the data security market).

All smaller and younger companies are found in the Challengers segment, indicating their relative financial stability and future growth potential. 

Market Leaders are (in alphabetical order): 

  • IBM
  • Imperva
  • Oracle
  • Thales

3 Correlated View 

While the Leadership charts identify leading vendors in certain categories, many customers are looking not only for, say, a product leader, but for a vendor that is delivering a solution that is both feature-rich and continuously improved, which would be indicated by a strong position in both the Product Leadership ranking and the Innovation Leadership ranking. Therefore, we deliver additional analysis that correlates various Leadership categories and delivers an additional level of information and insight. 

3.1 The Market/Product Matrix 

The first of these correlated views looks at Product Leadership and Market Leadership. 

In this comparison, it becomes clear which vendors are better positioned in our analysis of Product Leadership compared to their position in the Market Leadership analysis. Vendors above the line are sort of “overperforming” in the market. It comes as no surprise that these are mainly the very large vendors, while vendors below the line are often innovative but focused on specific regions. 

Among the Market Champions, we can find the usual suspects – the largest well-established vendors including IBM, Oracle, Thales, and Imperva. 

comforte AG and SecuPI appear in the middle right box, indicating the opposite skew, where strong product capabilities have not yet brought them to strong market presence. Given both companies’ relatively recent entrance to the global database security market, we believe they have a strong potential for improving their market positions in the future. 

Axiomatics and Delphix can be found in the middle segment, indicating their relatively narrow functional focus, which corresponds to limited potential for future growth. 

3.2 The Product/Innovation Matrix 

The second view shows how Product Leadership and Innovation Leadership are correlated. Vendors below the line are more innovative, vendors above the line are, compared to the current Product Leadership positioning, less innovative. 

Here, we see a good correlation between the product and innovation ratings, with most vendors being placed close to the dotted line indicating a healthy mix of product and innovation leadership in the market.  

Among the Technology Leaders, we again find IBM and Oracle, reflecting both vendors’ clear leadership in product and innovation capabilities thanks to their huge resources and decades of experience.

The top middle box contains vendors that are providing good product features but lag behind the leaders in innovation. Here we find comforte AG, SecuPI, Thales and Imperva, indicating their strong positions in the selected functional areas of data security. 

Delphix has landed in the middle segment, showing that even with somewhat limited functional focus a vendor can still deliver a healthy amount of innovation.

The only company showing a noticeably lower level of innovation is Axiomatics; still, it has landed in the middle left box, indicating strong product capabilities. 

3.3 The Innovation/Market Matrix

The third matrix shows how Innovation Leadership and Market Leadership are related. Some vendors might perform well in the market without being Innovation Leaders. This might impose a risk to their future position in the market, depending on how they improve their Innovation Leadership position. On the other hand, vendors that are highly innovative have a good chance of improving their market position but often face risks of failure, especially in the case of vendors with a confused marketing strategy. 

Vendors above the line are performing well in the market compared to their relatively weak position in the Innovation Leadership rating, while vendors below the line show, based on their ability to innovate, the biggest potential for improving their market position. 

Again unsurprisingly, we can find IBM and Oracle among the Big Ones – vendors that combine strong market presence with a strong pace of innovation. 

Thales and Imperva in the top middle box indicate their strong market positions despite somewhat slower innovation, while comforte AG, Delphix and SecuPI occupy the opposite positions below the dotted line, indicating their strong performance in innovation, which has not yet translated into larger market shares.

Axiomatics can be found in the left middle box, indicating their position as an established player in a small, but mature and “uncrowded” market segment, which inhibits innovation somewhat.

4 Products and Vendors at a glance 

This section provides an overview of the various products we have analyzed within this KuppingerCole Leadership Compass on Database and Big Data Security. Aside from the rating overview, we provide additional comparisons that put Product Leadership, Innovation Leadership, and Market Leadership in relation to each other. These allow identifying, for instance, highly innovative but specialized vendors or local players that provide strong product features but do not have a global presence and large customer base yet. 

In addition, we also provide four additional ratings for the vendor. These go beyond the product view provided in the previous section. While the rating for Financial Strength applies to the vendor, the other ratings apply to the product.

In the area of innovation, we look for the product or service to provide a range of advanced features in our analysis. These advanced features include but are not limited to implementing practical applications of new innovative technologies like machine learning and behavior analytics or introducing new functionality in response to market demand. Where we could not find such features, we rate the product as “Critical”.

In the area of market position, we are looking at the visibility of the vendor in the market. This is indicated by factors including the presence of the vendor in more than one continent and the number of organizations using the services. Where the service is only being used by a small number of customers located in one geographical area, we award a “Critical” rating.

In the area of financial strength, a “Weak” or “Critical” rating is given where there is a lack of information about financial strength. This doesn’t imply that the vendor is in a weak or a critical financial situation. This is not intended to be an in-depth financial analysis of the vendor, and it is also possible that vendors with better ratings might fail and disappear from the market. 

Finally, a critical rating regarding the ecosystem applies to vendors which have no ecosystem or only a very limited one with respect to the number of partners and their regional presence. That might be company policy, e.g. to protect their own consulting and system integration business. However, our strong belief is that the success and growth of companies in a market segment rely on strong partnerships. 

5 Product evaluation 

This section contains a quick rating for every product we’ve included in this report. For some of the products, there are additional KuppingerCole Reports available, providing more detailed information. In the following analysis, we have provided our ratings for the products and vendors in a series of tables. These ratings represent the aspects described previously in this document. Here is an explanation of the ratings that we have used: 

  • Strong Positive: this rating indicates that, according to our analysis, the product or vendor significantly exceeds the average for the market and our expectations for that aspect.
  • Positive: this rating indicates that, according to our analysis, the product or vendor exceeds the average for the market and our expectations for that aspect. 
  • Neutral: this rating indicates that, according to our analysis, the product or vendor is average for the market and our expectations for that aspect. 
  • Weak: this rating indicates that, according to our analysis, the product or vendor is less than the average for the market and our expectations in that aspect. 
  • Critical: this is a special rating with a meaning that is explained where it is used. For example, it may mean that there is a lack of information. Where this rating is given, it is important that a customer considering this product look for more information about the aspect. 

It is important to note that these ratings are not absolute. They are relative to the market and our expectations. Therefore, a product with a strong positive rating could still be lacking in functionality that a customer may need if the market in general is weak in that area. Equally, in a strong market, a product with a weak rating may provide all the functionality a particular customer would need. 

5.1 Axiomatics 

Axiomatics is a privately held company headquartered in Stockholm, Sweden. Founded in 2006, the company is currently a leading provider of dynamic policy-based authorization solutions for applications, databases, and APIs. Despite its relatively small size, Axiomatics serves an impressive number of Fortune 500 companies and government agencies and actively participates in various standardization activities. Axiomatics is a major contributor to the OASIS XACML (eXtensible Access Control Markup Language) standard, and all their solutions are designed to be 100% XACML-compliant. 

Strengths

  • Database-agnostic approach ensures unified policy application across different databases and big data stores 
  •  100% compliance with the XACML standard
  •  Shares the authorization model with other Axiomatics products for applications, APIs, etc.

Challenges

  • Quite narrow functional focus compared to other products in the rating  
  •  Relies on 3rd party components to enforce policies 

The company’s flagship data protection solution is the Dynamic Authorization Suite built around the Axiomatics Policy Server, an enterprise-wide universal Attribute-Based Access Control (ABAC) product. Included in the suite are Axiomatics Data Access Filter MD for managing access to sensitive information in relational databases along with SmartGuard for Big Data frameworks and cloud data stores. 

Implemented as loosely coupled add-ons or proxies, the suite provides policy-based access control defined in standard XACML, as well as dynamic data masking, filtering and activity monitoring, transparently for multiple data sources, and integrates seamlessly with the company’s other access management solutions for applications, APIs and microservices, as well as with third-party products.

The key features of the solution include dynamic context-aware authorization implemented in a vendor-neutral way, flexible access control to sensitive data based on real-time dynamic data filtering, dynamic data masking and filtering for financial, healthcare, pharmaceutical and other types of personal information, and centralized management of access policies across databases, applications, and APIs. 
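
To illustrate the general idea of dynamic, attribute-based authorization with masking decisions, the following Python sketch evaluates a toy policy against subject and resource attributes. It is purely illustrative and not the Axiomatics API; the roles, attributes, and masking rule are hypothetical.

from dataclasses import dataclass

@dataclass
class Request:
    subject: dict    # e.g. {"role": "analyst"}
    resource: dict   # e.g. {"table": "patients", "column": "ssn"}
    action: str      # e.g. "read"

def evaluate(request: Request) -> str:
    """Return 'permit', 'deny', or 'permit_masked' based on the attributes."""
    # Hypothetical policy: clinicians may read SSNs in clear text,
    # analysts only see a masked value, everyone else is denied.
    if request.action != "read":
        return "deny"
    if request.resource.get("column") == "ssn":
        role = request.subject.get("role")
        if role == "clinician":
            return "permit"
        if role == "analyst":
            return "permit_masked"
        return "deny"
    return "permit"

print(evaluate(Request({"role": "analyst"}, {"table": "patients", "column": "ssn"}, "read")))  # permit_masked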

5.2 Comforte AG

comforte AG is a privately held software company specializing in data protection and digital payments solutions based in Wiesbaden, Germany. The company’s roots can be traced back to 1998, when its founders came to the market with a connectivity solution for HPE NonStop systems – a fault-tolerant, self-healing server platform for critical business applications. Over the years, comforte’s offering has evolved into a comprehensive solution for protecting sensitive business data with encryption and tokenization, tailored specifically for critical use cases that do not allow even minimal downtime.

Strengths

  • Unique hardened, scalable and fault-tolerant architecture for mission-critical use cases 
  • Deployment flexibility; hybrid cloud and as-a-Service scenarios are supported 
  • Broad range of transparent application integration options, support for Big Data and stream processing frameworks 

Challenges

  • Current functionality limited to tokenization and masking (other data protection capabilities are not covered)
  • Somewhat limited market visibility outside of the financial industry 

A few years ago, comforte AG entered the data-centric security market with their SecurDPS Enterprise solution, which combines the company’s patented stateless tokenization algorithm, proven highly scalable and fault-tolerant architecture, and flexible access control and policy management, augmented by a broad range of transparent integration options, which allow various existing applications to be quickly included into the enterprise-wide deployment without any changes in infrastructure or code. 

The platform’s decentralized and redundant architecture ensures deployment flexibility in any scenario: hybrid cloud and as-a-Service use cases are supported as well. The patented stateless tokenization algorithm supports limitless scaling across heterogeneous environments, and a strong focus on regulatory compliance directly addresses PCI DSS and GDPR requirements. 
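
As a point of reference for readers unfamiliar with the technique, the Python sketch below shows tokenization in its simplest, vault-based form: a sensitive value is replaced by a random token of the same format, and only authorized systems can map the token back. comforte’s patented stateless algorithm derives tokens without such a lookup table and is not reproduced here; the code is a hypothetical illustration of the concept only.

import secrets

_vault = {}  # token -> original value (toy in-memory store, ignores collisions)

def tokenize(pan: str) -> str:
    """Replace a card number with a format-preserving random token."""
    token = "".join(secrets.choice("0123456789") for _ in range(len(pan)))
    _vault[token] = pan
    return token

def detokenize(token: str) -> str:
    return _vault[token]

t = tokenize("4111111111111111")
print(t, "->", detokenize(t))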

5.3 Delphix

Delphix is a privately held software development company headquartered in Redwood City, California, USA. It was founded in 2008 with a vision of a dynamic platform for data operators and data consumers within an enterprise to collaborate in a fast, flexible and secure way. With offices across the USA, Europe, Latin America, and Asia, Delphix is currently serving over 300 global enterprise customers including 30% of the Fortune 100 companies. 

Strengths

  • Based on a universal, high-performance and space-efficient data virtualization technology 
  • Support for a broad range of database types and unstructured file systems
  • Transparent data masking and tokenization capabilities 
  • Preconfigured for GDPR compliance  

Challenges

  • Limited data protection capabilities, lack of encryption support 
  • Limited monitoring and analytics functions 

Delphix Dynamic Data Platform is a software-based data virtualization platform – quickly provisioning virtual copies of masked or unmasked data across different IT environments. Delivered as virtual appliances that can be deployed anywhere, the platform offers unified support for on-prem, cloud and hybrid environments. 

Using compression, intelligent data block sharing and other optimizations and offering self-service capabilities and API-driven automation functions, the Delphix platform ensures that data consumers can get access to the data they need as quickly and efficiently as possible, enabling numerous usage scenarios: cloud migration, data analytics, DevOps automation of data delivery, test data management, and even disaster recovery. 

Since the platform is designed to be fully transparent for existing applications and services, it ensures effortless hybrid cloud deployment for new and existing applications. Powerful self-service functions for data consumers enable quick provisioning, refreshing, rewinding, and sharing of data sources in minutes instead of hours, powering the emerging DataOps methodology. Integrated data anonymization features come preconfigured for GDPR compliance. 

5.4 IBM

IBM Corporation is a multinational technology and consulting company headquartered in Armonk, New York, USA. IBM offers a broad range of software solutions and infrastructure, hosting and consulting services in numerous market segments. With over 370 thousand employees and market presence in 160 countries, IBM ranks as one of the world’s largest companies both in terms of size and profitability.

Strengths

  •  Full range of security capabilities for structured and unstructured data 
  • Support for hybrid multi-cloud environments
  • Advanced Big Data and Cognitive Analytics  
  • Nearly unlimited scalability 
  • Integrated ecosystem with IBM’s and 3rd party security, identity and analytics products
  • Massive network of technology partners and resellers

Challenges

  •  Setup and operations may be complicated for some customers 

IBM Security, one of the strategic units of the company, provides a comprehensive portfolio including identity and access management, security intelligence and information protection solutions. The product covered in this rating is IBM Security Guardium – a comprehensive data security platform providing a full range of functions, including discovery and classification, entitlement reporting, data protection, activity monitoring, and advanced data security analytics, across different environments: from file systems to databases and big data platforms to hybrid cloud infrastructures. 

Among the key features of the Guardium platform are discovery, classification, vulnerability assessment and entitlement reporting across heterogeneous data environments; encryption, data redaction and dynamic masking combined with real-time alerting and automated blocking of malicious access; and activity monitoring and advanced security analytics based on machine learning. 

Automated data compliance and audit capabilities with Compliance Accelerators for specific frameworks like PCI, HIPAA, SOX or GDPR ensure that following strict personal data protection guidelines becomes a continuous process, leaving no gaps either for auditors or for malicious actors. 
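
The discovery and classification step described above can be pictured with a small, hypothetical example: scan sample rows against patterns for common sensitive data types and attach labels that later drive protection policies. This is not Guardium code, only a minimal Python sketch of the technique.

import re

CLASSIFIERS = {
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_sample(rows):
    """Return the set of sensitivity labels detected in a sample of rows."""
    labels = set()
    for row in rows:
        for value in row:
            for label, pattern in CLASSIFIERS.items():
                if pattern.search(str(value)):
                    labels.add(label)
    return labels

sample = [("Alice", "alice@example.com", "373-28-1234"),
          ("Bob", "bob@example.org", "4111 1111 1111 1111")]
print(classify_sample(sample))  # e.g. {'email', 'us_ssn', 'credit_card'}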

5.5 Imperva

Imperva is an American cybersecurity solution company headquartered in Redwood Shores, California. Back in 2002, the company’s first product was a web application firewall, but over the years, Imperva’s portfolio has expanded to include several product lines for data security, cloud security, breach prevention, and infrastructure protection as well. In 2019, Imperva was acquired by the private equity firm Thoma Bravo, making it a privately held company and providing a substantial boost in R&D. At the same time, major changes in product licensing were announced, consolidating a large number of standalone products into a short list of convenient packages called FlexProtect Plans. 

Strengths

  • Convenient licensing plans for comprehensive data protection 
  • Multiple collection methods ensure minimal performance overhead 
  • Advanced security intelligence and behavior analytics 
  • Large number of out-of-the-box workflows and compliance reports 

Challenges

  • No support for data encryption or dynamic masking 

Instead of multiple SecureSphere products for Discovery and Assessment, Activity Monitoring, and Database Firewall, as well as CounterBreach for threat protection and Camouflage for masking, Imperva customers only need to subscribe to a single FlexProtect for Data licensing plan to enable full protection of their sensitive data. 

The new data protection suite offers all the required capabilities, such as unified protection across relational databases, data warehouses, Big Data platforms, and mainframes; comprehensive activity monitoring, auditing, and forensic investigation, augmented with advanced security analytics based on behavior profiling; and pre-defined policies, remediation workflows, and hundreds of compliance reports. Integrations with Imperva’s other security products ensure that this multi-factored data security can be enforced across endpoints, web applications, and cloud services. 

A notable recent addition to Imperva’s portfolio is Cloud Data Security, a new offering that extends discovery, classification and analytics capabilities to database assets in the cloud. Delivered as SaaS, the platform can be deployed and configured in hours, delivering actionable insights for prioritizing threat remediations immediately.

5.6 Oracle

Oracle Corporation is an American multinational information technology company headquartered in Redwood Shores, California. Founded back in 1977, the company has a long history of developing database software and technologies; nowadays, however, Oracle’s portfolio incorporates a large number of products and services ranging from operating systems and development tools to cloud services and business application suites. 

Strengths

  • Autonomous cloud database platform eliminating human administrative access
  • Automated provisioning, upgrades, backup and DR, no downtime 
  • Comprehensive product portfolio for all areas of database security 
  • Deep integration with Oracle’s other Data Provisioning, Testing and Cloud technologies 

Challenges

  • A number of products are available only for Oracle databases 
  • Big Data and NoSQL products are not yet integrated with RDBMS security solutions  

The breadth of the company’s database security portfolio is impressive: with a number of protection and detection products and a number of managed services covering all aspects of database assessment, protection, monitoring and compliance, Oracle Database Security can address the most complex customer requirements, both on-premises and in the cloud.

The recently introduced Oracle Autonomous Database, which completely automates the provisioning, management, tuning and upgrade processes of database instances without any downtime, not only substantially increases the security and compliance of sensitive data stored in Oracle databases, but also makes a compelling argument for moving this data to the Oracle cloud.

It’s worth noting that a substantial part of the company’s security capabilities is still specifically designed for Oracle databases only, which makes Oracle’s data protection solutions less suitable for companies using other DB types.  

This strategy seems to be changing slowly, however, as the company is planning to offer more database-agnostic tools in the future. 

5.7 SecuPI

SecuPI is a privately held data-centric security vendor headquartered in Jersey City, NJ, USA. The company was founded in 2014 by entrepreneurs with a strong background in financial technology, also known for co-inventing the very concept of dynamic data masking. After realizing that data masking alone does not solve modern privacy and compliance problems, the company was established with a vision “to do the things the right way”. 

Strengths

  • Integrated data protection and privacy platform with strong focus on GDPR/CCPA 
  • Application-level protection overlays simplify deployment and management 
  • User identity context for more fine-grained policies and monitoring
  • Broad support for big data and EDW platforms 

Challenges

  •  Architecture potentially limits support of less popular or legacy platforms 
  • Small market presence compared to competitors

As opposed to most competitors that encrypt information at the database level, SecuPI’s approach is to embed encryption overlays directly into application stacks. Thus, the solution only needs to support a few major development platforms like Java or .NET instead of numerous distinct data source types. In addition, this approach gives the platform access to real user identities rather than the typical service accounts used to connect to databases. With this technology, SecuPI delivers a single privacy-focused data protection platform for on-prem and cloud-based applications, which is easy to deploy and operate thanks to the centralized management of data protection policies.
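
A minimal sketch of this application-level pattern is shown below, using hypothetical names: because the protection logic runs inside the application tier, it can make decisions based on the real end user rather than the shared service account that connects to the database. It illustrates the approach only and is not SecuPI’s implementation or API.

from functools import wraps

def mask_email(value: str) -> str:
    local, _, domain = value.partition("@")
    return local[:1] + "***@" + domain

def protect_pii(fields):
    """Mask the listed fields unless the calling user holds the 'dpo' role."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            record = func(user, *args, **kwargs)
            if user.get("role") != "dpo":  # only the data protection officer sees clear text
                for field in fields:
                    if field in record:
                        record[field] = mask_email(record[field])
            return record
        return wrapper
    return decorator

@protect_pii(fields=["email"])
def get_customer(user, customer_id):
    # Stand-in for a database query issued under a shared service account.
    return {"id": customer_id, "name": "Jane Doe", "email": "jane.doe@example.com"}

print(get_customer({"role": "support"}, 42))  # email masked
print(get_customer({"role": "dpo"}, 42))      # email in the clear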

The SecuPI software platform brings data-centric security and compliance closer to application owners and business units, enabling sensitive data discovery, classification, anonymization, and minimization across the whole organization, with centralized policy management along with real-time monitoring of all data flows and user activities without any changes in existing applications and network infrastructures. 

Built-in controls for user consent management, anonymization and other data subject rights (such as the right to be forgotten) ensure that all existing applications can be made compliant with GDPR and similar regulations quickly and without the need to adapt existing database structures.

5.8 Thales

Thales is a leading provider of data protection solutions headquartered in Austin, Texas, USA. With over 40 years of experience in information security, the company is a veteran player in such areas as hardware security modules (HSM), data encryption, key management and PKI. The company’s modern history began in 2000 when it became a part of Thales Group, an international company based in France, which provides solutions and services for defense, aerospace and transportation markets. In 2019, Thales completed the acquisition of Gemalto, its largest competitor in the data protection market, thus substantially increasing both its market position and functional capabilities with new services like Authentication and Access Management. 

Strengths

  • Comprehensive transparent encryption, tokenization and masking capabilities  
  • High performance thanks to hardware encryption support 
  • Centralized management across all environments, even 3rd party products 
  • Standard APIs for adding encryption support to existing applications

Challenges

  • Primary focus on data protection only, no coverage of other functional areas  

In this rating, we focus primarily on the Vormetric Data Security Platform, a unified data protection platform providing customers the flexibility, scale and efficiency to address different security requirements like transparent encryption of entire database environments, privileged user access controls, granular field-level data protection with encryption, tokenization and data masking, and a single security manager for maximizing value and minimizing the total cost of ownership. 

Notable features of the platform include centralized management of encryption keys and policies across all environments and products, application encryption APIs for embedding transparent encryption into existing apps, and dynamic masking with format-preserving tokenization. Live Data Transformation enables in-place encryption of data without the need to move it elsewhere first; this helps reduce maintenance windows for rotating encryption keys or other scenarios like versioned backups. Tight integrations with storage vendors enable innovative capabilities like efficient storage deduplication of transparently encrypted data. 
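
For readers new to field-level encryption, the short Python sketch below shows the basic idea using the open-source cryptography package (an assumption of this example, installed with pip install cryptography). It is not the Vormetric platform; in particular, the key handling here is deliberately naive, whereas the platform described above centralizes keys and policies.

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, keys come from a central key manager
cipher = Fernet(key)

def encrypt_field(plaintext: str) -> bytes:
    return cipher.encrypt(plaintext.encode("utf-8"))

def decrypt_field(ciphertext: bytes) -> str:
    return cipher.decrypt(ciphertext).decode("utf-8")

token = encrypt_field("4111-1111-1111-1111")
print(token)                 # opaque ciphertext as stored in the column
print(decrypt_field(token))  # recoverable only by holders of the key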

6 Vendors to watch 

In addition to the vendors evaluated in detail in this Leadership Compass, there are several companies that for various reasons were unable to participate in the rating but are nevertheless worth mentioning. Some of the vendors below are focusing primarily on other aspects of information security yet show a notable overlap with the topic of our rating. Others have just entered the market as startups with new, yet interesting products worth checking out. 

6.1 Dataguise  

Dataguise is a privately held company headquartered in Fremont, CA, United States. Founded in 2007, the company provides a sensitive data governance platform to discover, monitor and protect sensitive data on-premises and in the cloud across multiple data environments. Although the company primarily focuses on Big Data infrastructures, supporting all major Hadoop distributions and many Hadoop-as-a-Service providers, their solution supports traditional databases, as well as file servers and SharePoint. 

From a single dashboard, customers can get a clear overview of all sensitive information stored across the corporate IT systems, understand which data is being protected and which is at risk of exposure, as well as ensure compliance with industry regulations with a full audit trail and real-time alerts. 

6.2 DataSunrise 

DataSunrise is a privately held company based in Seattle, WA, United States. It was founded in 2015 with the goal of developing a next-generation data and database security solution for real-time data protection in heterogeneous environments. 

The company’s solution combines data discovery, activity monitoring, database firewall and dynamic data masking capabilities in a single integrated product. However, the company does not focus on cloud databases only, offering support for a wide range of database and data warehouse vendors. In addition, DataSunrise provides integrations with a number of 3rd party SIEM solutions and other security tools. 

6.3 DB CyberTech

DB CyberTech (formerly DB Networks) is a privately held database security vendor headquartered in San Diego, CA, United States. Founded in 2009, the company focuses exclusively on database monitoring through non-intrusive deep protocol inspection, database discovery, and artificial intelligence. 

By combining network traffic inspection with machine learning and behavioral analysis, DB Networks claims to be able to provide continuous discovery of all databases, analyze interactions between databases and applications and then identify compromised credentials, database-specific attacks and other suspicious activities which reveal data breaches and other advanced cyberattacks. 

6.4 McAfee

McAfee is a veteran American computer security vendor headquartered in Santa Clara, California. Founded in 1987, the company has a long history in developing a broad range of endpoint protection, network, and data security solutions. Between 2011 and 2016, McAfee was a wholly owned subsidiary of Intel. Currently, the company is a joint venture between Intel and the investment company TPG Capital. 

In the database security market, McAfee offers a number of products that form the McAfee Database Security Suite providing unified database security across physical, virtual, and cloud environments. The suite provides comprehensive functionality in such areas as database and data discovery, activity monitoring, privileged access control, and intrusion detection – all through a non-intrusive network-based architecture.

6.5 Mentis Inc 

MENTIS is a privately held company that has provided sensitive information management solutions since 2004. It is headquartered in New York City, USA. The company offers a comprehensive suite of products for various aspects of discovery, management, and protection of critical data across multiple sources, built on top of a common software platform and delivered as a fully integrated yet flexible solution.

With this platform, MENTIS is able to offer business-focused solutions for such common challenges as GDPR compliance, migration to public clouds and sensitive data management for cross-border operations. The company promises quick and simple deployment for most customers with pre-built controls for data masking, monitoring, auditing and reporting for popular enterprise business applications. 

6.6 Micro Focus 

Micro Focus is a large multinational software vendor and IT consultancy. Originally established in 1976 in Newbury, United Kingdom, nowadays the company has a large global presence and a massive portfolio of products and services for application development and operations management, data management and governance, and, of course, security. In recent years, Micro Focus has grown substantially through a series of acquisitions, and in 2017, it merged with HPE’s software business.

Voltage SecureData Enterprise, the company’s data security platform, provides a comprehensive solution for securing sensitive enterprise data through transparent encryption and pseudonymization across multiple database types and Big Data platforms, on premises, in the cloud, and on the edge.

6.7 Microsoft

Microsoft is a multinational technology company headquartered in Redmond, Washington, USA. Founded in 1975, it has risen to dominate the personal computer software market with MS-DOS and Microsoft Windows operating systems. Since then, the company has expanded into multiple markets like desktop and server software, consumer electronics and computer hardware, mobile devices, digital services and, of course, the cloud. 

Given their leading position in multiple IT environments – on endpoints, in data centers and in the public cloud, Microsoft has the unique opportunity to collect vast amounts of security-related telemetry and convert it into security insights and threat intelligence. In recent years, the company has established itself as a notable security solution provider, and even though they do not yet offer specialized database security products, their portfolio in the areas of information protection and security analytics is worth checking. 

Even more interesting are the recent developments in their SQL Server platform, which focus on the concept of Confidential Computing – performing operations on sensitive data within secured enclaves. Combined with the existing encryption capabilities, this technology enables consistent data protection at any stage: at rest, in transit, and in use. 

6.8 Protegrity

Protegrity is a privately held software vendor from Stamford, CT, USA. Since 1996, the company has been in the enterprise data protection business. Their solutions implement a variety of technologies, including data encryption, masking, tokenization and monitoring across multiple environments – from mainframes to clouds. 

Protegrity Database Protector is a solution for monitoring and securing sensitive information in databases, storage and backup systems with policy-based access controls. Big Data Protector extends this protection to Hadoop-based Big Data platforms – protecting the data both at rest and in transit, as well as in use during various stages of processing. 

Protegrity Data Security Gateway provides transparent protection for data moving between multiple devices, without the need to modify any existing applications or services. 

6.9 Trustwave

Trustwave is a veteran cybersecurity vendor headquartered in Chicago, IL, United States. Since 1995, the company has provided managed security services in such areas as vulnerability management, compliance, and threat protection. 

Trustwave DbProtect is a security platform that provides continuous discovery and inventory of relational databases and Big Data stores; agentless assessment of each asset for configuration problems, vulnerabilities, dangerous user rights and privileges, and potential compliance violations; and, finally, comprehensive reporting.

The solution’s distributed architecture can meet the scalability demands of large organizations with thousands of data stores. 

7 Methodology 

KuppingerCole Leadership Compass is a tool which provides an overview of a particular IT market segment and identifies the leaders in that market segment. It is the compass which assists you in identifying the vendors and products/services in a particular market segment which you should consider for product decisions. 

It should be noted that it is inadequate to pick vendors based only on the information provided within this report. 

Customers must always define their specific requirements and analyze in greater detail what they need. This report doesn’t provide any recommendations for picking a vendor for a specific customer scenario. This can be done only based on a more thorough and comprehensive analysis of customer requirements and a more detailed mapping of these requirements to product features, i.e. a complete assessment. 

7.1 Types of Leadership 

We look at four types of leaders: 

  • Product Leaders: Product Leaders identify the leading-edge products in a particular market segment. These products deliver to a large extent what we expect from products in that market segment. They are mature.
  • Market Leaders: Market Leaders are vendors which have a large, global customer base and a strong partner network to support their customers. A lack of global presence or breadth of partners can prevent a vendor from becoming a Market Leader. 
  • Innovation Leaders: Innovation Leaders are those vendors which are driving innovation in the market segment. They provide several of the most innovative and upcoming features we hope to see in the market segment. 
  • Overall Leaders: Overall Leaders are identified based on a combined rating, looking at the strength of products, the market presence, and the innovation of vendors. Overall Leaders might have slight weaknesses in some areas but become an Overall Leader by being above average in all areas. 

For every area, we distinguish between three levels of products: 

  • Leaders: This identifies the Leaders as defined above. Leaders are products which are exceptionally strong in particular areas. 
  • Challengers: This level identifies products which are not yet Leaders but have specific strengths which might make them Leaders. Typically, these products are also mature and might be leading-edge when looking at specific use cases and customer requirements. 
  • Followers: This group contains products which lag behind in some areas, such as having a limited feature set or only a regional presence. The best of these products might have specific strengths, making them a good or even the best choice for specific use cases and customer requirements but are of limited value in other situations. 

Our rating is based on a broad range of input and long experience in that market segment. Input consists of experience from KuppingerCole advisory projects, feedback from customers using the products, product documentation, and a questionnaire sent out before creating the KuppingerCole Leadership Compass, as well as other sources. 

7.2 Product rating 

KuppingerCole as an analyst company regularly does evaluations of products/services and vendors. The results are, among other types of publications and services, published in the KuppingerCole Leadership Compass Reports, KuppingerCole Executive Views, KuppingerCole Product Reports, and KuppingerCole Vendor Reports. KuppingerCole uses a standardized rating to provide a quick overview of our perception of the products or vendors. Providing a quick overview of the KuppingerCole rating of products requires an approach combining clarity, accuracy, and completeness of information at a glance. 

KuppingerCole uses the following categories to rate products: 

  • Security
  • Functionality
  • Integration
  • Interoperability
  • Usability

Security – security is measured by the degree of security within the product. Information Security is a key element and requirement in the KuppingerCole IT Model (#70129 Scenario Understanding IT Service and Security Management). Thus, providing a mature approach to security and having a well-defined internal security concept are key factors when evaluating products. Shortcomings such as having no or only a very coarse-grained internal authorization concept are understood as weaknesses in security. Known security vulnerabilities and hacks are also understood as weaknesses. The rating is then based on the severity of such issues and the way vendors deal with them. 

Functionality – this is measured in relation to three factors. One is what the vendor promises to deliver. The second is the status of the industry. The third factor is what KuppingerCole would expect the industry to deliver to meet customer requirements. In mature market segments, the status of the industry and KuppingerCole expectations usually are virtually the same. In emerging markets, they might differ significantly, with no single vendor meeting the expectations of KuppingerCole, thus leading to relatively low ratings for all products in that market segment. Not providing what customers can expect on average from vendors in a market segment usually leads to a degradation of the rating, unless the product provides other features or uses another approach which appears to provide customer benefits.

Integration – integration is measured by the degree in which the vendor has integrated the individual technologies or products in their portfolio. Thus, when we use the term integration, we are referring to the extent to which products interoperate with themselves. This detail can be uncovered by looking at what an administrator is required to do in the deployment, operation, management, and discontinuation of the product. The degree of integration is then directly related to how much overhead this process requires. For example: if each product maintains its own set of names and passwords for every person involved, it is not well integrated. 

And if products use different databases or different administration tools with inconsistent user interfaces, they are not well integrated. On the other hand, if a single name and password can allow the admin to deal with all aspects of the product suite, then a better level of integration has been achieved.

Interoperability – interoperability can also have many meanings. We use the term “interoperability” to refer to the ability of a product to work with other vendors’ products, standards, or technologies. In this context, it means the degree to which the vendor has integrated the individual products or technologies with other products or standards that are important outside of the product family. Extensibility is part of this and measured by the degree to which a vendor allows its technologies and products to be extended for the purposes of its constituents. We think Extensibility is so important that it is given equal status so as to ensure its importance and understanding by both the vendor and the customer. As we move forward, just providing good documentation is inadequate. We are moving to an era when acceptable extensibility will require programmatic access through a well-documented and secure set of APIs. Refer to the Open API Economy Document (#70352 Advisory Note: The Open API Economy) for more information about the nature and state of extensibility and interoperability.

Usability – usability is measured by the degree to which the vendor makes its technologies and products accessible to its constituencies. This typically addresses two aspects of usability – the end-user view and the administrator view. Sometimes just good documentation can create adequate accessibility. However, we have strong expectations overall regarding well-integrated user interfaces and a high degree of consistency across user interfaces of a product or different products of a vendor. We also expect vendors to follow common, established approaches to user interface design. 

We focus on security, functionality, integration, interoperability, and usability for the following key reasons: 

  • Increased People Participation—Human participation in systems at any level is the highest area of cost and potential breakdown for any IT endeavor. 
  • Lack of Security, Functionality, Integration, Interoperability, and Usability—Lack of excellence in any of these areas will only result in increased human participation in deploying and maintaining IT systems. 
  • Increased Identity and Security Exposure to Failure—Increased People Participation and Lack of Security, Functionality, Integration, Interoperability, and Usability not only significantly increases costs, but inevitably leads to mistakes and breakdowns. This will create openings for attack and failure. 

Thus, when KuppingerCole evaluates a set of technologies or products from a given vendor, the degree of product Security, Functionality, Integration, Interoperability, and Usability which the vendor has provided are of the highest importance. This is because the lack of excellence in any or all areas will lead to inevitable identity and security breakdowns and weak infrastructure. 

7.3 Vendor rating 

For vendors, additional ratings are used as part of the vendor evaluation. The specific areas we rate for vendors are: 

  • Innovativeness 
  • Market position 
  • Financial strength 
  • Ecosystem

Innovativeness – this is measured as the capability to drive innovation in a direction which aligns with the KuppingerCole understanding of the market segment(s) the vendor is in. Innovation has no value by itself but needs to provide clear benefits to the customer. However, being innovative is an important factor for trust in vendors, because innovative vendors are more likely to remain leading-edge. An important element of this dimension of the KuppingerCole ratings is the support of standardization initiatives if applicable. Driving innovation without standardization frequently leads to lock-in scenarios. Thus, active participation in standardization initiatives adds to the positive rating of innovativeness. 

Market position – measures the position the vendor has in the market or the relevant market segments. This is an average rating over all markets in which a vendor is active, e.g. being weak in one segment doesn’t lead to a very low overall rating. This factor considers the vendor’s presence in major markets.

Financial strength – even while KuppingerCole doesn’t consider size to be a value by itself, financial strength is an important factor for customers when making decisions. In general, publicly available financial information is an important factor therein. Companies which are venture-financed are in general more likely to become an acquisition target, with massive risks for the execution of the vendor’s roadmap. 

Ecosystem – this dimension looks at the ecosystem of the vendor. It focuses mainly on the partner base of a vendor and the approach the vendor takes to act as a “good citizen” in heterogeneous IT environments. 

Again, please note that in KuppingerCole Leadership Compass documents, most of these ratings apply to the specific product and market segment covered in the analysis, not to the overall rating of the vendor. 

7.4 Rating scale for products and vendors 

For vendors and product feature areas, we use – beyond the Leadership rating in the various categories – a separate rating with five different levels. These levels are 

  • Strong positive – Outstanding support for the feature area, e.g. product functionality, or outstanding position of the company, e.g. for financial stability. 
  • Positive – Strong support for a feature area or strong position of the company, but with some minor gaps or shortcomings. E.g. for security, this can indicate some gaps in fine-grain control of administrative entitlements. E.g. for market reach, it can indicate the global reach of a partner network, but a rather small number of partners. 
  • Neutral – Acceptable support for feature areas or acceptable position of the company, but with several requirements we set for these areas not being met. E.g. for functionality, this can indicate that some of the major feature areas we are looking for aren’t met, while others are well served. For company ratings, it can indicate, e.g., a regional-only presence. 
  • Weak – Below-average capabilities in the product ratings or significant challenges in the company ratings, such as very small partner ecosystem. 
  • Critical – Major weaknesses in various areas. This rating most commonly applies to company ratings for the market position or financial strength, indicating that vendors are very small and have a very low number of customers. 

7.5 Spider graphs 

In addition to the ratings for our standard categories such as Product Leadership and Innovation Leadership, we add a spider graph for every vendor we rate, looking at specific capabilities for the market segment researched in the respective Leadership Compass. For the field of Database and Big Data Security, we look at the following eight areas: 

  • Vulnerability assessment – Discovering known vulnerabilities in database products, providing complete visibility into complex database infrastructures, detecting misconfigurations and the means for assessing and mitigating these risks. 
  • Discovery & Classification – Crucial first step in defining proper security policies for different data depending on their criticality and compliance requirements. 
  • Data-centric Security – Data encryption at rest and in transit (and in use wherever available), static and dynamic data masking and other technologies for protecting data integrity and confidentiality. 
  • Monitoring & Analytics – Monitoring of database performance characteristics, complete visibility for all access and administrative actions for each instance, including alerting and reporting functions, advanced real-time analytics, anomaly detection, and SIEM integration. 
  • Threat Prevention – Various methods of protection from cyber-attacks such as denial-of-service or SQL injection, mitigation of unpatched vulnerabilities and other infrastructure-specific security measures. 
  • Access Management – Access controls for database instances, dynamic policy-based access management, identifying and removing excessive user privileges, managing shared and service accounts, detection, and blocking of suspicious user activities. 
  • Audit & Compliance – Advanced auditing mechanisms beyond native capabilities, centralized auditing and reporting across multiple database environments, enforcing separation of duties, forensic analysis, and compliance audits. 
  • Performance & Scalability – Ability to withstand high loads, minimize performance overhead and to support deployments in high availability configurations.

These spider graphs add an extra level of information by showing the areas where products are stronger or weaker. Some products show gaps in certain areas while being strong in other areas. These might be a good fit if only specific features are required. Given the breadth and complexity of the full scope of database security, only a few of the largest vendors have enough resources to offer solutions that cover all of these areas; thus, we do not recommend overlooking smaller, more specialized products – often they may provide a substantially better return on investment. 

7.6 Inclusion and exclusion of vendors 

KuppingerCole tries to include all vendors within a specific market segment in their Leadership Compass documents. The scope of the document is global coverage, including vendors which are only active in regional markets such as Germany, Russia, or the US. 

However, there might be vendors which don’t appear in a Leadership Compass document due to various reasons: 

  • Limited market visibility: There might be vendors and products which are not on our radar yet, despite our continuous market research and work with advisory customers. This usually is a clear indicator of a lack of Market Leadership. 
  • Denial of participation: Vendors might decide on not participating in our evaluation and refuse to become part of the Leadership Compass document. KuppingerCole tends to include their products anyway as long as sufficient information for evaluation is available, thus providing a comprehensive overview of leaders in the particular market segment. 
  • Lack of information supply: Products of vendors which don’t provide the information we have requested for the Leadership Compass document will not appear in the document unless we have access to sufficient information from other sources.
  • Borderline classification: Some products might have only a small overlap with the market segment we are analyzing. In these cases, we might decide not to include the product in that KuppingerCole Leadership Compass. 

Despite our effort to cover most aspects of database and big data security in this Leadership Compass, we are not planning to review the following products: 

  • Solutions that primarily focus on unstructured data protection having limited or no database-related capabilities; 
  •  Security tools that cover general aspects of information security (such as firewalls or antimalware products) but do not offer functionality specifically tailored for data protection; 
  • Compliance or risk management solutions that focus on organizational aspects (checklists, reports, etc.) 

The goal is to provide a comprehensive view of the products in a market segment. KuppingerCole will provide regular updates on their Leadership Compass documents. 

We provide a quick overview of vendors not covered and their offerings in the chapter Vendors to watch. In that chapter, we also look at some other interesting offerings around the Database and Big Data Security market and in related market segments. 

Data security challenges in a hybrid multicloud world

Deploying in a hybrid, multicloud environment

Let’s face it, cloud computing is evolving at a rapid pace. Today, there’s a range of choices for moving applications and data to the cloud that includes various deployment models, from public and private to hybrid cloud service types. As part of a broader digital strategy, organizations are seeking ways to utilize multiple clouds. With a multicloud approach, companies can avoid vendor lock-in and take advantage of best-of-breed technologies, such as artificial intelligence (AI) and blockchain. The business benefits are clear: improved flexibility and agility, lower costs, and faster time to market. According to an IBM Institute for Business Value survey of 1,106 business and technology executives, 85% of organizations are already operating multicloud environments, and 98% plan to use multiple hybrid clouds by 2021. However, only 41% have a multicloud management strategy in place.1 When it comes to choosing cloud solutions, there’s a plethora of options available. It’s helpful to look at the differences between the various types of cloud deployment and cloud service models.

Understanding cloud deployment models

Over the past decade, cloud computing has matured in several ways and has become a tool for digital transformation worldwide. Generally, clouds take one of three deployment models: public, private or hybrid.

Public cloud

In a public cloud, services are delivered over the public internet. The cloud provider fully owns, manages and maintains the infrastructure and rents it to customers based on usage or a periodic subscription; examples include Amazon Web Services (AWS) and Microsoft Azure.

Private cloud

In a private cloud model, the cloud infrastructure and the resources are deployed on premises for a single organization, whether managed internally or by a third party. With private clouds, organizations control the entire software stack, as well as the underlying platform, from hardware infrastructure to metering tools.

Hybrid cloud

A hybrid cloud offers the best of both worlds. It connects a company’s private cloud and third-party public cloud into a single infrastructure on which the company runs its applications and workloads. Using the hybrid cloud model, organizations can run sensitive and highly regulated workloads on a private cloud infrastructure and run less sensitive, temporary workloads on the public cloud. However, moving applications and data beyond firewalls to the cloud exposes them to risk. Whether your data is in a private cloud or a hybrid environment, data security and protection controls must be in place to protect data and meet government and industry compliance requirements.

Types of cloud service models

Data security differs based on the cloud service model being used. There are four main categories of cloud service models: infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS), and database as a service (DBaaS), which is a flavor of PaaS. IaaS allows organizations to keep their existing software, middleware platforms and business applications running on infrastructure provided and managed by the service provider. Organizations benefit from this approach when they want to quickly take advantage of the cloud while minimizing impact and using existing investments. PaaS allows companies to use the infrastructure, as well as middleware or software, provided and managed by the service provider. This flexibility removes a significant burden on a company from an IT perspective and allows it to focus on developing innovative business applications.

DBaaS solutions are database environments hosted and fully managed by a cloud provider. For example, a firm might subscribe to Amazon RDS for MySQL or Microsoft Azure SQL Database. SaaS is a service model that outsources all IT and allows organizations to focus more on their core strengths instead of spending time and investment on technology. In this cloud service model, a service provider hosts applications and makes them available to end users. With each step, from IaaS to PaaS and DBaaS to SaaS, organizations give up some level of control over the systems that store, manage, distribute and protect their sensitive data. This increase in trust placed in third parties also presents an increase in risk to data security. Regardless of the chosen architecture, it’s ultimately your organization’s responsibility to ensure that appropriate data security measures are in place across environments.

Data security challenges to your cloud environment

Chances are, you’re already on your journey to the cloud. If your organization is like the vast majority of businesses, your sensitive data resides in locations you can’t control and is managed by third parties that may have unfettered access. Research by the Ponemon Institute has found that insider threats are significantly increasing in frequency and cost. According to the institute’s findings, “the average global cost of insider threats rose by 31 percent in two years to $11.45 million and the frequency of incidents spiked by 47 percent in the same time period.”4 The surveyed organizations had a global head count of 1,000 or more employees.

Determining how best to store data is one of the most important decisions an organization can make. The cloud is well-suited for long-term, enterprise-level data storage that allows organizations to benefit from massive economies of scale, which translates into lower expenses. And, this feature often makes cloud-based data centers a smarter place to store business-critical information than a stack of servers down the hall. 

Even as the expense of acquiring storage drops, it can be expensive in the long term due to increased business use and the number of personnel managing the storage systems. However, while putting data storage in the hands of third-party service providers can help save money and time, it can also pose serious security challenges and create new levels of risk.

Cloud deployments work on a shared responsibility model between the cloud provider and the consumer. In the case of an IaaS model, the cloud consumer has room to implement data security measures much like what they would normally deploy on premises and exercise tighter controls. 

On the other hand, for SaaS services, cloud consumers for the most part have to rely on the visibility provided by the cloud provider which, in essence, limits their ability to exercise more granular controls. 

It’s important to understand that whatever your deployment model or cloud service type, data security must be a priority. What’s of great concern is that your sensitive data now sits in many places, both within your company’s walls and outside of them. And, your security controls need to go wherever your data goes. 

Keep your sensitive data safe essentially everywhere 

Who has access to sensitive data in your organization? How sure are you that your staff or privileged users haven’t inappropriately accessed sensitive customer data?

Put simply, you can’t protect what you don’t know about. Simply locking down network access may not serve the purpose; after all, employees rely on this network to access and share data. This access means that the effectiveness of your data security is largely in the hands of your employees, some of whom may no longer work directly for your company but still maintain access. Automated discovery, classification and monitoring of your sensitive data across platforms is crucial to enforce effective, in-context security policies and to help address compliance with regulations.

Generally, in cloud environments, cloud service providers (CSPs) have the ability to access your sensitive data, which makes CSPs a new frontier in insider threats. Additionally, cybercriminals know that CSPs store vast amounts of important data, making such environments prime targets for attacks. To counteract these threats, organizations must use sophisticated analytics-based tools that verify authorized and normal access.

Consider encryption for cloud storage

With cloud storage, your data may move to a different place, on a different media, than its location today. The same is true of virtualization. Not only cloud-based data, but also cloud-based computing resources might shift rapidly in terms of both location and hardware underpinnings. The shifting nature of the cloud means that your security approach needs to address different kinds of cloud-based storage. Your approach also must account for copies, whether long-term backups or temporary copies, created during data movement. 

To address these challenges, you should deploy cross-platform solutions and employ strong encryption to help ensure that your data is unusable to unauthorized persons in the event that it’s mishandled. 

Even if your data is not primarily stored in the cloud, both the form in which data leaves and returns to your enterprise and the route data takes are important concerns. Data is only as secure as the weakest link in the processing chain. So, even if data is primarily kept encrypted and behind a firewall onsite, if it’s transmitted to an offsite backup or for third-party processing, the data may be exposed.

Malware detection and behavioral analysis designed to spot suspicious activities can help prevent an internal or external data breach, and they serve valuable functions in their own right.

Encryption, however, helps protect data wherever it exists, whether it’s at rest or in motion.
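
To make this concrete, here is a minimal sketch, in Python and assuming the widely used cryptography package, of encrypting a file before a copy is sent to cloud or offsite storage so that the copy is unreadable without the key. The file names and the simplified key handling are illustrative assumptions only and are not tied to any specific product.

# Minimal sketch: encrypt a file before it leaves the enterprise so that any copy made
# in transit or at rest in the cloud is unusable without the key.
# File names are illustrative; in practice the key would be issued by a key manager.
from cryptography.fernet import Fernet

def encrypt_file(plaintext_path: str, ciphertext_path: str, key: bytes) -> None:
    cipher = Fernet(key)
    with open(plaintext_path, "rb") as f:
        ciphertext = cipher.encrypt(f.read())
    with open(ciphertext_path, "wb") as f:
        f.write(ciphertext)

def decrypt_file(ciphertext_path: str, key: bytes) -> bytes:
    cipher = Fernet(key)
    with open(ciphertext_path, "rb") as f:
        return cipher.decrypt(f.read())

if __name__ == "__main__":
    with open("customer_export.csv", "wb") as f:   # sample data for the illustration
        f.write(b"account,balance\n4711,1250.00\n")
    key = Fernet.generate_key()                    # simplified; centralize this in practice
    encrypt_file("customer_export.csv", "customer_export.csv.enc", key)
    assert decrypt_file("customer_export.csv.enc", key) == b"account,balance\n4711,1250.00\n"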

Organizational challenges to your cloud environment

With data growing at an exponential rate, organizations face a growing list of data protection laws and regulations. What’s at risk? Customers’ personal information, such as payment card information, addresses, phone numbers and social security numbers, to name a few. To have an effective security solution, organizations should adopt a risk-based approach to protecting customer data across environments.

Here are five challenges that could impact your organization’s security posture: 

  •  Ensuring compliance 
  •  Assuring privacy 
  •  Improving productivity 
  •  Monitoring access controls 
  •  Addressing vulnerabilities

IBM Security™ Guardium® data protection platform is designed to help your organization meet these challenges with smarter data protection capabilities across environments. 

Keep up with compliance

The realities of cloud-based storage and computing mean that your sensitive data across hybrid multicloud systems could be subject to industry and government regulations. 

If your data is in a public cloud, you must be aware of how the CSP plans to protect your sensitive data. For example, according to the European Union (EU) General Data Protection Regulation (GDPR), information that reveals a person’s racial or ethnic origin is considered sensitive and could be subject to specific processing conditions.5 These requirements apply even to companies located in other regions of the world that hold and access the personal data of EU residents.

Understanding where an organization’s data resides, what types of information it consists of, and how these relate across the enterprise can help business leaders define the right policies for securing and encrypting their data.

Additionally, it could also help with demonstrating compliance with regulations, such as:

  • Sarbanes-Oxley (SOX) 
  • Payment Card Industry Data Security Standard (PCI DSS) 
  • Security Content Automation Protocol (SCAP) 
  • Federal Information Security Management Act (FISMA) 
  • Health Information Technology for Economic and Clinical Health Act (HITECH) 
  • Health Insurance Portability and Accountability Act (HIPAA) 
  • California Consumer Privacy Act (CCPA). 

IBM Security Guardium solutions are designed to monitor and audit data activity across databases, files, cloud deployments, mainframe environments, big data repositories, and containers. The process is streamlined with automation, reducing the cost and time of meeting compliance requirements.

Address privacy issues

With the proliferation of smartphones, tablets and smart watches, managing access controls and privacy can become a daunting task. One of the challenges for security administrators is ensuring that only individuals with a valid business reason have access to personal information. For example, physicians should have access to sensitive information, such as a patient’s symptoms and prognosis data, whereas a billing clerk only needs the patient’s insurance number and billing address.
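
To make the idea concrete, the minimal Python sketch below filters a patient record down to the fields a given role has a business reason to see. The roles, field names and sample record are hypothetical, and in practice such controls are enforced by the data platform rather than by application code.

# Illustrative sketch: return only the fields a given role has a business need to see.
# Roles, field names and the sample record are hypothetical.
PATIENT_FIELDS_BY_ROLE = {
    "physician":     {"name", "symptoms", "prognosis", "medications"},
    "billing_clerk": {"name", "insurance_number", "billing_address"},
}

def view_for_role(record: dict, role: str) -> dict:
    allowed = PATIENT_FIELDS_BY_ROLE.get(role, set())
    return {field: value for field, value in record.items() if field in allowed}

patient = {
    "name": "J. Doe",
    "symptoms": "chest pain",
    "prognosis": "stable",
    "medications": "aspirin",
    "insurance_number": "INS-0042",
    "billing_address": "123 Main St",
}
print(view_for_role(patient, "billing_clerk"))   # name, insurance number and billing address only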

Your customers expect you to make their privacy a priority. Start with developing a privacy policy, describing the information you collect about your customers and what you intend to do with it.

IBM Security Guardium Insights provides security teams with risk-based views and alerts, as well as advanced analytics based on proprietary machine learning (ML) technology, to help them uncover hidden threats within large volumes of data across hybrid environments.

Hear from Kevin Baker, Chief Information Security Officer at Westfield, on the data privacy challenges facing his organization, and his approach to addressing them through the necessary insights and automation while scaling to support innovation with IBM Security Guardium Insights. 

Improve productivity

Security and privacy policies should enable and enhance business operations, not interfere with them. Policies should be built into everyday operations and work seamlessly within and across all environments, whether private, public, on premises or hybrid, without impacting your productivity. For example, when private clouds are deployed to facilitate application testing, consider using encryption or tokenization to mitigate the risk of exposing that sensitive data, as in the sketch below.
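
As a simple illustration of the tokenization idea, the sketch below (Python standard library only) replaces sensitive values with deterministic, keyed tokens before a record is copied into a test environment. The secret, field names and sample row are assumptions for illustration; production tokenization typically relies on a vault or a format-preserving scheme.

# Illustrative sketch: swap sensitive values for deterministic tokens before copying
# production data into a test environment. Secret, fields and sample row are hypothetical.
import hashlib
import hmac

TOKEN_SECRET = b"rotate-me-and-store-me-in-a-key-manager"
SENSITIVE_FIELDS = {"ssn", "card_number"}

def tokenize(value: str) -> str:
    digest = hmac.new(TOKEN_SECRET, value.encode(), hashlib.sha256).hexdigest()
    return "tok_" + digest[:16]          # the same input always yields the same token

def mask_row(row: dict) -> dict:
    return {k: (tokenize(v) if k in SENSITIVE_FIELDS else v) for k, v in row.items()}

print(mask_row({"name": "J. Doe", "ssn": "123-45-6789", "card_number": "4111111111111111"}))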

IBM® Guardium solutions can help your security teams monitor user activity and respond to threats in real time. This process is streamlined with automated and centralized controls, thus reducing the time spent on investigations and empowering database administrators and data privacy specialists to make more informed decisions. 

According to Ponemon Institute, IBM Guardium solutions can help make IT security teams more efficient.7 Prior to deploying the Guardium solution, about 61% of the surveyed IT security teams’ time was spent identifying and remediating data security issues. Post deployment, the average percentage of time spent on such activities was 40%, a decrease of 42%.

Monitor access controls

The lifecycle of a data breach is getting longer, states a study by the Ponemon Institute. In fact, the institute’s research found that 49% of the data breaches studied were due to human error, including system glitches and “inadvertent insiders” who may be compromised by phishing attacks or have their devices infected, lost or stolen.

Cybercriminals range from individuals to state-sponsored hackers with disruptive intentions. They could be rogue computer scientists trying to show off or make a political statement, tough and organized intruders, disgruntled employees, or even foreign state-sponsored hackers who want to collect intelligence from government organizations.

Breaches can also stem from human error or misconfigurations, for example, when permissions are set incorrectly on a database table, or when an employee’s credentials are stolen or otherwise compromised. One way to limit this exposure is to authorize both privileged and ordinary end users with “least possible privilege” to minimize the abuse of privileges and the impact of errors. Organizations should protect data from both internal and external attacks in physical, virtual and private cloud environments.

Perimeter defenses are important, but what’s more important is protecting sensitive data wherever it resides. This way, if the perimeter is breached, sensitive data remains secure and unusable to a thief. As perimeters decline, protecting data at its source becomes crucial.

A layered data security solution can help administrators examine data access patterns and privileged user behaviors to understand what’s happening inside their private cloud environment; a minimal monitoring sketch follows this paragraph. The challenge is to implement security solutions without hampering the business’ ability to grow and adapt, while providing appropriate access and data protections so that data is managed on a need-to-know basis, wherever it resides.
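
A minimal sketch of the idea, assuming per-user access counts have already been extracted from an audit trail, is to flag any user whose volume deviates sharply from the norm. The counts and threshold below are hypothetical; commercial platforms apply far richer, ML-driven analytics.

# Illustrative sketch: flag users whose data-access volume deviates sharply from the norm.
# The per-user counts and the statistical cut-off are purely illustrative.
from statistics import mean, pstdev

access_counts = {"alice": 2, "bob": 2, "carol": 2, "dave": 2, "dba_admin": 12}

avg = mean(access_counts.values())
sd = pstdev(access_counts.values())
threshold = avg + 1.5 * sd               # simple cut-off for the example

for user, count in access_counts.items():
    if count > threshold:
        print(f"ALERT: {user} made {count} accesses (threshold {threshold:.1f})")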

Address vulnerability assessments

When it comes to defending against attackers, what worked in the past may not work today. Many organizations rely on diverse security technologies that could be operating in silos. According to a study by Forrester Consulting, on average, organizations are managing 25 different security products or services from 13 vendors.

The number of data repository vulnerabilities is vast, and criminals can exploit even the smallest window of opportunity. These vulnerabilities include missing patches, misconfigurations and default system settings that leave exactly the gaps cybercriminals are hoping for. This complexity becomes increasingly difficult to track and manage as data repositories become virtualized.

Furthermore, companies that move to cloud often struggle to evolve their data security practices in a way that enables them to protect sensitive data while enjoying the benefits of the cloud. The more cloud services your organization uses, the more control you may need to manage the different environments. 

Think about the homegrown tools you have in place today for data security. Will they still work tomorrow? For example, with data-masking routines or database activity monitoring scripts, will coding changes be required to make them work on a virtual database? Chances are that a significant investment will be required to update these homegrown solutions. In short, organizations need a data-centric approach to security in which security strategies are built into the fabric of their hybrid, multicloud environments.

Unlike a point solution, IBM Security Guardium Insights supports heterogeneous integration with other industry-leading security solutions. Guardium data protection also provides best-of-breed integration with IBM Security solutions, such as IBM QRadar® SIEM for proactive data protection.

A smarter data security approach

As cloud matures and scales rapidly, we must realize that effective data security isn’t a sprint, but a marathon—an ongoing process that continues through the life of data.

While there’s no one-size-fits-all approach for data security, it’s crucial that organizations look to centralize data security and protection controls that can work well together. This approach can help security teams improve visibility and control over data across the enterprise and cloud.

What constitutes an effective cloud security strategy?

  • Discover and classify your structured and unstructured sensitive data, online and offline, regardless of where it resides, including sensitive IP and data that’s subject to regulations such as PCI DSS, HIPAA, Lei Geral de Proteção de Dados (LGPD), CCPA and GDPR; a minimal discovery sketch follows this list.
  • Assess risk with contextual insights and analytics. How is your critical data being protected? Are access entitlements in accordance with industry and regulatory requirements? Is the data vulnerable to unauthorized access and security risks based on a lack of protection controls?
  • Protect sensitive data sources based on a deep understanding of what data you have and who has and should have access to it. Protection controls must accommodate the different data types and user profiles within your environment. Flexible access policies, data encryption and encryption key management should help keep your sensitive data protected.
  • Monitor data access and usage patterns to quickly uncover suspicious activity. Once the appropriate controls are in place, you need to be quickly alerted to suspicious activities and deviations from data access and usage policies. You must also be able to centrally visualize your data security and compliance posture across multiple data environments without relying on multiple, disjointed consoles. 
  • Respond to threats in real time. Once alerted to potential vulnerabilities and risk, you need the ability to respond quickly. Actions can include blocking and quarantining suspicious activity, suspending or shutting down user sessions or data access, and sending actionable alerts to IT security and operations systems. 
  • Simplify compliance and its reporting. You need to be able to demonstrate data security and compliance to both internal and external parties and make appropriate modifications based on results. Demonstrating compliance with regulatory mandates often requires storing and reporting on years’ worth of data security and audit data. Data security and compliance reporting must be comprehensive, accounting for your entire data environment.
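
To illustrate the first step, discovery and classification, here is a toy Python sketch that scans text for values resembling payment card or US Social Security numbers. The patterns and sample text are assumptions for illustration; real discovery tools combine pattern matching with context, validation and machine learning across structured and unstructured sources.

# Toy illustration of pattern-based discovery: find values that look like payment card
# or US Social Security numbers. Patterns and sample text are illustrative only.
import re

PATTERNS = {
    "payment_card": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
    "us_ssn":       re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text: str) -> dict:
    findings = {label: pattern.findall(text) for label, pattern in PATTERNS.items()}
    return {label: hits for label, hits in findings.items() if hits}

sample = "Card 4111 1111 1111 1111 charged; contact holder, SSN 123-45-6789."
print(classify(sample))   # {'payment_card': ['4111 1111 1111 1111'], 'us_ssn': ['123-45-6789']}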

Encrypt data in hybrid, multicloud environments

Since we can no longer rely on the perimeter to secure an organization’s sensitive data, it’s crucial for today’s business leaders to wrap the data itself in protection. IBM Security Guardium Data Encryption is a suite of modular, integrated and highly scalable encryption, tokenization, access management, and encryption key management solutions that can be deployed essentially across all environments. These solutions encode your sensitive information and provide granular control over who has the ability to decode it.

Strong encryption is a common answer to the challenge of securing sensitive data wherever it resides. However, encryption raises complicated issues of portability and access assurance. Data is only as good as the security and reliability of the keys that protect it. How are keys backed up? Can data be transparently moved among cloud providers, or shared between cloud-based and local storage? 

IBM Security Guardium Key Lifecycle Manager can help customers who require more stringent data protection. The solution offers security-rich, robust key storage, key serving and key lifecycle management for IBM and non-IBM storage solutions using the OASIS Key Management Interoperability Protocol (KMIP). With centralized management of encryption keys, organizations are better able to meet regulations such as PCI DSS, SOX and HIPAA.
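
The envelope-encryption idea behind centralized key management can be sketched as follows, again assuming the Python cryptography package: each record is encrypted with its own data key, and only a wrapped (encrypted) copy of that data key is stored alongside the data, while the key-encryption key remains with the key manager. This is a conceptual illustration only, not the KMIP protocol or any specific product implementation.

# Conceptual envelope-encryption sketch: data is encrypted with a per-record data key,
# and the data key is wrapped with a key-encryption key (KEK) that a central key manager
# would hold and rotate. Names and values are illustrative.
from cryptography.fernet import Fernet

kek = Fernet.generate_key()        # in practice, held by the key manager, never by the app
wrapper = Fernet(kek)

def encrypt_record(plaintext: bytes) -> tuple[bytes, bytes]:
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = wrapper.encrypt(data_key)      # only the wrapped key is stored with the data
    return ciphertext, wrapped_key

def decrypt_record(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    data_key = wrapper.decrypt(wrapped_key)      # requires access to the KEK via the key manager
    return Fernet(data_key).decrypt(ciphertext)

ciphertext, wrapped_key = encrypt_record(b"account=4711; balance=1250.00")
assert decrypt_record(ciphertext, wrapped_key) == b"account=4711; balance=1250.00"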

IBM Security Guardium platform was named a Leader in the Forrester Wave: Data Security Portfolio Vendors, Q2 2019. According to the report, the Guardium platform is a “good fit for buyers seeking to centrally reduce and manage data risks across disparate database environments.”

Discover a new approach to data security

At the core of protecting a hybrid, multicloud environment is the need for organizations to adopt solutions that offer maximum visibility and business continuity and help meet compliance and customer trust. 

IBM Security Guardium platform is centered on the overarching value proposition of a “smarter and more adaptive approach” to data security. Further, the solution supports a wide array of cloud environments, including private and public clouds, across PaaS, IaaS, and SaaS environments, for continuous operations and security. 

The Ponemon Institute conducted a survey of organizations that use the Guardium solution to monitor and defend their company’s data and databases. It found that 86% of respondents said the ability to use the Guardium solution to manage data risk across complex IT environments, such as a multicloud or hybrid cloud ecosystem, is very valuable. Similarly, ML and automation are seen as significant benefits in managing data risks across the enterprise.

With the Guardium solution, your security team can choose the system architecture that works for your enterprise. For example, your team can deploy all of the Guardium components in the cloud, or choose to keep some of those components, such as a central manager, on premises. This flexibility allows existing customers to easily extend their data protection strategy to the cloud without impacting existing deployments.

IBM Modern Integration Field Guide

What are IBM Cloud Paks?

Beyond containers and Kubernetes, you need to orchestrate your production topology and provide management, security and governance for your applications. IBM Cloud Paks are enterprise-ready, containerized software solutions that run on Red Hat® OpenShift® on IBM Cloud™ and Red Hat Enterprise Linux. Built on a common integration layer, IBM Cloud Paks include containerized IBM middleware and common software services for development and management.

  • IBM Cloud Pak™ for Applications. Quickly build cloud-native apps by leveraging built-in developer tools and processes, including support for microservices functions and serverless computing. 
  • IBM Cloud Pak™ for Data. Simplify the collection, organization, and analysis of data. Turn data into insights through an integrated catalog of IBM, open source, and third-party microservices add-ons. 
  • IBM Cloud Pak™ for Integration. Achieve the speed, flexibility, security, and scale required for all of your integration and digital transformation initiatives, including API lifecycle, application and data integration, messaging and events, high-speed transfer, and integration security.
  • IBM Cloud Pak™ for Automation. Deploy on your choice of clouds, with low-code tools for business users and real-time performance visibility for business managers. Migrate your automation runtimes without application changes or data migration. Automate at scale without vendor lock-in.
  • IBM Cloud Pak™ for Multicloud Management. Gain consistent visibility, automation, and governance across a wide range of hybrid, multicloud management capabilities, including integration with existing tools and processes.
  • IBM Cloud Pak™ for Security. Integrate security tools to gain insights into threats across hybrid, multicloud environments.

IBM Cloud Pak for Integration 

Building integrated solutions requires you to use more than one integration pattern at a time. Simplify the management of your integration architecture and reduce cost. Running on Red Hat OpenShift, IBM Cloud Pak for Integration gives you the agility to deploy workloads on-premises and on private and public clouds. 

  • API lifecycle management. Create, secure, manage, share, and monetize APIs across clouds while you maintain continuous availability. 
  • Application and data integration. Integrate your business data and applications quickly and easily across any cloud system.
  • Enterprise messaging. Simplify, accelerate, and facilitate the reliable exchange of data with a trusted, flexible, and security-rich messaging solution. 
  • Event streaming. Use Apache Kafka to deliver messages more easily and reliably and to react to events in real time. 
  • High-speed data transfer. Reliably send, share, stream, and sync large files and data sets at maximum speed. 
  • Platform-level security, automation, and monitoring. Quickly set up and manage gateways, control access on a per resource basis, deploy your integration flows, and monitor all of your traffic.

API lifecycle management 

Bridge the gap between cloud and on-premises applications quickly and easily by abstracting your back-end implementation as APIs. One of the best ways to do this is by exposing services as APIs for external consumption and letting the consuming applications compose the integration logic; a minimal sketch follows the list below.

  • Expand. Provide a standard API interface. Include global API discovery to access key business functions as fine-grained services. Encourage data reuse and mashups driven by innovative transformation use cases.
  • Integrate. Create a significant impact on your business goals by exposing core services through managed APIs. Enable projects to integrate with one another and discover the benefits of synergy across the enterprise.
  • Scale. Be prepared to scale dynamically based on the demands of your expanding ecosystem and other usage metrics.
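
As a minimal sketch of exposing a back-end lookup as a fine-grained REST API, the example below uses Flask, one possible framework among many; in production an API gateway and API management layer would typically sit in front of it. The endpoint path, sample data and field names are assumptions for illustration.

# Minimal sketch: expose a back-end lookup as a REST API that consuming applications
# can discover and compose. Endpoint, data and fields are illustrative.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in for a system of record, for example a customer master behind the firewall.
CUSTOMERS = {"C001": {"name": "Acme Corp", "status": "active"}}

@app.route("/api/v1/customers/<customer_id>", methods=["GET"])
def get_customer(customer_id):
    customer = CUSTOMERS.get(customer_id)
    if customer is None:
        abort(404)
    return jsonify({"id": customer_id, **customer})

if __name__ == "__main__":
    app.run(port=8080)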

Application & data integration

Integrate all your business data and applications across any cloud more quickly and easily using open standards. From the simplest SaaS application to the most complex legacy systems, this pattern alleviates the concern about mismatched sources, formats, or standards. 

  • Integrate applications. Connect applications and data sources on-premises or across multiple clouds to coordinate the exchange of business information as a coarse-grained service so that core data and transactions maintain their integrity. In contrast to the API management pattern, this pattern is best suited for coarse-grained services. 
  • Integrate data. In near real time, synchronize data across multiple endpoints in the integration landscape to achieve a cohesive view of data, gathered from legacy back ends to SaaS applications, to DBaaS repositories, to analytics cloud services.
  • Incorporate agile integration. Unify cross-enterprise capabilities. Enforce the use of core enterprise services and business processes. Include cognitive augmentation within your integration logic. Set up agile organizational models and governance practices.

Enterprise messaging

Simplify, accelerate, and facilitate the reliable exchange of data with a flexible, security-rich messaging solution. Extend traditional messaging capabilities so that modern applications can communicate with new technologies such as AI, IoT devices, and other digital channels.

  • Ensure secure and reliable messaging. Preserve message integrity throughout the network, protect data, and ensure regulatory compliance with security-rich functions. Provide reliable delivery without message loss, duplication, or complex recovery.
  • Unify your enterprise. More easily integrate heterogeneous application platforms using industry-standard JMS messaging protocols, scalable publish-subscribe, and a choice of APIs.
  • Expect high performance and scalable message transfer. Your apps can rely on a highly available solution with fully automated failover, dynamically distributed messaging workloads, high throughput, and a low-latency solution.
  • Simplify management and control. Use a dashboard to gain insights with visibility to message and file tracking. Audit data movement and transaction completion.

Event streaming

Take advantage of event streams to build adaptive solutions with engaging, more personalized user experiences by responding to events before the moment passes. By design, events occur in a continuous stream from a multitude of sources in a low-latency, high-velocity manner. 

  • Decrease system complexity. Loose coupling allows event producers to emit events without any knowledge about who is going to consume those events. Likewise, event consumers don’t need to be aware of the event emitters. 
  • Simplify the interface. One event producer can reach multiple endpoints with a single call.
  • React to events as they happen. Enable scenarios such as IoT devices, streaming analytics, real-time back-end transactions, geolocation tracking, and auditing.
  • Facilitate machine learning. Improve predictive analytics by moving from batch processing to real-time event streaming.
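
A minimal sketch of the publish/subscribe flow, assuming a local Kafka broker and the confluent-kafka Python client: a producer emits an order event without knowing who will consume it, and a consumer in another service reacts to it. The broker address, topic and group names are illustrative.

# Minimal event-streaming sketch with the confluent-kafka client (one common choice).
# Broker address, topic and group id are assumptions; error handling is kept minimal.
from confluent_kafka import Consumer, Producer

BROKER, TOPIC = "localhost:9092", "orders"

# Producer side: emit an event without any knowledge of its consumers (loose coupling).
producer = Producer({"bootstrap.servers": BROKER})
producer.produce(TOPIC, key="order-1001", value='{"status": "created"}')
producer.flush()

# Consumer side: react to events as they arrive.
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "fulfillment-service",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])
message = consumer.poll(5.0)
if message is not None and message.error() is None:
    print(message.key(), message.value())
consumer.close()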

High-speed file transfer

Enterprises need a reliable, fast and secure way to transfer and synchronize data across hybrid and multicloud environments. An integration platform makes it possible to securely transfer data across geographies faster than traditional tools, between any kind of storage, whether it’s on premises, in the cloud, or spread across diverse cloud vendors.

  • Integrate application data. Coordinate the exchange of business information so that data is available when and where it is needed.
  • Transform data for analytics. Access, cleanse, and prepare data to create a consistent view of your business within a data warehouse or data lake.
  • Enrich enterprise data. Augment DBaaS content with data from enterprise back-end systems for a 360-degree view of the user. Allow partner and in-house data sources to sync and complement each other’s updates.
  • Transfer data. Move huge amounts of data between on-premises and cloud or from cloud to cloud rapidly and predictably with enhanced levels of IT service security. Speed the adoption of cloud platforms when data is very large and needs to be exchanged across long distances. 

IBM Garage: Accelerate your journey

Modernization comes in many flavors, and rewriting your entire estate is not feasible. Big bang modernization efforts are risky, so it is best to break large initiatives into smaller projects with measurable impact. Your goal is to accelerate value, deliver frequently, and reduce risk. IBM Garage experts can help.

  • Co-create. Identify a business modernization opportunity. Define and build the MVP with your squad, get feedback, and co-create a solution.
  • Co-execute. Manage risk by choosing the right approach to modernize your current estate. Accelerate your journey through automation and technology.
  • Co-operate. Harden for production, standardize operations, and improve DevOps efficiency across your application estate. 

Agile Integration: Container-based and microservices-aligned lightweight integration runtimes

Integration Has Changed

IDC estimates that spending on digital transformation initiatives will represent a $20 trillion market opportunity over the next 5 years. What’s behind this staggering explosion of spending? The ever-present, ever-growing need to build new customer experiences by connecting a network of applications that leverage data of all types.

That’s no easy task – bringing together processes and information sources at the right time and in the right context is difficult at best, particularly when you consider the aggressive adoption of SaaS business applications. New data sources need to be injected into business processes to create competitive differentiation.

The Value of Application Integration for Digital Transformation

When you consider your agenda for building new customer experiences and focus on how data is accessed and made available for the services and APIs that power these initiatives, you can see several significant benefits that application integration brings to the table:

  • Effectively addressing disparity – Access data from any system in any format and build homogeneity from it, no matter how diverse your multicloud landscape grows.
  • Expertise of the endpoints – Modern integration includes smarts around complex protocols and data formats, but it also incorporates intelligence about the actual business objects and functions within the end systems.
  • Innovation through data – Applications owe much of their innovation to their opportunity to combine data beyond their boundaries and create meaning from it, a trait particularly visible in microservices architectures.
  • Enterprise-grade artifacts – Integration flows inherit a tremendous amount of value from the runtime, which includes enterprise-grade features for error recovery, fault tolerance, log capture, performance analysis, and much more.

The integration landscape is changing to keep up with enterprise and marketplace computing demands, but how did we get from SOA and ESBs to a modern, containerized, agile approach to integration?

The Journey So Far – SOA and the ESB pattern

Before we can look forward to the future of agile integration, we need to understand what came before. SOA (service oriented architecture) patterns emerged at the start of the millennium, and at first the wide acceptance of the standards SOA was built upon heralded a bright future where every system could discover and talk to any other system via synchronous exposure patterns.

This was typically implemented in the form of the ESB (enterprise service bus) – an architectural pattern aimed at providing synchronous connectivity to back-end systems, typically over web services. While many enterprises successfully implemented the ESB pattern, it became something of a victim of its own success.

  • ESB patterns often formed a single infrastructure for the whole enterprise, with tens or hundreds of integrations installed on a production server cluster. Although heavy centralization isn’t required by the ESB pattern, the implemented topologies almost always fell prey to it. 
  • Centralized ESB patterns often failed to deliver the significant savings companies were hoping for. Few interfaces could be re-used from one project to another, yet the creation and maintenance of interfaces was prohibitively expensive for any one project to take on. 
  • SOA was more complex than just the implementation of an ESB, particularly around who would fund an enterprise-wide program. Cross-enterprise initiatives like SOA and its underlying ESB struggled to find funding, and often that funding only applied to services that would be reusable enough to cover their creation cost.

The result was that creation of services by this specialist SOA team sometimes became a bottleneck for projects rather than the enabler that it was intended to be. This typically gave the centralized ESB pattern a bad name by association.

All that said, the centralized ESB pattern does bring some benefits, especially for organizations that have a highly skilled integration team with a low attrition rate and a predictable, manageable number of new integration requirements. A single, centralized ESB certainly simplifies consistency and governance of implementation. However, many organizations have more fluid and dynamic requirements to manage, and are also under pressure to implement integration using the same cloud-native technologies and agile methods being used in other parts of the organization. A case in point is the move to microservices architecture typically found in the application development space.

Service oriented architecture (SOA) vs microservice architecture

SOA and microservices architecture share many words in common, but they are in fact completely separate concepts.

Service-oriented architecture and the associated ESB pattern form an enterprise-wide initiative to make the data and functions in systems of record readily available to new applications. We create reusable, synchronous interfaces such as web services and RESTful APIs to expose the systems of record, so that new, innovative applications can be created more quickly by incorporating data from multiple systems in real time.

Microservices architecture, on the other hand, is a way of writing an individual application as a set of smaller (microservice) components in a way that makes that application more agile, scalable, and resilient. So in summary, service oriented architecture is about real-time integration between applications, whereas microservices architecture is about how we build the internals of applications themselves. 

The Case for Agile Integration

Why have microservices concepts become so popular in the application space? They represent an alternative approach to structuring applications. Rather than an application being a large silo of code running on the same server, the application is designed as a collection of smaller, completely independently-running components.

Microservices architecture enables three critical benefits:

  • Greater agility – Microservices are small enough to be understood completely in isolation and changed independently. 
  • Elastic scalability – Their resource usage can be fully tied into the business model.
  • Discrete resilience – With suitable decoupling, changes to one microservice do not affect others at runtime.

With those benefits in mind, what would it look like if we re-imagined integration, which is typically deployed in centralized silos, with a new perspective based on microservices architecture? That’s what we call an “Agile Integration.”

There are three related, but separate aspects to agile integration:

Aspect 1: Fine-grained integration deployment. What might we gain by breaking out the integrations in the siloed ESB into separate runtimes that could be maintained and scaled independently? What is the simplest way these discrete integrations can be built, deployed, and maintained?

Aspect 2: Decentralized integration ownership. How should we adjust the organizational structure to better leverage a more autonomous approach, giving application teams more control over the creation and exposure of their own integrations?

Aspect 3: Cloud native integration infrastructure. How can we best leverage the container-based infrastructure that underpins cloud-native applications to provide productivity, operational consistency, and portability for both applications and integrations across a hybrid and multicloud landscape?

 

Aspect 1: Fine-grained Integration Deployment

Traditional integration is characterized by the heavily centralized deployment of integrations in the ESB pattern. Deploying all integrations to a single, heavily nurtured high-availability (HA) pair of integration servers has been shown to introduce a bottleneck for projects. Any deployment to the shared servers runs the risk of destabilizing existing critical interfaces, and no individual project can choose to upgrade the version of the integration middleware to gain access to new features.

Using the same concepts as microservices architecture, we can break up the enterprise-wide ESB into smaller, more manageable, dedicated pieces. Perhaps in some cases we can even get down to one runtime for each interface we expose. These “fine-grained integration deployment” patterns provide specialized, right-sized containers, offering improved agility, scalability and resilience, and they look very different from the centralized ESB patterns of the past.

Fine-grained integration deployment draws on the benefits of a microservices architecture. Let’s revisit what we listed as microservices benefits in light of fine-grained integration deployment:

  • Agility: Different teams can work on integrations independently without deferring to a centralized group or infrastructure that can quickly become a bottleneck. Individual integration flows can be changed, rebuilt, and deployed independently of other flows, enabling safer application of changes and maximizing speed to production.
  • Scalability: Individual flows can be scaled on their own, allowing you to take advantage of efficient elastic scaling of cloud infrastructures.
  • Resilience: Isolated integration flows that are deployed in separate containers cannot affect one another by stealing shared resources, such as memory, connections, or CPU.
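
As a sketch of what fine-grained deployment can look like in practice, the example below uses the official Kubernetes Python client to create one integration flow as its own Deployment, so it can be changed and scaled independently of every other flow. The image name, namespace, replica count and resource sizing are assumptions for illustration.

# Sketch: deploy a single integration flow as its own Kubernetes Deployment.
# Uses the official 'kubernetes' Python client; names and sizing are illustrative.
from kubernetes import client, config

config.load_kube_config()              # or load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

flow_name = "invoice-sync-flow"
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name=flow_name),
    spec=client.V1DeploymentSpec(
        replicas=2,                    # scaled independently of other flows
        selector=client.V1LabelSelector(match_labels={"app": flow_name}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": flow_name}),
            spec=client.V1PodSpec(containers=[client.V1Container(
                name="integration-runtime",
                image="registry.example.com/integration/invoice-sync:1.0.0",
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "250m", "memory": "256Mi"},
                ),
            )]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="integration", body=deployment)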

Aspect 2: Decentralized integration ownership

A significant challenge faced by service-oriented architecture was the way that it tended to force the creation of centralized integration teams and infrastructure to implement the service layer. 

This created ongoing friction in the pace at which projects could run since they always had the central integration team as a dependency. The central team knew their integration technology well, but often didn’t understand the applications they were integrating, so translating requirements could be slow and error prone. 

Many organizations would have preferred the application teams own the creation of their own services, but the technology and infrastructure of the time didn’t enable that. 

The move to fine-grained integration deployment opens a door such that ownership of the creation and maintenance of integrations can also be distributed out to the application teams. It’s not unreasonable for business application teams to take on integration work, streamlining the implementation of new integrations. 

Furthermore, API management has matured to the point where application teams can easily manage the exposure of their own APIs, again without resorting to a separate centralized integration team. 

Microservices design patterns often prefer to increase decoupling by receiving event streams of data and building localized data representations rather than always going via API calls to retrieve data in real time. Agile integration also considers how best to enable teams to publish and consume event streams both within and beyond application boundaries.

Aspect 3: Cloud-native integration infrastructure

Integration runtimes have changed dramatically in recent years, so much so that these lightweight runtimes can be used in truly cloud-native ways. By this we are referring to their ability to hand off the burden of many of their previously proprietary mechanisms for cluster management, scaling and availability to the cloud platform in which they are running.

This entails a lot more than just running them in a containerized environment. It means they have to be able to function as “cattle not pets,” making best use of the orchestration capabilities such as Kubernetes and many other common cloud standard frameworks. 

Clearly, agile integration requires that the integration topology be deployed very differently, and a key aspect of that is a modern integration runtime that can be run in a container-based environment and is well suited to cloud-native deployment techniques. Modern integration runtimes are almost unrecognizable from their historical peers. Let’s have a look at some of those differences:

  • Fast lightweight runtime: They run in containers such as Docker and are sufficiently lightweight that they can be started and stopped in seconds and can be easily administered by orchestration frameworks such as Kubernetes.
  • Dependency free: They no longer require databases or message queues, although obviously, they are very adept at connecting to them if they need to. 
  • File system based installation: They can be installed simply by laying their binaries out on a file system and starting them up—ideal for the layered file systems of Docker images. 
  • DevOps tooling support: The runtime should be continuous integration and deployment-ready. Script and property file-based install, build, deploy, and configuration to enable “infrastructure as code” practices. Template scripts for standard build and deploy tools should be provided to accelerate inclusion into DevOps pipelines.
  • API-first: The primary communication protocol should be RESTful APIs. Exposing integrations as RESTful APIs should be trivial and based upon common conventions such as the Open API specification. Calling downstream RESTful APIs should be equally trivial, including discovery via definition files.
  • Digital connectivity: In addition to the rich enterprise connectivity that has always been provided by integration runtimes, they must also connect to modern resources, for example, NoSQL databases (MongoDB, Cloudant, etc.) and messaging services such as Kafka. Furthermore, they need access to a rich catalogue of application intelligent connectors for SaaS (software as a service) applications such as Salesforce.
  • Continuous delivery: Continuous delivery is enabled by command line interfaces and template scripts that mesh into standard DevOps pipeline tools. This further reduces the knowledge required to implement interfaces and increases the pace of delivery.
  • Enhanced tooling: Enhanced tooling for integration means most interfaces can be built by configuration alone, often by individuals with no integration background. With the addition of templates for common integration patterns, integration best practices are burned into the tooling, further simplifying the tasks. Deep integration specialists are less often required, and some integration can potentially be taken on by application teams as we will see in the next section on decentralized integration. 

Modern integration runtimes are well suited to the three aspects of an agile integration methodology: fine-grained deployment, decentralized ownership, and true cloud-native infrastructure. 

Along with integration runtimes becoming more lightweight and container friendly, we also see API management and messaging/eventing infrastructure moving to container-based deployment. This is generally in order to benefit from the operational consistency provided by orchestration platforms such as Kubernetes, which provide auto-scaling, load balancing, deployment, internal routing, reinstatement and more in a standardized way, significantly simplifying the administration of the platform.

Agile Integration for the Integration Platform

Throughout this paper, we have been focused on the application integration features as deployed in an agile integration architecture. However, many enterprise solutions can only be solved by applying several critical integration capabilities. An integration platform (or what some analysts refer to as a “hybrid integration platform”) brings together these capabilities so that organizations can build business solutions in a more efficient and consistent way. 

Many industry specialists agree on the value of this integration platform. Gartner notes:

The hybrid integration platform (HIP) is a framework of on-premises and cloud-based integration and governance capabilities that enables differently skilled personas (integration specialists and non-specialists) to support a wide range of integration use cases.… Application leaders responsible for integration should leverage the HIP capabilities framework to modernize their integration strategies and infrastructure, so they can address the emerging use cases for digital business. 

One of the key things that Gartner notes is that the integration platform allows multiple people from across the organization to work in the user experience that best fits their needs. This means that business users can be productive in a simpler experience that guides them through solving straightforward problems, while IT specialists have expert levels of control to deal with the more complex enterprise scenarios. These users can then work together through reuse of the assets that have been shared, while preserving governance across the whole.

Satisfying the emerging use cases of the digital transformation is as important as supporting the various user communities. The bulk of this paper will explore these emerging use cases, but first we should further elaborate on the key capabilities that must be part of the integration platform.

IBM Cloud Pak for Integration

IBM Cloud Integration brings together the key set of integration capabilities into a coherent platform that is simple, fast and trusted. It allows you to easily build powerful integrations and APIs in minutes, provides leading performance and scalability, and offers unmatched end-to-end capabilities with enterprise-grade security. IBM Cloud Pak for Integration is built on the open source Kubernetes platform for container orchestration. 

IBM Cloud Pak for Integration is the most complete hybrid integration platform in the industry including all of the key integration capabilities your team needs:

Application and Data Integration: Connects applications and data sources on-premises or in the cloud, in order to coordinate the exchange of business information so that data is available when and where needed.

API Lifecycle: Exposes and manages business services as reusable APIs for select developer communities both internal and external to your organization. Organizations adopt an API strategy to accelerate how effectively they can share their unique data and services assets to then fuel new applications and new business opportunities.

Enterprise Messaging: Ensures real-time information is available from anywhere at any time by providing reliable message delivery without message loss, duplication or complex recovery in the event of a system or network issue.

High Speed Data Transfer: Move huge amounts of data between on-premises and cloud or cloud-to-cloud rapidly and predictably with enhanced levels of security. Facilitates how quickly organizations can adopt cloud platforms when data is very large.

Secure Gateway: Extend connectivity and integration beyond the enterprise with DMZ-ready edge capabilities that protect APIs, the data they move, and the systems behind them.

Hybrid Integration Platforms: Digital Business Calls for Integration Modernization and Greater Agility

Integration is the lifeblood of today’s digital economy. Hybrid integration is a key business imperative for most enterprises, as digitalization has led to a proliferation of applications, services, APIs, and data stores that need to be connected to realize end-to-end functionality and, in many cases, an entirely new digital business proposition. A hybrid integration platform caters to a range of integration needs, including on-premises app integration, cloud application integration, messaging, event streaming, rapid API creation and lifecycle management, B2B/EDI integration, mobile application/back-end integration, and file transfer. User productivity tools and deployment flexibility are key characteristics of a hybrid integration platform that helps enterprises respond faster to evolving digital business requirements. 

Ovum view

Ovum ICT Enterprise Insights 2018 survey results indicate a strong inclination on the part of IT leaders to invest in integration infrastructure modernization, including the adoption of new integration platforms. IT leaders continue to struggle to meet new application and data integration requirements driven by digitalization and changing customer expectations. Line-of-business (LoB) leaders are no longer willing to wait for months for the delivery of integration capabilities that are mission-critical for specific business initiatives. Furthermore, integration competency centers (ICCs) or integration centers of excellence (COEs) are being pushed hard to look for alternatives that significantly reduce time to value without prolonged procurement cycles.

Digital business calls for flexible integration capabilities that connect diverse applications, services, APIs, and data stores; hybrid integration continues to be a complex IT issue. The current enterprise IT agenda gives top priority to connecting an ever-increasing number of endpoints and mitigating islands of IT infrastructure and information silos that make the vision of a “connected enterprise” difficult to achieve. Hybrid integration, which involves disparate applications, data formats, deployment models, and transactions, is a multifaceted problem for which there is no simple solution. For example, while an enterprise service bus (ESB) can be appropriate for data/protocol transformation and on-premises application integration, integration PaaS (iPaaS) is clearly a popular solution for SaaS-to-on-premises and SaaS-to-SaaS integration.

The center-of-gravity hypothesis applies to integration architecture. There is a greater inclination to deploy integration platforms closer to applications and data sources. APIs continue to gain prominence as flexible interfaces to digital business services and enablers for enterprises looking to innovate and participate in the wider digital economy. The unrelenting drive toward SaaS is leading to a rapid shift of integration processes to the cloud. A combination of these trends is driving the emergence of a new agile hybrid integration paradigm, with cloud-based integration platforms used for cloud, mobile application/back-end, B2B/EDI, and data integration. This integration paradigm or pattern is gaining popularity as enterprises do not have the luxury of executing dedicated, cost intensive and time-consuming integration projects to meet digitalization-led, hybrid integration requirements. Enterprise IT leaders realize that existing legacy integration infrastructure offers less flexibility and is difficult to maintain, so they are now more open to new integration approaches or platforms that improve developer productivity and allow them to “do more with less.” Moreover, traditional, heavyweight middleware is a barrier for enterprises looking to achieve agile hybrid integration to meet critical digital business requirements.

Agile hybrid integration calls for modular solutions that integrate well with each other and offer a uniform user experience (UX) and developer productivity tools to reduce time to integration and cost of ownership. For example, enterprises need to achieve integration within a few days of subscribing to a new set of SaaS applications, and frequently need to expose SaaS applications via representational state transfer (REST) APIs for consumption by mobile applications. They may also need to design and manage a new set of APIs for externalization of the enterprise or monetization of new applications and enterprise data assets. A hybrid integration platform can meet all these requirements, with modular integration solutions deployed on-premises, in the cloud, or on software containers according to the requirements of specific use cases. 

In the background of changing digital business requirements, IT leaders need to focus on revamping their enterprise integration strategy, which invariably will involve adoption of a hybrid integration platform that offers deployment and operational flexibility and greater agility at a lower cost of ownership to meet multifaceted hybrid integration requirements. Integration modernization initiatives aim to use new integration patterns, development and cultural practices, and flexible deployment options to drive business agility and reduce costs. It is important to identify a strategic partner (and not just a software vendor with systems integration capabilities) that can provide essential advice and best practices based on years of practical experience to ensure that integration modernization initiatives stay on track and deliver desired outcomes. 

Recommendations

  • Enterprise IT leaders should focus on developing a forward-looking strategy for hybrid integration using the best of existing on-premises middleware and specific cloud-based integration services (i.e., PaaS products for hybrid integration). For all practical purposes, and in most cases, it would make sense to opt for a hybrid integration platform. This does not imply a complete “rip and replace” strategy for deciding the future of existing on-premises middleware. With DevOps practices, microservices, and containerized applications gaining popularity, IT leaders should evaluate the option of deploying middleware (integration platforms) on software containers as a means of driving operational agility and deployment flexibility.
  • With several middleware vendors focusing on developing a substantial proposition for hybrid integration, it would be better to exploit a more cohesive set of integration capabilities provided by the same vendor. A “do it yourself” approach to integration or federation between middleware products offered by different vendors is rarely easy, and it is of course easier to train users on a hybrid integration platform offering a uniform UX.
  • Integration is still predominantly carried out by IT practitioners; however, IT leaders should consider “ease of use” for both integration practitioners and less skilled, non-technical users (e.g., power users) when selecting integration platforms for a range of hybrid integration use cases.

Integration modernization is a recurring theme driven by digitalization and the need for greater agility 

Hybrid integration complexity continues to drive integration modernization

Over the last couple of years, “integration modernization” has regularly featured in Ovum’s conversations with enterprise IT leaders. Digitalization has led to an almost unrelenting need to expose and consume APIs and exploit digital assets to cater to ever-changing customer requirements and drive growth via new digital business models. Digital business initiatives call for more open, agile, and API-led integration capabilities that reduce time to integration. Enterprises need to develop customer-centric, more flexible business processes that can easily be extended via APIs to a range of access channels. The business side is asking some tough questions, including how fast and at what cost IT can deliver the desired integration capabilities. Ovum ICT Enterprise Insights 2018 survey results show that over 60% of respondent enterprises are planning substantial investment in iPaaS solutions (including strategic investment in new solutions) over the next 18-month period. The survey results also indicate that about 58% of respondent enterprises are planning substantial investment in API platforms over the same period. These figures clearly indicate enterprise interest in investing in new integration platforms to tackle hybrid integration challenges.

Hybrid integration platform

Hybrid integration involves a mix of on-premises, cloud, B2B/EDI, mobile application/back-end integration, rapid API creation and lifecycle management, messaging, events, and file transfer use case scenarios of varying complexity (see Figure 1). Owing to specific business-IT requirements, enterprises may not have the flexibility to use “on-premises only” middleware or only cloud-based integration platforms. In certain cases, even the same integration capabilities (e.g., API management) need to be used both as on-premises middleware and as a cloud service (i.e., PaaS).  

An important aspect of hybrid integration requirements driven by digitalization is the need to support a range of user personas, including application developers, integration practitioners, enterprise/solution architects, and less skilled business users (i.e., non-technical users). Given the persistent time and budget constraints, enterprises often do not have the luxury of deploying only technical resources for hybrid integration initiatives and ICCs/integration COEs are not always in the driver’s seat. Simplified and uniform UX, self-service integration capabilities, and developer productivity tools are therefore critical in meeting hybrid integration requirements.

Ovum defines a hybrid integration platform as a cohesive set of integration software (middleware) products that enable users to develop, secure, and govern integration flows, connecting diverse applications, systems, services, and data stores, as well as enabling rapid API creation/composition and lifecycle management to meet the requirements of a range of hybrid integration use cases. A hybrid integration platform is “deployment model agnostic” in terms of delivering requisite integration capabilities, be it on-premises and cloud deployments or containerized middleware.

The key characteristics of a hybrid integration platform include:

  • support for a range of application, service, and data integration use cases, with an API-led, agile approach to integration, reducing development effort and costs 
  • uniformity in UX across different integration products or use cases and for a specific user persona 
  • uniformity in underlying infrastructure resources and enabling technologies 
  • flexible integration at a product or component API level 
  • self-service capabilities for enabling less skilled, non-technical users 
  • the flexibility to rapidly provision various combinations of cloud-based integration services based on specific requirements 
  • openness to federation with external, traditional on-premises middleware platforms 
  • support for embedding integration capabilities (via APIs) into a range of applications or solutions
  • developer productivity tools (e.g., a “drag-and-drop” approach to integration flow development and pre-built connectors and templates) and their extension to a broader set of integration capabilities 
  • flexible deployment options: on-premises deployment, public, private, and hybrid cloud deployment, and containerization 
  • centralization of administration and governance capabilities.

Specific features and capabilities of hybrid integration platforms vary from vendor to vendor, and certain hybrid integration platforms may not offer some of the above-specified capabilities. It is noteworthy that the evolution from traditional middleware and PaaS for specific integration use cases (e.g., iPaaS for SaaS integration) to a hybrid integration platform is a work in progress for a majority of middleware vendors. 

iPaaS is now a default option for SaaS integration, and the iPaaS model for delivery of cloud integration capabilities is no longer only about offering dozens or hundreds of connectors and pre-built integration templates. It is important for iPaaS vendors to target new user personas and a broader set of integration use cases. In this context, we see two key developments: self-service integration capabilities that enable less skilled, non-technical users, and artificial intelligence (AI)/machine learning (ML) capabilities that simplify the development of integration flows.

Terapixels offers IT services in San Diego that combine on-site and remote monitoring in a hybrid model. Such hybrid environments call for a cloud-native integration paradigm that readily supports DevOps practices and drives operational agility by reducing the burden associated with cluster management, scaling, and availability. Under such a cloud-native integration paradigm, integration runtimes run on software containers, are continuous integration and continuous delivery/deployment (CI/CD) ready, and are lightweight and responsive enough to start and stop within a few seconds. Many enterprises have made substantial progress in containerizing applications to benefit from a microservices architecture and portability across public, private, and hybrid cloud environments.

Containerized applications and middleware represent a good combination; in cases where an application and a runtime are packaged and deployed together, developers can benefit from container portability and the ease of use offered by the application and middleware combination.

It also makes sense for applications and middleware to share a common architecture, as DevOps teams can then avoid the overhead and complexity of running containerized applications on different hardware and following different processes from those used with traditional middleware. This is true even in cases that do not involve much re-architecting of the applications; DevOps teams can still develop and deploy faster using fewer resources.

Developers are increasingly building APIs that support new applications composed of loosely coupled microservices. Each microservice has a particular function that can be independently scaled or maintained without impacting other loosely coupled services. A microservices architecture can involve both internal and external APIs, with internal APIs invoked for inter-service communication and external API calls initiated by API consumers. IT leaders must realize that microservices management is different in scope from API management and focus on effectively meeting both requirements.
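To make the distinction between internal and external APIs concrete, the minimal Python sketch below (using Flask and the requests library, with hypothetical service names, endpoints, and fields) shows a microservice that exposes an external REST API while calling the internal API of a neighboring service for inter-service communication.

```python
# Minimal sketch; service names, URLs, and fields are hypothetical.
# An "orders" microservice exposes an external REST API while calling the
# internal API of a separate "inventory" microservice.
from flask import Flask, jsonify
import requests

app = Flask(__name__)

# Internal API of another microservice; on Kubernetes this would typically be
# a cluster-local service DNS name rather than a public endpoint.
INVENTORY_URL = "http://inventory.internal.svc.cluster.local/items"

@app.route("/api/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    # External consumers reach this route (usually via an API gateway);
    # the stock lookup below is an internal, service-to-service call.
    stock = requests.get(f"{INVENTORY_URL}/{order_id}", timeout=5).json()
    return jsonify({"order": order_id, "stock": stock})

if __name__ == "__main__":
    app.run(port=8080)
```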

Good API design and operations principles (i.e., API- and design-first principles) are gaining ground in enterprises that have previous experience of experimenting with enterprise API initiatives linked to new digital business services. Consequently, API platforms are gaining traction. Multicloud API management and deployment on software containers are areas of significant interest to large enterprises. An API platform enables users to develop, run, manage, and secure APIs and microservices, and offers a superset of capabilities in comparison to those provided by API lifecycle management solutions. As the graphical approach to integration flows provided by application integration capabilities can now be deployed as microservices, these technologies jointly provide a holistic approach to the rapid creation/composition of APIs and the subsequent management of their lifecycle and operations. A key benefit of an API platform is the ability to create, test, and implement an API rapidly and reiterate the cycle to create a new version of it based on user feedback (i.e., the application of DevOps-style techniques to API lifecycle and operations).  

Internet of Things (IoT) integration use cases call for message-oriented middleware (MoM) that offers standards-based message queue (MQ) capabilities to ease integration with enterprise applications and data stores. MQ middleware is particularly suitable for heterogeneous environments, as any type of data can be transported as messages; it is frequently used in mainframe, cloud, mobile, and IoT integration scenarios. A hybrid integration platform should support the integration requirements of such use cases.
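As an illustration of the queuing pattern rather than any specific vendor's MQ product, the sketch below uses the open source pika client for an AMQP-based message broker; the broker address, queue name, and payload are assumptions.

```python
# Minimal sketch using the pika AMQP client; assumes a broker on localhost
# and a hypothetical "sensor-readings" queue. Any type of data can be
# transported as a message, which is why MQ suits heterogeneous IoT environments.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="sensor-readings", durable=True)

# An IoT gateway publishes a reading; a back-end integration flow consumes it later.
reading = {"device": "thermostat-7", "temp_c": 21.5}
channel.basic_publish(
    exchange="",
    routing_key="sensor-readings",
    body=json.dumps(reading),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```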

A lot of data is generated in the form of streams of events, with publishers creating events and subscribers consuming these events in different ways or via different means. Event-driven applications can deliver better customer experiences. For example, this could be in the form of adding context to ML models to obtain real-time recommendations that evolve continually as per the requirements of a specific use case. Embedding real-time intelligence into applications and real-time reaction or responsiveness to events are key capabilities in this regard.

For distributed applications using microservices, developers can opt for asynchronous event-driven integration, in addition to the use of synchronous integration and APIs. Apache Kafka, an open source stream-processing platform, is a good option for such use cases that require high throughput and scalability. Kubernetes can be used as a scalable platform for hosting Apache Kafka applications. As Apache Kafka reduces the need for point-to-point integration for data sharing, it can reduce latency to just a few milliseconds, thereby enabling faster delivery of data to the users. A hybrid integration platform should cater to the integration requirements of event-driven applications.
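The following sketch, which assumes a local Apache Kafka broker and a hypothetical "orders" topic, uses the kafka-python client to illustrate the publish/subscribe pattern behind this kind of asynchronous, event-driven integration.

```python
# Minimal sketch with the kafka-python client; broker address and topic name
# are assumptions. A publisher emits an order event once, and any number of
# subscribers can consume the same stream independently, avoiding
# point-to-point integration for data sharing.
import json
from kafka import KafkaProducer, KafkaConsumer

# Publisher side: emit an event instead of calling each consumer directly.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": "42", "status": "created"})
producer.flush()

# Subscriber side: consume the stream asynchronously.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for event in consumer:
    print(event.value)
    break  # stop after one event in this sketch
```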

A hybrid integration platform with simplified UX, scalable architecture, and flexible deployment options

Key attributes at architectural and operational levels simplify hybrid integration and drive developer productivity and cost savings

The IBM Cloud Pak for Integration (shown in Figure 2) addresses a range of hybrid integration requirements, including on-premises and SaaS application and data integration, rapid API creation/composition and lifecycle management, API security and monetization, messaging, event streaming, and high-speed transfer. IBM offers a holistic integration platform exploiting a container-based, portable architecture for a range of hybrid integration use cases, as well as providing essential advice and support to help enterprises succeed with their integration modernization initiatives.

IBM Cloud Pak for Integration was built for deployment on containers and provides a modern architecture that includes the management of containerized applications and Kubernetes, an open source container orchestration system. An interesting trend is the adoption of DevOps culture, microservices, and PaaS for responsiveness to changes driven by digital business requirements. With IBM Cloud Pak for Integration’s container-based architecture, users have the flexibility to deploy on any environment that has Kubernetes infrastructure, as well as exploit a self-service approach to integration. IBM Cloud Pak for Integration enables simplified creation and reuse of integrations, their deployment close to the source, and self-service integration to deliver faster time to integration at lower cost. It offers the benefit of a unified UX for developing and sharing integrations, which promotes integration asset reuse to improve developer productivity.

With IBM Cloud Pak for Integration, users can deploy integration capabilities easily onto a Kubernetes environment. This helps achieve faster time to value for integration modernization initiatives by integrating the monitoring, logging, and IT security systems of a private cloud environment to ensure uniformity across a cloud integration platform deployment. Containerization provides the flexibility of a private cloud architecture, thereby helping users meet the performance and scalability requirements specified in the service-level agreements (SLAs) of their business applications. Another benefit is common administration and governance enabled via a single point of access, which avoids the need to log in to multiple tools and better supports access management across different teams. In terms of deployment flexibility, IBM supports deployment on any cloud or on premises.
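By way of illustration only, the sketch below uses the official Kubernetes Python client to deploy a containerized integration runtime onto a cluster; the namespace, labels, and container image are placeholders, not actual IBM Cloud Pak for Integration artifacts.

```python
# Minimal sketch with the official Kubernetes Python client; namespace, labels,
# and image are placeholders. It shows how a containerized integration runtime
# can be deployed programmatically onto any Kubernetes environment.
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="integration-runtime"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # scale by changing the replica count
        selector=client.V1LabelSelector(match_labels={"app": "integration-runtime"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "integration-runtime"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="runtime",
                        image="example.registry.local/integration-runtime:1.0",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=7800)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="integration", body=deployment)
```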

IBM espouses an approach that differentiates API management from microservices management but also combines the two to offer more than the sum of the parts. Istio running on Kubernetes allows users to manage the interactions between microservices running in containers. Integration between the Security Gateway and the Istio service mesh (spanning security, application resiliency, and dynamic routing between microservices) can offer a good solution for end-to-end routing. IBM has optimized the gateway for cloud-native workloads. An interesting trend is the growth in the number of API providers offering additional endpoints to adapt to emerging architectural styles, such as GraphQL. GraphQL APIs can use a single query to fetch required data from multiple resources. IBM is extending its integration platform’s API capabilities to provide support for GraphQL management, and this approach decouples GraphQL management from GraphQL server implementation.
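To show why the GraphQL point matters, the following Python sketch issues a single GraphQL query against a hypothetical endpoint and schema; the one query retrieves related customer, order, and shipment data that would otherwise require several separate REST calls.

```python
# Minimal sketch; the endpoint and schema are hypothetical. One GraphQL query
# fetches related data from multiple resources in a single round trip.
import requests

GRAPHQL_ENDPOINT = "https://api.example.com/graphql"  # placeholder endpoint

query = """
query CustomerOverview($id: ID!) {
  customer(id: $id) {
    name
    orders(last: 5) { id total }
    shipments { id status }
  }
}
"""

response = requests.post(
    GRAPHQL_ENDPOINT,
    json={"query": query, "variables": {"id": "42"}},
    timeout=10,
)
print(response.json())
```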

IBM’s Agile Integration methodology

Agile integration focuses on delivering business agility as part of an integration modernization initiative. It espouses the transition of integration ownership from centralized integration teams to application teams, as supported by the operational consistency achieved via containerization. On the operational agility side, cloud-native infrastructure offers dynamic scalability and resilience. 

A good case in point is a fine-grained integration deployment pattern involving specialized, right-sized containers that deliver improved agility, scalability, and resilience. This is quite different from traditional, centralized ESB patterns, which is why IBM redesigned each of these capabilities, including the application integration features, to be deployed in a microservices-aligned manner. With a fine-grained deployment pattern, enterprises can improve build independence and production speed to drive deployment agility. In a nutshell, as part of integration modernization initiatives, “agile integration” caters to people, processes, and technology aspects to provide necessary advice and guidance to help enterprises achieve faster time to value across diverse deployment environments.

 

Infrastructure as a Service: A Cost-Effective Path to Agile and Competitive IT

The Time to Move to Cloud is Now

Traditional on-premises infrastructure, with its high upfront costs and weeks-long scaling lead times, is no longer sufficient to support today’s needs and required responsiveness. IT is increasingly moving into a direct revenue-supporting position within the enterprise. Applications may need to scale from hundreds to tens of thousands of users, or expand from one geographic location to multiple locations, in a matter of days.

Not being able to do this has a direct revenue impact. Responding to this high velocity of change requires an IT infrastructure layer with comparable flexibility and scalability. Likewise, built-in resilience at the IT infrastructure layer is a necessity for moving forward confidently with the digital transformation of the business.

Cloud infrastructure, or infrastructure as a service (IaaS), is designed to deliver scalable, automated capabilities under a utility financial model. IaaS services are consumed on a pay-as-you-go basis, with no upfront costs and on-demand scalability. In addition, IaaS services from the major providers are delivered from a globally distributed set of data centers and are designed for immediate availability, resilience, and lower upfront investment.

From a broader perspective, IaaS and cloud technologies bring to enterprises three key capabilities.

» Low upfront investments. Get started on initiatives at no upfront cost and, in turn, achieve faster launch and faster time to market. This is important as organizations ramp up their digital assets and experiment with the best ways to leverage technologies, shifting away from a costly capex model to a more beneficial opex one.

» Rapid scaling and resilience. From a capacity and geographic footprint perspective, cloud technologies allow successful initiatives to be quickly scaled up and replicated across physical locations as needed, allowing solutions to address availability, expansion, and scaling needs at any time without customer disruption.

» Access to a broad ecosystem of higher-layer services and partners. This includes access to faster and more cost-effective development tools and databases, advanced analytics capabilities, and technologies like AI/ML. These can jumpstart projects, lead to faster application development and deployment, and avoid upfront investment in building these platform capabilities in-house.

Most Common Entry Points and Use Cases for Cloud Infrastructure

With the increase in familiarity and acceptance, cloud IaaS is gaining adoption across nearly all types of enterprise IT use cases, and organizations are moving applications to the cloud through the following paths and entry points:

» IT data center consolidation and expansion: Legacy technology infrastructure can be rigid and limited in use and management. It often requires manual input and resources to maintain applications and services, and it does not scale quickly or easily to suit business needs without major expenditure and potential downtime. Cloud technology offers increased agility, automation, and intelligent services across all aspects of the data center. It enables quick scalability, reduces resource demands and costs, and can improve ROI by expanding services on a global scale.

» Business continuity and disaster recovery: Improving IT resiliency and maintaining business continuity are more important than ever for any enterprise. The flexibility and agility of cloud make it an optimal solution for mitigating risks and maintaining business continuity. In fact, the cloud often improves the uptime, performance, and availability of applications, data, and workloads compared with traditional on-premises environments. In the cloud, organizations can recover applications, data, and workloads completely and quickly.

» Application modernization and migration: Another approach is the application modernization and migration path to cloud, where an application is first re-architected to take advantage of the native capabilities available on cloud such as containers, scale-out capability, and other readily available services. The specific path selected is typically determined by the workload itself and the level of technical capability available to move that workload to cloud.

» Virtual machine migration: One commonly seen path to cloud adoption for enterprise applications is the “lift and shift” migration of virtual machines (VMs) into cloud environments. This involves moving the applications on a VM into an identical or near-identical VM in an IaaS environment. While this may still require minor configuration changes in the application or deployment scripts, it reduces the rework required on the application before moving it to the cloud.

» Regulated workloads: With the maturity of cloud services and the expansion of cloud capabilities, cloud infrastructure is also seeing adoption for regulated and highly security-sensitive workloads. This has been enabled by specific capabilities that allow such workloads to run in the cloud, such as dedicated bare metal services and built-in security capabilities.

Security Concerns, Skill Sets, and Migration are Top Challenges with Cloud

While cloud IaaS is gaining traction across enterprises, cloud adoption is not without its own challenges. One key challenge reported with cloud adoption continues to be security. Security concerns can be broadly broken into the following three types:

» The ability of the cloud provider to secure its platform sufficiently. The last decade has helped demonstrate to the enterprise IT world that cloud providers’ investments in security often exceed what is possible by enterprises, and that public cloud IaaS offers comparable and often better security than possible on-premises safeguards. 

» The ability of customers to secure their applications running on the cloud platform. This is critical given the shared security model of public cloud. Cloud providers are responsible for the security of the infrastructure stack, while customers are responsible for the security of the applications that run on the cloud platform. This often involves the use of proprietary tools from the cloud provider and a good understanding of the platform’s security framework. Protecting applications and data by using the cloud provider’s security framework correctly continues to be a challenge for enterprises. With increased familiarity and skillset availability, this is a challenge that will be resolved in time.

» Cloud adoption can also surface resource limitations. These include the availability of cloud skill sets, a lack of clarity around cloud adoption planning, and the execution of application transformation and migration. The typical workaround, seen particularly among large enterprises with IT applications designed to support thousands of users, is to engage managed services or professional services to assist in this adoption. The availability of a strong service partner with an extensive ecosystem of experts and partners has thus emerged as an important enabler for organizations looking to migrate and transform their businesses in the cloud.

Advantages of Infrastructure as a Service (IaaS) and Cloud Adoption

IaaS empowers IT organizations with a foundation for agility in the infrastructure layer: the ability to make IT changes easily, quickly, and cost-effectively. Early adopters are seeing benefits in business metrics such as operational efficiency and customer retention. Key business benefits that customers report include the following:

» Business agility – enabled by the rapid scalability of IaaS. Organizations can easily scale their IT footprint up or down depending on business needs. IaaS enables faster time to launch for initiatives, swift time to market for new offerings, and rapid iterations to stay current with market needs.

» Improved customer experience – delivered by high availability architectures built on a resilient public cloud IaaS platform. Organizations that build their services on the cloud can maintain availability through critical phases such as outages, periods of growth or high utilization of services, thus increasing customer satisfaction and loyalty with the solution. This leads to smooth customer base expansion during growth periods.

» Total cost of ownership (TCO) benefits – possible because the “pay as you use” cost model for infrastructure minimizes the need for large upfront investments and over-provisioning. IaaS compute, storage, and networking resources can be provisioned and used within minutes when needed and terminated when no longer needed, allowing on-demand access to resources.

» Geographic reach – enabled by the globally distributed set of data centers, all of which deliver a consistent infrastructure environment close to the respective geographies. A solution that is successful initially in one region can be easily replicated on the IaaS service in other geographies with minimal additional qualification or contract renegotiation, allowing shorter lead times for regional expansion. This allows a cloud-based solution to rapidly expand beyond physical boundaries and reach customers and markets across the globe as needed.

» Easy access to new technologies and services – through the cloud ecosystem of higher-layer services and partners. This broader cloud ecosystem has emerged as a major source of benefits for IaaS customers. Nearly a third of the respondents to IDC’s IaaSView 2019 report indicate this ecosystem is a top driver of their decision to adopt cloud.

Recent Trends in Enterprise IaaS Usage: Multicloud and Hybrid Cloud Patterns

Two popularly seen cloud adoption patterns in enterprise IT today are “multicloud” and “hybrid cloud” environments: 

Hybrid Cloud. IDC defines hybrid cloud as the usage of IT services (including IaaS, PaaS, SaaS apps, and SaaS-SIS cloud services) across one or more deployment models using a unified framework. The cloud services used leverage more than one cloud deployment model across public cloud and private cloud deployments. Customers sometimes also include cloud and noncloud combinations when they describe an environment as hybrid cloud (sometimes referred to also as hybrid IT). 

This model is rapidly gaining adoption among enterprise IT organizations (see Figure 2). Factors driving the adoption of hybrid cloud include the desire to retain a higher level of control on certain data sets or workloads, as well as proximity and latency requirements requiring certain workloads to stay on premises. 

Multicloud. IDC defines multicloud as an organizational strategy or architectural approach to the design of a complex digital service (or IT environment) that involves consumption of cloud services from more than one cloud service provider. These may be directly competing cloud services such as hosted private cloud versus public cloud compute services, public object storage from more than one public cloud service provider, or IaaS and SaaS from one or more cloud service providers. Multicloud encompasses a larger universe than hybrid cloud.

Factors driving multicloud usage include organic reasons such as independent projects scaling in different parts of the organizations on different cloud platforms, as well as intentional reasons such as a desire to leverage specific cloud platforms for specific unique capabilities. A major factor gating the adoption of multicloud more broadly is the cost/complexity associated with enabling consistent management/governance of many different cloud options.

The Benefits and Differentiators of IBM Cloud IaaS

IBM Cloud Infrastructure as a Service (IaaS) forms the foundation layer of the IBM Cloud portfolio, and delivers the compute, storage, and network functionality, as well as the required virtualization, for customers to build their IT infrastructure environments on these services. The customer continues to be responsible for management of the higher layers of the stack operated on the IaaS platform. Figure 3 shows the functionality delivered in the different layers of the IBM Cloud portfolio and the split of management responsibilities between IBM and the customer in each of these layers. 

IBM Cloud IaaS and the broader IBM Cloud ecosystem bring to customers all the business benefits of cloud IaaS adoption discussed earlier. These are delivered through a combination of IBM’s global data center footprint and its resilient, scalable, and broad IaaS portfolio. This is complemented by a rich ecosystem of cloud services and partners, including access to the latest technology capabilities such as artificial intelligence and quantum computing. In addition, IBM is in a unique position as a trusted, long-time enterprise technology partner, and brings the following differentiated strengths and capabilities to businesses:

» Security and trust – IBM Cloud is built to deliver security across all its services, integrated through the service and delivered as a service. This includes audit compliance and the ability to support standards such as PCI 3.0, HIPAA, and GDPR, which are often challenging and expensive for enterprises to meet in-house with on-premises infrastructure. It also includes specific security capabilities like IBM Cloud Pak for Security and IBM Data Security Services, as well as IBM Cloud Hyper Protect Services with built-in data-in-motion and data-at-rest protection and Keep Your Own Key (KYOK) capability for the most security-sensitive use cases. These are further enhanced by IBM’s long track record as a security-conscious technology company and a trusted partner to enterprises, alleviating concerns about misuse of customer data. These strengths have been instrumental in recent large customer wins in the U.S. from some of the largest and most well-known enterprise brands.

» Offerings for specific enterprise needs, such as SAP, VMware, and bare metal – IDC research shows that most enterprise adoption of cloud IaaS for production use starts with existing applications. To offer consistency, IBM Cloud provides specialized, qualified IaaS offerings for common enterprise IT services in Orange County. These include bare metal offerings specifically qualified to run SAP and VMware solutions, as well as a mature and broad range of configurable bare metal offerings. These allow customers to configure their cloud IaaS environment to closely match their existing environment for business-critical applications, minimize migration risk, and enjoy the agility and broader benefits of moving to cloud IaaS.

» Access to services and expertise across the globe – The rapid adoption of cloud has outpaced skillset evolution. IBM’s services divisions, Global Technology Services and Global Business Services, act as an effective delivery arm for IBM’s technology offerings and can assist customers with cloud adoption and capability building. These bring to customers professional expertise across areas such as containerization, microservices, and DevOps.

» Hybrid cloud and multicloud enablers – The 2019 acquisition of Red Hat brought to IBM Cloud a strong suite of cloud-native software, including the Red Hat OpenShift platform, a compelling cloud-native platform that can be delivered both on customer premises and on multiple public clouds. These complement IBM’s Cloud Paks product portfolio, which is also designed to deliver a consistent experience for specific enterprise use cases on customer premises and public cloud platforms. IBM Cloud Paks and the Red Hat OpenShift platform are designed with the intent of offering a unified customer experience across public cloud and customer-premises infrastructure. These products address one of the top challenges reported by enterprises using cloud today: the lack of consistency across clouds and across premises, which limits the ability to effectively build a multicloud or hybrid cloud environment. The Red Hat OpenShift platform also offers compatibility with open source frameworks like Kubernetes and Knative, allowing portability and reducing the risk of lock-in for customers. These recent additions to the portfolio are complemented by IBM’s long track record of building and operating complex private cloud platforms for enterprise customers.

Conclusion

The cloud value propositions of flexibility and scalability were ideally suited to the initial use cases deployed on cloud IaaS, such as startup and hobbyist/shadow-IT workloads. While these value propositions continue to be important, enterprise use cases require more from their IT stack, including end-to-end security, the flexibility to operate across multiple premises and platforms, and partners that can support the enterprise’s vertical-specific needs. IBM Cloud offers an expansive global cloud infrastructure service, inclusive of open hybrid and multicloud enablers and the broad IBM ecosystem of technology and service partners, designed to address these needs. With these capabilities and its strong technology portfolio, IBM is well poised to be a trusted cloud partner to enterprises as they transition their IT to the cloud.