
GLOBAL STATE OF PRIVILEGED ACCESS MANAGEMENT (PAM) RISK & COMPLIANCE 2018

WHY THIS REPORT IS A MUST READ:

  • URGENT CHALLENGES: Protecting access to privileged credentials (the preferred target of cybercriminals and malicious insiders) is rapidly evolving into a must-have compliance requirement.
  • DISTURBING SURVEY RESULTS: While more than 60% of organizations must satisfy regulatory compliance requirements around privileged credential access, a staggering 70% would fail an access controls audit!
  • LIKELY CONSEQUENCES: Millions of dollars in regulatory fines, business operations at higher risk of severe compromise or even shutdown.
  • RECOMMENDED ACTIONS: Develop a Privileged Access Management lifecycle security program to secure access and meet compliance mandates. Privileged Access Management is not a simple checkbox but an important continuous process.

COMPLY OR DIE: THE PRIVILEGED ACCESS MANAGEMENT (PAM) SECURITY IMPERATIVE

Most organizations only begin to implement Privileged Access Management after a failed audit or major cybersecurity attack that can cost millions of dollars and cause reputational damage.

This report describes the results from a groundbreaking global study by Thycotic that reveals major risk and compliance gaps in how organizations manage and secure their privileged accounts and access to sensitive systems, infrastructure and data. The 2018 Global State of Privileged Access Management (PAM) Risk & Compliance report highlights where many organizations are failing to fully put security controls in place to protect their most critical information assets.

“Privileged Access” encompasses access to computers, networks and network devices, software applications, digital documents, and other digital assets that upper management, IT administrators, and service account users work with daily. Access to privileged accounts allows more rights and permissions than those given to standard business users.

Privileged account access is the prize most frequently targeted by cybercriminals and malicious insiders because this access (often undetected) leads to highly valuable and confidential information, such as company IP, customer identities, financial information, and personal data.

While most organizations acknowledge the important role privileged credential access plays in their cybersecurity posture, most are failing to act on protecting and securing their privileged accounts. This report helps explain where and why.

PART 1

ASSESSMENT SURVEY RESULTS SHOW MOST ORGANIZATIONS FALL SHORT ON PAM POLICIES, PROCESSES AND CONTROLS

MANY ORGANIZATIONS WORLDWIDE ARE AT HIGH RISK OF PRIVILEGED ACCOUNT COMPROMISE AND FAILING TO MEET COMPLIANCE REQUIREMENTS

Among organizations surveyed, more than half of the respondents indicated that privileged account management is a required or regulated compliance issue within their organization or industry. While PAM security adoption is being driven by regulatory requirements, it also appears that many organizations are adopting privileged account security measures to reduce the risk of the growing cyber threats and to protect against both external and internal attacks.

Thus, establishing privileged account access controls is a growing priority driven by auditors, controllers, and greater awareness of threats targeting privileged accounts. In fact, cybercriminals are targeting employees at a higher rate than ever before.

Despite acknowledging the importance of PAM security, organizations do not follow through where it counts…

INADEQUATE POLICIES

Recent updates to compliance and regulatory standards require organizations to publish and distribute access control policies that cover privileged accounts and passwords in detail, so that they can limit access to information and systems. Yet the clear majority of respondents to the Risk Assessment still fail to ensure access control policies include privileged accounts and passwords.

This puts privileged accounts at risk of compromise and leaves organizations failing to meet compliance standards such as the Access Control Policy section of ISO/IEC 27002:2013 and PCI DSS Requirement 8.4. These require asset owners to determine appropriate access control rules, access rights, and restrictions for specific user roles. The strictness of the access rules must reflect the associated information security risks.

POORLY EXECUTED PROCESSES

For employees to handle privileged accounts and passwords securely, organizations must develop consistent processes when providing users with access. This ensures they not only gain access but do so with additional security controls that harden the protection and security of privileged accounts.

Failing to implement access control processes means a much higher risk of rogue access, inconsistent results, higher costs, failed audits, and ultimately cyber breaches that could easily go undetected. It is important to have consistent, repeatable processes for privileged accounts. To ensure security and conserve resources, automate these processes: automation reduces mundane tasks such as rotating passwords and enabling and revoking access, and makes it easier to create risk and compliance reports. Ultimately, implementing a solid Privileged Access Management solution can also save a company money to invest in other areas of the business.

You cannot secure and manage what you do not know you have. In today’s complex IT environments, organizations may contain more than double the number of privileged accounts they assume are in place. Undiscovered accounts arise easily in virtual environments through cloning and copying virtual machines, or even when restoring snapshots. Employees can also easily install rogue software not approved by IT, and these applications often come with default credentials or service accounts that expose them to a much higher risk of compromise.

Some of the survey’s findings in process failures include:

  • 62% of organizations fail at provisioning processes for privileged access
  • 51% fail to use a secure logon process for privileged accounts
  • 73% fail to audit and remove test accounts, or modify default accounts, before moving applications to production
  • 70% of organizations fail to fully discover privileged accounts—and 40% do nothing at all to discover these accounts
  • 55% fail to revoke access after an employee is terminated

INSUFFICIENT CONTROLS

All critical systems in any organization should have full audit logs to track log-ins and activities. Systems should be configured to issue a log entry and alert when an account is added to, or removed from a domain administrators group, or when a new local administrator account is added on a system. The audit logs need to be regularly checked for integrity or monitored with change detection, and access to audit logs restricted. Without auditing and tracking, you have no accountability for who is using these accounts and no way to properly analyze an incident and mitigate its damage.
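As a rough illustration, a minimal log-review routine can flag the directory events described above. The event dictionaries and alert strings in this Python sketch are invented for illustration; the event IDs 4728/4732 (member added to a security group) and 4720 (user account created) are the standard Windows Security Log identifiers:

```python
# Hypothetical sketch: flag high-risk directory changes in an audit log feed.
# The event dicts below are illustrative, not a real event-log API.

PRIVILEGED_GROUPS = {"Domain Admins", "Administrators"}
WATCHED_EVENT_IDS = {4728, 4732, 4720}  # group-membership adds, account creation

def review_events(events):
    """Return alert strings for events that change privileged membership."""
    alerts = []
    for ev in events:
        if ev["event_id"] not in WATCHED_EVENT_IDS:
            continue
        group = ev.get("group", "")
        if ev["event_id"] in (4728, 4732) and group not in PRIVILEGED_GROUPS:
            continue  # membership change in a non-privileged group: ignore
        alerts.append(f"event {ev['event_id']}: {ev['subject']} -> {group or 'new account'}")
    return alerts

sample = [
    {"event_id": 4732, "subject": "corp\\jdoe", "group": "Administrators"},
    {"event_id": 4732, "subject": "corp\\jdoe", "group": "Sales"},
    {"event_id": 4720, "subject": "corp\\svc-new"},
]
print(review_events(sample))
```

In practice these alerts would feed a SIEM or ticketing queue rather than a print statement, and the log source itself must be integrity-protected as the paragraph above notes.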

Third-party contractors are mostly treated like internal employees when it comes to access controls. However, organizations should ensure that security access controls for vendors or contractors are much more rigorous, since they do not have full control over the security behaviors of third parties. Many recent high-profile data breaches resulted from third-party contractors, supply chains, partners, and consultants. Often suppliers do not have rigorous security practices in place, putting the entire work environment at risk. Your security is only as good as the security controls your third-party contractors have in place when you are not managing and securing privileged access.

See where you face the greatest PAM risks

To see how well your organization meets current standards and best practices associated with Privileged Access Management, be sure to take the free PAM Risk Assessment Tool at https://thycotic.com/solutions/free-pam-risk-assessment-tool/. You’ll receive a risk score along with a PDF report that reviews your survey answers and suggests ways to reduce your risks.

Control Deficiencies identified in the survey include:

  • 73% of organizations fail to require multi-factor authentication with privileged accounts
  • 63% do not track and alert on failed logon attempts for privileged accounts
  • 78% fail to use a dedicated system for all administrative tasks
  • 70% fail to limit third-party access to privileged accounts

PART 2

RECOMMENDATIONS: ESTABLISH A LIFE CYCLE APPROACH TO PRIVILEGED ACCESS MANAGEMENT

When planning, implementing, or expanding a more secure approach to Privileged Access Management, leading analysts and practitioners emphasize building out a program that encompasses the complete Privileged Access Management (PAM) lifecycle. That means:

  1. Understanding the need for Privileged Access Management among executive and IT staff
  2. Identifying privileged accounts across all systems
  3. Managing and protecting access to privileged accounts and restricting their use
  4. Monitoring privileged account use on a continuous basis
  5. Detecting anomalies in privileged account use that indicate potential fraudulent activity
  6. Responding to suspected privileged account compromise immediately and with targeted actions
  7. Reviewing and reporting to continuously improve your Privileged Access Management access controls

 

Like any IT security measure designed to help protect critical information assets, managing and protecting privileged account access requires both a plan and an ongoing program. You must identify which privileged accounts should be a priority in your organization, as well as ensuring that those who are using these privileged accounts are clear on their acceptable use and responsibilities. This report briefly describes a PAM lifecycle model which provides a high-level roadmap that global organizations can use to establish their own Privileged Access Management program.

DEFINE:

Define and classify privileged accounts. Every organization is different, so you need to map out what important business functions rely on data, systems, and access. One approach is to reuse a disaster recovery plan that typically classifies important systems and specifies which need to be recovered first. Make sure to align your privileged accounts to your business risk and operations.

Develop IT security policies that explicitly cover privileged accounts. Many organizations still lack acceptable use and responsibilities for privileged accounts. Treat privileged accounts separately by clearly defining a privileged account and detailing acceptable use policies. Gain a working understanding of who has privileged account access, and when those accounts are used.

DISCOVER:

Discover your privileged accounts. Use automated PAM software to identify your privileged accounts, and implement continuous discovery to curb privileged account sprawl, identify potential insider abuse, and reveal external threats. This helps ensure the full, ongoing visibility of your privileged account landscape that is crucial to combatting cybersecurity threats.
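As a minimal illustration of a single discovery check (a real PAM discovery tool would also sweep Active Directory, databases, cloud platforms, and applications), the following Python sketch finds root-equivalent and service accounts in /etc/passwd-style data; the sample records are invented:

```python
# Illustrative sketch of one small discovery check: find root-equivalent
# (UID 0) accounts and nologin service accounts in /etc/passwd-style data.

def find_privileged_entries(passwd_text):
    found = {"uid0": [], "service": []}
    for line in passwd_text.strip().splitlines():
        name, _pw, uid, _gid, _gecos, _home, shell = line.split(":")
        if int(uid) == 0:
            found["uid0"].append(name)      # root-equivalent account
        elif shell in ("/usr/sbin/nologin", "/bin/false"):
            found["service"].append(name)   # likely service account
    return found

sample = """\
root:x:0:0:root:/root:/bin/bash
toor:x:0:0:backup root:/root:/bin/sh
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
alice:x:1000:1000:Alice:/home/alice:/bin/bash
"""
print(find_privileged_entries(sample))
```

Even this toy example surfaces a second UID-0 account (`toor`) that a manual inventory might miss, which is exactly the sprawl problem continuous discovery addresses.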

MANAGE & PROTECT:

Protect your privileged account passwords. Proactively manage, monitor, and control privileged account access with password protection software. Your solution should automatically discover and store privileged accounts; schedule password rotation; audit, analyze, and manage individual privileged session activity; and monitor password accounts to quickly detect and respond to malicious activity.
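The rotation side of this can be sketched in a few lines of Python. The account records and 30-day policy below are illustrative assumptions, not any vendor's actual schema; a commercial PAM vault would store, rotate, and audit these credentials itself:

```python
import secrets
import string
from datetime import datetime, timedelta

# Hedged sketch: rotation policy and account records invented for illustration.
ROTATION_PERIOD = timedelta(days=30)
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length=24):
    """Generate a strong random password using a CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def due_for_rotation(accounts, now):
    """Return names of accounts whose password is older than the policy allows."""
    return [a["name"] for a in accounts
            if now - a["last_rotated"] >= ROTATION_PERIOD]

accounts = [
    {"name": "svc-backup", "last_rotated": datetime(2018, 1, 1)},
    {"name": "svc-web",    "last_rotated": datetime(2018, 3, 1)},
]
print(due_for_rotation(accounts, now=datetime(2018, 3, 10)))  # ['svc-backup']
```

The point of automating this check, as the paragraph above notes, is that rotation happens on schedule rather than when someone remembers.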

Limit IT admin access to systems. Develop a least-privilege strategy so that privileges are only granted when required and approved. Enforce least privilege on endpoints by keeping end-users configured to a standard user profile and automatically elevating their privileges to run only approved and trusted applications. For IT administrator privileged account users, you should control access and implement super user privilege management for Windows and UNIX systems to prevent attackers from running malicious applications, remote access tools, and commands. Least-privilege and application control solutions enable seamless elevation of approved, trusted, and whitelisted applications while minimizing the risk of running unauthorized applications.
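At its core, the application allowlisting described above reduces to a lookup: elevate only executables whose hash appears on an approved list. This deliberately minimal Python sketch illustrates the decision; the approved-hash set and byte strings are invented placeholders, not a real endpoint-agent API:

```python
import hashlib

# Hypothetical allowlist: in practice this would hold SHA-256 hashes of
# signed, approved installer and application binaries.
APPROVED_HASHES = {
    hashlib.sha256(b"installer-v1").hexdigest(),
}

def may_elevate(executable_bytes):
    """Allow privilege elevation only for allowlisted executables."""
    return hashlib.sha256(executable_bytes).hexdigest() in APPROVED_HASHES

print(may_elevate(b"installer-v1"))   # True
print(may_elevate(b"unknown-tool"))   # False
```

Real products add signer- and path-based rules on top of hashes, but the deny-by-default posture shown here is the essence of least privilege on endpoints.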

MONITOR:

Monitor and record sessions for privileged account activity. Your PAM solution should be able to monitor and record privileged account activity. This will help enforce proper behavior and avoid mistakes by employees and other IT users because they know their activities are being monitored. If a breach does occur, monitoring privileged account use also helps digital forensics identify the root cause and identify critical controls that can be improved to reduce your risk of future cybersecurity threats.

DETECT ABNORMAL USAGE:

Track and alert on user behavior. With up to 80% of breaches involving a compromised user or privileged account, gaining insights into privileged account access and user behavior is a top priority. Ensuring visibility into the access and activity of your privileged accounts in real time will help spot suspected account compromise and potential user abuse. Behavioral analytics focuses on key data points to establish individual user baselines, including user activity, password access, similar user behavior, and time of access to identify and alert on unusual or abnormal activity.
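One such baseline signal, time of access, can be sketched with simple statistics. This toy Python example flags logins whose hour deviates sharply from a user's history; real behavioral-analytics products combine many signals (peer-group behavior, password access patterns), and the threshold here is an arbitrary assumption:

```python
from statistics import mean, stdev

# Toy sketch of one behavioral-analytics signal: a z-score on login hour
# against the user's historical baseline. Threshold of 3 is an assumption.

def is_anomalous_hour(history_hours, login_hour, threshold=3.0):
    mu, sigma = mean(history_hours), stdev(history_hours)
    if sigma == 0:
        return login_hour != mu   # no variation in history: any change is odd
    return abs(login_hour - mu) / sigma > threshold

baseline = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]   # usual office-hours logins
print(is_anomalous_hour(baseline, 9))    # False
print(is_anomalous_hour(baseline, 3))    # True (3 a.m. login)
```

A single anomalous signal is rarely conclusive on its own; products typically raise an alert only when several baselines deviate at once.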

RESPOND TO INCIDENTS:

Prepare an incident response plan in case a privileged account is compromised. When an account is breached, simply changing privileged account passwords or disabling the privileged account is not acceptable. If compromised by an outside attacker, hackers can install malware and even create their own privileged accounts. If a domain administrator account gets compromised, for example, you should assume that your entire Active Directory is vulnerable. That means restoring your entire Active Directory, so the attacker cannot easily return.

REVIEW AND AUDIT:

Audit and analyze privileged account activity. Continuously observing how privileged accounts are being used through audits and reports will help identify unusual behaviors that may indicate a breach or misuse. These automated reports also help track the cause of security incidents, as well as demonstrate compliance with policies and regulations. Auditing of privileged accounts will also give you cybersecurity metrics that provide executives with vital information to make more informed business decisions.

BOTTOM LINE

The key to improving cybersecurity around Privileged Access Management is understanding and implementing a PAM lifecycle approach. Only a comprehensive solution can ensure that your “keys to the kingdom” are properly protected from hackers and malicious insider threats, and that your access controls meet the regulatory requirements for compliance mandates in your industry.

PART 3

PAM RISK ASSESSMENT & COMPLIANCE SURVEY METHODOLOGY

Launched in late 2017, the Thycotic Privileged Account Management Risk Assessment survey has engaged nearly 500 global IT security professionals. The survey poses 20 questions to participants according to a matrix scoring system. For each survey question, points are assigned for the Risk Value (the probability the threat will occur), along with additional points for the Impact Value (the severity of an event if it should occur).
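In the spirit of that matrix scoring, a composite score can be computed as the sum of risk-times-impact products per question. The scales and answers in this Python sketch are invented for illustration, not Thycotic's actual scoring sheet:

```python
# Illustrative matrix scoring: each answer contributes
# risk_value * impact_value points to the composite score.

def composite_score(answers):
    """answers: list of (risk_value, impact_value) pairs, e.g. on 1-5 scales."""
    return sum(risk * impact for risk, impact in answers)

answers = [(3, 4), (5, 5), (1, 2)]   # three hypothetical questions
print(composite_score(answers))      # 12 + 25 + 2 = 39
```

Weighting probability and severity separately like this is the same idea used in conventional risk matrices: a low-likelihood, high-impact item can still outrank a frequent but trivial one.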

REGULATORY STANDARDS INCORPORATED INTO THE RISK ASSESSMENT TOOL

The Thycotic PAM Risk Assessment online tool encompasses questions based on a combination of several regulatory standards that include:

  • ISO – ISO/IEC 27002 is the information security standard published by the International Organization for Standardization (ISO) and by the International Electrotechnical Commission (IEC) https://www.iso.org/standard/54533.html
  • EU GDPR – The General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679) is a regulation by which the European Parliament, the Council of the European Union and the European Commission intend to strengthen and unify data protection for all individuals within the European Union (EU).
  • NIST – The National Institute of Standards and Technology (NIST) is a measurement standards laboratory, and a non-regulatory agency of the United States Department of Commerce. http://csrc.nist.gov/projects/iden_ac.html
  • PCI – The Payment Card Industry Data Security Standard (PCI-DSS) is a proprietary information security standard for organizations that handle branded credit cards from the major card schemes.
  • CIS CSC – The Center for Internet Security Critical Security Controls for Effective Cyber Defense is a publication of best practice guidelines for computer security.

PART 4

NEXT STEPS – HOW THYCOTIC CAN HELP

As a global leader in Privileged Access Management and Least Privilege Management, Thycotic is unique in offering an end-to-end solution that encompasses least privilege management on endpoints with application control, along with protecting privileged account passwords across the entire IT infrastructure.

Thycotic provides free tools and software solutions for the key components that enable you to build out a Privileged Access Management program from privileged access training and account discovery to incident response for a compromised account.

To learn more, visit these Thycotic online resources:

  • Get Free PAM security tools at thycotic.com/free-tools/
  • Find Free Privileged Account Password protection trial software at thycotic.com/secret-server/
  • Get Free Least Privilege Management trial software at thycotic.com/privilege-manager/
  • Learn more about PAM compliance solutions at thycotic.com/solutions

ABOUT THYCOTIC

Thycotic, a global leader in IT security, is the fastest growing provider of Privilege Management solutions that protect an organization’s most valuable assets from cyber-attacks and insider threats. Thycotic secures privileged account access for more than 7,500 organizations worldwide, including Fortune 500 enterprises. Thycotic’s award winning Privilege Management Security solutions minimize privileged credential risk, limit user privileges and control applications on endpoints and servers. Thycotic was founded in 1996 with corporate headquarters in Washington, D.C. and global offices in the U.K. and Australia. For more information, please visit www.thycotic.com.

Six steps for building a robust incident response function

Introduction: This is the decade of incident response

Organizations globally realize that working only to prevent and detect cyberattacks will not protect them against cyber security threats. That is why IBM Resilient® was developed: to arm security teams with a platform for managing, coordinating, and streamlining incident response (IR) processes.

IBM Security has had the privilege of working with organizations of all sizes and across all industries as they implement Resilient solutions to develop more sophisticated and robust incident response functions. These organizations build IR processes that are consistent, repeatable, and measurable, rather than ad hoc. They make communication, coordination, and collaboration an organization-wide priority. They leverage technology that empowers the response team to do their job faster and more accurately.

But there are challenges to building and managing a more robust IR program. Three challenges in particular stand out:

  1. The volume of cyber security incidents is increasing: Forty-two percent of cyber security professionals say their organization ignores a significant number of security alerts because they can’t keep up with the volume, according to Enterprise Strategy Group1.
  2. The cybersecurity skills gap is widening: Security teams struggle to hire and retain enough skilled analysts to keep pace with the alerts and incidents they face.
  3. Organizations are too complex and underprepared for effective response: Insufficient planning and preparedness and the complexity of IT and business processes are the top barriers to responding to cyberattacks3.

To solve these challenges, many IBM Resilient customers are striving to align their people, process, and technology so that IR analysts understand who is responsible for which tasks, when tasks need to be done, and how to do them. This emerging concept is known as incident response orchestration.

Incident response orchestration empowers security analysts by putting IR processes and tools right at their fingertips. They can access important incident information in an instant, make accurate decisions, and take decisive action. It leverages automation to increase the productivity of security analysts and technologies—alleviating the skills gap and the volume of alerts.

But IR orchestration is a process, not a product. It requires strong foundational blocks—trained people, proven processes, and integrated technologies. Orchestration is built on these core elements, and the effectiveness of an organization’s orchestration efforts lies entirely on the quality of these fundamental pieces.

Mapping your IR maturity

Over the years, IBM Resilient customers have increased their IR sophistication at various levels across a spectrum of maturity. Maturity levels are often necessitated by industry, available resources, or experience, but most IBM Resilient customers continually look to evolve their IR function into a more advanced phase.

This model maps the journey from an ad hoc and insufficient incident response function to one that is fully coordinated, integrated, and primed for continuous improvement and optimization.

The road to orchestrated incident response starts with developing people, process, and technology. That is the purpose of this guide: to show you the key steps in the process of building a robust IR function.

Step 1: Understand threats, both external and internal

Every organization faces a unique threat landscape, and the first step in building out your incident response function is to develop a detailed understanding of this landscape.

Part of your threat landscape is the nature of the cyberattacks your organization will contend with. That may include specific threats that your organization has addressed in the past (for example, malware infections or phishing attacks), as well as threats that are known to affect your industry broadly (such as ransomware attacks on healthcare organizations, or DDoS attacks on internet infrastructure companies).

Additionally, a robust threat model should consider all possible actors and incidents. For example, a recent survey of a dozen healthcare organizations found that many struggle with an “inadequate threat model” and focus “almost exclusively on the protection of patient health records.” 4 The survey found that rather than developing a holistic view of their IT environment and possible threats, staff at healthcare organizations rarely venture beyond the narrow focus of regulations like the US HIPAA law. More serious threats that didn’t directly affect patient health information—such as ransomware that targets healthcare devices—lurked in organizational blind spots.

The spectrum of possible cyber incidents your organization may face is broad, and each will warrant its own IR process. To get started, among the questions you might ask are:

  • What kinds of attacks or adverse incidents has our organization experienced in the past?
  • Have we sustained a malware infection in the recent past? If so, what kind of malware (botnet, theft of data, ransom)? When and for how long did the incident last and how was it resolved?
  • Have our employees been the victims of targeted phishing email scams designed to steal employee credentials? If so, which employees?
  • Has our organization been the subject of criticism in popular online forums or by hacktivist groups or other online personalities?
  • Has our organization been specifically targeted by a denial-of-service attack or other form of intentional online disruption?

In attempting to understand the threats facing your organization, consider what types of attacks your competitors, business partners, and peer companies have encountered. Have you seen similar attacks?

Preparing for privacy breaches

While cyberattacks themselves can be enormously damaging, the potential for regulatory fines can be equally if not more damaging to an organization. It’s essential for security teams to assess which regulations will apply to them in the event of a breach, based on your industry and the data you hold that may be targeted, and how they can be best prepared to ensure compliance. Questions to ask include:

  • What are your privacy obligations—including industry regulations, state/federal data breach laws, and contractual agreements?
  • When do you need to provide notification of privacy breaches (factors often include breach size and whether the data was encrypted—but vary across geographies and industries)?
  • Who needs to be notified, and how (customers, attorney general’s office, others)?
  • What is the time limit for notification?

Privacy obligations are already a major concern for security and privacy professionals, and that concern is likely to increase with the EU’s incoming General Data Protection Regulation (GDPR), which goes into effect in May 2018.

The GDPR is a globally focused privacy law that introduces steep, sweeping changes. It applies to any organization globally that does business with EU citizens or organizations, includes a 72-hour window for data breach notification (which is much tighter than most current laws in the US), and can impose potentially enormous fines for non-compliance (up to 20 million euros or four percent of an organization’s annual global revenue, whichever is greater). Organizations should take steps and set roles, responsibilities, and processes for complying with GDPR now.
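The 72-hour clock itself is simple arithmetic, as this minimal Python sketch shows. When the clock starts (when the organization "becomes aware" of the breach) is a legal judgment for counsel, and the function name and timestamp here are invented for illustration:

```python
from datetime import datetime, timedelta, timezone

# Sketch of the notification-clock arithmetic only; determining the
# "aware_at" moment is a legal question, not a technical one.
NOTIFY_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at):
    """Latest time at which the supervisory authority must be notified."""
    return aware_at + NOTIFY_WINDOW

aware = datetime(2018, 5, 25, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(aware).isoformat())
# 2018-05-28T09:00:00+00:00
```

Building this deadline calculation into your incident response tooling ensures the notification task is created and tracked automatically the moment a privacy incident is opened.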

Assessing your organization

Additionally, your threat landscape is not just the external factors and risks that may impact you, but also your internal challenges and shortcomings. As described earlier, the cybersecurity skills gap looks to be a challenge that our industry will need to manage for the foreseeable future—and organizations should assess how it impacts them today and work to manage it.

To identify your internal skills gap, evaluate the current skills you have versus the skills you’ll need to effectively combat and manage the external threats you face. Performance metrics such as time-to-completion on individual tasks and workload balance are good indictors of the skills you have today and where the gaps are. And by using tabletop exercises and analysis, you can further validate your assessment and find additional gaps you may have overlooked.

Finally, your threat landscape—the attacks you face, the regulations you’re beholden to, and your organizational skills shortage—is a continually evolving assessment. As the cybercrime market, privacy regulations, and other industry trends shift, your landscape will too. Be sure to set regular intervals to review and update your threat landscape accordingly.

Case Study: Top 10 European Bank

One IBM Resilient customer faced a unique challenge: they had three security teams around the world who managed incidents with their own specific processes. This led to valuable threat information becoming siloed, a lack of central management and oversight, and no dependable way to test and improve IR processes.

The organization’s security leadership knew it had to standardize IR plans across the organization and enable centralized incident management and oversight.

The plan: the security leadership team brought the groups together to collectively develop combined, standardized response plans for specific incident types—incorporating the most effective and proven processes from across the three groups. Additionally, the organization implemented a single incident response platform (IRP) for the three groups to:

  • Centrally manage incidents across the organization
  • Enable better context gathering and collaboration
  • Provide better visibility to management
  • Create a feedback loop that ensures new IR plans, tests, and improvements are shared across the organization

With this new strategy, the organization’s security teams can continually gain value from the organization’s experience and intelligence collectively.

Step 2: Build a standardized, documented, and repeatable incident response plan

Surveys indicate that insufficient planning and preparedness is still the single biggest barrier to cyber resilience today. It is, perhaps, not surprising then that most organizations don’t have a proper incident response plan in place. According to the 2016 Cyber Resilient Organization study from the Ponemon Institute, only 25 percent of organizations have a cyber security incident response plan (CSIRP) in place and applied consistently across the organization. The remaining 75 percent either don’t have a plan at all, follow informal, ad hoc processes, or don’t have their plan applied across the organization.

As a result, many IR functions are slow, inefficient, and ineffective, which increases the likelihood of a costly, damaging cyberattack, increases employee dissatisfaction and burnout, and puts security leadership’s jobs at risk. However, having a standardized, documented, and repeatable IR plan addresses these risks and ensures your team knows exactly what to do, and when and how to do it. It also provides a platform for continual improvement, enabling your organization to stay ahead of ever-evolving cyber threats.

The challenge: creating a proper IR plan is time-consuming and requires a dedicated, organization-wide effort. To that end, security leadership needs to work to make incident planning a priority. An incident response planning workshop can ensure that all your team’s stakeholders come together to develop consistent, documented, and standardized response plans.

Your team should engage with executives and even the board of directors to ensure they understand the risks and let other relevant leaders know that they’ll be expected to contribute. This includes marketing, HR, legal, IT, and other business units.

During the workshop, your teams (with security leadership’s guidance) can come together to walk through specific incident scenarios and:

  • Map out specific steps that need to be taken to resolve an incident throughout its lifecycle
  • Determine roles and responsibilities
  • Identify the key technologies and channels of communications to be leveraged during a response
  • Build processes around permissions and escalations

Resources like NIST, SANS, and CERT can provide great frameworks for these conversations and plans—but, ultimately, your IR plans will need to be specific to your organization. Therefore, it’s important to involve all contributors across the organization. You will need to tap the know-how and experience of your existing IT and security teams, key stakeholders within your organization, as well as executives, and legal and compliance officers. External third party entities like business partners and suppliers can also be part of the conversation.

By the end of these exercises and conversations, your team should have well-thought-out, repeatable, and documented plans that can be centralized, followed by anyone on your team, and continually improved upon over time.

Case Study: Fortune 100 Technology Company

One IBM Resilient customer had made major technological investments in their SOC, and needed to ensure their people and processes were equally developed. Their plan: use simulations to test processes and develop SLAs and executive reporting.

This customer established regular, quarterly simulations that focused specifically on complex and unlikely events, ensuring they wouldn’t be caught off-guard by the most severe threats. To gain organizational support, the security leadership developed incident response SLAs. These metrics were grouped by incident type and severity, and provided a standard for the incident response team to strive for. Additionally, the SLAs enabled the CISO to demonstrate performance to the board and set budget accordingly. Today, this customer continues to experience hundreds of incidents daily, but their well-trained team can manage and resolve them in a streamlined, effective manner.

Step 3: Proactively test and improve IR processes

Cyber adversaries are continually striving to gain new advantages. Cyber security teams need to make staying ahead a priority.

One of the most effective ways to keep IR capabilities driving forward is running simulations—and doing them in a dedicated, results-driven manner.

IR simulations provide a useful method for overcoming the “insufficient planning and preparation” barrier. Simulations ensure that your entire IR function—people, processes, and technology—are primed and ready for real-world incidents, while also uncovering opportunities for future improvements.

The key for security leaders is to ensure that their simulations are effective. There are specific steps your team can take to make improvements—and make them stick.

To start, security leaders should plan upfront to make the simulation meaningful. Do you want to practice a commonly seen incident, or prepare for something unexpected? Both types are valid to explore.

Security leaders should also build specific, thoughtful simulations that include important details your analysts will need to search for. In other words, make your team think critically about the simulation and ensure it’s more than just a check-the-box exercise.

Additionally, make your simulations measurable. Set goals and track key metrics such as time-to-completion and level of completeness. And replay simulations to measure improvements (or regressions).
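As a minimal illustration of tracking those metrics, the sketch below compares a replayed simulation against its baseline run; the scenario name and numbers are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class SimulationRun:
    scenario: str
    minutes_to_complete: float
    completeness_pct: float   # 0-100: how much of the playbook was executed

def compare(baseline: SimulationRun, replay: SimulationRun) -> dict:
    """Compare a replayed simulation against its earlier baseline run."""
    return {
        "minutes_faster": baseline.minutes_to_complete - replay.minutes_to_complete,
        "completeness_gain": replay.completeness_pct - baseline.completeness_pct,
        "regressed": replay.minutes_to_complete > baseline.minutes_to_complete,
    }

# Quarterly replays of the same scenario make progress (or regression) visible.
q1 = SimulationRun("ransomware tabletop", minutes_to_complete=95, completeness_pct=70)
q2 = SimulationRun("ransomware tabletop", minutes_to_complete=62, completeness_pct=90)
print(compare(q1, q2))
```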

Finally, make IR simulations an organization-wide event. Include participants from HR, legal, marketing, and other groups to ensure they will be ready to play their parts when a real incident hits. Similarly, share the results of your post-mortem analysis across the organization. This will help keep your team honest and educate leadership on where and what resources are needed.

Step 4: Leverage threat intelligence

Cyber criminals are working together—collaborating and sharing information across the dark web. Security professionals should be working together, too.

As part of the 2016 Cyber Resilient Organization study, the Ponemon Institute compared high-performing respondents (those whose cyber resilience had increased in the last year) to average organizations to identify key differences. One of the many findings: high-performing organizations are more likely to participate in a threat-sharing program (70 percent versus only 53 percent of average organizations).

The threat intelligence (TI) industry has seen increasing buzz in recent years, and for good reason: security teams are seeking better insight and awareness into the activity in their environments.

Leveraging threat intelligence is a big part of becoming more aware. But there are challenges to implementing it. Security teams often need to navigate countless feeds of varying quality, as well as manage the signal-to-noise problem.

Fortunately, many IBM Resilient customers have years of experience implementing and experimenting with a variety of threat intelligence feeds. Based on their combined experiences, here are three key ways to effectively leverage TI for better incident response:

  • Anchor threat intelligence in incident response plans: One IBM Resilient customer, a major media network, found their analysts spent far too much time investigating threat intelligence data. They were chasing issues that didn’t apply to them, which drained resources and severely limited their effectiveness. To fix this, the team grounded threat intelligence data in their existing incident response processes. Analysts escalate indicators of compromise (IoCs) into incidents, and they can access vital information about potential threats when needed—using the available intel when relevant to the circumstances they face. This led to huge improvements in time management and team effectiveness.
  • Use integrations and correlation to make threat intelligence actionable: By integrating threat intelligence with other data sources like SIEMs and EDR tools, analysts gain fuller incident context and the information becomes more actionable. They can refine and target the scope of the data by considering the context, severity, and patterns. This helps analysts better understand what they’re contending with and what would be best to do about it.
  • Track and measure the usefulness of your sources: There are plenty of intel feeds, and none are one-size-fits-all. Examples include open source feeds, closed communities, commercial sources, and threat intelligence platforms. Record how often each feed provides information, along with the quality and criticality of what it delivers. You’ll soon discover whether certain feeds are redundant or need to be adjusted.
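A feed scorecard along the lines of the third point could be as simple as the following sketch; the feed names and ratings are hypothetical.

```python
from collections import defaultdict

class FeedScorecard:
    """Track how often each TI feed contributes, and at what quality."""
    def __init__(self):
        self.hits = defaultdict(int)      # indicators that matched an incident
        self.quality = defaultdict(list)  # analyst ratings, 1-5

    def record(self, feed: str, matched: bool, rating: int):
        if matched:
            self.hits[feed] += 1
        self.quality[feed].append(rating)

    def summary(self):
        return {
            feed: {"hits": self.hits[feed],
                   "avg_quality": sum(ratings) / len(ratings)}
            for feed, ratings in self.quality.items()
        }

card = FeedScorecard()
card.record("open-source-feed", matched=True, rating=2)
card.record("open-source-feed", matched=False, rating=1)
card.record("commercial-feed", matched=True, rating=5)
print(card.summary())
```

Feeds with few hits or low average quality are candidates for adjustment or removal.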

As we’ll explore further in upcoming sections, incident response platforms (IRPs) can automate much of the manual work of cyber incident investigation and response. Among other improvements, IRPs use data analysis and specialized logic in an approach called artifact visualization. This allows you to see how seemingly disparate incidents might be related by noting the commonalities between them—such as IT assets involved, malicious software used, malicious infrastructure communicated with, and so on.

Organizations that can identify incidents and grasp the disparate artifacts that make up the story of a breach will drive response times down from days or weeks to hours. This also helps them implement practical controls in areas like user access, data security, and communications that will prevent future incidents from occurring.
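The artifact-correlation idea can be illustrated in a few lines of Python; the incident IDs and artifacts below are invented for the example, and a real IRP would of course work over far richer data.

```python
from collections import defaultdict

# incident_id -> set of observed artifacts (file hashes, IPs, hostnames, ...)
incidents = {
    "INC-101": {"mal.exe:9f2b", "203.0.113.7", "host-fin-12"},
    "INC-102": {"203.0.113.7", "host-hr-03"},
    "INC-103": {"dropper.js:77aa", "host-dev-01"},
}

def related_incidents(incidents: dict) -> dict:
    """Group incidents that share at least one artifact."""
    by_artifact = defaultdict(set)
    for inc, artifacts in incidents.items():
        for artifact in artifacts:
            by_artifact[artifact].add(inc)
    links = defaultdict(set)
    for inc_set in by_artifact.values():
        if len(inc_set) > 1:              # artifact seen in multiple incidents
            for inc in inc_set:
                links[inc] |= inc_set - {inc}
    return dict(links)

print(related_incidents(incidents))
```

Here INC-101 and INC-102 are linked by a shared network indicator, surfacing two seemingly separate incidents as parts of one story.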

Step 5. Streamline incident investigation and response

As noted in the Verizon Data Breach Investigation Report, fewer than a quarter of all incidents Verizon reviewed were detected in “days or less,” while the majority took days, weeks, or months to detect. With cyber incidents remaining undetected for weeks or months, malicious actors have the opportunity to establish a beachhead on compromised networks that can be difficult to remove.

One reason is that most organizations rely on ad hoc processes for investigating even straightforward cyber incidents like phishing attacks on employees—and because of the skills gap, even organizations that have the right tools and technology may struggle to find enough resources to efficiently manage the deluge of incidents.

As organizations add integrated data and threat intelligence sources to their IR processes, the opportunities to orchestrate responses in a sophisticated way grow—starting with the automation of low-level tasks.

Automation is a useful method of streamlining menial, repetitive tasks, and making your team faster and smarter. When used in a broader incident response orchestration strategy (learn more about orchestration in the next section), automation can empower your team to be strategic decision makers.

In the case of an outbreak of malware, for example, a suspicious sample detected on one endpoint can be automatically grabbed and fed to an endpoint agent or next-generation threat detection platform to observe and classify. Based on the outcome of that analysis, further automated and manual processes can be queued up: identifying other infected hosts on the network and requesting permission to quarantine them, identifying a vulnerability associated with that malware infection and scheduling emergency patches to vulnerable systems, or firing off requisite notifications to internal staff or external monitors, for example. And, at each stage, requests, responses, and actions can be documented for future reference.
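A skeletal version of such a playbook might look like the following. Every function here is a hypothetical stand-in for a real sandbox, EDR, or notification integration, and the quarantine step deliberately keeps a human approval in the loop while documenting each action for future reference.

```python
# All functions below are hypothetical stand-ins for real sandbox,
# EDR, patching, and notification integrations.

def classify_sample(sample: bytes) -> dict:
    """Submit the sample for observation and classification (stubbed)."""
    return {"verdict": "malicious", "family": "example-family"}

def find_infected_hosts(family: str) -> list:
    """Query endpoints for other hosts showing the same infection (stubbed)."""
    return ["host-17", "host-42"]

def request_quarantine_approval(hosts: list) -> bool:
    """Human-in-the-loop: an analyst approves before containment runs."""
    print(f"Approval requested to quarantine: {hosts}")
    return True

def run_malware_playbook(sample: bytes, audit_log: list) -> None:
    """Chain automated and manual steps, documenting each one."""
    verdict = classify_sample(sample)
    audit_log.append(f"classified as {verdict['family']}")
    if verdict["verdict"] != "malicious":
        return
    hosts = find_infected_hosts(verdict["family"])
    audit_log.append(f"infected hosts: {hosts}")
    if request_quarantine_approval(hosts):
        for host in hosts:
            audit_log.append(f"quarantined {host}")  # would call the EDR API here

log = []
run_malware_playbook(b"...suspicious bytes...", log)
print(log)
```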

To begin with automation, pinpoint the right processes to streamline. These are often time-consuming, menial, and inefficient tasks that take up inordinate amounts of analysts’ time, and can be safely and reliably automated. Security leaders should also analyze the risk and complexities of automating a process versus the potential efficiencies gained.

To ensure safe and reliable automation, test the process’s fidelity. Script manual actions that keep human decision-making and approval involved. Once your team is confident that the process is right and the technology works properly, you can decide to fully automate.
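One way to keep that human approval step until a process has proven itself is a simple gate that can later be switched to fully automated. This is an illustrative sketch, not a prescription; the quarantine function is a stand-in for a real containment call.

```python
def quarantine_host(host: str) -> str:
    # Stand-in for a real containment call to an EDR or firewall API.
    return f"{host} quarantined"

def run_action(action, target, fully_automated, approve=input):
    """Keep a human approval step until the process has proven itself."""
    if not fully_automated:
        answer = approve(f"Run {action.__name__} on {target}? [y/N] ")
        if answer.strip().lower() != "y":
            return "skipped"
    return action(target)

# Early phase: scripted, but a human approves each run (stubbed here).
print(run_action(quarantine_host, "host-42", False, approve=lambda prompt: "y"))
# Once the team trusts the process, flip the flag for full automation.
print(run_action(quarantine_host, "host-42", True))
```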

However, it’s important to note that while technology-based automation can save time, it’s only as strong as your overall IR function—and is most effectively leveraged in an orchestrated incident response strategy.

Step 6. Orchestrate across people, process, and technology

The promise of incident response orchestration—making response faster and more automated—has drawn the attention and interest of many security experts across the industry. But as referenced in the last section, successful and effective orchestration and automation require a strong overall IR function. The key to effective orchestration lies in the quality of an organization’s IR fundamentals: people, process, and technology.

The earlier sections of this guide have been created to help you ensure these fundamental building blocks are well-thought-out, strong, and primed for future improvements. To refresh, here are essential questions to ask when assessing the strength of your IR foundation:

  • People: Have you ensured your IR team is well-coordinated and well-trained? Do they have the right skills to address all aspects of an incident’s lifecycle? Do they have means for collaboration and analysis?
  • Process: Do you have well-defined, repeatable, and consistent IR plans in place? Are they easy to update and refine? Are you regularly testing and measuring them?
  • Technology: Does your technology provide valuable insight and intelligence in a directed fashion? Does it enable your team to make smart decisions and quickly act on those decisions?

By addressing these questions, you can ensure your orchestration efforts align these building blocks to real effect. If you haven’t developed this foundation, the benefits of orchestration will be marginal.

The goal of incident response orchestration is to empower your response team by ensuring the humans in the loop know exactly what to do when a security incident strikes, and have the processes and tools they need to act quickly, effectively, and correctly.

Orchestration and automation are both growing in popularity among cyber security professionals, but orchestration is different in that it supports and optimizes the human-centric elements of cyber security—like helping to understand context and decision making—and empowers them as central to security operations.

This is a critical distinction because security threats are uncertain problems. Responding to a threat is hardly ever a cut-and-dried issue. Automation is a great tool for quickly and effectively executing specific tasks—but since threats are often evolving and adversaries are changing tactics, human decision-making is needed to step in for things like escalating issues or troubleshooting.

Automation is an effective tool in the broader orchestration process, but it’s the human element that makes orchestration the game-changer that it is.

Orchestration applies differently to each specific organization. It should map to your unique threat landscape, IT and security environments, and company priorities. But for a quick example, the following is a classic use case of how we see orchestration employed in many of the organizations we work with.

In the top left of the graphic, you can see that as an incident is escalated from a SIEM alert, a record is automatically created in the organization’s incident response platform (IRP). From there, in the bottom right, the platform automatically gathers and delivers valuable incident context from the built-in threat intelligence feeds and additional sources. With that context in hand, security analysts already have critical information when they step in and take control. These analysts can leverage additional integrations to manually take on additional tasks deemed necessary—including gathering additional information about an incident from other security tools (such as endpoint security tools or web gateways) or starting to remediate the issue by alerting the IT help desk or using identity management tools to pull users off the network.
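Stripped to its essentials, that escalation-and-enrichment flow can be sketched as follows. The field names and the enrichment stub are assumptions for the example, not the IRP's actual API.

```python
# Hypothetical sketch of the escalation flow described above; the enrich
# function stands in for real threat intelligence feed integrations.

def enrich(indicator: str) -> dict:
    """Look up an indicator against TI sources (stubbed)."""
    return {"indicator": indicator, "reputation": "known-bad"}

def open_incident(siem_alert: dict) -> dict:
    """Auto-create an IRP record from a SIEM alert, pre-enriched with context."""
    return {
        "id": f"IRP-{siem_alert['alert_id']}",
        "source": "SIEM",
        "context": [enrich(i) for i in siem_alert["indicators"]],
        "status": "awaiting-analyst",
    }

alert = {"alert_id": 9001, "indicators": ["198.51.100.23"]}
incident = open_incident(alert)
print(incident["id"], incident["status"])
# Analysts then take over with critical context already attached.
```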

There are many different ways to orchestrate IR processes, but the goal is always the same: put your analysts in the best position to respond to threats.

As incident response processes mature, organizations enter a phase of proactive response, in which information gleaned from incident response becomes strategic to an organization. With proactive response, intelligence from the IR team can be fed back into a security and IT organization — shaping technology investments and acquisitions, sharpening employee skill sets, and broadening an organization’s understanding of risk to encompass a broader ecosystem of physical security assets and providers, threat intelligence providers, regulators and government agencies, and more.

While few companies — even within the Fortune 500 — have achieved this level of maturity, we expect the strategic application of incident response to become more common as more firms migrate to mature incident response platforms in the coming years.

Conclusion: Building a resilient, response-ready organization

It is tempting to imagine that technology advancements will soon turn incident response into a push-button function that can be performed by even junior employees. The truth is that IR is, and will remain, complicated and multifaceted, requiring the attention of intelligent security analysts.

Mature incident response combines people, processes, and technology as part of a continuum. The job of technology isn’t to replace human analysts, but to empower them to do more: delivering better intelligence about specific threats, streamlining response processes, and making sure that security analysts are ready to respond.

Additionally, a mature cyber security incident response function can beget a larger, cultural transformation within your organization: integrating your security team more closely with IT operations and management, and enlisting them in the process of responding to cyber incidents in a comprehensive way.

The Total Economic Impact Of IBM Resilient

Executive Summary

IBM provides a security incident response (IR) solution called Resilient that helps its customers address security incidents quickly in an automated and orchestrated manner. IBM commissioned Forrester Consulting to conduct a Total Economic Impact™ (TEI) study and examine the potential return on investment (ROI) enterprises may realize by deploying Resilient. The purpose of this study is to provide readers with a framework to evaluate the potential financial impact of the Resilient platform on their organizations.

To better understand the benefits, costs, and risks associated with this investment, Forrester interviewed a Resilient customer with several years of experience using the solution. Forrester found that, as an incident response platform, the solution provides significant benefits by shortening the response time for security incidents through the enablement of automation and orchestration to security professionals — effectively shortening the time-to-contain security incidents. Security tools and devices across the enterprise are put into play sooner through dynamic playbooks that cut the analysis and triage time required of incident responders.

Prior to using Resilient, the interviewed customer leveraged a ticketing system that provided little in the way of automation. This system yielded limited success, leaving the customer with little intelligence due to a lack of integration with the security tool stack. These limitations created the need for a large force of security professionals, each of whom had to be specialized in a wide variety of security areas to identify and contain threats.

Key Findings

Quantified benefits. The interviewed organization experienced the following risk-adjusted present value (PV) quantified benefits:

  • Orchestration and automation saved 25 minutes per security analyst and over an hour in total per security incident. With over 350 cybersecurity incidents per week, the interviewed organization was saving nearly 22,750 security analyst hours in the first year. Accounting for the rise in cybersecurity incidents over the years and the relatively high cost of security analysts, this translated to a three-year savings worth $4.5 million in labor costs. The reduction in effort by the security analysts to handle incidents resulted in increased time for them to perform advanced analysis of threats and develop new countermeasures to further improve the organization’s security posture.
  • End users benefited from quicker incident response and improved uptime. While the Resilient platform did not offer direct improvement on the detection of incidents, it did allow incident responders to contain threats much more quickly after the initial detection. On a per-incident basis, business users saved half an hour due to the reduction in time-to-contain as they no longer needed to wait as long for security analysts to investigate and perform remediation steps. Additionally, the quicker time-to-contain led to avoided image restores and wider scale remediation action on the endpoints. In all, the organization saved between 11,830 and 15,645 hours per year with the Resilient platform.
  • Resilient, as the incident response platform, brought visibility to the efficacy of existing security tools, enabling security professionals to realize the full potential of the organization’s library of tools. With Resilient acting as the central dashboard orchestrating the response to security incidents, security professionals were able to centrally collect data and determine points in the security architecture that were less responsive. With the insight, security professionals could identify the exact point of failure and choose to either reconfigure the tool or substitute the tool with a more effective replacement. Security tools are expensive investments, and Resilient helps professionals reaffirm that these investments are working as advertised.

Unquantified benefits. The interviewed organization experienced the following benefits, which are not quantified for this study:

  • Resilient provides instant dashboarding to help expedite the audit process and reduce scrutiny from regulatory bodies. Most enterprises are audited on the security front numerous times a year and provide management reporting on security incidents at an even higher frequency. By being able to centralize security response data in the Resilient platform, the interviewed organization can provide internal and external auditors with data that reduces security professional effort and auditor effort.
  • The organization saw continual security posture improvement from security analysts’ newly freed time. Whereas the interviewed organization was once constantly fighting fires, it is now doing deep analysis into threats to continually improve its processes and defenses. The value of this has not been calculated, but it certainly helps the organization’s security individuals sleep better at night knowing that they are in a better posture to prevent massive fallout from situations like recent, widely publicized security breaches.

Costs. The interviewed organization experienced the following risk-adjusted costs:

  • License and support costs amounted to $3,469,440 over three years. The license costs are both user licenses for the incident responders as well as the primary software licenses for production and development environments. Standard support and service has also been accounted for in this category.
  • Software integration and process build outs are a low but ongoing cost. This cost category is inclusive of deployment, orchestration buildouts, and integration build outs with existing security tools. Some APIs are included, but as the interviewed organization’s security architecture was complex and tools are numerous, the custom buildout of these integrations was necessary and cost $266,745 over three years.

Forrester’s interview with an existing customer and subsequent financial analysis found that the interviewed organization experienced PV benefits of $7,610,015 over three years versus PV costs of $3,736,185, adding up to a net present value (NPV) of $3,873,830 and an ROI of 104%.

TEI Framework And Methodology

From the information provided in the interview, Forrester has constructed a Total Economic Impact™ (TEI) framework for those organizations considering implementing IBM Resilient.

The objective of the framework is to identify the cost, benefit, flexibility, and risk factors that affect the investment decision. Forrester took a multistep approach to evaluate the impact that IBM Resilient can have on an organization:

  • DUE DILIGENCE: Interviewed IBM stakeholders and Forrester analysts to gather data relative to Resilient.
  • CUSTOMER INTERVIEW: Interviewed one organization using Resilient to obtain data with respect to costs, benefits, and risks.
  • FINANCIAL MODEL FRAMEWORK: Constructed a financial model representative of the interview using the TEI methodology and risk-adjusted the financial model based on issues and concerns of the interviewed organization.
  • CASE STUDY: Employed four fundamental elements of TEI in modeling IBM Resilient’s impact: benefits, costs, flexibility, and risks. Given the increasing sophistication that enterprises have regarding ROI analyses related to IT investments, Forrester’s TEI methodology serves to provide a complete picture of the total economic impact of purchase decisions. Please see Appendix A for additional information on the TEI methodology.

DISCLOSURES

Readers should be aware of the following:

This study is commissioned by IBM and delivered by Forrester Consulting. It is not meant to be used as a competitive analysis.

Forrester makes no assumptions as to the potential ROI that other organizations will receive. Forrester strongly advises that readers use their own estimates within the framework provided in the report to determine the appropriateness of an investment in IBM Resilient.

IBM reviewed and provided feedback to Forrester, but Forrester maintains editorial control over the study and its findings and does not accept changes to the study that contradict Forrester’s findings or obscure the meaning of the study.

IBM provided the customer names for the interviews but did not participate in the interviews.

The Resilient Customer Journey

BEFORE AND AFTER THE RESILIENT INVESTMENT

Interviewed Organization

For this study, Forrester interviewed an IBM Resilient customer with multiple years of experience using the platform:

  • This is a financial services organization with a worldwide footprint.
  • It employs more than 15,000 full-time equivalents (FTEs) and has revenues in the tens of billions.
  • It has a cyber defense team of approximately 150 security professionals.
  • This is an organization that is held accountable to multiple regulatory bodies; effective security posture and processes are instrumental to meeting the standards.

Key Challenges

Coming from an existing state of using a homebrew incident response plan that incorporated the use of an IT ticketing system, the security team at the organization felt that its needs were largely unmet. There was a clear lack of visibility and integration into various security tools, resulting in weak documentation and a complete absence of automation. “We had a clear desire for so much more to improve our efficiency, and when we realized that the existing solution failed at 99% of our wants, it was time to move on,” said the VP of cyber defense. Further, “The messaging from the top was that we had these solutions already — but our own analysis suggested it [the old solution] was clearly incapable of doing what we needed to be effective.”

  • There was a lack of integration with various security tools: Without hooks into the security stack, the old system produced very little documentation and few metrics for consumption. Further still, the effort required to triage and actually drive to the root cause of incidents was largely manual and time-consuming. The old system served as a way to mark issues but did little to feed security professionals the information they needed to take proper containment action.
  • A lack of playbooks meant that every situation was assessed manually when it could have been automated: Incidents arose in a variety of forms and attack vectors. Incident responders would manually go through the analysis process, pulling information from various tools to determine the proper course of action. Said simply, the security analysts needed to enact different containment processes for every incident. The result was that different analysts performed containment and remediation in different ways, compounding the inefficiencies.
  • There was a clear disconnect on automation and orchestration: Without integration, there was no automation. No single, centralized point of control dictated the hundreds of remedial actions being taken. Again, these actions took manual labor, and remediation was left to wildly varying methods across incident responders.
  • Security professionals were a scarce commodity: Being in a constant firefight mode required a large force of incident responders who were each versed in a wide variety of security elements. As the need for these professionals grew, it was more and more costly to add to this cyber defense group. Intelligent automation was a clear solution to reduce the laborious effort of analysis and containment.

Decision To Use Resilient

After an extensive request for proposal (RFP) and business case process evaluating multiple vendors, the interviewed organization chose Resilient and began deployment:

  • The organization chose Resilient because of its dynamic playbooks — the ability to follow the path of incidents and act dynamically through the stages of breach from initial identification to internal network proliferation and widespread data corruption.
  • By the end of the bake-off proof of concept (POC), the organization had built a simple integration that translated into significant automation savings for all incident responders.
  • The Resilient solution was running and integrated with many of the organization’s mission-critical security tools within two weeks.

Key Results

The interview revealed that key results from the Resilient investment include:

  • Integration with existing security tool sets allowed for a dramatic automation improvement. By integrating with existing tools, Resilient automatically presented the relevant data on issues to security analysts and then completed the required actions through those tools once approved by analysts. In short, orchestration and automation eliminated a large portion of investigative work from the detect, analyze, contain, and eradicate workflow.
  • Like security practitioners, business end users found greater productivity. As the time-to-contain shortened from automation, business users enjoyed higher levels of uptime at their workstations, directly feeding value back to the organization in productive output. Disruptions were reduced in scale; even IT help desk effort was reduced as reimage sessions or virtual machine (VM) recomposes were minimized by fast action to resolve incidents.
  • Having a capable incident response platform was the final piece of the security puzzle to tackle increasingly complex attacks. While detection and remediation were still largely left to the existing tools in the security group, the time to take action and contain threats had dramatically improved. Without an IR platform capable of orchestration, the organization had the tools but lost time deciding when and where to use each one.

Financial Analysis

Orchestration And Automation Savings For Incident Response

Following the deployment of IBM Resilient, the interviewed organization realized a significant gain in automation and, in turn, a reduction in security analyst effort. Whereas the previous solution offered very limited or no data from the relevant security pieces, Resilient, once integrated with the security stack, provided vivid detail on security incidents and enacted containment and remediation actions with minimal input from security personnel. From the interview, Forrester determined:

  • Security experts can see from a centralized command center the initial point of detection and any further exploitation caused by the incident. Using dynamic playbooks, the Resilient platform visibly displays the actions required and can execute with a single click from the incident responders.
  • In the previous state where incidents morphed and affected multiple points across the network, incident responders would rely on multiple analysts to determine and contain these threats. With Resilient, the organization can identify these threats and reduce the number of actual personnel necessary to mitigate the issues.
  • The interviewee stated that the longest part of the incident response workflow was the analysis and triage on the incidents. Resilient effectively reduced the effort involved by over 80%. Accounting for three analysts who may have been involved in these incidents, their individual effort was reduced by nearly 25 minutes for analysis, resulting in a total of 1.25 hours saved per incident.
  • At an average rate of 350 incidents occurring on a weekly basis, we estimate that 22,750 hours were saved in the initial year by the security responders and analysts.

Calculations have been adjusted for an increase in efficiency through optimization of orchestrations and an increase in incidents that will occur over the ensuing years. Forrester estimates security incidents to increase by nearly 15% at financial services organizations on a year-over-year basis.

  • Incidents will grow in frequency by 15% year over year.
  • Tuning and optimization of the orchestration through further integration with security tools will increase the time saved by security professionals by 10% year over year.
  • At a rate of $110,000 per year, accounting for benefits, security professionals earn the equivalent of $66/hour.
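Plugging the study's stated assumptions into a quick model reproduces its first-year figure. Note that the three-year gross total below is before the 5% risk adjustment and present-value discounting the study applies, which bring it down toward the quoted $4.5 million; the arithmetic here is our own sketch, not Forrester's model.

```python
# Assumptions taken from the study's figures; the arithmetic is illustrative.
incidents_per_week = 350
hours_saved_per_incident = 1.25     # three analysts x ~25 minutes each
hourly_rate = 66                    # fully loaded cost per analyst hour
incident_growth = 1.15              # incidents grow 15% year over year
efficiency_gain = 1.10              # time saved grows 10% year over year

hours_year1 = incidents_per_week * 52 * hours_saved_per_incident
print(hours_year1)  # 22750.0, the report's first-year figure

gross_savings = 0.0
for year in range(3):
    hours = hours_year1 * (incident_growth * efficiency_gain) ** year
    gross_savings += hours * hourly_rate
print(round(gross_savings))  # gross three-year labor savings, pre-discounting
```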

With the time saved, the security analysts were not let go — especially as they are highly sought after. Instead, the interviewed organization allocated the time freed by automation to deep-level analysis — such as determining the advanced behavior of malware or optimizing rule sets and orchestration so that incidents are handled even faster in the future.

While Forrester believes the value of automation and orchestration to be undeniable, readers should be aware that the benefits may be smaller if an established IR plan is already in place. Organizations that are already very mature in incident response and security posture may see less of the value cited in this category.

To account for this risk, Forrester adjusted this benefit downward by 5%, yielding a three-year risk-adjusted total PV of $4,502,964.

End User Productivity Recapture From Improved IR Capabilities

IBM Resilient does not enable quicker detection of malicious activity — this is a function of the existing security infrastructure. Likewise, Resilient does not perform the remediation. Instead, the Resilient solution accelerates the incident response workflow once an incident has been detected, significantly reducing the time to enact remediation and containment procedures — that is, the window between mean-time-to-detect (MTTD) and mean-time-to-contain (MTTC).

Incident responders previously required between 20 and 30 minutes to analyze an incident and determine a proper containment approach — a step largely eliminated by Resilient’s automation and orchestration of containment measures. End users on the enterprise network would often find their machines locked out upon detection, resulting in a period of downtime until the endpoint was contained and remediated. With the reduction in the period between MTTD and MTTC, end users are able to recover this time and spend it productively.

Additionally, a number of incidents often cause deeper and collateral damage as time passes. A hastened action to respond to the incident often reduces the need for deeper-level remediation/recovery techniques, such as a complete reimage.

For the interviewed organization, Forrester found that:

  • For each of the incidents occurring across the enterprise, end users are saving a minimum of 30 minutes per incident. They are repurposing those 30 minutes into productive output.
  • The percentage of incidents that ultimately may have necessitated full restores or VM recomposition without a hastened response is 10%.
  • The average time for reimage, recompose, or full remediation is estimated at 1.5 hours — time that would have been taken away from user productivity.
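These per-incident figures can be combined into a yearly model. In the sketch below, the $42/hour fully loaded end-user rate is purely our assumption (the report does not state an end-user rate); it is chosen because, together with the figures above, the same 15% incident growth, the 5% risk adjustment, and an assumed 10% discount rate, it closely reproduces the $1,346,720 PV cited for this benefit.

```python
# End-user productivity model implied above. USER_RATE and DISCOUNT_RATE
# are our assumptions; the other inputs come from the text.
INCIDENTS_YEAR1 = 350 * 52          # 18,200 incidents in year one
HOURS_PER_INCIDENT = 0.5            # 30 minutes recaptured per incident
REIMAGE_SHARE = 0.10                # 10% would otherwise need a reimage
REIMAGE_HOURS = 1.5                 # average reimage/recompose time
USER_RATE = 42                      # hypothetical loaded rate, USD/hour
INCIDENT_GROWTH = 0.15
RISK_ADJUSTMENT = 0.95
DISCOUNT_RATE = 0.10                # assumed TEI discount rate

user_hours = INCIDENTS_YEAR1 * (HOURS_PER_INCIDENT + REIMAGE_SHARE * REIMAGE_HOURS)

pv_total = 0.0
hours = user_hours
for year in (1, 2, 3):
    pv_total += hours * USER_RATE * RISK_ADJUSTMENT / (1 + DISCOUNT_RATE) ** year
    hours *= 1 + INCIDENT_GROWTH

print(round(user_hours))  # 11830 end-user hours recaptured in year one
print(round(pv_total))    # 1346720
```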

The amount of productivity recaptured can vary depending on:

  • The number of applications installed on the endpoint stations.
  • The time needed to reimage/recompose.
  • The detection efficacy of security measures already in place.

To account for these risks, Forrester adjusted this benefit downward by 5%, yielding a three-year risk-adjusted total PV of $1,346,720.

Existing Security Asset Value Realization Improvement

Enterprises today are rightfully concerned about their security posture and allocate increasing amounts to security budgets — especially given the number of high-profile breaches that frequent the news. With an assortment of tools, how do organizations determine the efficacy of these individual tools following proof of concept (POC) and deployment? Forrester’s interview with the customer organization revealed that while POCs and bake-offs can be useful for a first impression, sometimes the solutions are not quite as effective as originally expected. With Resilient as a central point of orchestration and data collection, the interviewed organization gained visibility into its collective security stack and was better able to evaluate its existing investments.

  • Upon integration with Resilient, the security team was able to collect information as to which defense mechanisms were more effective, if effective at all, on detection or containment of malicious activity.
  • The interviewed organization was able to clearly delineate whether its browser sandboxing and database access management were working, as results were all reported back to Resilient.
  • The organization estimated that over $1.5 million of its investments were not properly configured or working to the standard promised, resulting in either reconfiguration or removal of those services.
  • Detection was primarily noted in the first year of Resilient deployment with smaller incremental gains in the years after.
  • Recognition of the points of failure saved additional labor that the security team would otherwise have spent passing along false negatives or chasing false positives.

Value recovered from the points of failure was estimated at a PV of $1,760,331 over the course of three years of usage.

Unquantified Benefits

Beyond the quantified benefits represented above, the customer organization identified its dramatically improved security posture. Time previously spent on remediating incidents is now spent performing advanced heuristic analysis of malware and threats — understanding their underlying nature to prevent additional outbreaks in the present and future. What is the value in that? Forrester has determined the following on breaches:

  • No organization is immune to breaches. The size of an organization cannot determine the likelihood of attack or the accompanying potential damage, nor can any particular industry preclude an organization, as motivations behind breaches have evolved. While it is impossible to say what percentage of organizations are breached, we know that it is a matter of when, rather than if. Organizations that have a solid IR plan and perform deep-level analysis on different threat vectors stand a much-improved chance of minimizing damage.
  • Breaches that are not addressed with immediacy contain a number of financial ramifications in the short and long run. Lost revenues, legal settlements, regulatory fines from the likes of the Federal Financial Institutions Examination Council (FFIEC) and the Payment Card Industry (PCI), and long-term brand erosion should all be considered.

While prevention and detection are always important, incident response should be treated as equally critical in the organization’s overall security scheme.

Flexibility

The value of flexibility is clearly unique to each customer, and the measure of its value varies from organization to organization. There are multiple scenarios in which a customer might choose to implement Resilient and later realize additional uses and business opportunities, including:

  • Resilient incident response is agnostic to the tools that it integrates with and orchestrates. As newer and more capable prevention, detection, logging, and remediation tools are introduced to the security ecosystem, Resilient can continue to serve as the central orchestration mechanism for these tools and perpetually increase automation for security teams.

Flexibility would also be quantified when evaluated as part of a specific project (described in more detail in Appendix A).

License And Support Costs

From the interview, Forrester has determined that the majority of costs arise from the following items:

  • The customer purchased a base software license, along with user seat licenses for individual incident responders. The licensing purchased by the interviewed organization is perpetual.
  • Support and service were a continued cost assumed on a yearly basis following the initial year of usage.
  • Lastly, a development environment for the Resilient platform was necessary to develop integrations into the organization’s 50-plus existing security tools.

Costs in this study are represented at near list pricing, reflecting only slight discounting. Purchases of other IBM security solutions may drive the cost of the Resilient solution down beyond what is reflected here. We encourage readers to explore the options with IBM or partners.

Compiling the costs of the licenses and service and support, the interviewed organization likely assumed PV costs of $4,415,651 after three years of usage.

Initial And Ongoing Orchestration, Process, And Integration Build-Outs

The IBM Resilient platform can be deployed outright with minimal effort and comes with a number of standard dynamic playbooks. As no two organizations are the same, however, process remodeling and security tool integration need to be undertaken to fully realize the automation and orchestration capabilities of Resilient. The interviewed organization started integration and process augmentation for the mission-critical tools within its stack of more than 50 tools.

  • Initial planning and scripting of the various integrations required the efforts of five security FTEs over two weeks, committing a total of 400 hours. With this effort, the organization had integrated Resilient with its mission-critical and most commonly used tools. Automation savings accrued almost immediately, but the efforts for process engineering and tool integration didn’t stop there.
  • Over the next three years, the organization continued to integrate tools to continually improve the efficiency of incident response, cutting manual processes where it could. The effort spent by a Python developer equated to approximately half an FTE on an ongoing basis.
  • The result was a continued improvement in the organization’s ability to contain incidents in shorter periods of time.

Some organizations may lack the developer resources for advanced Python development in the security space; as such, there exists the risk that additional effort might need to be allocated to the tool integration process. Additionally, different organizations have varying complexities in their security architecture that may require additional effort. As such, Forrester has overlaid this cost category with what we identify as implementation risk.

To account for these risks, Forrester adjusted this cost upward by 10%, yielding a three-year risk-adjusted total PV of $266,745.

IBM Resilient: Overview

The following information is provided by IBM. Forrester has not validated any claims and does not endorse IBM or its offerings.

The Resilient Incident Response Platform (IRP) is the leading platform for orchestrating and automating incident response processes. With Resilient, security organizations can significantly drive down their mean time to find, respond to, and remediate incidents. The platform quickly and easily integrates with organizations’ existing security and IT investments, creating a single hub to drive fast and intelligent action. The platform’s advanced orchestration capabilities enable adaptive response to complex cyber threats.

The latest orchestration innovations to the Resilient IRP include:

  • Dynamic Playbooks: Provides the agility, intelligence, and sophistication needed to contend with complex attacks. Dynamic Playbooks automatically adapts to real-time incident conditions and ensures repetitive, initial triage steps are complete before an analyst even opens the incident.
  • Visual Workflows: Enables analysts to orchestrate incident response with visually built, complex workflows based on tasks and technical integrations.
  • Incident Visualization: Graphically displays the relationships between incident artifacts or indicators of compromise (IOCs) and incidents in an organization’s environment.

Appendix A: Total Economic Impact

Total Economic Impact (TEI) is a methodology developed by Forrester Research that enhances a company’s technology decision-making processes and assists vendors in communicating the value proposition of their products and services to clients. The TEI methodology helps companies demonstrate, justify, and realize the tangible value of IT initiatives to both senior management and other key business stakeholders.

Total Economic Impact Approach

  • Benefits represent the value delivered to the business by the product. The TEI methodology places equal weight on the measure of benefits and the measure of costs, allowing for a full examination of the effect of the technology on the entire organization.
  • Costs consider all expenses necessary to deliver the proposed value, or benefits, of the product. The cost category within TEI captures incremental costs over the existing environment for ongoing costs associated with the solution.
  • Flexibility represents the strategic value that can be obtained for some future additional investment building on top of the initial investment already made. Having the ability to capture that benefit has a PV that can be estimated.
  • Risks measure the uncertainty of benefit and cost estimates given: 1) the likelihood that estimates will meet original projections and 2) the likelihood that estimates will be tracked over time. TEI risk factors are based on “triangular distribution.”

The initial investment column contains costs incurred at “time 0” or at the beginning of Year 1 that are not discounted. All other cash flows are discounted using the discount rate at the end of the year. PV calculations are performed for each total cost and benefit estimate. NPV calculations in the summary tables are the sum of the initial investment and the discounted cash flows in each year. Sums and present value calculations of the Total Benefits, Total Costs, and Cash Flow tables may not exactly add up, as some rounding may occur.
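As a concrete illustration of this convention, with made-up cash flows and a hypothetical 10% discount rate (not figures from this study): the initial investment sits at time 0 undiscounted, while each year's net cash flow is discounted at year end.

```python
# Hypothetical example of the TEI discounting convention: the initial
# investment at "time 0" is not discounted; each later cash flow is
# discounted at the end of its year. All figures here are made up.
discount_rate = 0.10
initial_investment = -100_000                      # time 0, undiscounted
yearly_net_cash_flows = [60_000, 60_000, 60_000]   # end of years 1-3

pv_flows = [cf / (1 + discount_rate) ** year
            for year, cf in enumerate(yearly_net_cash_flows, start=1)]
npv = initial_investment + sum(pv_flows)

print(round(sum(pv_flows)))  # 149211  (three-year PV of the benefits)
print(round(npv))            # 49211   (NPV including the initial cost)
```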

Compliance-proof your identity and access management program

Keep your financial data safe and your company compliant 

Safeguarding the confidentiality of corporate information and customers’ personal financial data is more than a best practice. Worldwide, it’s something financial institutions are required to do. It’s a prerequisite to compliance with increasingly stringent legal requirements and industry-driven mandates. 

That’s because any lapse in security that allows cybercriminals to commit fraud or theft can devastate consumers and businesses alike. 

Specifically because the banking and financial services industry (BFSI) is custodian of the crucial financial and personal data of hundreds of millions of people, it is one of the most prized targets for cybercriminals. According to figures compiled by IBM Managed Security Services, the financial services sector moved from the third most-attacked industry in 2015 (behind healthcare and manufacturing) to the first in 2016.

This white paper provides BFSI security managers with guidance in understanding and complying with access management standards unique to this business sector, and explains how access management can ease regulatory compliance burdens. The paper also explains how security measures can be business enablers both internally (for employees and business partners) and externally (for current or prospective customers or clients) by providing user-centric authentication options that connect users smoothly to the data and services they need.

“Nearly three-quarters of CISOs we surveyed said that intrusions resulted in significant operational disruptions over the past two years.”

Access management as a modern legal and business necessity

For financial institutions, issues of access are paramount, especially in organizations spanning dozens of systems, thousands of employees, hundreds of thousands of customers and millions of stored records. In fact, attacks such as phishing, which are aimed directly at users rather than at technology, along with the actions of inadvertent actors, such as employees who unwittingly opened the door for an attack, made up the bulk of attacks on the financial services industry in 2016.

This underscores the importance of identity and access management (IAM) in the security environment for any organization that manages customers’ money, or stewards valuable financial information such as insurance valuations or stock holdings. 

Consider that financial institutions’ point-of-sale systems have become one of the main targets of organized crime and cyber terrorists. Beyond its business interest in protecting its own assets and reputation, the BFSI sector is also held to finance-specific governmental regulations that reflect the value financial assets represent to the public. These include European Union (EU) requirements such as those found in the Revised Payment Services Directive (PSD2), as well as US laws such as the Sarbanes-Oxley Act (SOX) and industry-formulated data privacy schemes such as the Payment Card Industry Data Security Standard (PCI DSS).

Centralizing the identification of authorized users, and successfully managing each legitimate user’s access to computing or data resources (including personal information), are strongly related challenges. An infrastructure that includes IAM capabilities can help financial institutions deal with both of these business needs and decrease the complexity of managing security across systems and user populations.

The size of the IAM market in the banking, financial services and insurance industries was estimated at nearly USD3 billion in 2016—with a projected compound annual growth of 8.7 percent through 2021.

Revised Payment Services Directive: High-impact API requirements

Adopted in October 2015 by the European Parliament to extend the Payment Service Directive of 2009, the Revised Payment Services Directive (PSD2) requires enterprises that do business in Europe (even if based elsewhere) to focus on standardizing, integrating and improving payment efficiency. PSD2 requirements, which become effective in 2018, are meant to protect consumers’ online payments, promote the development and use of new online and mobile payment systems, and reduce the risks of cross-border payment services.  

PSD2 affects many kinds of payment-industry participants. It requires account service providers to make certain kinds of account information available to registered third-party providers by means of application programming interfaces (APIs), and to allow those parties to initiate payments from a specified account. By mandating access to customer accounts, PSD2 expands the ways that non-traditional financial providers can provide consumers with financial services. PSD2 introduces guidelines for risk assessment and strong customer authentication (SCA) to ensure that requiring this level of openness from banks does not jeopardize consumers’ security or trust in the financial system. While PSD2 does not require a standardized API specification for these interactions, concurrent mandates such as the Open Banking initiative in the UK have arisen to define and support the development of standard APIs. The API requirements of PSD2 and Open Banking present opportunities to deliver value-added services to customers, but for every institution they present technical challenges, including:

  • Provisions for account-owner consent, allowing third-party  providers to request account information and payment initiation services from account owners
  • The need to provide SCA and risk detection across channels and via third-party providers without compromising end-user experience and adversely impacting payment activity
  • Conflicts and lack of clarity around API specification and interoperability
  • A security-first approach with wide-ranging capability requirements encompassing authentication, confidentiality, fraud detection and adherence to regulatory technical standards
  • Monitoring service levels and performance to help ensure that APIs meet existing service level agreements (SLAs)

“PSD2 is designed to protect consumers and promote innovative payment systems.”
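The consent provision in the first challenge above can be pictured with a toy model: a third-party provider receives a token only after the account owner grants scoped consent, and that token authorizes exactly those scopes. This is purely illustrative Python; real PSD2/Open Banking APIs define their own consent flows, token formats and SCA steps.

```python
# Toy in-memory model of the PSD2 consent pattern: a third-party
# provider (TPP) may act on an account only within scopes the account
# owner has explicitly granted. Purely illustrative, not a real API.
import secrets

class ConsentStore:
    def __init__(self):
        self._tokens = {}  # token -> (account_id, allowed scopes)

    def grant(self, account_id, scopes):
        """Account owner approves a TPP request; returns an access token."""
        token = secrets.token_hex(16)
        self._tokens[token] = (account_id, frozenset(scopes))
        return token

    def authorize(self, token, account_id, scope):
        """Check that the token covers this account and this scope."""
        granted = self._tokens.get(token)
        return (granted is not None
                and granted[0] == account_id
                and scope in granted[1])

store = ConsentStore()
token = store.grant("acct-42", {"read:balances"})
print(store.authorize(token, "acct-42", "read:balances"))    # True
print(store.authorize(token, "acct-42", "initiate:payment")) # False
```

In a real deployment the token would be issued by the bank's authorization server (typically via OAuth 2.0) and paired with strong customer authentication.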

Sarbanes-Oxley Act: Financial record keeping and reporting

The Sarbanes-Oxley Act of 2002 (SOX) is a US federal law enacted in response to major corporate and accounting scandals in which corporate financial information was manipulated, withheld or misused. SOX mandates that covered corporations (including all US company boards, management and public accounting firms) ensure that their financial data is preserved uncorrupted for government auditors. Its provisions dictate that adequate controls be implemented, tested and documented for financial reporting and governance.

IAM can help organizations ensure compliance by limiting access to sensitive records; a solution with built-in reporting capabilities can make it easier to document user interactions with data.

A strong access management platform can help IT and security managers comply with the SOX mandate by:

  • Assigning and controlling user access rights
  • Enforcing segregation-of-duties policies
  • Adjusting access rights when job function changes, providing efficient, timely and secure role-based adjustments, and revoking user access upon termination, helping reduce the risk that formerly legitimate access will be turned into unauthorized use
  • Cost-efficiently managing details of user accounts (including identity, access and authorization) using centralized controls
  • Producing automated reports to meet SOX requirements for timely, complete and accurate compliance reports

SOX noncompliance can trigger criminal penalties, as well as civil penalties ranging as high as USD15 million.

PCI DSS: Industry standards and best practices for card data

The Payment Card Industry Data Security Standard (PCI DSS) was first released in 2004 by a coalition of major credit card providers, with the goal of enhancing cardholders’ security and facilitating the adoption of consistent data security measures among these card issuers. The compliance details have evolved with a series of updates reflecting changing security risks and leaps in credit card technology, but the core principle has remained consistent: credit card issuers and merchants must minimize the risks to cardholder data by employing the best practices specified in the regularly updated standard, several of which have implications for access management. The practices outlined by the PCI DSS are backed by an incentive to comply: an entity that is compromised while noncompliant is subject to fines.

The PCI DSS specifies 12 requirements for organizations that handle credit cards, such as assigning a unique ID to each person who has access to the credit processing system.

A strong access management platform can help IT and security managers comply with PCI DSS requirements by:

  • Assigning and controlling user access rights
  • Enforcing segregation-of-duties policies
  • Adjusting access rights when job function changes, providing efficient, timely and secure role-based adjustments, and revoking user access upon termination, helping reduce the risk that formerly legitimate access will be turned into unauthorized use
  • Cost-efficiently managing details of user accounts (including identity, access and authorization) using centralized controls
  • Producing automated reports to meet PCI DSS requirements for timely, complete and accurate compliance reports

The costs of a data breach include customer turnover, customer acquisition activities, reputation losses and diminished goodwill.

Enable digital business with IAM solutions from IBM

An integrated approach to IAM and systems access isn’t just about regulatory compliance. An effective IAM approach can help organizations use security as a driver for business, giving users simple, flexible access to services, and helping assure that their information is secure. Innovations such as multi-factor authentication (MFA), federation and fraud protection can help financial institutions meet their access management challenges, while still delivering superlative user experiences. These three capabilities can be united with a well-implemented IAM solution such as IBM Security Access Manager. With a robust IAM system in place, your security team can implement:

  • MFA: To achieve greater security than is possible with a single password, MFA requires that authentication be based on more than one kind of security credential, such as physical tokens or biometric identifiers. With MFA, an intruder, even an insider, is unlikely to be able to replicate or possess the necessary identifiers. IBM Security Access Manager can be flexibly configured to use MFA. With the optional integration of IBM Verify mobile security software, IBM Security Access Manager can be a launching point for secure push notifications sent to users’ mobile devices.
  • Federation: Federation extends authentication—whether with conventional login/password pairs or MFA—across divisions of your business (such as banking and insurance products), so logins can be securely shared between services, transparently to users.
  • Fraud protection: IBM Security Access Manager can integrate with the fraud detection features of IBM Trusteer® Fraud Protection Suite to spot both insider threats and outside attackers. With a tuned alert and response system, suspicious activity can trigger associated actions in IBM Security Access Manager, and these actions can be tiered for quick, unobtrusive response.
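As a generic illustration of the MFA idea (not IBM Security Access Manager's actual mechanism), many second factors boil down to a time-based one-time password per RFC 6238: the server and the user's device share a secret and derive a short-lived code from the current time, so a stolen static password alone is useless.

```python
# Minimal RFC 6238-style TOTP generator (SHA-1, 30-second steps),
# shown only to illustrate the one-time-password idea behind many
# MFA factors; production systems use vetted libraries and key storage.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestamp: int, step: int = 30) -> str:
    """RFC 6238 time-based one-time password."""
    return hotp(secret, int(timestamp) // step)

# RFC 6238 test vector: at Unix time 59, the 6-digit SHA-1 TOTP is 287082
print(totp(b"12345678901234567890", 59))  # 287082
```

The printed value matches the RFC 6238 reference test vector truncated to six digits.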

Users at risk: Keeper Security recently analyzed a set of 10 million compromised accounts and found that nearly 17% of accounts were guarded with the password “123456.”

For more information

To learn more about IBM Security solutions, including IBM Security Access Manager, please contact your IBM representative or IBM Business Partner, or visit: ibm.com/security

About IBM Security solutions

IBM Security offers one of the most advanced and integrated portfolios of enterprise security products and services. The portfolio, supported by world-renowned X-Force research and development, provides security intelligence to help organizations holistically protect their people, infrastructures, data and applications, offering solutions for identity and access management, database security, application development, risk management, endpoint management, network security and more. These solutions enable organizations to effectively manage risk and implement integrated security for mobile, cloud, social media and other enterprise business architectures. IBM operates one of the world’s broadest security research, development and delivery organizations, monitors 15 billion security events per day in more than 130 countries, and holds more than 3,000 security patents.

Additionally, IBM Global Financing provides numerous payment options to help you acquire the technology you need to grow your business. We provide full lifecycle management of IT products and services, from acquisition to disposition. For more information, visit: ibm.com/financing

Deliver a competitive customer experience with automated workflows

Your employees perform tasks, make decisions and take action in a variety of workflows every day

Whether you’re looking to improve customer-facing workflows or internal processes, each one requires different steps to execute or guide different activities. 

These varied activities present three challenges:

  1. How do you optimize and scale activities in the most accurate, consistent and responsive way to satisfy customers? 
  2. How do you find a solution that can meet diverse needs to handle less-structured, case-centric activities?
  3. How do you measure performance and determine what needs improvement?

IBM Business Automation Workflow simplifies workflows for virtually any business style

IBM® Business Automation Workflow helps you easily and collaboratively discover new ways to automate and scale work by combining business process and case management capabilities. With these combined automation capabilities, you can:

  • Create and manage workflows from process models. 
  • Simplify complex tasks to reduce costs and execution time. 
  • Create content-initiated workflows, so an event triggers a workflow. 
  • Escape paper-heavy or spreadsheet-based workflow organization. 
  • Reconfigure workflows with minimal IT involvement for flexibility. 
  • Reuse workflow components when building parallel processes. 
  • Document actions, content and data to help prepare for audits. 
  • Use built-in reporting, auditing and governance to monitor real-time performance and compliance. 
  • Meet changing business needs with a component-based solution.
  • Gain access from the public cloud.
  • Quickly provision with immediate access.
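To make the "content-initiated workflow" idea above concrete, here is a toy sketch of the pattern in Python: an arriving document (the event) triggers an ordered series of steps. It is purely illustrative; Business Automation Workflow models such flows graphically rather than in code, and the step names here are invented.

```python
# Toy content-initiated workflow: a document-arrival event triggers
# an ordered pipeline of steps, each transforming a shared payload.
class Workflow:
    def __init__(self, name):
        self.name = name
        self.steps = []

    def step(self, fn):
        """Register a step; usable as a decorator."""
        self.steps.append(fn)
        return fn

    def run(self, payload):
        """Execute the steps in order, threading the payload through."""
        for fn in self.steps:
            payload = fn(payload)
        return payload

onboarding = Workflow("account-onboarding")

@onboarding.step
def validate(doc):
    doc["valid"] = bool(doc.get("customer_id"))
    return doc

@onboarding.step
def route(doc):
    doc["queue"] = "review" if doc["valid"] else "exceptions"
    return doc

# A new content event (a document arriving) triggers the workflow:
result = onboarding.run({"customer_id": "C-1001"})
print(result["queue"])  # review
```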

Success story

Financial firm consolidates content for efficiency

A large fund administrator had content related to various accounts scattered across its data storage and administrative interfaces rather than readily accessible within each account. With the help of IBM Case Manager, now part of the IBM Business Automation Workflow, the content was consolidated, and each account was simplified with the creation of a unified electronic record.

Your benefits

IBM Business Automation Workflow

IBM Business Automation Workflow helps you: 

  • Deliver better outcomes. 
  • Create and monitor competitive front- or back-office workflows.
  • Automate operations at scale. 
  • Create unified data records.
  • Improve knowledge-worker interactions. 
  • Improve the overall customer experience.
  • Better handle governance requirements. 
  • Improve the governance of workflows.

Deployment options

Choosing an environment 

Choose one or more of the following environments that best meet your needs:

  • Your cloud. Deploy and run the platform in the cloud of your choice with virtualized containers using Kubernetes or Terraform with IBM Business Automation for Multicloud.
  • IBM Cloud™. Get started quickly with the IBM Business Automation Workflow on Cloud SaaS offering, fully managed by IBM with flexible licenses. 
  • On premises. Gain access to on-premises workflow capabilities with Business Automation Workflow.

The journey to automation

The scale you need to compete

The IBM intelligent automation platform extends human work with digital labor using one or more automation capabilities.

Start small with one capability, then mix and match capabilities as business needs evolve.

IBM Business Automation Workflow helps you: 

  • Reduce errors and activity time in workflows using robotic process automation (RPA) bots. 
  • Reduce workflow business-logic complexity and change rules-based decisions faster.
  • Improve case work productivity by increasing knowledge-worker understanding of unstructured content.

Eight great reasons to adopt hybrid cloud with IBM Power Systems

The advantages of adopting hybrid cloud with IBM Power Systems

IBM Power Systems plus IBM Cloud technology offers users a host of valuable business benefits, from scaling out rapidly to full transparency of costs and the ability to test and develop new projects without financial or operational risk.

There is no one-cloud-fits-all option. While the possibilities are endless, the cloud journey can be daunting for enterprises that have unique regulatory and data requirements and extensive IT investments in their on-premise infrastructure, and that are currently running anywhere from five to 15 different mission-critical workloads. This is why businesses need to consider a hybrid cloud approach, which helps them to build, deploy and manage applications and data running on-premise, in private clouds and in public clouds.

With a combination of innovative technology and industry expertise – underpinned with security and a focus on open solutions – IBM Cloud is already helping to move some of the world’s largest enterprises into the next chapter of their cloud journey. Now, users of IBM Power Systems can more easily take part in their own hybrid cloud journey with IBM Power Systems Virtual Servers on IBM Cloud. 

IBM Power Systems Virtual Servers on IBM Cloud deliver IBM Power9 virtual machines, with IBM AIX and IBM i, on the IBM Cloud public infrastructure-as-a-service platform. It’s the best of IBM Power and the best of IBM Cloud in one convenient, economical, self-managed, pay-as-you-use environment.

There are significant business benefits driving this IBM Power Systems on IBM Cloud hybrid approach, which address the different challenges organisations face as they expand their IT infrastructure beyond on-premise to meet the demands of the digital economy.

1. A pathway to hybrid cloud

Users of IBM Power Systems have historically relied on in-house infrastructure for the raw performance of Power-based processors. But they have been held back from accessing higher levels of flexibility, agility and efficiency because of obstacles to on-premise growth, such as enormous capital outlay, management hassles and risk. IBM Power Systems Virtual Servers on IBM Cloud now offer an opportunity to realise those benefits, and to help ensure a more seamless and smoother path to cloud based on a hybrid cloud decision framework.

IBM Power Systems users can enjoy fast, self-service provisioning and flexible management both on-premise and off-premise, with access to IBM Cloud services. A pricing model based on pay-as-you-use billing provides full transparency of costs and ensures that organisations know exactly what they are paying for.

Hybrid cloud offers many ancillary benefits – described below – but one that features heavily in any business case is the ability to scale up and out to meet demand quickly and economically.

“Organisations can turn on provisioning and get capacity instantly with faster time to value. It is about making IT proactive. It makes sense for organisations that want to modernise their applications to be better equipped for a hybrid multicloud environment,” says Meryl Veramonti, portfolio marketing manager for IBM Cloud.

“IBM Power Systems users have relied on their on-premise infrastructure and their own datacentres and IT setups. They want to modernise and expand their cloud capabilities, but do not want to pay a huge upfront cost for full migration. Now they don’t have to, because they have a direct path.”

IBM continues to innovate with capabilities that lend themselves not only to a hybrid cloud model but to a hybrid multicloud model. With the latest tooling around IBM’s multicloud management, users can develop apps once and run them anywhere on an open platform architected for their choice of private and public clouds.

2. Modernising infrastructure while maintaining expertise

A proactive and modern IT infrastructure is within easy reach. The migration path to hybrid cloud offers IBM Power Systems users a cost-effective and efficient on-ramp to the cloud in a way that mitigates risk, because they don’t have to change the operating system or the environment with which they are familiar.

“They can keep the operating system and the operating environment and dip their toes in the cloud without a huge upfront cost because there is a pay-as-you-go model,” says Veramonti. “There is a head of steam for hybrid cloud because organisations are aware of the benefits, but they want to know exactly how it will work for them and whether they can still do what they do on-premise.”

An organisation that does not want to shift its entire environment to the cloud, but does want to explore how cloud can benefit its business, is ideally served by IBM Cloud’s migration path to hybrid cloud.

This level of assurance that what works on-premise will work the same way in the cloud is key for IBM’s AIX and IBM i client base. 

“IBM Power Systems have a very specific infrastructure: the build is unique, and moving or extending some of their environment to the IBM Cloud is just a way to build out from on-premise. It presents a similar environment to their home environment,” says José Rafael Paez, worldwide offering manager for IBM Systems.

3. Cost-effective, low-risk capacity

Organisations that have run into obstacles regarding capacity and wish to use cloud to expand without any costly upfront investment in more equipment or a huge upgrade programme can now reap the rewards. 

Choosing IBM Cloud makes sense for IBM Power Systems users that understandably want to avoid unnecessary risk in migrating critical IT infrastructure. 

“Historically, moving on-premise infrastructure to the cloud is not an easy switch and is a big learning curve,” says Paez. “The ease of transformation and the knowledge that the migration will work compared with choosing a competitor’s cloud platform, which can introduce risk, is a top business benefit for choosing IBM Cloud for risk-averse people.” 

Veramonti cites the example of a manufacturing company that didn’t want to spend more money on outdated on-premise equipment. “They wanted their infrastructure to work with the cloud to gain more power and memory without risking porting everything over to a new environment,” she says.

4. Effective, lower-cost maintenance

Another attractive proposition for the manufacturing firm in moving to hybrid cloud was the reduction in maintenance costs for workloads running in the IBM Cloud.

“The manufacturer was in charge of maintaining everything on-premise, but in the cloud IBM takes care of maintenance because it is all off-premise,” says Veramonti.

The ease of management, as well as the reduction in associated costs, means that an organisation can rechannel IT resources to focus on innovation rather than keeping the lights on.

“Organisations that are completely deployed on-premise have to spend a lot of money on hardware, electricity, cooling and operations teams to keep information running with enterprise uptime. Clients using IBM Cloud’s tier-one datacentre will have access to IBM capability and backbone and a high-end infrastructure,” adds Paez.

As well as the flexibility that comes with the guaranteed high performance of IBM Cloud without the maintenance headache, organisations are assured that migration offers an opportunity to make their data more secure.

5. Security and business continuity

Many organisations are rightly concerned about security and business continuity in the digital age, when data, the lifeblood of any organisation, must be made available to the business 24/7 and be protected from outages, cyber attacks and compromises. There are clear business benefits from strengthening disaster recovery by moving to hybrid cloud, and IBM Power Systems users are keen to capitalise on the IBM Cloud for this reason.

“A cloud strategy for disaster recovery has minimal risk by ensuring two locations – one on-premise and one as backup in the cloud,” says Veramonti.

This is an important business advantage for all IBM Power Systems users, big and small. They can enhance business continuity planning and de-risk their on-premise environment.

“Organisations want geo-diversity and by deploying in IBM Cloud datacentres they can gain that diversity,” says Michael Daubman, worldwide offering manager for IBM Cloud Infrastructure Services, IBM Cloud and Cognitive Software.

Daubman points out that IBM Cloud has datacentres with the IBM Power Systems Virtual Server offering in the US, Germany, and soon in many other countries (including the UK in early 2020), which provides organisations with high availability and the opportunity to capitalise on a cloud-based disaster recovery strategy.

6. Development and testing on the latest technology

Another business benefit is that organisations gain access to up-to-date hardware technology, such as the latest IBM Power9 servers. If they want to develop and test software, this is an attractive proposition.

Developing and testing applications is fundamental for the future of any organisation and its ability to innovate. Understandably, many are hesitant about devoting finite on-premise resources to projects that have an inherent risk. A hybrid cloud strategy therefore makes economic sense. The business can use IBM Cloud to develop and test new projects without committing large-scale resources on something that is yet to be proven.

“Organisations can get access to new hardware and can develop and test in a flexible cloud model. Power10 will come quickly and when it does we’ll leverage it in the cloud,” says Daubman.

From a skills perspective, organisations can also benefit from the best minds behind IBM Cloud. “Who knows Power10 better than the people who build the hardware platform?” he adds.

Operational risk is reduced and organisations get access to best-practice architecture and the flexibility provided by IBM Cloud.

“Flexibility is assured because provisioning is managed through a set of application programming interfaces and there is no need to buy and drop in new hardware,” explains Daubman.
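In rough terms, API-driven provisioning means a deployment is described as data and submitted to a service endpoint, rather than being racked as hardware. The sketch below assembles such a request body; the endpoint, field names and values are illustrative assumptions for the example, not IBM’s published API:

```python
import json

# Hypothetical sketch only: the endpoint and field names below are assumptions
# for illustration, not IBM's published Power Systems Virtual Server API.
API_BASE = "https://region.power-iaas.example.com"  # assumed regional endpoint

def build_provision_request(name, cores, memory_gb, image, system_type="s922"):
    """Assemble a JSON body for a hypothetical 'create virtual server' call."""
    return {
        "serverName": name,
        "processors": cores,     # fractional cores reflect PowerVM micro-partitioning
        "memory": memory_gb,     # in GB
        "imageID": image,        # an AIX or IBM i stock image, or a client-imported one
        "sysType": system_type,  # assumed Power9 machine-type identifier
    }

body = build_provision_request("dev-aix-01", cores=0.5, memory_gb=8, image="aix-7.2")
print(json.dumps(body, indent=2))
# A real deployment would POST this body to the provisioning endpoint with an
# IAM bearer token; de-provisioning is a matching DELETE against the instance ID.
```

Because the request is just data, the same call can be scripted, versioned and repeated, which is what removes the need to “buy and drop in new hardware”.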

The choice of cloud provider is critical to the success of any business pursuing a hybrid cloud strategy; IBM Power Systems users can be reassured that by using their existing relationship with IBM, they have a quality cloud provider with a global reach of datacentres, skills and network.

7. Transparent pay-as-you-go pricing

Risk mitigation in IBM Cloud extends to an openness regarding price. 

“IBM Power Systems on IBM Cloud has transparent pricing. There is no risk or upfront cost. It is 100% owned and operated by IBM Cloud. We have global datacentres across the cloud and data never leaves our hands,” says Veramonti.

Pricing models are not only transparent, but can be customised for individual organisations to suit their specific needs.

“Organisations are charged hourly and billed monthly. They can turn on or turn off cloud resources depending on their needs – for example, Black Friday for a retailer or a seasonal spike in demand for Christmas where there is an influx of data that requires backup. The applications can then be turned off after the holiday season,” says Veramonti.

In the digital economy, this level of flexibility is an especially attractive business benefit. “You do not have to pre-buy capacity for your peak, which makes sense from a cost, management and operational perspective,” says Daubman.
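The point about not pre-buying peak capacity can be sketched with back-of-the-envelope arithmetic. All rates and sizes below are invented for the illustration; the shape of the calculation, not the numbers, is what matters:

```python
# Illustrative arithmetic only: the hourly rate and capacity figures are
# invented for this sketch, not IBM list prices.
HOURLY_RATE_PER_CORE = 0.25   # assumed $/core-hour
BASELINE_CORES = 8            # steady-state need for 11 months of the year
PEAK_CORES = 16               # doubled for a one-month seasonal spike
HOURS_PER_MONTH = 730

def yearly_cost_pay_as_you_use():
    """Pay only for what runs: baseline most of the year, peak for one month."""
    normal = 11 * HOURS_PER_MONTH * BASELINE_CORES * HOURLY_RATE_PER_CORE
    peak = 1 * HOURS_PER_MONTH * PEAK_CORES * HOURLY_RATE_PER_CORE
    return normal + peak

def yearly_cost_prebought_peak():
    """On-premise sizing must cover the peak all year round."""
    return 12 * HOURS_PER_MONTH * PEAK_CORES * HOURLY_RATE_PER_CORE

pay = yearly_cost_pay_as_you_use()
pre = yearly_cost_prebought_peak()
print(f"pay-as-you-use: ${pay:,.0f}  pre-bought peak: ${pre:,.0f}  saving: ${pre - pay:,.0f}")
```

With these assumed figures, sizing for the peak all year costs nearly twice as much as metering up for a single peak month.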

8. Support from IBM Cloud

Demand for skilled IT professionals is increasing as business becomes more data driven. By adopting a hybrid cloud strategy, IBM Power Systems users can access IBM Cloud’s skilled teams around the globe, as well as state-of-the-art technology to meet all their needs – from scaling out quickly to managing costs, and being able to test and develop without exposure to financial and operational risk.

These benefits make a compelling case for IBM Power Systems users to adopt a hybrid cloud strategy to future-proof their business. Increasingly, keeping all systems on-premise is becoming a business risk. 

Migrating to IBM Cloud makes sense for IBM clients that want to:

  • Grow their business;
  • Deploy workloads where and when they want them in an IBM Cloud datacentre; 
  • Deploy a resilient, cloud-based disaster recovery strategy;
  • Choose a deployment that is fully customisable;
  • Adopt cloud services to gain access to all the skills, services and added value that the global IBM Cloud network can provide.

Three core scenarios for migrating IBM Power Systems workloads to the IBM Cloud


There are strong business cases for users of IBM Power Systems to make the move to the cloud, especially regarding business continuity and disaster recovery provision, testing and development, and application modernisation.

There is an increasingly compelling business case for organisations to leverage the public cloud for a hybrid environment. For IBM Power Systems users, that path has become even more attractive since the launch of IBM Power Systems Virtual Servers on IBM Cloud, offering a route to run IBM AIX and IBM i workloads easily in the cloud that is cost effective, efficient and low risk.

When considering why and how to migrate, organisations must look at the opportunities and the practicalities of implementation.

Why migrate to IBM Cloud?

For IBM Power Systems clients that have typically relied on a wholly on-premise infrastructure, IBM Power Systems Virtual Servers on IBM Cloud provide a fast and reliable method for spinning up resources in the public cloud. With a pricing model that avoids capital expenditure, it is easy to scale out rapidly, and paying only for what you use is an attractive proposition for organisations that want to test, develop and flexibly grow infrastructure utilisation without having to buy new equipment.

IBM Power Systems Virtual Servers deliver IBM AIX or IBM i with IBM Power9 processor-based virtual machines on IBM Cloud. It is a multi-tenant, self-managed Power-as-a-service offering in IBM Cloud with consumption-based operational expenditure pricing.

IBM Cloud Virtual Server environments deliver full infrastructure-as-a-service capabilities. IBM Power Systems on IBM Cloud instances are metered hourly and billed monthly in a pay-as-you-use subscription model. Clients receive self-service virtual server lifecycle management with a pool of compute, memory, storage and network infrastructure. Organisations access the cloud through client-owned IBM Cloud resources and bring their own operating system (OS) images or leverage available OS images.

A further advantage comes for organisations with limited internal skills and resources looking to explore a top-tier hybrid cloud. IBM Cloud manages and supports all the state-of-the-art infrastructure layers up to the operating system, which gives clients the peace of mind that their data and business continuity are in safe hands. 

By examining three of the main use cases driving migration – disaster recovery, software development and testing, and production application hosting – organisations can work with IBM Cloud to employ the latest best practices for a successful project.

IBM Cloud for disaster recovery

One IBM client, a furniture retailer based in Florida, decided to migrate to IBM Cloud to boost its business continuity and disaster recovery capability.

“The company was being hit more and more by weather events and needed to strengthen disaster recovery,” says Michael Daubman, worldwide offering manager for IBM Cloud Infrastructure Services, IBM Cloud and Cognitive Software.

By choosing IBM Power Systems Virtual Servers on IBM Cloud, the retailer did not have to purchase an additional data centre to supplement its on-premise deployment, and gained the agility of a public cloud within the controlled and secure environment of a private cloud.

“This offering was designed in the cloud exactly to the best-practice standards of our clients’ on-premise infrastructure,” says Daubman.

The cloud architecture solution was set up with fibre-attached storage, a dual virtual I/O server (VIOS) system for virtual storage redundancy on PowerVM as the hypervisor, and DB2 data management products.

“It was super important to have an enterprise solution,” says Daubman. 

Provisioning out onto the cloud meant the retailer could scale up and grow an OS image, paying only for what it needs as it grows. The architecture natively leverages Live Partition Mobility to avoid outages, moving AIX and IBM i workloads from one system to another as required, maintaining a highly available solution.

Daubman highlights how, by taking the required best-practice on-premise architecture and replicating it in the cloud, the retailer was given peace of mind, and the knowledge that all its enterprise software would remain fully supported. “The solution is a cloud-consumable version of the industry best practice for on-premise systems. It is an architecture for production enterprise applications,” he says.

The two critical components of the implementation are leveraging the PowerVM hypervisor to provide a secure and scalable virtualisation environment for AIX and IBM i workloads, and providing fibre-attached enterprise-scale IBM Cloud storage.

Daubman points out that network-attached storage is very common in cloud deployments, but it introduces latency, so the retailer required a different solution for enterprise power. The fact that many enterprise software providers make support for their applications conditional on similar direct-attached storage was a huge positive factor for the furniture retailer’s implementation.

“Being able to run software in a supported capacity in the cloud is critical. Fibre-attached storage improves performance, and for a lot of software vendors, it is a requirement,” he says.

IBM Power Systems users can be assured that mission-critical applications are protected and future-proofed with IBM hybrid cloud. Data is copied to the cloud and can be accessed by users around the globe.

“Data can be secured faster and distributed faster. IBM Cloud offers resilience in the cloud, and organisations no longer have to add another datacentre in their on-premise environment. They can meet or exceed the recovery time objective and recovery point objective of their disaster recovery plan,” says Meryl Veramonti, portfolio marketing manager for IBM Cloud.

Disaster recovery might be the initial business case for adopting cloud, but according to Daubman, it often leads to greater uptake for other uses. “Disaster recovery is often a first step in the journey for a client,” he says.

He points to the fact that the retailer’s intention is to build out for production deployment in the cloud and to link with disaster recovery, becoming a more cloud-focused business.

IBM Cloud for development and testing

Organisations that want to migrate IBM Power Systems workloads to IBM Cloud for software development and testing now have an easy route to implementation because they can turn on and switch off resources quickly, which provides flexibility and makes economic sense.

They can gain enterprise systems as a service for fast, low-risk development and test on the latest IBM Power Systems platforms.

“Our offering allows development teams to test new workloads in the cloud. They can provision an instance and turn it off without thinking about nuances and worries. They just spin into the cloud and payment is metered by the hour. It is very affordable testing,” says José Rafael Paez, worldwide offering manager for IBM Systems.

According to Paez, the biggest headache for an organisation around testing in an on-premise environment is caused by the limited capacity available. They will need a certain amount of capacity for development and testing, but often cannot share capacity with the mission-critical workloads that run the business and take priority. For this reason, development and testing are often sectioned off, which comes at a cost.

“Internal management of assets often goes back and forth, with teams trying to achieve just enough capacity for testing,” says Paez.

Access to a sandbox environment in the IBM Cloud to test new software takes these worries away, and also provides links to the IBM Cloud marketplace and applications.

“A common trait of IBM Power Systems clients is that they are risk-averse. They won’t upgrade to the latest version of AIX unless they need to because they don’t want to mess up mission-critical applications. By providing a sandbox testing environment, they can test new versions of OS and new IBM Power Systems boxes in a safe place in the cloud,” says Paez. “They have a separate space for something the company considers risky, which offers a roadmap into an upgrade. They can test new versions of the AIX operating system and the hardware and add new applications from the cloud marketplace in a safe place.”

The temporary sandbox environment for testing, and its use as a step towards deploying production applications on IBM Power Systems Virtual Servers on IBM Cloud, meets the needs of risk-averse clients who want a remote environment away from critical workloads to test updates and changes. The flexible consumption model is cost-efficient and a stress-free way to evaluate, plan and test next-generation hardware or a new version of the operating system.

With a dedicated link to on-premise connectivity, and IBM Cloud Object Store providing optional backup and custom image hosting, organisations can have peace of mind that testing and developing on IBM Power Systems Virtual Servers on IBM Cloud is the right move. As well as being able to test hardware before a major refresh, such as Power9, and test complex architecture changes, it also offers an initial step into an organisation’s hybrid cloud journey.

IBM Cloud for hosting production applications

Using IBM Cloud for AIX and IBM i production application hosting is the third major use case where organisations can leverage the flexibility of the cloud to deploy core business applications.

Organisations can run an enterprise-level workload in the IBM Cloud if they want to modernise their IT estate in a risk-averse manner.

“If they run into obstacles over capacity, it can help without having to invest in an on-premise upgrade,” says Veramonti.

Daubman says IBM Cloud gives IBM Power Systems users access to the latest hardware, such as Power9 processor-based servers, and allows IBM Cloud to take over datacentre management below the operating system, for which many organisations do not have the skills.

The implementation process gives users the ability to have load-balancing capability as part of the architecture in IBM Cloud and to pursue a hybrid approach to IT. Organisations can burst capacity into IBM Cloud and not have to worry about management overheads.

“It gives organisations the flexibility between a concrete on-premise infrastructure and a flexible cloud,” says Paez.

He says the hybrid connection with the on-premise environment gives organisations a new level of management they may not be accustomed to.

“A positive experience of hybrid cloud with production application hosting pushes a lot of clients to pursue cloud,” says Paez.

Organisations can manage applications in whichever environment they want with IBM’s multicloud manager.

“A simple demonstration proves that if you have a cloud and on-premise environment, you can move workloads from one environment into the other,” says Paez.

As organisations gain experience of how IBM Power Systems Virtual Servers on IBM Cloud work, preconceptions about blocks and barriers associated with multiple environments are removed, and they are encouraged to expand and develop their hybrid cloud use. The ability to add additional capacity on the fly is particularly appealing to organisations that need to respond to a volatile and competitive landscape.

“They want to be able to cope with an influx of usage caused by seasonal spikes, new products, testing a new application and wanting to play around with that application, without the risk of doing it in a real-world scenario,” says Veramonti.

By increasing their cloud portfolio, clients can modernise legacy workloads and gain the reassurance of being able to access the latest IBM Cloud technologies and skills.

“Many organisations are challenged with skills and resources on-premise, and they are using the cloud more and more,” says Daubman.

IBM Cloud for flexible, transparent pricing

Another business bonus for IBM Power Systems users migrating to IBM Cloud comes from lower licence payments. Daubman highlights how licensing for the operating system is based on the exact resources you need at the time.

“You are not paying for licences for the whole machine – only what you need at a point in time. Operational expenses are reduced because you are not licensing a machine. It is a virtual machine and you pay based on the processing power you are using,” he says.
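A minimal sketch of that per-core licensing arithmetic, with invented figures rather than actual AIX or IBM i licence prices:

```python
# Illustrative figures only: not actual AIX or IBM i licence prices.
LICENCE_PER_CORE_MONTH = 120.0  # assumed $/core-month for the OS licence
MACHINE_CORES = 24              # cores in a whole physical Power machine
VM_CORES = 2.5                  # entitled capacity the virtual machine actually uses

whole_machine = MACHINE_CORES * LICENCE_PER_CORE_MONTH   # licensing the full box
virtual_machine = VM_CORES * LICENCE_PER_CORE_MONTH      # licensing only the entitlement
print(f"whole machine: ${whole_machine:,.0f}/month  VM entitlement: ${virtual_machine:,.0f}/month")
```

Under these assumptions, licensing only the virtual machine’s entitled cores rather than the whole machine cuts the monthly OS licence cost to roughly a tenth.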

Billing transparency allows organisations to budget and plan effectively. In the digital economy, where responsiveness is a prerequisite to success, being able to scale out into the cloud and subsequently de-provision instantly can save significant costs.

“Billing transparency lets organisations look and plan ahead. You don’t have to plan for all the resources you need today. You can double cores in November for Black Friday, and you don’t have to worry about having enough staff on-premise and calling people in during the holiday season,” says Daubman.

A clear path to the IBM Cloud

IBM Power Systems users now have a clear path to the cloud with the introduction of IBM Power Systems Virtual Servers for IBM Cloud. There are strong business cases to make the move, especially for business continuity and disaster recovery provision; testing and development; and application modernisation. These starting points can be used to explore further how an IBM Cloud hybrid focus can strategically help an organisation on its journey to digital transformation.

IBM Cloud’s global geo-diversity and expertise, with a guarantee of security and compliance in an end-to-end approach for the enterprise, are reassuring for IBM Power Systems users. Reliable, continuous security is provided for the client’s environment, and IBM Cloud provides support, management and delivery across the complete cloud environment, using IBM expertise and proven technology.

Reliability, performance and affordability give peace of mind to enterprises that are considering hybrid cloud. An organisation opting for IBM Power Systems Virtual Servers on IBM Cloud will soon discover how cloud can support its strategic direction towards a digital future.

Ovum Decision Matrix: Selecting a Cloud Platform for Hybrid Integration

Catalyst 

Digital business is driving a proliferation of applications, services, data stores, and APIs that need to be connected to deliver critical business processes. Integration is the lifeblood of today’s digital economy, and middleware is the software layer connecting different applications, services, devices, data sources, and business entities. This Ovum Decision Matrix (ODM) is a comprehensive evaluation to help enterprise IT leaders, including chief information officers (CIOs), enterprise/integration architects, integration competency center (ICC)/integration center of excellence (CoE) directors, and digital transformation leaders select a cloud platform provider best suited to their specific hybrid integration requirements. 

Ovum view

Ovum’s ICT Enterprise Insights 2018/19 survey results indicate a strong inclination on the part of IT leaders to invest in integration infrastructure modernization, including the adoption of new integration platforms. IT continues to struggle to meet new application and data integration requirements driven by digitalization and changing customer expectations. Line-of-business (LOB) leaders are no longer willing to wait for months for the delivery of integration capabilities that are mission-critical for specific business initiatives. Furthermore, integration competency centers (ICCs) or integration centers of excellence are being pushed hard to look for alternatives that significantly reduce time to value without prolonged procurement cycles.

Against a background of changing digital business requirements, IT leaders need to focus on revamping enterprise integration strategy, which invariably will involve the adoption of cloud platforms for hybrid integration, offering deployment and operational flexibility and greater agility at a lower cost of ownership to meet multifaceted hybrid integration requirements. With this report, Ovum is changing its nomenclature for defining middleware-as-a-service (MWaaS) suites for hybrid integration and, in future, we will be using the term “cloud platforms (or PaaS products) for hybrid integration” to refer to this market.

We follow the National Institute of Standards and Technology (NIST) specification for PaaS, according to which PaaS as a cloud service model should meet a range of characteristics, including:

  • on-demand self-service  
  • broad network access 
  • resource pooling  
  • rapid elasticity 
  • measured service.  

Merely delivering application and/or data integration capabilities via the cloud on a subscription basis does not amount to a PaaS provision for hybrid integration. Some cloud platforms or software components of a cloud platform included in this ODM might not be termed as PaaS according to NIST’s specification, which is why we use the term “cloud platform”.

User productivity tools and deployment flexibility are key characteristics of cloud platforms for hybrid integration that help enterprises respond more quickly to evolving digital business requirements. With DevOps practices, microservices, and containerized applications gaining popularity, IT leaders should evaluate the option of deploying middleware (integration platforms) on software containers as a means to drive operational agility and deployment flexibility. 

Key findings 

  • Integration is still predominantly done by IT practitioners, but IT leaders should consider “ease of use” for both integration practitioners and less-skilled, non-technical users, such as power users, when selecting integration platforms for a range of hybrid integration use cases.
  • The latest Ovum forecast reveals that integration PaaS (iPaaS) and API platform market segments are expected to grow at a compound annual growth rate (CAGR) of 59.7% and 61.7% respectively between 2018 and 2023, clearly the fastest growing middleware/PaaS market segments. 
  • The global iPaaS market is showing signs of saturation (not in terms of growth), and vendor offerings do not differ much in terms of technical capabilities. Key areas for iPaaS product development include support for deployment on containers, improvement in user experience (UX) for less-skilled, non-technical users, and machine learning (ML)-led automation of different stages of integration projects ranging from design and development to deployment and maintenance.  
  • PaaS for hybrid integration will significantly cannibalize the established on-premises middleware market, and by the end of 2019, Ovum expects at least 50% of the new spend (not including upgrades of on-premises middleware or renewal of similar licenses) on middleware to be accounted for by PaaS or cloud-based integration services. 
  • Major middleware and leading iPaaS vendors dominate this market, even though their routes to the development of a cloud platform for hybrid integration can be quite different. 
  • PaaS adoption in enterprises is for both strategic and tactical hybrid integration initiatives. IT leaders realize the significant benefits that cloud platforms for hybrid integration bring to the table in terms of greater agility in responding to business requirements and cost savings. 
  • iPaaS vendors have invested significantly in developing lightweight PaaS-style products for B2B/electronic data interchange (EDI) integration to support key EDI messaging standards, rapid trading partner onboarding and community management, and governance of B2B processes.
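To put the forecast growth rates above in context, a compound annual growth rate (CAGR) multiplies year on year, so Ovum’s stated figures imply roughly a tenfold market expansion over the period:

```python
# A CAGR compounds multiplicatively, so the five-year (2018-2023) multiple
# implied by a given rate is (1 + rate) ** 5.
def growth_multiple(cagr, years):
    """Total growth factor implied by a compound annual growth rate."""
    return (1 + cagr) ** years

ipaas = growth_multiple(0.597, 5)  # iPaaS segment, per the Ovum forecast
api = growth_multiple(0.617, 5)    # API platform segment, per the Ovum forecast
print(f"iPaaS: {ipaas:.1f}x  API platforms: {api:.1f}x over 2018-2023")
```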

Vendor solution selection 

Inclusion criteria

Ovum has closely tracked the emerging cloud platforms for hybrid integration vendor landscape over the last four years and we have used these observations as the baseline for inclusion/exclusion in this ODM. The criteria for inclusion of a vendor in this ODM are as follows:

  • The cloud platform(s) should deliver significant capabilities across two of the three technology assessment criteria groups: “cloud integration”; “API platform”; and “B2B and mobile application/backend integration”. 
  • There is substantial evidence that the vendor is interested in pursuing a progressive product strategy that helps ascertain product viability and applicability to a range of hybrid integration use cases. 
  • Middleware products are not “cloud washed” and individual components demonstrate essential cloud services characteristics, such as multitenancy, resource sharing, and rapid scalability, as well as allowing usage tracking and metering and supporting the enforcement of service-level agreements (SLAs).
  • The cloud platform(s) should have been generally available as of March 30, 2019. The vendor must have at least 50 enterprise (paid) customers using various components as of May 31, 2019. We did not want to leave out any vendor because of limitations related to significant revenue realization.
  • It should deliver enterprise-grade developer enablement and API-led integration capabilities, and an appropriate UX for less-skilled users (non-developers). 
  • At least the core middleware product should be architecturally coherent and product/component APIs should be available to support internal integration between different components of the middleware stack.

Exclusion criteria

A vendor is not included in this ODM if: 

  • The core middleware component provided by the vendor is restricted to API management, and the rest of the capabilities are delivered in partnership with other vendors. For this reason, specialized API management vendors that do not offer any substantial capabilities for other hybrid integration use cases were excluded from this ODM. This means that cloud based application and data integration capabilities are critical for inclusion in this ODM. 
  • The vendor is unable to commit required time and resources for the development of research to be included in this ODM. Some vendors, which otherwise would qualify for inclusion in this ODM, opted out of the evaluation exercise and were unable to submit the required level of information in response to the evaluation criteria spreadsheet by the cutoff date. (Jitterbit is the only vendor that qualified for inclusion but opted not to participate without citing any specific reason, and we decided to exclude it from this ODM). 
  • There is not enough evidence that the vendor is interested in expanding the features and capabilities to cater for the requirements of emerging use cases and exploiting new market trends.
  • There are indications that the vendor is struggling to grow its business and has partnered with middleware vendors to defend its position in the market, or the customer base is confined to only specific regions. 
  • The vendor did not feature in any of the analyst enquiries from enterprise IT leaders and users, and there were other indicators for a lack of investment and a dedicated strategy for middleware products. 

Ovum ratings

Market leader

This category represents a leading vendor that Ovum believes is worthy of a place on most technology selection shortlists. The vendor has established a commanding market position with its cloud platform for hybrid integration, demonstrating relatively high maturity, cohesiveness, good innovation and enterprise fit, and the capability to meet the requirements of a wider range of hybrid integration use cases, as well as executing an aggressive product roadmap and commercial strategy to drive enterprise adoption and business growth. In terms of scores, to be a leader in this ODM, a vendor must score 8 out of 10 on both the “technology” and “execution and market impact” assessment dimensions.

Market challenger

A cloud platform for hybrid integration vendor in this category has a good market position and offers competitive functionality and a good price/performance proposition and should be considered as part of the technology selection. The vendor has established a significant customer base, with its platform demonstrating substantial maturity, catering for the requirements of a range of hybrid integration use cases, as well as continuing to execute a progressive product and commercial strategy. Some vendors included in this category are “strong performers” in terms of technology assessment but did not achieve consistently high or good scores for the “execution and market impact” dimension, which is an essential requirement for achieving a “market leader” rating.

Market follower

A cloud platform for hybrid integration in this category is typically aimed at specific hybrid integration use cases and/or customer segment(s) and can be explored as part of the technology selection. It can deliver the requisite features and capabilities at a reasonable cost for specific use cases or requirements. This ODM does not feature any vendor in this category.

Market and solution analysis 

A major market shift has begun and will not slow down 

Hybrid integration platform

Ovum defines a hybrid integration platform as a cohesive set of integration software (middleware) products that enable users to develop, secure, and govern integration flows connecting diverse applications, systems, services, and data stores, as well as enabling rapid API creation/composition and lifecycle management to meet the requirements of a range of hybrid integration use cases. A hybrid integration platform is “deployment-model-agnostic” in terms of delivering requisite integration capabilities, be it for on-premises deployments, cloud deployments, or containerized middleware.

The key characteristics of a hybrid integration platform include: 

  • support for a range of application, service, and data integration use cases, with an API-led, agile approach to integration reducing development effort and costs 
  • uniformity in UX across different integration products/use cases and for a specific user persona  
  • uniformity in underlying infrastructure resources and enabling technologies 
  • flexible integration at a product/component API level 
  • self-service capabilities for enabling less-skilled/non-technical users
  • the flexibility to rapidly provision various combinations of cloud-based integration services based on specific requirements 
  • openness to federation with external, traditional on-premises middleware platforms 
  • support for embedding integration capabilities (via APIs) into a range of applications/solutions  
  • developer productivity tools, such as a “drag-and-drop” approach to integration-flow development and pre-built connectors and templates, and their extension to a broader set of integration capabilities
  • flexible deployment options such as on-premises deployment, public/private/hybrid cloud deployment, and containerization 
  • centralization of administration and governance capabilities. 

For the purpose of this ODM, we are concerned only with cloud platforms (or PaaS products) for hybrid integration. A comprehensive PaaS suite (see Figure 1) combines iPaaS, apiPaaS, mobile-back-end-as-a-service (MBaaS), and other cloud-based integration services, such as data-centric PaaS and cloud-based B2B integration services, to offer the wide-ranging hybrid integration capabilities required to support digital business.

These individual cloud-based integration services are offered on a subscription basis, with each component having essential cloud characteristics, such as multitenancy, resource sharing, and rapid scalability. The success of iPaaS as an agile approach to hybrid integration has played a key role in the evolution of this market. For enterprises, PaaS products for hybrid integration represent a good opportunity to shift from legacy middleware platforms that require significant upgrades and investment to remain relevant in the current operating environment. Table 1 provides an iPaaS and API platform software market forecast for the period 2018–23.

 

Deployment of middleware on software containers is in the early stages and event-driven integration is gaining ground

Cloud-native integration is a natural fit to hybrid IT environments

Hybrid IT environments call for a cloud-native integration paradigm that readily supports DevOps practices and drives operational agility by reducing the burden associated with cluster management, scaling, and availability. In such a cloud-native integration paradigm, integration runtimes run on software containers, are continuous integration and continuous delivery/deployment (CI/CD)-ready, and are lightweight and responsive enough to start and stop within a few seconds. Many enterprises have made substantial progress in containerizing applications to benefit from a microservices architecture and portability across public, private, and hybrid cloud environments. Containerized applications and containerized middleware represent a good combination. In cases where an application and a runtime are packaged and deployed together, developers can benefit from container portability and the ease of use offered by the application and middleware combination.

In other words, it makes sense for applications and middleware to share a common architecture, because DevOps teams can then avoid the overhead and complexity of running containerized applications on different hardware and following different processes than those used with traditional middleware. This is true even in cases that do not involve much rearchitecting of the applications, and DevOps teams can still develop and deploy faster using fewer resources.

A lot of data is generated in the form of streams of events, with publishers creating events and subscribers consuming these events in different ways via different means. Event-driven applications can deliver better customer experiences. For example, this could be in the form of adding context to ML models to obtain real-time recommendations that evolve continually to meet the requirements of a specific use case. Embedding real-time intelligence into applications and real-time reactions or responsiveness to events are key capabilities in this regard.

For distributed applications using microservices, developers can opt for asynchronous event-driven integration in addition to the use of synchronous integration and APIs. Apache Kafka, an open source stream-processing platform, is a good option for such use cases requiring high throughput and scalability. Kubernetes can be used as a scalable platform for hosting Apache Kafka applications. Because Apache Kafka reduces the need for point-to-point integration for data sharing, it can reduce latency to only a few milliseconds, enabling faster delivery of data to users. 
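The publish/subscribe decoupling described above can be illustrated with a minimal in-process sketch. This is plain Python standing in for a Kafka-style broker, not the actual Kafka client API; the topic name and event fields are hypothetical:

```python
from collections import defaultdict, deque

class EventBus:
    """Minimal in-process stand-in for a Kafka-style topic broker:
    publishers append events to named topics, and subscribers consume
    them without any point-to-point coupling to the publishers."""

    def __init__(self):
        self._topics = defaultdict(deque)      # topic -> queue of pending events
        self._subscribers = defaultdict(list)  # topic -> subscriber callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        self._topics[topic].append(event)

    def drain(self, topic):
        """Deliver all pending events on a topic to every subscriber."""
        while self._topics[topic]:
            event = self._topics[topic].popleft()
            for callback in self._subscribers[topic]:
                callback(event)

# Hypothetical use case: an order stream feeding a real-time consumer.
bus = EventBus()
seen = []
bus.subscribe("orders", lambda e: seen.append(e["sku"]))
bus.publish("orders", {"sku": "A-1", "qty": 2})
bus.publish("orders", {"sku": "B-7", "qty": 1})
bus.drain("orders")
print(seen)  # → ['A-1', 'B-7']
```

In a production deployment, the `EventBus` role would be played by a Kafka cluster (for example, hosted on Kubernetes as noted above), with durable, partitioned topics replacing the in-memory queues.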

Ovum Decision Matrix: Cloud platforms for hybrid integration, 2019–20 

The ODM chart in Figure 2 represents the results of a comprehensive evaluation of 11 vendors of cloud platforms for hybrid integration meeting the inclusion criteria. The bubble size representing vendor positioning is determined by the scores achieved for the “market impact” criteria group under the “execution and market impact assessment” dimension. Table 2 provides a list of market leaders and challengers in alphabetical order (not in terms of scores), and subsequent sections also follow this practice.

Vendor analysis

Axway Ovum SWOT assessment

Strengths:

Axway’s AMPLIFY platform offers a good foundation for hybrid integration use cases

Axway has well-established credentials for API management and B2B integration use cases, as is evident from the high scores for the “API platform” and “B2B and mobile app/backend integration” criteria groups under the technology assessment dimension. The acquisition and subsequent integration of Appcelerator enabled Axway to cater for mobile application development and back-end integration use cases.

Axway uses an OEM partnership with Cloud Elements to extend its platform’s existing API-led integration capabilities; more specifically, “Elements” provide access to an entire category of applications, such as messaging, customer relationship management (CRM), e-commerce, finance, marketing, and document management, via integration to a single API. Both vendors espouse an API-led approach to integration, so there is synergy here. Axway has executed a progressive product strategy and forged partnerships with several ISVs, such as Cloud Elements, Stoplight.io, Entrust, and Restlet (acquired by Talend), to drive adoption.

Transformation in product strategy came at just the right time

With the AMPLIFY platform, Axway transformed its product strategy and directed investment to offer a unified platform that enables users to develop new digital business applications/services and to subsequently integrate them with other applications/services and data stores. This enables users to rapidly connect and share data with trading partners, derive actionable insights to optimize corresponding engagements, and monetize enterprise data assets. The AMPLIFY platform marked Axway’s shift from a vendor providing a suite of integration, security, and operational intelligence and analytics products to a vendor offering a cohesive, cloud-based hybrid integration platform, which can now support key hybrid integration use cases. This shift is starting to show good results for Axway, which claims that about 24,000 active organizations are using the AMPLIFY platform, a smaller share of which are paid customers. If it were only about the technology assessment, Axway would qualify as a leader. However, it narrowly missed out on scoring the required 8 out of 10 for the “execution and market impact” assessment dimension, a key criterion for being rated an overall leader in this ODM.

Weaknesses

Specific gaps exist in its iPaaS capabilities, and it needs to improve brand recognition in the cloud-based hybrid integration platform market

Parts of Axway’s iPaaS are currently limited to European and US data centers, while the platform’s virtual private cloud (VPC) customers are deployed in all key regions (the US, EU, and Asia-Pacific). Native integration to blockchains and key RPA tools is missing. ML-based automation is a work in progress, but Axway plans to offer automation for data mapping. These are important areas for improvement for Axway’s iPaaS, because many of Axway’s key competitors already offer these features and capabilities.

Axway featured in only a few Ovum conversations with enterprise IT leaders over the last couple of years. Its revenue from, and the customer base for, the AMPLIFY platform are significantly lower (considering Axway’s overall size and the time since the general availability of the AMPLIFY platform) than those of several key vendors in this market. However, the company is partway through transitioning from a licensing to a subscription model. This is affecting Axway’s top-line revenue but is a strategy for the longer term. The vendor expects that a return to top-line growth will be evident by the end of 2020.

Axway must focus on investing in marketing and effective evangelism to increase the visibility and raise the profile of its AMPLIFY platform, although the vendor says it is seeing significant growth quarter-on-quarter. It is worth noting that Axway’s Catalyst team comprising experts in digital transformation and API-led innovation areas can help enterprises realize positive outcomes from digital transformation and integration modernization initiatives. The corresponding business strategy would benefit from a keen focus on winning net new deals involving a range of hybrid integration use cases, and Axway has achieved some recent success in this regard. 

Boomi Ovum SWOT assessment

Strengths

Leading iPaaS vendor with growing hybrid integration capabilities

Boomi, a Dell Technologies Business, achieved the highest score for the “cloud integration/iPaaS” criteria group under the technology assessment dimension. It has well-established credentials in the global iPaaS market, with thousands of large and midsize enterprises as customers. Boomi has expanded the capabilities of its iPaaS to support a range of hybrid integration requirements beyond on-premises and SaaS applications and data integration. Boomi’s iPaaS caters to the requirements of two key user personas: developers/integration practitioners and less-skilled, non-technical users. Boomi recently introduced the Boomi API gateway and developer portal to enable secure and scalable interactions with external parties, enhance API discoverability, and drive engagement across a broader API consumer base.

A faster deployment option for Boomi iPaaS is available from the Pivotal Cloud Foundry marketplace for Pivotal Kubernetes and Pivotal Application Services environments (PKS/PAS). Boomi Enterprise Innovation Services and Architectural Services provide a package of integration services, advice from architecture experts, and support and resources, with the flexibility to customize to specific customer needs. Boomi provides a cloud-managed B2B/EDI integration service as part of its unified platform. Users can build, deploy, and manage both traditional EDI and newer web services in the cloud. To simplify the configuration of trading partner profiles and B2B processes, Boomi provides a “configuration-based” platform that eliminates the cost and complexity of writing code. It is impressive to see how Boomi’s integration platform has expanded from iPaaS and API-led integration to cover B2B/EDI integration and simple file transfer use cases.

Good feature-price performance and early mover in exploiting ML for automation 

On a comparative basis, Boomi offers good feature-price value for enterprises of all sizes. This is evident from its joint-highest score for the “scalability and enterprise fit” criteria group under the execution and market impact assessment dimension.

Boomi uses ML in the form of Boomi Suggest, Boomi Resolve, and Boomi Assure. It was arguably the first mover in the iPaaS market to start delivering ML-based automation. The Boomi Suggest feature uses millions of indexed mappings to offer automatic recommendations on mappings for new integrations, based on successful configurations developed by other users in the past. Boomi also uses crowdsourced contributions from its support team and user community to offer resolutions to common errors within the iPaaS UI. Boomi Suggest offers mapping suggestions with “confidence rankings”, data transformation, and error resolutions via correlations to simplify integration-flow development.
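The idea behind crowd-sourced mapping recommendations with confidence rankings can be sketched in a few lines. This is an illustrative frequency-based approximation, not Boomi’s actual algorithm; the class, field names, and confidence formula are assumptions for the example:

```python
from collections import Counter, defaultdict

class MappingSuggester:
    """Illustrative sketch (not Boomi's implementation): recommend a target
    field for a source field based on how often past integrations used that
    mapping, with a confidence ranking from its relative frequency."""

    def __init__(self):
        self._history = defaultdict(Counter)  # source field -> target-field counts

    def record(self, source_field, target_field):
        """Index one successful mapping from a past integration."""
        self._history[source_field][target_field] += 1

    def suggest(self, source_field):
        """Return (best target, confidence in [0, 1]) or None if unseen."""
        counts = self._history.get(source_field)
        if not counts:
            return None
        target, hits = counts.most_common(1)[0]
        confidence = hits / sum(counts.values())
        return target, round(confidence, 2)

suggester = MappingSuggester()
# Hypothetical crowd-sourced mapping history
for _ in range(8):
    suggester.record("cust_name", "CustomerName")
suggester.record("cust_name", "AccountName")
suggester.record("cust_name", "AccountName")
print(suggester.suggest("cust_name"))  # → ('CustomerName', 0.8)
```

A production system would operate over millions of indexed mappings and richer features (field types, schema context), but the confidence-ranked recommendation pattern is the same.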

Weaknesses

Needs to address gaps in the features of Boomi API Management 

Boomi API Management was developed as an extension of the Boomi AtomSphere Platform to cater to the needs of existing users. Since then, the product has expanded in terms of key features and capabilities, and 2018 was a year of major advances in the capabilities of Boomi API Management. However, some of Boomi’s nearest competitors in the iPaaS market have more mature and well-established API platform capabilities.

Over the last couple of years, there has been a slight decoupling (from the core iPaaS product) and a dedicated product roadmap and strategy for Boomi API Management. Areas for improvement include support for the GraphQL and gRPC standards, greater coverage in performance monitoring reports on key metrics, automated failover for high availability and reliability, better support for the Node.js framework, and more sophisticated API deprecation and retirement processes. Boomi is capable of filling these gaps and developing this into a leading API platform, and recent announcements indicate that this is a key priority for Boomi’s product and business management.

IBM Ovum SWOT assessment 

Strengths

Well-rounded offering catering to the requirements of key hybrid integration use cases 

IBM achieved consistently high scores across the various criteria groups under the technology and execution and market impact assessment dimensions. The IBM Cloud Pak for Integration caters to a range of hybrid integration requirements, including on-premises and SaaS application and data integration, rapid API creation/composition and lifecycle management, API security and API monetization, messaging, event streaming, and high-speed transfer. With IBM Cloud Pak for Integration’s container-based architecture, users have the flexibility to deploy in any environment with Kubernetes infrastructure, as well as to use a self-service approach to integration. IBM is extending its integration platform’s API capabilities to provide support for GraphQL management, and this approach decouples GraphQL management from GraphQL server implementation. IBM Sterling B2B Integration Services and IBM Mobile Foundation cater to the requirements of B2B/EDI integration and mobile application/back-end integration respectively.

The only vendor that can function as a true strategic partner for enterprises embarking on integration modernization initiatives 

IBM’s Agile integration methodology focuses on delivering business agility as part of integration modernization initiatives. It espouses the transitioning of integration ownership from centralized integration teams to application teams, as supported by the operational consistency achieved via containerization. On the operational agility side, cloud-native infrastructure offers dynamic scalability and resilience. For large enterprises embarking on integration modernization initiatives, this methodology can cater to people, processes, and technology aspects to provide the necessary advice and guidance to help enterprises achieve faster time to value across diverse deployment environments. Ovum analyzed the competitive services offerings of all vendors in this ODM and found IBM’s agile integration methodology to be the most comprehensive and well thought out.

Weaknesses

The B2B/EDI integration offering is architecturally different, and users need a separate offering for mobile application/backend integration

IBM Sterling B2B Integration Services for supporting B2B/EDI integration use cases are architecturally different from the products under the IBM Cloud Pak for Integration. IBM is working on filling this gap and is developing a lightweight PaaS product for B2B/EDI integration. IBM Mobile Foundation is not part of the “Connect” set of product portfolios and is an add-on product. Another area for improvement is the use of ML for automating the different stages of integration projects, ranging from design and development to deployment and maintenance, which IBM is capable of providing by using the capabilities of its Watson platform. Some of the ML-related capabilities are part of IBM’s product roadmap for this middleware portfolio.

Frequent branding, rebranding, and renaming creates confusion in the market  

IBM’s middleware portfolio has undergone various iterations of branding, rebranding, and renaming over the years, and this does create confusion in the market. From the days of IBM WebSphere Cast Iron Live to IBM API Connect, or even IBM WebSphere Cast Iron Cloud Integration to IBM App Connect, IBM has certainly expanded features and capabilities or significantly transformed specific parts of its middleware portfolio. However, frequent rebranding and renaming exercises should be avoided to ensure strong, sustained enterprise mindshare and to prevent unnecessary confusion in the market. New and potential customers for IBM Cloud Pak for Integration should ask for customer references and case studies and check that these align with their specific requirements.

MuleSoft Ovum SWOT assessment

Strengths

Comprehensive and cohesive cloud platform catering to a range of hybrid integration use cases

MuleSoft Anypoint Platform is a cohesive PaaS-style product catering to key hybrid integration use cases, which is evident from MuleSoft’s high scores across the “cloud integration” and “API platform” criteria groups under the technology assessment dimension. MuleSoft has further simplified its UX with API Community Manager, upgrades to Anypoint Exchange, an improved integrated development environment (IDE) for the Mule 4 runtime (Studio 7), Anypoint Visualizer, and template-driven design and note-based collaboration for non-technical users (Flow Designer). Anypoint Partner Manager, MuleSoft’s lightweight PaaS-style B2B solution, caters to the requirements of B2B/EDI use cases, including partner management and reporting, partner onboarding, B2B transaction configuration, B2B transaction tracking, and audit logging. While it is not an extensive B2B/EDI integration platform, it can be used by MuleSoft’s customers for meeting less complex B2B/EDI integration needs.

MuleSoft is one of the very few vendors that can support the requirements of all use cases included in this ODM via an architecturally coherent cloud platform that qualifies as a pure-play PaaS product set. Visual API designer, API Modeling Framework parser, API functional monitoring, and several new connectors for a range of applications and endpoints are some of the capabilities introduced over the last year to drive developer productivity.

MuleSoft has seen rapid growth since its acquisition by Salesforce

MuleSoft is an integration business of over $500m within the broader Salesforce business lines. It has grown much faster than some of its established and larger competitors. The Salesforce acquisition has helped drive broader adoption of MuleSoft Anypoint Platform across existing large and midsize Salesforce customers. Because of this growth, there is a key focus on providing a compelling UX to less-skilled, non-technical users. MuleSoft enjoys strong brand recognition in the iPaaS and API platform markets as well as in the cloud-based middleware for hybrid integration market. Contrary to the belief of some of its competitors, MuleSoft does not face a major hindrance driven by potential concerns about the neutrality of an integration vendor. MuleSoft’s growth is driven via both direct sales and packaged integration routes.

Weaknesses

ML-based automation could be improved

Using the application network graph, MuleSoft provides a recommendation engine for suggestions on the next best action. The first application of this engine is the ML-based automapper in Flow Designer, and MuleSoft has dedicated plans to introduce new capabilities to drive ML-based automation. These are steps in the right direction. However, given MuleSoft’s track record of innovation and fast response to emerging market dynamics, it could have exploited ML capabilities by now to automate different stages of integration projects, ranging from design and development to deployment and maintenance. Some of its nearest competitors already have a better set of capabilities driving ML-based automation.

Oracle Ovum SWOT assessment 

Strengths

A well-balanced, comprehensive PaaS for hybrid integration product set

Oracle has a well-rounded PaaS for hybrid integration portfolio and achieved high scores for various criteria groups under the technology assessment dimension. Oracle Integration Cloud, Oracle’s iPaaS solution, has seen rapid revenue growth over the last three years and, along with the other PaaS offerings of the portfolio, such as Oracle API Platform, Oracle SOA Cloud Service, and Oracle Mobile Hub, forms a good option for all key hybrid integration use cases. Oracle Self-Service Integration Cloud Service, aimed at less-skilled, non-technical users, allows them to build and consume simple integration recipes without any need to code. Oracle offers a uniform UX across the various products of this middleware portfolio, something which many of its competitors have struggled to offer. The Oracle API Platform offers a range of capabilities for API creation and end-to-end lifecycle management, and has evolved into a fairly competitive offering over the last three to four years. Oracle exploits ML capabilities to provide recommendations at various stages of the design, testing, and deployment cycle, including but not limited to data mapping, business object/API recommendations in context, and the best next action to provide the logical next step in the flow. The Insight capability for business integration analytics is a differentiator for Oracle.

Rapid sustained revenue growth over the last three to four years 

Oracle has seen rapid revenue growth for its PaaS for hybrid integration portfolio. This has translated into several thousand large enterprise customers using multiple PaaS offerings to tackle hybrid integration challenges. Oracle has also had success in cross-selling and upselling PaaS products to existing customers, as well as adding a significant number of new customers and securing one of the leading market shares. Most of this success in driving adoption and revenue growth can be attributed to aggressive execution against ambitious product roadmaps; Oracle’s financial muscle to invest billions of dollars in new product development and mobilize a large global salesforce is also a key strength.

Weaknesses

Specific gaps in products need to be addressed with a focus on the usability for non-Oracle endpoints and workloads  

In terms of its API platform, Oracle should focus on providing support for the GraphQL and gRPC standards and SLA compliance; built-in predictive analytics and the ability to send alerts and notifications to subscribers when APIs are versioned are other areas for improvement. Containerized middleware deployment is an emerging trend and one that many of Oracle’s competitors are exploiting for revenue growth. While this is not an officially supported topology from Oracle, containerized middleware deployment is planned for the on-premises execution engine.

When it comes to non-Oracle endpoints and workloads, many enterprise IT leaders are not sure of the usability of Oracle PaaS for integration use cases. They have an understanding that Oracle middleware’s usability is limited to Oracle-to-Oracle and Oracle-to-non-Oracle endpoints. This is definitely not the case with Oracle iPaaS, and Ovum has seen various implementations involving non-Oracle to non-Oracle endpoints/applications. Oracle should focus on changing this viewpoint and should deliver more specific messaging for “non-Oracle only” use cases.

Red Hat Ovum SWOT assessment

Strengths

Open source innovation and growing hybrid integration capabilities 

Red Hat’s acquisition by IBM was recently completed, and IBM has emphasized that Red Hat will continue to operate as a separate unit within IBM and will be reported as part of IBM’s Cloud and Cognitive Software division. Our analysis is based on the assumption that this setup will continue. Red Hat has a long history of open source prowess and engineering expertise that has enabled IT practitioners to experiment and deliver new functionality with its middleware products. Red Hat Fuse was an early entrant to the hybrid integration market, with a focus on cloud-native integration developers. Red Hat Fuse Online (part of Red Hat Integration), Red Hat’s iPaaS offering, is different in the sense that it was developed with a key focus on providing a better UX to less technical users. The API platform component of Red Hat Integration exploits the capabilities of 3scale API management and Fuse integration, and is a functionally rich solution for API lifecycle management. Red Hat achieved a high score for the “API platform” criteria group under the technology assessment dimension. Red Hat partners with Trace Financial for EDI-based transformations. For mobile app/back-end integration, Mobile Developer Services (included with Red Hat managed integration) provide key mobile app development capabilities optimized for containers, microservice architectures, and hybrid cloud deployments. This component exploits the capabilities of FeedHenry, a mobile application platform vendor acquired by Red Hat in 2014.

Red Hat acquired JBoss in 2006 and grew its middleware business for over a decade. Owing to its business model, it took some time for Red Hat to identify the emerging opportunities in a market where enterprise service bus (ESB) and service-oriented architecture (SOA) infrastructure adoption was declining and the iPaaS and API management market segments were growing at high double-digit rates. Then came the trend of deploying and managing middleware on software containers. Red Hat was able to develop a strategy that did not deviate much from its heritage while still delivering products that could compete with iPaaS and API-led integration platforms. This approach suits serious buyers that are willing, and have the capability, to experiment and innovate with open source middleware.  

Red Hat’s PaaS portfolio for hybrid integration is a good option for developers and integration practitioners that appreciate the capabilities and flexibility of open source middleware. The cost of exit in a proprietary middleware context is quite high, and it is not easy to achieve a significant level of interoperability with application infrastructure and middleware platforms offered by other vendors. Red Hat Integration as an open source middleware product offers users the flexibility to try and experiment with small integration projects and see what works best for a particular requirement or integration scenario. In a world where a drag-and-drop approach and pre-built connectors and templates are marketed as nirvana for cloud integration, it is good to see Red Hat making integration technical again. With time, we expect Red Hat’s customer base and revenue for this middleware portfolio to grow to an extent where it is comparable to the other iPaaS vendors that provide API lifecycle management capabilities.

Portable architecture and cost-effectiveness

The ability to keep a fully supported and portable architecture intact across private, public, and managed cloud is a key differentiator for Red Hat in this market segment. Red Hat’s strategy is simple: exploit the best open source technologies in the market and communities and adopt new projects based on the market direction. This enables Red Hat middleware to offer better scalability than proprietary or “open core” competitive offerings. Red Hat Integration as a package subscription includes app integration, data integration, messaging, data streaming, and API management capabilities and is bundled with the Red Hat OpenShift container platform. The cost of a one-year subscription for Red Hat Integration is significantly lower than that charged by some of the vendors included in this ODM. Enterprises with access to developers capable of exploiting open source middleware for tackling complex integration challenges can use Red Hat Integration to reduce the costs of hybrid integration projects. If it were only about technology assessment, Red Hat would qualify as a leader. However, it did not achieve consistently high scores for the “execution and market impact” assessment dimension, a key criterion for being rated a leader in this ODM.

Weaknesses

Late to market with an option for less skilled users

ICCs/integration COEs are no longer in the driver’s seat, and LOBs are aggressive in moving ahead with the adoption of iPaaS for SaaS integration. Some of these products also provide simpler capabilities for rapid API creation and API-led integration. While Red Hat Fuse Online is quite different from Red Hat Fuse in terms of its UX, it still does not offer the ease of use in developing integration flows that is the norm with modern iPaaS solutions. For this reason, Red Hat does not compete head on for tactical integration projects driven by LOBs. This has more to do with Red Hat’s position in the market and its core customer base. Red Hat offers a range of technical connectors, but there are gaps in its coverage of connectors to the common SaaS applications used in enterprises. This again is a basic characteristic of modern iPaaS solutions and one of the reasons why iPaaS has gained traction among developers and integration practitioners as well as less skilled, non-technical users.  

Red Hat does not really compete with point solutions, such as the use of iPaaS for SaaS integration in a LOB, or for that matter, standalone API management. It functions better as a middleware stack vendor. We do not see this as a limitation for large enterprises capable of using open source middleware to solve complex integration issues in hybrid IT environments because the rest of the user base was never a sweet spot for Red Hat.

SAP Ovum SWOT assessment

Strengths

Growing hybrid integration capabilities and a progressive product roadmap

SAP supports the various key use cases included in this ODM, including cloud integration, API lifecycle management, B2B/EDI integration, and mobile app and back-end integration. SAP Cloud Platform Integration Suite, SAP’s iPaaS offering, provides an intuitive web interface with pre-built templates. The integration adviser uses ML capabilities and crowd-sourcing to offer a proposal service for message implementation and mapping guidelines. SAP has a dedicated roadmap for the integration adviser, including complex pattern mapping, optimized integration flow templates offering partner discovery, and further improvements in the proposal service. SAP recently introduced new features and capabilities, such as a public trial version, support for Microsoft Azure in a production release, self-service subscription enablement of integration platform tenants, new connectivity options, and trading partner management. SAP Cloud Platform Integration Suite is unique in the sense that it is a vendor-managed multicloud iPaaS available on a pay-as-you-go license model (SAP Cloud Platform Enterprise Agreement). 

SAP Cloud Platform API Management is SAP’s API lifecycle management product that offers standards-based API access to REST/OData or SOAP services, API analytics on consumption and operations, enterprise-grade security, and developer-centric services to enable users to subscribe, use, and manage API consumption. SAP Cloud Platform Integration Suite supports mobile app and back-end integration requirements. SAP has gradually developed a hybrid integration platform that can be consumed as PaaS. SAP achieved a good score for the “cohesiveness and innovation” criteria group under the execution and market impact assessment dimension.

Weaknesses

Gaps in iPaaS and API lifecycle management capabilities 

SAP does not support the deployment of iPaaS and API lifecycle management solutions on software containers. SAP Cloud Platform API Management is available as a fully cloud-managed service, and SAP’s hybrid roadmap for the second half of 2020 includes complementing the cloud service with a containerized local gateway runtime that can run in a customer’s private cloud environment. There is also scope for improvement in the UX for less skilled, non-technical users. Gaps in the features and capabilities of SAP Cloud Platform API Management include support for GraphQL and gRPC standards and built-in predictive analytics. SAP does not offer an MFT product as a cloud service, and in the B2B/EDI integration context, SAP Cloud Platform Integration Suite offers an API-based trading partner solution; an improved (next-generation) trading partner management capability is planned for next year. These are some of the key areas for improvement that should be addressed soon to respond to emerging market dynamics and customer requirements and to remain competitive with the leading vendors in this market.

Product marketing and execution need to improve 

SAP’s product strategy for this product portfolio is driven by the requirements of core SAP ecosystem users, and it focuses on upselling and cross-selling to existing customers using SAP applications, on-premises middleware, and other software products. While this is a good way to capitalize on a low-hanging market opportunity, such a strategy can slow the long-term evolution of a leading PaaS vendor providing a hybrid integration platform. This is reflected in the number of customers and the revenue SAP has realized for this product portfolio, both of which are lower than for several vendors included in this ODM. Over the last couple of years, SAP featured sparsely in Ovum’s conversations with enterprise IT leaders embarking on hybrid integration and integration modernization initiatives. It is no different when it comes to conversations on leading iPaaS vendors because SAP does not enjoy substantial brand recognition beyond its core SAP ecosystem. There is significant scope for improvement in SAP’s product marketing, which should focus on improving the visibility and raising the profile of SAP Cloud Platform Integration Suite.

Seeburger Ovum SWOT assessment

Strengths

Seeburger BIS in the cloud offers foundational capabilities for hybrid integration use cases 

Seeburger’s cloud platform for hybrid integration uses the features and capabilities of the underlying Seeburger Business Integration Suite (BIS). Seeburger’s middleware stack is well integrated and includes only home-grown solutions. This ensures interaction between the individual modules, and increases the overall stability and availability of the integration platform. Seeburger’s BIS portal is a unified UI layer for the entire platform, regardless of the deployment model. Seeburger BIS in the cloud provides SaaS integration, B2B/EDI as-a-service, API platform, and MFT as-a-service capabilities. Seeburger BIS can be deployed across various IaaS cloud environments, and there is support for deployment on containers. 

Seeburger concentrates on delivering iPaaS as a partner to its customers, and not only operates the integration platform (iPaaS) on a technical level, but also provides them with specialist personnel on request. At the same time, Seeburger is focusing on extending iPaaS support for different IaaS providers. Seeburger’s middleware product strategy means that cross-selling and upselling to existing customers represents a low-hanging opportunity. On a comparative basis, Seeburger’s cloud platform for hybrid integration offers foundational capabilities for tackling a range of integration issues. API creation is supported by a wizard and the BPMN design tool enables the composition of platform services into a new API via a simple drag-and-drop approach. On the B2B/EDI integration side, for trading partner onboarding, Seeburger’s Community Management Application (CMA) enables the use of web forms that can be designed by users. In addition, tailored forms can be created to collect all the required information to streamline the onboarding process. Seeburger achieved a high score for the “B2B and mobile app/backend integration” criteria group under the technology assessment dimension.

Weaknesses

Gaps in iPaaS and API platform capabilities

In the context of iPaaS capabilities, Seeburger does not provide pre-built, dedicated connectors to common endpoints and applications, such as marketing tools, collaboration applications, financial applications, content management systems, analytics data warehouses, and RPA tools. This is, however, part of the 2020 product roadmap. ML-based automation across different stages of integration projects, ranging from design and development to deployment and maintenance, is not provided, though it is part of the product strategy and roadmap. There is also scope for improvement in offering a tailored UX for less skilled, non-technical users. 

In the context of API platform capabilities, areas for improvement include support for GraphQL and gRPC standards, wider dashboard coverage for tracking key metrics and performance monitoring reports, built-in predictive analytics capability, and better support for the Node.js framework. Seeburger must focus on filling these gaps to compete effectively with its nearest competitors. 

Need to improve brand awareness in the cloud platforms for hybrid integration market 

While Seeburger has been in the integration software business for a long time, it is a relatively new vendor in the cloud platforms (PaaS) for hybrid integration market. Over the last few years, Seeburger has used the capabilities of its BIS in the cloud to expand coverage of hybrid integration use cases, including SaaS integration and API-led integration (B2B/EDI integration was always a strong area for Seeburger). However, in comparison to leading iPaaS and API platform vendors, Seeburger has lower brand awareness in this market and has featured only sparsely in Ovum’s conversations with enterprise IT leaders over the last couple of years. This is also reflected in the relatively small revenue and customer base Seeburger has for this product portfolio, which is not surprising because other vendors included in this ODM entered this market segment well ahead of Seeburger. Seeburger must invest in marketing and evangelism to raise the visibility and profile of its cloud-based hybrid integration platform. 

SnapLogic Ovum SWOT assessment 

Strengths

Timely expansion from an iPaaS to a PaaS portfolio aimed at a range of hybrid integration use cases

SnapLogic Enterprise Integration Cloud, SnapLogic’s iPaaS in its previous form, was a good product with strong credentials across both data and application integration use cases. SnapLogic Intelligent Integration Platform is a broader PaaS-style product aimed at a wider range of use cases, and not limited to only iPaaS and API-led integration. SnapLogic achieved the joint second highest score for the “cloud integration/iPaaS” criteria group under the technology assessment dimension. The hybrid integration platform marketed as an “Intelligent Integration Platform” offers AI-enabled workflows and self-service UX to simplify and accelerate time to value for application and data integration initiatives.

Moving beyond partnerships, SnapLogic has extended its integration platform to API lifecycle management, a good move at the right time. The extended integration platform offers a visual paradigm with a low code/no code approach for iPaaS and API lifecycle management use cases. The August 2019 release of the platform introduced a new API developer portal to expose API endpoints to external consumers. SnapLogic B2B solution integrates its Intelligent Integration Platform with a cloud-based B2B gateway to offer trading partner community management, support for a range of EDI standards, EDI message translation, and transaction monitoring with an audit trail. The combined SnapLogic integration product portfolio is functionally rich and compares well with the larger vendors in this market. 

Substantial strengths across application and data integration use cases and early mover in offering ML-based automation

While we have not looked extensively at data integration use cases in this ODM, it is worth highlighting that if application and data integration are considered together, there are very few vendors that can compete with SnapLogic. Until 2017, SnapLogic’s product strategy tilted toward application and data integration use cases, but with the introduction of API lifecycle management and B2B integration capabilities, it has positioned itself as a capable cloud-based hybrid integration platform provider. 

On an overall basis, SnapLogic is one of the few vendors offering ML-based automation capabilities across the integration lifecycle. SnapLogic’s Iris AI uses AI/ML capabilities to automate highly repetitive, low-level development tasks. Its Integration Assistant provides step-level suggestions for developing an integration flow, as well as offering recommendations for pipeline optimization. Moreover, SnapLogic Data Science is offered as a self-service solution to accelerate ML development and deployment with minimal coding. 

Weaknesses

Specific gaps in terms of API platform capabilities need to be addressed without much delay 

Although SnapLogic has an ambitious product roadmap for API management as it pertains to iPaaS use cases, it still has significant ground to cover to successfully compete with some of its iPaaS competitors providing holistic rapid API creation/composition and end-to-end API management capabilities. Areas for improvement include support for GraphQL and gRPC standards, reuse of existing API definitions via Swagger representation import, better support for API deprecation and retirement processes, and support for the Node.js framework. We believe these gaps exist because this is a new capability area for SnapLogic, where the product roadmap is driven by the most important requirements for its existing customer base. We understand that SnapLogic is not focusing on developing a best-of-breed, standalone cloud-based API platform. However, in the long run, it is critical to fill these gaps if SnapLogic wants to improve and retain its competitive positioning because application and data integration disciplines are converging anyway. 

TIBCO Ovum SWOT assessment 

Strengths

Strong credentials, a robust platform, and well thought-out strategy have delivered a strong competitive positioning 

TIBCO has long enjoyed strong credentials as an integration vendor and has a well-established footprint in the large enterprise segment. TIBCO Cloud Integration (TCI) has gradually evolved as a comprehensive iPaaS product for key hybrid integration use cases. TIBCO achieved consistently high scores across the various criteria groups under the “technology” and “execution and market impact” assessment dimensions. TIBCO Cloud Integration is a functionally rich platform, while the TIBCO Cloud Integration Connect capability is for less skilled, non-technical users. The TIBCO Cloud Integration Develop and Integrate capabilities are aimed at developers and integration practitioners. The platform supports REST APIs, GraphQL, and event-driven integration, and when used as an API platform deployed on premises, it uses a cloud-native, container-based architecture. On the B2B/EDI integration side, integration with TIBCO BusinessConnect Trading Community Management enables rapid trading partner onboarding, while TIBCO Foresight BusinessConnect Insight supports B2B transaction monitoring. TIBCO has developed a compelling value proposition aimed at different user personas and across disparate deployment models, and has undertaken significant investment to drive an improved UX. As a result of a well thought-out business strategy and good execution in terms of product innovation and delivery, TIBCO has maintained a leading position in this market. This is in line with its competitive position in the pre-iPaaS middleware market.

Disciplined and focused execution is the hallmark of TIBCO’s strategy 

While TIBCO does not invest as much in marketing as do some of its nearest competitors, over the past four years it has still managed to transition from an on-premises heavy middleware vendor to a leading vendor providing PaaS for hybrid integration. Functioning under the ownership of Vista Equity Partners, TIBCO has demonstrated disciplined and focused execution when it comes to filling gaps in its existing middleware portfolio (for example, overcoming the failure of TIBCO Cloud Bus, TIBCO’s very first iPaaS offering) and driving innovation to emerge as a leading vendor in this market. TIBCO’s revenue from PaaS for hybrid integration is lower than some of its nearest competitors, but we expect this gap to shrink because TIBCO is capable of achieving above-market average growth in the near future. At its core, TIBCO remains an engineering company delivering innovation to successfully compete with vendors already in this market.

Weaknesses

Scope for improvement in ML-based automation, PaaS-style product for B2B/EDI integration required for exploiting market opportunity

TIBCO Cloud Integration provides ML-enabled capabilities, such as smart mapping, automated discovery of connection metadata, a visual model of impact analysis, the ability to fix and address issues driven by changes in configuration, and heuristics-based mapping of data elements and event payloads. There are significant gaps in this set of capabilities when it comes to exploiting ML for automating different stages of integration projects, ranging from design and development to deployment and maintenance. Some of its nearest competitors are ahead in terms of ML-based automation capabilities in production environments. However, TIBCO is working on providing recommendation services at various stages of the integration lifecycle.

TIBCO would benefit from a lightweight, PaaS-style product aimed at B2B/EDI integration use cases, and should provide a simplified UX along the lines of TIBCO Cloud Integration. This is more about a PaaS product delivering B2B/EDI integration capabilities and not an extensive set of features and capabilities as provided by traditional, dedicated B2B/EDI integration platforms hosted on the cloud. This is a low-hanging market opportunity, because many enterprises are struggling with legacy EDI platforms that are a burden and expensive to maintain. 

WSO2 Ovum SWOT assessment 

Strengths

Open source integration cloud with significant SaaS integration and API lifecycle management capabilities 

For developers and integration practitioners with the skills to exploit open source middleware, WSO2 provides substantial capabilities for SaaS integration, API-led integration, and API lifecycle management. WSO2 API Cloud is a hosted version of the open source WSO2 API Manager, and is a functionally rich offering. WSO2 API Cloud offers a developer portal, a scalable API gateway, and a powerful transformation engine with built-in security and throttling policies, reporting, and alerts. WSO2 achieved a high score for the “API platform” criteria group under the technology assessment dimension.
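The “throttling policies” mentioned above are, in general terms, rate-limiting rules enforced at the API gateway. As a generic illustration only (a common token-bucket pattern, not WSO2’s actual implementation), such a policy could be sketched as:

```python
import time

class TokenBucket:
    """Generic token-bucket throttle: allow `rate` requests per second,
    with bursts of up to `capacity` requests. Illustrative sketch only;
    commercial gateways implement far richer, policy-driven throttling."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Replenish tokens for the time elapsed since the last call.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True               # request passes the gateway
        return False                  # request is throttled

bucket = TokenBucket(rate=2.0, capacity=5.0)
results = [bucket.allow() for _ in range(7)]
print(results)  # in a tight loop: first 5 allowed, the rest throttled
```

In practice a gateway keys such buckets per API, per subscriber, or per tier, which is what makes throttling a policy rather than a single global limit.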

WSO2 API lifecycle management and integration platforms are centrally managed through a common UI that supports various concerns, such as user and tenant management. WSO2 offers a drag-and-drop graphical development environment, a graphical data and type mapper, and graphical flow debugging to simplify the development of integrations. WSO2 Integration Cloud offers good feature price performance. API back-end services hosted on WSO2 Integration Cloud can be exposed to the WSO2 API Cloud. In March 2017, WSO2 also introduced “Ballerina”, a programming language with both textual and graphical syntaxes that enables users to develop integration flows by describing them as sequence diagrams. Ballerina forms the basis for WSO2’s new code-driven integration approach.  

Weaknesses

Does not cater to B2B/EDI integration requirements

WSO2 is the only vendor in this ODM that does not provide a minimal set of capabilities for B2B/EDI integration use cases. While the cloud platforms for hybrid integration market is tilted toward iPaaS and API lifecycle management capabilities, several vendors have gradually expanded to provide support for less complex B2B/EDI integration use cases. WSO2 Integration Cloud does not provide a tailor-made UX and self-service integration capabilities for less skilled, non-technical users, an area in which almost all other vendors in the iPaaS market have invested. WSO2 is, however, planning to offer low-code, graphical integration based on the Ballerina integrator runtime to enable ad hoc integrators to develop integrations.

WSO2 does not offer ML-based automation across different stages of integration projects, ranging from design and development to deployment and maintenance. This is largely due to its preference to focus on capabilities that are critical for developers and integration practitioners. Other areas for improvement include support for different IaaS clouds, the availability of iPaaS via a regional data center, pre-built connectors for blockchain integration, integration with RPA tools, and centralized management via a web-based console (or other suitable means) for creating, deploying, monitoring, and managing integrations.

Significant scope for improvement in product marketing

Compared to some of its competitors, WSO2 engages in relatively few marketing activities, which hinders its improvement in terms of its brand recognition and competitive market positioning, particularly in regions where it does not have a significant direct presence. Because it mainly targets enterprise/integration architects and hands-on technologists, WSO2’s product marketing activities have a technology-centric flavor. However, it would benefit from including a business-centric approach to sales and marketing to target a wider range of users and decision-makers, such as business leaders funding a LOB-led digital business initiative involving hybrid integration. 

Appendix

Methodology

An invitation, followed by the ODM evaluation criteria spreadsheet comprising questions across two evaluation dimensions, was sent to all vendors meeting the inclusion criteria, with nine vendors opting to participate. Ovum then held thorough briefings with the nine participating vendors to discuss and validate their responses to the ODM questionnaire and to understand their latest product developments, strategies, and roadmaps. 

This ODM includes observations and input from Ovum’s conversations (including those conducted based on customer references) with IT leaders, enterprise architects, digital transformation initiative leaders, and enterprise developers and integration practitioners using cloud platforms for hybrid integration. 

Technology assessment

Ovum identified the features and capabilities that would differentiate the leading cloud platforms for hybrid integration vendors. The criteria groups and associated percentage weightings are as follows. 

  • Cloud integration/iPaaS (weighting assigned = 40%) 
  • API platform (weighting assigned = 45%) 
  • B2B and mobile application/backend integration (weighting assigned = 15%) 

Execution and market impact assessment 

For this dimension, Ovum assessed the capabilities of a cloud platform for hybrid integration and the associated vendor across the following key areas: 

  • Cohesiveness and innovation (weighting assigned = 40%) 
  • Scalability and enterprise fit (weighting assigned = 45%) 
  • Market impact (weighting assigned = 15%) 
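Combining the criteria-group scores under these weightings is a straightforward weighted sum. As an illustration only (the weightings below are those stated in the methodology, but the vendor scores are hypothetical), a dimension score could be computed as:

```python
# Weightings taken from the methodology above; all scores here are
# hypothetical and used purely to illustrate the arithmetic.
TECHNOLOGY_WEIGHTS = {
    "cloud_integration_ipaas": 0.40,
    "api_platform": 0.45,
    "b2b_mobile_backend": 0.15,
}
EXECUTION_WEIGHTS = {
    "cohesiveness_innovation": 0.40,
    "scalability_enterprise_fit": 0.45,
    "market_impact": 0.15,
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Combine per-criteria-group scores into one dimension score."""
    assert set(scores) == set(weights), "every criteria group needs a score"
    return sum(scores[group] * weights[group] for group in weights)

# Hypothetical vendor scored on a 0-10 scale per criteria group.
tech = weighted_score(
    {"cloud_integration_ipaas": 8.0, "api_platform": 7.0, "b2b_mobile_backend": 6.0},
    TECHNOLOGY_WEIGHTS,
)
print(round(tech, 2))  # 7.25
```

Note how the 45% weighting on the API platform group means a strong API offering moves a vendor’s technology score more than equally strong B2B capabilities would.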

Leadership Compass: Database and Big Data Security

1 Introduction 

Databases are arguably still the most widespread technology for storing and managing business-critical digital information. Manufacturing process parameters, sensitive financial transactions, confidential customer records – all of this valuable corporate data must be protected against compromise of its integrity and confidentiality without affecting its availability for business processes. The area of database security covers various security controls for the information itself stored and processed in database systems, for the underlying computing and network infrastructures, and for the applications accessing the data. 

However, since the last edition of KuppingerCole’s Leadership Compass on Database Security two years ago, a notable change in the direction the market is evolving has become apparent: as the amount and variety of digital information an organization is managing grows, the complexity of the IT infrastructure needed to support this digital transformation grows as well. 

Nowadays, most companies end up using various types of databases and other data stores for structured and unstructured information depending on their business requirements. Recently introduced data protection regulations like the European Union’s GDPR or California’s CCPA make no distinction between relational databases, data lakes or file stores – all data is equally sensitive regardless of the underlying technology stack. 

Because of this, we have decided to expand the scope of this year’s Leadership Compass to incorporate data protection and governance solutions for NoSQL databases and Big Data frameworks in addition to relational databases we focused on last time. 

Among the security risks databases of any kind are potentially exposed to are the following: 

  • Data corruption or loss through human errors, programming mistakes or sabotage; 
  • Inappropriate access to sensitive data by administrators or other accounts with excessive privileges; 
  • Malware, phishing and other types of cyberattacks that compromise legitimate user accounts; 
  • Security vulnerabilities or configuration problems in the database software, which may lead to data loss or availability issues; 
  • Denial of service attacks leading to disruption of legitimate access to data. 

Consequently, multiple technologies and solutions have been developed to address these risks, as well as provide better activity monitoring and threat detection. Covering all of them in just one product rating would be quite difficult. Furthermore, KuppingerCole has long stressed the importance of a strategic approach to information security. 

Therefore, customers are encouraged to look at database and big data security products not as isolated point solutions, but as a part of an overall corporate security strategy based on a multi-layered architecture and unified by centralized management, governance and analytics. 

1.1 Market Segment

Because of the broad range of technologies involved in ensuring comprehensive data protection, the scope of this market segment isn’t easy to define unambiguously. In fact, only the largest vendors can afford to dedicate enough resources for developing a solution that covers all or at least several functional areas – the majority of products mentioned in this Leadership Compass tend to focus on a single aspect of database security like data encryption, access management or monitoring and audit. 

The obvious consequence of this is that when selecting the best solution for your particular requirements, you should not limit your choice to the overall leaders of our rating – a smaller vendor with a lean but flexible, scalable, and agile solution that can quickly address a specific business problem may in fact be a better fit. On the other hand, one must always weigh the balance between a well-integrated suite from a single vendor and a number of best-of-breed individual tools that require additional effort to make them work together. The individual evaluation criteria used in KuppingerCole’s Leadership Compasses will provide you with further guidance in this process. 

To make your choice easier, we are focusing primarily on security solutions for protecting structured data stored in relational or NoSQL databases, as well as in Big Data stores. We are also not explicitly covering various general aspects of network or physical server security, identity and access management, or other areas of information security not specific to databases, although providing these features or offering integrations with other security products may influence our ratings. 

Still, we are putting a strong focus on integration into existing security infrastructures to provide consolidated monitoring, analytics, governance, and compliance across multiple types of information stores and applications. Most importantly, this includes integrations with SIEM/SOC solutions, existing identity and access management systems, and information security governance technologies. 

Solutions offering support for multiple database types as well as extending their coverage to other types of digital information are expected to receive more favorable ratings as opposed to solutions tightly coupled only to a specific database (although we do recognize various benefits of such tight integration as well). The same applies to products supporting multiple deployment scenarios, especially in cloud-based and hybrid infrastructures. 

Another crucial area to consider is the development of applications based on the Security and Privacy by Design principles, which have recently become a legal obligation under the EU’s General Data Protection Regulation (GDPR) and similar regulations in other geographies. Database and big data security solutions can play an important role in supporting developers in building comprehensive security and privacy-enhancing measures directly into their applications.

Such measures may include transparent data encryption and masking, fine-grained dynamic access management, unified security policies across different environments and so on. We are taking these functions into account when calculating vendor ratings for this report as well.

Despite our effort to cover most aspects of database and big data security in this Leadership Compass, we are not covering the following products: 

  • Solutions that primarily focus on unstructured data protection and have limited or no database-related capabilities
  • Security tools that cover general aspects of information security (such as firewalls or anti-malware products) but do not offer functionality specifically tailored for data protection 
  • Compliance or risk management solutions that focus on organizational aspects (checklists, reports, etc.) 

1.2 Delivery models 

Since most of the solutions covered in our rating are designed to offer comprehensive protection and governance for your data regardless of the IT environment in which it currently resides – an on-premises database, a cloud-based data lake or a distributed transactional system – the very notion of a delivery model becomes complicated as well. 

Certain components of such solutions, especially the ones dealing with monitoring, analytics, auditing, and compliance can be delivered as managed services or directly from the cloud as SaaS, but the majority of other functional areas require deployment close to the data sources, as software agents or database connectors, as network proxies or monitoring taps and so on. Especially with complex Big Data platforms, a security solution may require multiple integration points within the existing infrastructure. 

In other words, when it comes to data protection, you can safely assume that a hybrid delivery model is the only viable option. 

1.3 Required Capabilities 

When evaluating the products, besides looking at the aspects of 

  • overall functionality 
  • size of the company 
  • number of customers 
  • number of developers 
  • partner ecosystem 
  • licensing models 
  • platform support 

we also considered the following key functional areas of database security solutions:

  • Vulnerability assessment – this includes not just discovering known vulnerabilities in database products, but providing complete visibility into complex database infrastructures, detecting misconfigurations and, last but not least, the means for assessing and mitigating these risks. 
  •  Data discovery and classification – although classification alone does not provide any protection, it serves as a crucial first step in defining proper security policies for different data depending on their criticality and compliance requirements. 
  • Data-centric security – this includes data encryption at rest and in transit, static and dynamic data masking and other technologies for protecting data integrity and confidentiality. 
  • Monitoring and analytics – these include monitoring of database performance characteristics, as well as complete visibility in all access and administrative actions for each instance, including alerting and reporting functions. On top of that, advanced real-time analytics, anomaly detection, and SIEM integration can be provided. 
  • Threat prevention – this includes various methods of protection from cyber-attacks such as denial-of-service or SQL injection, mitigation of unpatched vulnerabilities and other infrastructure-specific security measures. 
  • Access Management – this includes not just basic access controls to database instances, but more sophisticated dynamic policy-based access management, identifying and removing excessive user privileges, managing shared and service accounts, as well as detection and blocking of suspicious user activities. 
  • Audit and Compliance – these include advanced auditing mechanisms beyond native capabilities, centralized auditing and reporting across multiple database environments, enforcing separation of duties, as well as tools supporting forensic analysis and compliance audits. 
  • Performance and Scalability – although not a security feature per se, it is a crucial requirement for all database security solutions to be able to withstand high loads, minimize performance overhead and to support deployments in high availability configurations. For certain critical applications, passive monitoring may still be the only viable option. 
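As a concrete illustration of the data-centric security capabilities listed above, the sketch below shows the idea behind dynamic data masking: sensitive columns in a query result are redacted on the fly according to a per-column policy before the data reaches the user. This is a minimal, hypothetical Python sketch of the concept, not the implementation of any product in this rating; the rule names and policy format are our own assumptions.

```python
# Minimal, hypothetical sketch of dynamic data masking. Rule names and the
# policy format are illustrative assumptions, not any vendor's behavior.

def mask_value(value: str, rule: str) -> str:
    """Apply a single masking rule to one field value."""
    if rule == "partial":   # keep only the last 4 characters, e.g. card numbers
        return "*" * max(len(value) - 4, 0) + value[-4:]
    if rule == "email":     # hide the local part of an e-mail address
        local, _, domain = value.partition("@")
        return local[:1] + "***@" + domain
    if rule == "full":      # redact the value entirely
        return "<redacted>"
    return value            # no rule configured: pass through unchanged

def mask_row(row: dict, policy: dict) -> dict:
    """Mask each column of a result row according to the per-column policy."""
    return {col: mask_value(str(val), policy.get(col, "")) for col, val in row.items()}

row = {"name": "Alice Smith", "card": "4111111111111111", "mail": "alice@example.com"}
policy = {"card": "partial", "mail": "email"}
print(mask_row(row, policy))
# {'name': 'Alice Smith', 'card': '************1111', 'mail': 'a***@example.com'}
```

In a real product, such rules are enforced inside a proxy or agent sitting between the application and the database, so unmasked data never leaves the database for unauthorized users.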

2 Leadership

Selecting a vendor of a product or service must not be based only on the comparison provided by a KuppingerCole Leadership Compass. The Leadership Compass provides a comparison based on standardized criteria and can help to identify vendors that should be evaluated further. However, a thorough selection includes a subsequent detailed analysis and a Proof of Concept or pilot phase, based on the specific criteria of the customer. 

Based on our rating, we created the various Leadership ratings. The Overall Leadership rating provides a combined view of the ratings for 

  • Product Leadership 
  • Innovation Leadership
  • Market Leadership 

2.1 Overall Leadership 

The Overall Leadership rating is a combined view of the three leadership categories: Product Leadership, Innovation Leadership, and Market Leadership. This consolidated view provides an overall impression of our rating of the vendor’s offerings in the particular market segment. Notably, some vendors that benefit from a strong market presence may slightly drop in other areas such as innovation, while others show their strength in Product Leadership and Innovation Leadership while having a relatively low market share or lacking a global presence. Therefore, we strongly recommend looking at all leadership categories, the individual analysis of the vendors, and their products to get a comprehensive understanding of the players in this market. 

In this year’s Overall Leadership rating we observe the same situation as in the previous release: only the two biggest vendors, namely IBM and Oracle, have reached the Leaders segment, which reflects both companies’ global market presence, broad ranges of database security solutions and impressive financial strength. 

However, the situation has changed since last time, when we positioned IBM slightly ahead because IBM’s solutions are database-agnostic, while half of Oracle’s portfolio focuses only on Oracle databases. During the last year, Oracle has substantially increased its stake in the database security market, primarily with its innovative Autonomous Database technology stack, as well as with numerous improvements in its existing products. Thus, we recognize Oracle as this year’s overall leader in Database and Big Data security. 

It is worth mentioning that while maintaining database agnosticism, IBM Data Protection has continued to add support for new data sources and has enhanced their capabilities to facilitate secure hybrid multicloud. IBM has also added support for unstructured data protection making Guardium a universal platform for data discovery, classification, and protection wherever this data resides. 

The rest of the vendors populate the Challengers segment. Lacking the combination of exceptionally strong market and product leadership, they trail somewhat behind the leaders but still deliver mature solutions excelling in certain functional areas. We have a mix of companies we recognized previously – Axiomatics, Imperva and Thales (which completed the acquisition of Gemalto in early 2019) – and several newcomers like comforte AG, Delphix and SecuPI, each offering excellent solutions in their respective functional areas. 

There are no Followers in this rating, indicating the overall maturity of the vendors representing the market in our Leadership Compass. 

Unfortunately, several vendors we had in the rating last time were unable to participate this time. You can still find them mentioned in the later chapter “Vendors to Watch”. For more technical details about their products, please refer to the previous edition of this Leadership Compass. 

Again, we must stress that leadership does not automatically mean that these vendors are the best fit for a specific customer requirement. A thorough evaluation of these requirements and a mapping to the features of the vendors’ products will be necessary. 

Overall Leaders are (in alphabetical order): 

  • IBM
  • Oracle

2.2 Product Leadership 

The first of the three specific Leadership ratings is about Product Leadership. This view is mainly based on the analysis of product/service features and the overall capabilities of the various products/services.  

In the Product Leadership rating, we look specifically for functional strength of the vendors’ solutions. It is worth noting that, with the broad spectrum of functionality we expect from a complete data security solution, it’s not easy to achieve a Leader status for a smaller company. 

Among the distant leaders are the largest players in the market, offering a wide range of products covering different aspects of database security. 

IBM Security Guardium, the company’s data security platform, provides a full range of data discovery, classification, entitlement reporting, near real-time activity monitoring, and data security analytics capabilities across different environments, which has led us to recognize IBM as the Product Leader. 

Oracle’s impressive database security portfolio includes a comprehensive set of security products and managed services for all aspects of database assessment, protection, and monitoring – landing the company in a close second place. 

Following them, we can find two newcomers to the rating: comforte AG with their highly scalable and fault-tolerant data masking and tokenization platform that has grown from the company’s roots in high performance computing and decades-long experience serving large customers in the financial industry, and SecuPI – a young but ambitious vendor focusing on data-centric protection and GDPR/CCPA compliance for databases, big data and business applications. 

Finally, Thales, after the recent acquisition of Gemalto, and Imperva, with a substantial R&D investment from Thoma Bravo, have managed to improve their earlier ratings substantially, making it into the Leaders segment as well. 

Other vendors, with robust but functionally narrower solutions, populate the Challengers segment. Delphix is a leading provider of data virtualization solutions for cloud migration, application development, and business analytics scenarios, all with a comprehensive set of data desensitization capabilities. Somewhat behind it we find Axiomatics – a leader in dynamic access control with a specialized ABAC solution for databases and Big Data frameworks. 

There are no followers in our product rating. Product Leaders are (in alphabetical order):

  • comforte AG 
  • IBM
  • Imperva
  • Oracle
  • SecuPI
  • Thales

2.3 Innovation Leadership 

Another angle we take when evaluating products/services concerns innovation. Innovation is, from our perspective, a key capability in IT market segments. Innovation is what customers require for keeping up with the constant evolution and emerging customer requirements they are facing.

Innovation is not limited to delivering a constant flow of new releases; it also means a customer-oriented upgrade approach, ensuring compatibility with earlier versions, especially at the API level, and supporting leading-edge new features that address emerging customer requirements. 

In this rating, we again observe IBM and Oracle in the Leaders segment, reflecting both companies’ sheer development resources which allow them to constantly deliver new features based on innovative technologies. 

IBM has continued to expand the focus of the Guardium platform – of note is the added support for unstructured data monitoring in on-prem and cloud stores, as well as the incorporation of the latest technological developments like containerized databases, artificial intelligence and consent management. 

Thanks to their recent breakthrough innovations with the Autonomous Database product family, which offers substantial improvements in the security, compliance, performance and availability of sensitive data by completely removing human interaction from database operations, Oracle has managed to increase their rating compared to the last edition, landing them in first place in our innovation chart. 

Most other vendors can be found in the Challengers segment, reflecting their continued investments into delivering new innovative features in their solutions, which, however, simply cannot keep up with the behemoths among the leaders. 

The only company in the Followers segment is Axiomatics. This does not imply any negative assessment of their solutions; rather, it reflects the maturity of their technology and the lack of major competitors in their narrow area of the market. 

Innovation Leaders are (in alphabetical order): 

  • IBM
  • Oracle

2.4 Market Leadership 

Here we look at Market Leadership qualities based on certain market criteria including but not limited to the number of customers, the partner ecosystem, the global reach, and the nature of the response to factors affecting the market outlook. Market Leadership, from our point of view, requires global reach as well as consistent sales and service support with the successful execution of marketing strategy.

Unsurprisingly, among the market leaders, we can observe all large and established vendors like Oracle, IBM, Thales, and Imperva. All these companies are veteran players in the IT market with a massive global presence, large partner networks and impressive numbers of customers (including those outside of the data security market).

All smaller and younger companies are found in the Challengers segment, indicating their relative financial stability and future growth potential. 

Market Leaders are (in alphabetical order): 

  • IBM
  • Imperva
  • Oracle
  • Thales

3 Correlated View 

While the Leadership charts identify leading vendors in certain categories, many customers are looking not only for, say, a product leader, but for a vendor that is delivering a solution that is both feature-rich and continuously improved, which would be indicated by a strong position in both the Product Leadership ranking and the Innovation Leadership ranking. Therefore, we deliver additional analysis that correlates various Leadership categories and delivers an additional level of information and insight. 

3.1 The Market/Product Matrix 

The first of these correlated views looks at Product Leadership and Market Leadership. 

In this comparison, it becomes clear which vendors are better positioned in our analysis of Product Leadership compared to their position in the Market Leadership analysis. Vendors above the line are sort of “overperforming” in the market. It comes as no surprise that these are mainly the very large vendors, while vendors below the line are often innovative but focused on specific regions. 

Among the Market Champions, we can find the usual suspects – the largest well-established vendors including IBM, Oracle, Thales, and Imperva. 

comforte AG and SecuPI appear in the middle right box, indicating the opposite skew, where strong product capabilities have not yet brought them to strong market presence. Given both companies’ relatively recent entrance to the global database security market, we believe they have a strong potential for improving their market positions in the future. 

Axiomatics and Delphix can be found in the middle segment, indicating their relatively narrow functional focus, which corresponds to limited potential for future growth. 

3.2 The Product/Innovation Matrix 

The second view shows how Product Leadership and Innovation Leadership are correlated. Vendors below the line are more innovative, vendors above the line are, compared to the current Product Leadership positioning, less innovative. 

Here, we see a good correlation between the product and innovation ratings, with most vendors being placed close to the dotted line indicating a healthy mix of product and innovation leadership in the market.  

Among Technology Leaders, we again find IBM and Oracle, indicating both vendors’ distant leadership in both product and innovation capabilities thanks to their huge resources and decades of experience.

The top middle box contains vendors that are providing good product features but lag behind the leaders in innovation. Here we find comforte AG, SecuPI, Thales and Imperva, indicating their strong positions in the selected functional areas of data security. 

Delphix has landed in the middle segment, showing that even with somewhat limited functional focus a vendor can still deliver a healthy amount of innovation.

The only company showing a noticeably lower level of innovation is Axiomatics; still, it has landed in the middle left box, indicating strong product capabilities. 

3.3 The Innovation/Market Matrix

The third matrix shows how Innovation Leadership and Market Leadership are related. Some vendors might perform well in the market without being Innovation Leaders. This might impose a risk to their future position in the market, depending on how they improve their Innovation Leadership position. On the other hand, vendors that are highly innovative have a good chance of improving their market position but often face risks of failure, especially in the case of vendors with a confused marketing strategy. 

Vendors above the line are performing well in the market compared to their relatively weak position in the Innovation Leadership rating, while vendors below the line show, based on their ability to innovate, the biggest potential for improving their market position. 

Again unsurprisingly, we can find IBM and Oracle among the Big Ones – vendors that combine strong market presence with a strong pace of innovation. 

Thales and Imperva in the top middle box indicate their strong market positions despite somewhat slower innovation, while comforte AG, Delphix and SecuPI occupy the opposite positions below the dotted line, indicating their strong performance in innovation, which has not yet translated into larger market shares.

Axiomatics can be found in the left middle box, indicating their position as an established player in a small, but mature and “uncrowded” market segment, which inhibits innovation somewhat.

4 Products and Vendors at a glance 

This section provides an overview of the various products we have analyzed within this KuppingerCole Leadership Compass on Database and Big Data Security. Aside from the rating overview, we provide additional comparisons that put Product Leadership, Innovation Leadership, and Market Leadership in relation to each other. These allow identifying, for instance, highly innovative but specialized vendors or local players that provide strong product features but do not have a global presence and large customer base yet. 

In addition, we also provide four additional ratings for the vendor. These go beyond the product view provided in the previous section. While the rating for Financial Strength applies to the vendor, the other ratings apply to the product.

In the area of innovation, we looked for the service to provide a range of advanced features in our analysis. These advanced features include, but are not limited to, practical applications of innovative new technologies like machine learning and behavior analytics, or new functionality introduced in response to market demand. Where we could not find such features, we rate the service as “Critical”.

In the area of market position, we are looking at the visibility of the vendor in the market. This is indicated by factors including the presence of the vendor in more than one continent and the number of organizations using the services. Where the service is only being used by a small number of customers located in one geographical area, we award a “Critical” rating.

In the area of financial strength, a “Weak” or “Critical” rating is given where there is a lack of information about financial strength. This doesn’t imply that the vendor is in a weak or a critical financial situation. This is not intended to be an in-depth financial analysis of the vendor, and it is also possible that vendors with better ratings might fail and disappear from the market. 

Finally, a “Critical” rating regarding ecosystem applies to vendors which have no ecosystem, or only a very limited one, with respect to the number of partners and their regional presence. That might be company policy, e.g. to protect their own consulting and system integration business. However, our strong belief is that the success and growth of companies in a market segment rely on strong partnerships. 

5 Product evaluation 

This section contains a quick rating for every product we’ve included in this report. For some of the products, there are additional KuppingerCole Reports available, providing more detailed information. In the following analysis, we have provided our ratings for the products and vendors in a series of tables. These ratings represent the aspects described previously in this document. Here is an explanation of the ratings that we have used: 

  • Strong Positive: this rating indicates that, according to our analysis, the product or vendor significantly exceeds the average for the market and our expectations for that aspect.
  • Positive: this rating indicates that, according to our analysis, the product or vendor exceeds the average for the market and our expectations for that aspect. 
  • Neutral: this rating indicates that, according to our analysis, the product or vendor is average for the market and our expectations for that aspect. 
  • Weak: this rating indicates that, according to our analysis, the product or vendor is less than the average for the market and our expectations in that aspect. 
  • Critical: this is a special rating with a meaning that is explained where it is used. For example, it may mean that there is a lack of information. Where this rating is given, it is important that a customer considering this product look for more information about the aspect. 

It is important to note that these ratings are not absolute. They are relative to the market and our expectations. Therefore, a product with a strong positive rating could still be lacking in functionality that a customer may need if the market in general is weak in that area. Equally, in a strong market, a product with a weak rating may provide all the functionality a particular customer would need. 

5.1 Axiomatics 

Axiomatics is a privately held company headquartered in Stockholm, Sweden. Founded in 2006, the company is currently a leading provider of dynamic policy-based authorization solutions for applications, databases, and APIs. Despite its relatively small size, Axiomatics serves an impressive number of Fortune 500 companies and government agencies, and actively participates in various standardization activities. Axiomatics is a major contributor to the OASIS XACML (eXtensible Access Control Markup Language) standard, and all their solutions are designed to be 100% XACML-compliant. 

Strengths

  • Database-agnostic approach ensures unified policy application across different databases and big data stores 
  •  100% compliance with the XACML standard
  •  Shares the authorization model with other Axiomatics products for applications, APIs, etc.

Challenges

  • Quite narrow functional focus compared to other products in the rating
  • Relies on third-party components to enforce policies 

The company’s flagship data protection solution is the Dynamic Authorization Suite built around the Axiomatics Policy Server, an enterprise-wide universal Attribute-Based Access Control (ABAC) product. Included in the suite are Axiomatics Data Access Filter MD for managing access to sensitive information in relational databases along with SmartGuard for Big Data frameworks and cloud data stores. 

Implemented as loosely coupled add-ons or proxies, the suite provides policy-based access control defined in standard XACML, as well as dynamic data masking, filtering and activity monitoring, transparently for multiple data sources; it integrates seamlessly with the company’s other access management solutions for applications, APIs and microservices, as well as with third-party products.

The key features of the solution include dynamic context-aware authorization implemented in a vendor-neutral way, flexible access control to sensitive data based on real-time dynamic data filtering, dynamic data masking and filtering for financial, healthcare, pharmaceutical and other types of personal information, and centralized management of access policies across databases, applications, and APIs. 
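The dynamic, attribute-based authorization model described above can be illustrated with a small sketch. Real deployments express such policies in XACML and enforce them via the Axiomatics Policy Server; the simplified permit/deny evaluator below is a hypothetical Python analogue of the ABAC concept, not the product’s actual policy language or API.

```python
# Simplified, hypothetical analogue of attribute-based access control (ABAC):
# access decisions are computed from attributes of the subject and the request
# rather than from static grants. Not Axiomatics' actual API.

def evaluate(policy: list, request: dict) -> str:
    """Return the effect of the first rule whose attributes all match."""
    for rule in policy:
        if all(request.get(attr) == value for attr, value in rule["match"].items()):
            return rule["effect"]
    return "Deny"  # XACML-style default: deny when no rule applies

policy = [
    {"match": {"role": "physician", "department": "cardiology"}, "effect": "Permit"},
    {"match": {"role": "auditor"}, "effect": "Permit"},
]

print(evaluate(policy, {"role": "physician", "department": "cardiology"}))  # Permit
print(evaluate(policy, {"role": "physician", "department": "oncology"}))    # Deny
```

In a database context, a “Permit” decision can additionally be translated into a row filter or a masked column, which is what dynamic data filtering and masking add on top of the plain permit/deny model.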

5.2 comforte AG

comforte AG is a privately held software company based in Wiesbaden, Germany, specializing in data protection and digital payment solutions. The company’s roots can be traced back to 1998, when its founders came to the market with a connectivity solution for HPE NonStop systems – a fault-tolerant, self-healing server platform for critical business applications. Over the years, comforte’s offering has evolved into a comprehensive solution for protecting sensitive business data with encryption and tokenization, tailored specifically for critical use cases that do not allow even minimal downtime.

Strengths

  • Unique hardened, scalable and fault-tolerant architecture for mission-critical use cases 
  • Deployment flexibility; hybrid cloud and as-a-Service scenarios are supported  
  • Broad range of transparent application integration options, support for Big Data and stream processing frameworks 

Challenges

  • Current functionality limited to tokenization and masking
  • Somewhat limited market visibility outside of the financial industry 

A few years ago, comforte AG entered the data-centric security market with its SecurDPS Enterprise solution, which combines the company’s patented stateless tokenization algorithm, a proven highly scalable and fault-tolerant architecture, and flexible access control and policy management, augmented by a broad range of transparent integration options that allow various existing applications to be quickly included in an enterprise-wide deployment without any changes to infrastructure or code. 

The platform’s decentralized and redundant architecture ensures deployment flexibility in any scenario; hybrid cloud and as-a-Service use cases are supported as well. The patented stateless tokenization algorithm supports limitless scaling across heterogeneous environments, and a strong focus on regulatory compliance directly addresses PCI DSS and GDPR requirements. 
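To illustrate the general concept of stateless, format-preserving tokenization (not comforte’s patented algorithm, whose details are not public), the toy sketch below derives a deterministic per-position digit substitution from a key, so tokens keep the length and character class of the original value and can be reversed without a token vault. A real stateless tokenization scheme is considerably stronger; the key name and structure here are purely illustrative.

```python
# Toy illustration of stateless, format-preserving tokenization: token digits
# are a pure function of the key, so no vault is needed and detokenization
# simply inverts the substitution. Far weaker than any production scheme.
import hashlib
import hmac

SECRET = b"demo-key-material"  # illustrative only; real systems protect key material

def _perm(position: int) -> list:
    """Derive a deterministic digit permutation for one position from the key."""
    digest = hmac.new(SECRET, str(position).encode(), hashlib.sha256).digest()
    digits = list(range(10))
    for i in range(9, 0, -1):           # keyed Fisher-Yates shuffle
        j = digest[i] % (i + 1)
        digits[i], digits[j] = digits[j], digits[i]
    return digits

def tokenize(pan: str, keep_last: int = 4) -> str:
    """Substitute leading digits, preserving format and the trailing digits."""
    body, tail = pan[:-keep_last], pan[-keep_last:]
    return "".join(str(_perm(i)[int(d)]) for i, d in enumerate(body)) + tail

def detokenize(token: str, keep_last: int = 4) -> str:
    """Invert the per-position substitution to recover the original value."""
    body, tail = token[:-keep_last], token[-keep_last:]
    return "".join(str(_perm(i).index(int(d))) for i, d in enumerate(body)) + tail

token = tokenize("4111111111111111")
assert detokenize(token) == "4111111111111111" and len(token) == 16
```

Because the token keeps the length and character class of the original value, existing applications and database schemas can process it unchanged, which is what makes transparent integration without code changes possible.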

5.3 Delphix

Delphix is a privately held software development company headquartered in Redwood City, California, USA. It was founded in 2008 with a vision of a dynamic platform for data operators and data consumers within an enterprise to collaborate in a fast, flexible and secure way. With offices across the USA, Europe, Latin America, and Asia, Delphix is currently serving over 300 global enterprise customers including 30% of the Fortune 100 companies. 

Strengths

  • Based on a universal, high-performance and space-efficient data virtualization technology 
  • Support for a broad range of database types and unstructured file systems
  • Transparent data masking and tokenization capabilities 
  • Preconfigured for GDPR compliance  

Challenges

  • Limited data protection capabilities, lack of encryption support 
  • Limited monitoring and analytics functions 

The Delphix Dynamic Data Platform is a software-based data virtualization solution that quickly provisions virtual copies of masked or unmasked data across different IT environments. Delivered as virtual appliances that can be deployed anywhere, the platform offers unified support for on-prem, cloud and hybrid environments. 

Using compression, intelligent data block sharing and other optimizations and offering self-service capabilities and API-driven automation functions, the Delphix platform ensures that data consumers can get access to the data they need as quickly and efficiently as possible, enabling numerous usage scenarios: cloud migration, data analytics, DevOps automation of data delivery, test data management, and even disaster recovery. 

Since the platform is designed to be fully transparent to existing applications and services, it ensures effortless hybrid cloud deployment for new and existing applications. Powerful self-service functions for data consumers enable quick provisioning, refreshing, rewinding, and sharing of data sources in minutes instead of hours, powering the emerging DataOps methodology. Integrated data anonymization features come preconfigured for GDPR compliance. 

5.4 IBM

IBM Corporation is a multinational technology and consulting company headquartered in Armonk, New York, USA. IBM offers a broad range of software solutions and infrastructure, hosting and consulting services in numerous market segments. With over 370 thousand employees and market presence in 160 countries, IBM ranks as one of the world’s largest companies both in terms of size and profitability.

Strengths

  •  Full range of security capabilities for structured and unstructured data 
  • Support for hybrid multi-cloud environments
  • Advanced Big Data and Cognitive Analytics  
  • Nearly unlimited scalability 
  • Integrated ecosystem with IBM’s and 3rd party security, identity and analytics products
  • Massive network of technology partners and resellers

Challenges

  •  Setup and operations may be complicated for some customers 

IBM Security, one of the strategic units of the company, provides a comprehensive portfolio including identity and access management, security intelligence and information protection solutions. The product covered in this rating is IBM Security Guardium – a comprehensive data security platform providing a full range of functions, including discovery and classification, entitlement reporting, data protection, activity monitoring, and advanced data security analytics, across different environments: from file systems to databases and big data platforms to hybrid cloud infrastructures. 

Among the key features of the Guardium platform are discovery, classification, vulnerability assessment and entitlement reporting across heterogeneous data environments; encryption, data redaction and dynamic masking combined with real-time alerting and automated blocking of malicious access; and activity monitoring and advanced security analytics based on machine learning. 

Automated data compliance and audit capabilities with Compliance Accelerators for specific frameworks like PCI, HIPAA, SOX or GDPR ensure that following strict personal data protection guidelines becomes a continuous process, leaving no gaps either for auditors or for malicious actors. 
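The kind of activity monitoring and machine-learning-based analytics described above can be reduced, at its simplest, to baselining normal behavior and flagging deviations. The sketch below is a generic, hypothetical illustration of anomaly detection over per-hour query counts; Guardium’s actual analytics are far more sophisticated, and the threshold and data here are our own assumptions.

```python
# Generic, hypothetical sketch of activity-monitoring analytics: baseline
# normal behavior, then flag statistical outliers. Threshold and data are
# illustrative assumptions, not Guardium's actual method.
from statistics import mean, stdev

def flag_anomalies(counts: list, threshold: float = 2.0) -> list:
    """Return indices of hourly activity counts deviating > threshold sigma."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if sigma and abs(c - mu) / sigma > threshold]

# Hourly query counts for one service account; hour 5 shows a suspicious spike
activity = [12, 9, 11, 10, 13, 480, 12, 11]
print(flag_anomalies(activity))  # [5]
```

In practice, such a flag would feed an alert into a SIEM integration or trigger automated blocking of the offending session.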

5.5 Imperva

Imperva is an American cybersecurity solution company headquartered in Redwood Shores, California. Back in 2002, the company’s first product was a web application firewall, but over the years, Imperva’s portfolio has expanded to include several product lines for data security, cloud security, breach prevention, and infrastructure protection as well. In 2019, Imperva was acquired by private equity firm Thoma Bravo, making it a privately held company and providing a substantial boost in R&D. At the same time, major changes in product licensing were announced, which consolidated a large number of standalone products into a short list of convenient packages called FlexProtect Plans. 

Strengths

  • Convenient licensing plans for comprehensive data protection 
  • Multiple collection methods ensure minimal performance overhead 
  • Advanced security intelligence and behavior analytics 
  • Large number of out-of-the-box workflows and compliance reports 

Challenges

  • No support for data encryption or dynamic masking 

Instead of multiple SecureSphere products for Discovery and Assessment, Activity Monitoring, and Database Firewall, as well as CounterBreach for threat protection and Camouflage for masking, Imperva customers only need to subscribe to a single FlexProtect for Data licensing plan to enable full protection of their sensitive data. 

The new data protection suite offers all the required capabilities: unified protection across relational databases, data warehouses, big data platforms, and mainframes; comprehensive activity monitoring, auditing, and forensic investigation, augmented with advanced security analytics based on behavior profiling; and pre-defined policies, remediation workflows, and hundreds of compliance reports. Integrations with other Imperva security products ensure that this multi-factored data security can be enforced across endpoints, web applications, and cloud services. 

A notable recent addition to Imperva’s portfolio is Cloud Data Security, a new offering that extends discovery, classification and analytics capabilities to database assets in the cloud. Delivered as SaaS, the platform can be deployed and configured in hours, delivering actionable insights for prioritizing threat remediations immediately.

5.6 Oracle

Oracle Corporation is an American multinational information technology company headquartered in Redwood Shores, California. Founded back in 1977, the company has a long history of developing database software and technologies; nowadays, however, Oracle’s portfolio incorporates a large number of products and services ranging from operating systems and development tools to cloud services and business application suites. 

Strengths

  • Autonomous cloud database platform eliminating human administrative access
  • Automated provisioning, upgrades, backup and DR, no downtime 
  • Comprehensive product portfolio for all areas of database security 
  • Deep integration with other Oracle data provisioning, testing, and cloud technologies  

Challenges

  • A number of products are available only for Oracle databases 
  • Big Data and NoSQL products are not yet integrated with RDBMS security solutions  

The breadth of the company’s database security portfolio is impressive: with numerous protection and detection products and managed services covering all aspects of database assessment, protection, monitoring, and compliance, Oracle Database Security can address the most complex customer requirements, both on-premises and in the cloud.

The recently introduced Oracle Autonomous Database, which completely automates the provisioning, management, tuning, and upgrade processes of database instances without any downtime, not only substantially increases the security and compliance of sensitive data stored in Oracle databases but also makes a compelling argument for moving this data to the Oracle cloud.

It’s worth noting that a substantial part of the company’s security capabilities is still specifically designed for Oracle databases only, which makes Oracle’s data protection solutions less suitable for companies using other DB types.  

This strategy seems to be changing slowly, however, as the company is planning to offer more database-agnostic tools in the future. 

5.7 SecuPI

SecuPI is a privately held data-centric security vendor headquartered in Jersey City, NJ, USA. The company was founded in 2014 by entrepreneurs with a strong background in financial technology, also known for co-inventing the very concept of dynamic data masking. After realizing that data masking alone does not solve modern privacy and compliance problems, the company was established with a vision “to do the things the right way”. 

Strengths

  • Integrated data protection and privacy platform with strong focus on GDPR/CCPA 
  • Application-level protection overlays simplify deployment and management 
  • User identity context for more fine-grained policies and monitoring
  • Broad support for big data and EDW platforms 

Challenges

  •  Architecture potentially limits support of less popular or legacy platforms 
  • Small market presence compared to competitors

As opposed to most competitors that encrypt information at the database level, SecuPI’s approach is to embed encryption overlays directly into application stacks. Thus, the solution only needs to support a few major development platforms like Java or .NET instead of numerous distinct data source types. In addition, this approach gives the platform access to real user identities rather than the typical service accounts used to connect to databases. With this technology, SecuPI delivers a single privacy-focused data protection platform for on-prem and cloud-based applications, which is easy to deploy and operate thanks to the centralized management of data protection policies.
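
The general idea of an application-level encryption overlay with user identity context can be sketched in a few lines. This is a toy illustration under invented names (`AppEncryptionOverlay`, `FieldPolicy`) with a stand-in XOR "cipher"; it is not SecuPI's API or implementation, only the concept of encrypting before data reaches the database driver and deciding decryption per real end-user identity:

```python
from dataclasses import dataclass

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Stand-in for real encryption (e.g. AES) to keep the sketch short."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

@dataclass
class FieldPolicy:
    field: str
    allowed_roles: set  # real end-user roles, not DB service accounts

class AppEncryptionOverlay:
    def __init__(self, key: bytes, policies: list):
        self.key = key
        self.policies = {p.field: p for p in policies}

    def write(self, field: str, value: str) -> bytes:
        # Data is encrypted before it ever reaches the database driver.
        return xor_cipher(value.encode(), self.key)

    def read(self, field: str, stored: bytes, user_role: str) -> str:
        policy = self.policies.get(field)
        if policy and user_role not in policy.allowed_roles:
            return "***MASKED***"  # identity-aware decision per real user
        return xor_cipher(stored, self.key).decode()

overlay = AppEncryptionOverlay(
    key=b"demo-key",
    policies=[FieldPolicy("ssn", {"compliance_officer"})],
)
ciphertext = overlay.write("ssn", "123-45-6789")
print(overlay.read("ssn", ciphertext, "support_agent"))       # masked
print(overlay.read("ssn", ciphertext, "compliance_officer"))  # plaintext
```

Because the database only ever sees ciphertext, even a privileged DBA connecting with a service account cannot read the protected fields, which is the design choice the paragraph above describes.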

SecuPI software platform brings data-centric security and compliance closer to application owners and business units, enabling sensitive data discovery, classification, anonymization, and minimization across the whole organization, with centralized policy management along with real-time monitoring of all data flows and user activities without any changes in existing applications and network infrastructures. 

Built-in controls for user consent management, anonymization and other data subject rights (such as the right to be forgotten) ensure that all existing applications can be made compliant with GDPR and similar regulations quickly and without the need to adapt existing database structures.

5.8 Thales

Thales is a leading provider of data protection solutions headquartered in Austin, Texas, USA. With over 40 years of experience in information security, the company is a veteran player in such areas as hardware security modules (HSM), data encryption, key management, and PKI. The company’s modern history began in 2000, when it became a part of Thales Group, an international company based in France that provides solutions and services for the defense, aerospace, and transportation markets. In 2019, Thales completed the acquisition of Gemalto, its largest competitor in the data protection market, thus substantially strengthening both its market position and functional capabilities with new services like Authentication and Access Management. 

Strengths

  • Comprehensive transparent encryption, tokenization and masking capabilities  
  • High-performance thanks to hardware encryption support 
  • Centralized management across all environments, even 3rd party products 
  • Standard APIs for adding encryption support to existing applications

Challenges

  • Primary focus on data protection only, no coverage of other functional areas  

In this rating, we focus primarily on the Vormetric Data Security Platform, a unified data protection platform that gives customers the flexibility, scale, and efficiency to address different security requirements: transparent encryption of entire database environments, privileged user access controls, granular field-level data protection with encryption, tokenization, and data masking, and a single security manager for maximizing value and minimizing the total cost of ownership. 

Notable features of the platform include centralized management of encryption keys and policies across all environments and products, application encryption APIs for embedding transparent encryption into existing apps, and dynamic masking with format-preserving tokenization. Live Data Transformation enables in-place encryption of data without the need to move it elsewhere first; this helps reduce maintenance windows for rotating encryption keys or other scenarios like versioned backups. Tight integrations with storage vendors enable innovative capabilities like efficient storage deduplication of transparently encrypted data. 
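
Format-preserving tokenization, mentioned above, can be illustrated with a toy sketch. The keyed digit substitution below is invented for brevity and is not Vormetric's algorithm – production systems use vetted schemes such as NIST FF1, and reversal requires a token vault or a true format-preserving cipher. It only shows why a format-preserving token can drop into an existing schema (e.g. a 16-digit card column) unchanged:

```python
import hashlib
import hmac

def tokenize_digits(value: str, key: bytes) -> str:
    """Toy format-preserving tokenization: digits map to digits,
    separators keep their place, so length and layout are preserved."""
    out = []
    for i, ch in enumerate(value):
        if ch.isdigit():
            # Derive a keyed offset per position; deterministic, so
            # the same input always yields the same token for lookups.
            mac = hmac.new(key, f"{i}:{ch}".encode(), hashlib.sha256).digest()
            out.append(str((int(ch) + mac[0]) % 10))
        else:
            out.append(ch)  # dashes/spaces pass through untouched
    return "".join(out)

token = tokenize_digits("4111-1111-1111-1111", b"secret-key")
print(token)  # same length and dash layout as the input
assert len(token) == 19 and token[4] == token[9] == token[14] == "-"
```

The point of the format preservation is operational: downstream applications, column types, and validation rules keep working on tokenized data without schema changes.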

6 Vendors to watch 

In addition to the vendors evaluated in detail in this Leadership Compass, there are several companies that for various reasons were unable to participate in the rating but are nevertheless worth mentioning. Some of the vendors below focus primarily on other aspects of information security yet show a notable overlap with the topic of our rating. Others have just entered the market as startups with new and interesting products worth checking out. 

6.1 Dataguise  

Dataguise is a privately held company headquartered in Fremont, CA, United States. Founded in 2007, the company provides a sensitive data governance platform to discover, monitor and protect sensitive data on-premises and in the cloud across multiple data environments. Although the company primarily focuses on Big Data infrastructures, supporting all major Hadoop distributions and many Hadoop-as-a-Service providers, their solution supports traditional databases, as well as file servers and SharePoint. 

From a single dashboard, customers can get a clear overview of all sensitive information stored across the corporate IT systems, understand which data is being protected and which is at risk of exposure, as well as ensure compliance with industry regulations with a full audit trail and real-time alerts. 

6.2 DataSunrise 

DataSunrise is a privately held company based in Seattle, WA, United States. It was founded in 2015 with the goal of developing a next-generation data and database security solution for real-time data protection in heterogeneous environments. 

The company’s solution combines data discovery, activity monitoring, database firewall and dynamic data masking capabilities in a single integrated product. However, the company does not focus on cloud databases only, offering support for a wide range of database and data warehouse vendors. In addition, DataSunrise provides integrations with a number of 3rd party SIEM solutions and other security tools. 
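
Dynamic data masking of the kind such products combine can be sketched as a result-set rewrite between the application and the database: the stored data stays untouched, and rows are redacted on the fly based on who is asking. The rules and role names below are invented for illustration and are not DataSunrise's configuration model:

```python
import re

# Hypothetical per-column masking rules; real products configure these
# per column and per user/group in a policy store.
MASK_RULES = {
    "email": lambda v: re.sub(r"^[^@]+", "****", v),  # ****@example.com
    "card":  lambda v: "*" * (len(v) - 4) + v[-4:],   # keep last 4 digits
}

def mask_row(row: dict, caller_role: str) -> dict:
    """Rewrite one result row depending on the caller's role."""
    if caller_role == "dba_audit":  # privileged reviewers see everything
        return row
    return {col: MASK_RULES.get(col, lambda v: v)(val)
            for col, val in row.items()}

row = {"email": "alice@example.com", "card": "4111111111111111", "city": "Oslo"}
print(mask_row(row, "support"))
print(mask_row(row, "dba_audit"))
```

Because the masking happens in flight, the same table can serve both a support desk that only sees redacted values and an auditor who sees the originals, without duplicating data.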

6.3 DB CyberTech

DB CyberTech (formerly DB Networks) is a privately held database security vendor headquartered in San Diego, CA, United States. Founded in 2009, the company focuses exclusively on database monitoring through non-intrusive deep protocol inspection, database discovery, and artificial intelligence. 

By combining network traffic inspection with machine learning and behavioral analysis, DB CyberTech claims to be able to provide continuous discovery of all databases, analyze interactions between databases and applications, and then identify compromised credentials, database-specific attacks, and other suspicious activities that reveal data breaches and other advanced cyberattacks. 
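
The behavioral approach described above can be illustrated with a minimal sketch: learn the structural templates of SQL statements observed on the wire, then flag statements that match no known template. The normalization and class names are invented for illustration and are far simpler than any real product's models:

```python
import re
from collections import Counter

def normalize(sql: str) -> str:
    """Reduce a statement to its structural template."""
    sql = re.sub(r"'[^']*'", "?", sql)   # string literals -> ?
    sql = re.sub(r"\b\d+\b", "?", sql)   # numeric literals -> ?
    return re.sub(r"\s+", " ", sql).strip().lower()

class QueryBaseline:
    """Learns which query shapes are 'normal' during a training phase."""
    def __init__(self):
        self.templates = Counter()

    def learn(self, sql: str):
        self.templates[normalize(sql)] += 1

    def is_anomalous(self, sql: str) -> bool:
        return normalize(sql) not in self.templates

baseline = QueryBaseline()
baseline.learn("SELECT name FROM users WHERE id = 42")
baseline.learn("SELECT name FROM users WHERE id = 7")

# Same shape, different literal -> matches the learned template.
print(baseline.is_anomalous("SELECT name FROM users WHERE id = 99"))  # False
# A tautology-style injection has a shape never seen in training.
print(baseline.is_anomalous("SELECT * FROM users WHERE '1'='1'"))     # True
```

Replacing literals with placeholders is what lets such a baseline generalize over legitimate parameter variation while still catching structurally new statements, such as injected tautologies.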

6.4 McAfee

McAfee is a veteran American computer security vendor headquartered in Santa Clara, California. Founded in 1987, the company has a long history of developing a broad range of endpoint protection, network, and data security solutions. Between 2011 and 2016, McAfee was a wholly owned subsidiary of Intel. Currently, the company is a joint venture between Intel and the investment firm TPG Capital. 

In the database security market, McAfee offers a number of products that form the McAfee Database Security Suite providing unified database security across physical, virtual, and cloud environments. The suite provides comprehensive functionality in such areas as database and data discovery, activity monitoring, privileged access control, and intrusion detection – all through a non-intrusive network-based architecture.

6.5 Mentis Inc 

MENTIS is a privately held company that has provided sensitive information management solutions since 2004. It is headquartered in New York City, USA. The company offers a comprehensive suite of products for various aspects of discovery, management, and protection of critical data across multiple sources, built on top of a common software platform and delivered as a fully integrated yet flexible solution.

With this platform, MENTIS is able to offer business-focused solutions for such common challenges as GDPR compliance, migration to public clouds and sensitive data management for cross-border operations. The company promises quick and simple deployment for most customers with pre-built controls for data masking, monitoring, auditing and reporting for popular enterprise business applications. 

6.6 Micro Focus 

Micro Focus is a large multinational software vendor and IT consultancy. Originally established in 1976 in Newbury, United Kingdom, the company nowadays has a large global presence and a massive portfolio of products and services for application development and operations management, data management and governance, and, of course, security. In recent years, Micro Focus has grown substantially through a series of acquisitions, and in 2017, it merged with HPE’s software business.

Voltage SecureData Enterprise, the company’s data security platform, provides a comprehensive solution for securing sensitive enterprise data through transparent encryption and pseudonymization across multiple database types and big data platforms, on premises, in the cloud, and on the edge.

6.7 Microsoft

Microsoft is a multinational technology company headquartered in Redmond, Washington, USA. Founded in 1975, it has risen to dominate the personal computer software market with MS-DOS and Microsoft Windows operating systems. Since then, the company has expanded into multiple markets like desktop and server software, consumer electronics and computer hardware, mobile devices, digital services and, of course, the cloud. 

Given its leading position in multiple IT environments – on endpoints, in data centers, and in the public cloud – Microsoft has a unique opportunity to collect vast amounts of security-related telemetry and convert it into security insights and threat intelligence. In recent years, the company has established itself as a notable security solution provider, and even though it does not yet offer specialized database security products, its portfolio in the areas of information protection and security analytics is worth checking out. 

Even more interesting are the recent developments in their SQL Server platform, which focus on the concept of Confidential Computing – performing operations on sensitive data within secured enclaves. Combined with the existing encryption capabilities, this technology enables consistent data protection at any stage: at rest, in transit, and in use. 

6.8 Protegrity

Protegrity is a privately held software vendor from Stamford, CT, USA. Since 1996, the company has been in the enterprise data protection business. Their solutions implement a variety of technologies, including data encryption, masking, tokenization and monitoring across multiple environments – from mainframes to clouds. 

Protegrity Database Protector is a solution for monitoring and securing sensitive information in databases, storage and backup systems with policy-based access controls. Big Data Protector extends this protection to Hadoop-based Big Data platforms – protecting the data both at rest and in transit, as well as in use during various stages of processing. 

Protegrity Data Security Gateway provides transparent protection for data moving between multiple devices, without the need to modify any existing applications or services. 

6.9 Trustwave

Trustwave is a veteran cybersecurity vendor headquartered in Chicago, IL, United States. Since 1995, the company has provided managed security services in such areas as vulnerability management, compliance, and threat protection. 

Trustwave DbProtect is a security platform that provides continuous discovery and inventory of relational databases and big data stores; agentless assessment of each asset for configuration problems, vulnerabilities, dangerous user rights and privileges, and potential compliance violations; and, finally, comprehensive reporting.

The solution’s distributed architecture can meet the scalability demands of large organizations with thousands of data stores. 

7 Methodology 

KuppingerCole Leadership Compass is a tool which provides an overview of a particular IT market segment and identifies the leaders in that market segment. It is the compass which assists you in identifying the vendors and products/services in a particular market segment which you should consider for product decisions. 

It should be noted that it is inadequate to pick vendors based only on the information provided within this report. 

Customers must always define their specific requirements and analyze in greater detail what they need. This report doesn’t provide any recommendations for picking a vendor for a specific customer scenario. This can be done only based on a more thorough and comprehensive analysis of customer requirements and a more detailed mapping of these requirements to product features, i.e. a complete assessment. 

7.1 Types of Leadership 

We look at four types of leaders: 

  • Product Leaders: Product Leaders identify the leading-edge products in a particular market segment. These products deliver to a large extent what we expect from products in that market segment. They are mature.
  • Market Leaders: Market Leaders are vendors which have a large, global customer base and a strong partner network to support their customers. A lack of global presence or breadth of partners can prevent a vendor from becoming a Market Leader. 
  • Innovation Leaders: Innovation Leaders are those vendors which are driving innovation in the market segment. They provide several of the most innovative and upcoming features we hope to see in the market segment. 
  • Overall Leaders: Overall Leaders are identified based on a combined rating, looking at the strength of products, the market presence, and the innovation of vendors. Overall Leaders might have slight weaknesses in some areas but become an Overall Leader by being above average in all areas. 

For every area, we distinguish between three levels of products: 

  • Leaders: This identifies the Leaders as defined above. Leaders are products which are exceptionally strong in particular areas. 
  • Challengers: This level identifies products which are not yet Leaders but have specific strengths which might make them Leaders. Typically, these products are also mature and might be leading-edge when looking at specific use cases and customer requirements. 
  • Followers: This group contains products which lag behind in some areas, such as having a limited feature set or only a regional presence. The best of these products might have specific strengths, making them a good or even the best choice for specific use cases and customer requirements but are of limited value in other situations. 

Our rating is based on a broad range of input and long experience in that market segment. Input consists of experience from KuppingerCole advisory projects, feedback from customers using the products, product documentation, and a questionnaire sent out before creating the KuppingerCole Leadership Compass, as well as other sources. 

7.2 Product rating 

KuppingerCole as an analyst company regularly does evaluations of products/services and vendors. The results are, among other types of publications and services, published in the KuppingerCole Leadership Compass Reports, KuppingerCole Executive Views, KuppingerCole Product Reports, and KuppingerCole Vendor Reports. KuppingerCole uses a standardized rating to provide a quick overview of our perception of the products or vendors. Providing a quick overview of the KuppingerCole rating of products requires an approach combining clarity, accuracy, and completeness of information at a glance. 

KuppingerCole uses the following categories to rate products: 

  • Security
  • Functionality
  • Integration
  • Interoperability
  • Usability

Security – security is measured by the degree of security within the product. Information Security is a key element and requirement in the KuppingerCole IT Model (#70129 Scenario Understanding IT Service and Security Management). Thus, providing a mature approach to security and having a well-defined internal security concept are key factors when evaluating products. Shortcomings such as having no or only a very coarse-grained internal authorization concept are understood as weaknesses in security. Known security vulnerabilities and hacks are also understood as weaknesses. The rating then is based on the severity of such issues and the way vendors deal with them. 

Functionality – this is measured in relation to three factors. One is what the vendor promises to deliver. The second is the status of the industry. The third factor is what KuppingerCole would expect the industry to deliver to meet customer requirements. In mature market segments, the status of the industry and KuppingerCole expectations usually are virtually the same. In emerging markets, they might differ significantly, with no single vendor meeting the expectations of KuppingerCole, thus leading to relatively low ratings for all products in that market segment. Not providing what customers can expect on average from vendors in a market segment usually leads to a degradation of the rating, unless the product provides other features or uses another approach which appears to provide customer benefits.

Integration – integration is measured by the degree to which the vendor has integrated the individual technologies or products in their portfolio. Thus, when we use the term integration, we are referring to the extent to which products interoperate with themselves. This detail can be uncovered by looking at what an administrator is required to do in the deployment, operation, management, and discontinuation of the product. The degree of integration is then directly related to how much overhead this process requires. For example: if each product maintains its own set of names and passwords for every person involved, it is not well integrated. 

And if products use different databases or different administration tools with inconsistent user interfaces, they are not well integrated. On the other hand, if a single name and password can allow the admin to deal with all aspects of the product suite, then a better level of integration has been achieved.

Interoperability – interoperability can also have many meanings. We use the term “interoperability” to refer to the ability of a product to work with other vendors’ products, standards, or technologies. In this context, it means the degree to which the vendor has integrated the individual products or technologies with other products or standards that are important outside of the product family. Extensibility is part of this and is measured by the degree to which a vendor allows its technologies and products to be extended for the purposes of its constituents. We think extensibility is so important that it is given equal status so as to ensure its importance and understanding by both the vendor and the customer. As we move forward, just providing good documentation is inadequate. We are moving to an era when acceptable extensibility will require programmatic access through a well-documented and secure set of APIs. Refer to the Open API Economy Document (#70352 Advisory Note: The Open API Economy) for more information about the nature and state of extensibility and interoperability.

Usability – this is measured by the degree to which the vendor makes its technologies and products accessible to its constituencies. This typically addresses two aspects of usability – the end-user view and the administrator view. Sometimes just good documentation can create adequate accessibility. However, we have strong expectations overall regarding well-integrated user interfaces and a high degree of consistency across user interfaces of a product or different products of a vendor. We also expect vendors to follow common, established approaches to user interface design. 

We focus on security, functionality, integration, interoperability, and usability for the following key reasons: 

  • Increased People Participation—Human participation in systems at any level is the highest area of cost and potential breakdown for any IT endeavor. 
  • Lack of Security, Functionality, Integration, Interoperability, and Usability—Lack of excellence in any of these areas will only result in increased human participation in deploying and maintaining IT systems. 
  • Increased Identity and Security Exposure to Failure—Increased People Participation and Lack of Security, Functionality, Integration, Interoperability, and Usability not only significantly increases costs, but inevitably leads to mistakes and breakdowns. This will create openings for attack and failure. 

Thus, when KuppingerCole evaluates a set of technologies or products from a given vendor, the degree of product Security, Functionality, Integration, Interoperability, and Usability which the vendor has provided are of the highest importance. This is because the lack of excellence in any or all areas will lead to inevitable identity and security breakdowns and weak infrastructure. 

7.3 Vendor rating 

For vendors, additional ratings are used as part of the vendor evaluation. The specific areas we rate for vendors are: 

  • Innovativeness 
  • Market position 
  • Financial strength 
  • Ecosystem

Innovativeness – this is measured as the capability to drive innovation in a direction which aligns with the KuppingerCole understanding of the market segment(s) the vendor is in. Innovation has no value by itself but needs to provide clear benefits to the customer. However, being innovative is an important factor for trust in vendors, because innovative vendors are more likely to remain leading-edge. An important element of this dimension of the KuppingerCole ratings is the support of standardization initiatives if applicable. Driving innovation without standardization frequently leads to lock-in scenarios. Thus, active participation in standardization initiatives adds to the positive rating of innovativeness. 

Market position – measures the position the vendor has in the market or the relevant market segments. This is an average rating across all markets in which a vendor is active; e.g., being weak in one segment doesn’t lead to a very low overall rating. This factor considers the vendor’s presence in major markets.

Financial strength – even while KuppingerCole doesn’t consider size to be a value by itself, financial strength is an important factor for customers when making decisions. In general, publicly available financial information is an important factor therein. Companies which are venture-financed are in general more likely to become an acquisition target, with massive risks for the execution of the vendor’s roadmap. 

Ecosystem – this dimension looks at the ecosystem of the vendor. It focuses mainly on the partner base of a vendor and the approach the vendor takes to act as a “good citizen” in heterogeneous IT environments. 

Again, please note that in KuppingerCole Leadership Compass documents, most of these ratings apply to the specific product and market segment covered in the analysis, not to the overall rating of the vendor. 

7.4 Rating scale for products and vendors 

For vendors and product feature areas, we use – beyond the Leadership rating in the various categories – a separate rating with five different levels. These levels are 

  • Strong positive – Outstanding support for the feature area, e.g. product functionality, or outstanding position of the company, e.g. for financial stability. 
  • Positive – Strong support for a feature area or strong position of the company, but with some minor gaps or shortcomings. E.g. for security, this can indicate some gaps in fine-grain control of administrative entitlements. E.g. for market reach, it can indicate the global reach of a partner network, but a rather small number of partners. 
  • Neutral – Acceptable support for feature areas or acceptable position of the company, but with several requirements we set for these areas not being met. E.g. for functionality, this can indicate that some of the major feature areas we are looking for aren’t met, while others are well served. For company ratings, it can indicate, e.g., a regional-only presence. 
  • Weak – Below-average capabilities in the product ratings or significant challenges in the company ratings, such as a very small partner ecosystem. 
  • Critical – Major weaknesses in various areas. This rating most commonly applies to company ratings for the market position or financial strength, indicating that vendors are very small and have a very low number of customers. 

7.5 Spider graphs 

In addition to the ratings for our standard categories such as Product Leadership and Innovation Leadership, we add a spider graph for every vendor we rate, looking at specific capabilities for the market segment researched in the respective Leadership Compass. For the field of Database and Big Data Security, we look at the following eight areas: 

  • Vulnerability assessment – Discovering known vulnerabilities in database products, providing complete visibility into complex database infrastructures, detecting misconfigurations and the means for assessing and mitigating these risks. 
  • Discovery & Classification – Crucial first step in defining proper security policies for different data depending on their criticality and compliance requirements. 
  • Data-centric Security – Data encryption at rest and in transit (and in use wherever available), static and dynamic data masking and other technologies for protecting data integrity and confidentiality. 
  • Monitoring & Analytics – Monitoring of database performance characteristics, complete visibility for all access and administrative actions for each instance, including alerting and reporting functions, advanced real-time analytics, anomaly detection, and SIEM integration. 
  • Threat Prevention – Various methods of protection from cyber-attacks such as denial-of-service or SQL injection, mitigation of unpatched vulnerabilities, and other infrastructure-specific security measures. 
  • Access Management – Access controls for database instances, dynamic policy-based access management, identifying and removing excessive user privileges, managing shared and service accounts, detection, and blocking of suspicious user activities. 
  • Audit & Compliance – Advanced auditing mechanisms beyond native capabilities, centralized auditing and reporting across multiple database environments, enforcing separation of duties, forensic analysis, and compliance audits. 
  • Performance & Scalability – Ability to withstand high loads, minimize performance overhead and to support deployments in high availability configurations.

These spider graphs add an extra level of information by showing the areas where products are stronger or weaker. Some products show gaps in certain areas while being strong in others; these might be a good fit if only specific features are required. Given the breadth and complexity of the full scope of database security, only a very few of the largest vendors have enough resources to offer solutions that cover all of these areas; thus, we do not recommend overlooking smaller, more specialized products – often they may provide a substantially better return on investment. 

7.6 Inclusion and exclusion of vendors 

KuppingerCole tries to include all vendors within a specific market segment in their Leadership Compass documents. The scope of the document is global coverage, including vendors which are only active in regional markets such as Germany, Russia, or the US. 

However, there might be vendors which don’t appear in a Leadership Compass document due to various reasons: 

  • Limited market visibility: There might be vendors and products which are not on our radar yet, despite our continuous market research and work with advisory customers. This usually is a clear indicator of a lack of Market Leadership. 
  • Denial of participation: Vendors might decide not to participate in our evaluation and refuse to become part of the Leadership Compass document. KuppingerCole tends to include their products anyway as long as sufficient information for evaluation is available, thus providing a comprehensive overview of leaders in the particular market segment. 
  • Lack of information supply: Products of vendors which don’t provide the information we have requested for the Leadership Compass document will not appear in the document unless we have access to sufficient information from other sources.
  • Borderline classification: Some products might have only a small overlap with the market segment we are analyzing. In these cases, we might decide not to include the product in that KuppingerCole Leadership Compass. 

Despite our effort to cover most aspects of database and big data security in this Leadership Compass, we are not planning to review the following products: 

  • Solutions that primarily focus on unstructured data protection having limited or no database-related capabilities; 
  • Security tools that cover general aspects of information security (such as firewalls or anti-malware products) but do not offer functionality specifically tailored for data protection; 
  • Compliance or risk management solutions that focus on organizational aspects (checklists, reports, etc.) 

The goal is to provide a comprehensive view of the products in a market segment. KuppingerCole will provide regular updates to its Leadership Compass documents. 

We provide a quick overview of vendors not covered and their offerings in the chapter Vendors to watch. In that chapter, we also look at some other interesting offerings around the Database and Big Data Security market and in related market segments. 

Data security challenges in a hybrid multicloud world

Deploying in a hybrid, multicloud environment

Let’s face it, cloud computing is evolving at a rapid pace. Today, there’s a range of choices for moving applications and data to the cloud, spanning deployment models from public and private to hybrid cloud service types. As part of a broader digital strategy, organizations are seeking ways to utilize multiple clouds. With a multicloud approach, companies can avoid vendor lock-in and take advantage of best-of-breed technologies, such as artificial intelligence (AI) and blockchain. The business benefits are clear: improved flexibility and agility, lower costs, and faster time to market. According to an IBM Institute for Business Value survey of 1,106 business and technology executives, 85% of organizations are already operating multicloud environments, and 98% plan to use multiple hybrid clouds by 2021. However, only 41% have a multicloud management strategy in place.1 When it comes to choosing cloud solutions, there’s a plethora of options available. It’s helpful to look at the differences between the various types of cloud deployment and cloud service models.

Understanding cloud deployment models

Over the past decade, cloud computing has matured in several ways and has become a tool for digital transformation worldwide. Generally, clouds take one of three deployment models: public, private or hybrid.

Public cloud

A public cloud is one whose services are delivered over the public internet. The cloud provider fully owns, manages and maintains the infrastructure and rents it to customers based on usage or a periodic subscription; examples include Amazon Web Services (AWS) and Microsoft Azure.

Private cloud

In a private cloud model, the cloud infrastructure and the resources are deployed on premises for a single organization, whether managed internally or by a third party. With private clouds, organizations control the entire software stack, as well as the underlying platform, from hardware infrastructure to metering tools.

Hybrid cloud

A hybrid cloud offers the best of both worlds. It connects a company’s private cloud and third-party public cloud into a single infrastructure on which the company runs its applications and workloads. Using the hybrid cloud model, organizations can run sensitive and highly regulated workloads on private cloud infrastructure and run less sensitive, temporary workloads on the public cloud. However, moving applications and data beyond firewalls to the cloud exposes them to risk. Whether your data is in a private cloud or a hybrid environment, data security and protection controls must be in place to protect data and meet government and industry compliance requirements.

Types of cloud service models

Data security differs based on the cloud service model being used. There are four main categories of cloud service models: infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS), and database as a service (DBaaS), which is a flavor of PaaS. IaaS allows organizations to run their existing software, middleware platforms and business applications on infrastructure provided and managed by the service provider. Organizations benefit from this approach when they want to quickly take advantage of the cloud while minimizing impact and using existing investments. PaaS allows companies to use the infrastructure, as well as middleware or software, provided and managed by the service provider. This flexibility removes a significant burden on a company from an IT perspective and allows it to focus on developing innovative business applications.

DBaaS solutions are database environments hosted and fully managed by a cloud provider. For example, a firm might subscribe to Amazon RDS for MySQL or Microsoft Azure SQL Database. SaaS is a service model that outsources all IT and allows organizations to focus more on their core strengths instead of spending time and investment on technology. In this cloud service model, a service provider hosts applications and makes them available to end users. With each step, from IaaS to PaaS to DBaaS to SaaS, organizations give up some level of control over the systems that store, manage, distribute and protect their sensitive data. This increase in trust placed in third parties also presents an increase in risk to data security. Regardless of the chosen architecture, it’s ultimately your organization’s responsibility to ensure that appropriate data security measures are in place across environments.

Data security challenges to your cloud environment

Chances are, you’re already on your journey to the cloud. If your organization is like the vast number of businesses, your sensitive data resides in locations you can’t control and is managed by third parties that may have unfettered access. Research by the Ponemon Institute has found that insider threats are significantly increasing in frequency and cost. According to the institute’s findings, “the average global cost of insider threats rose by 31 percent in two years to $11.45 million and the frequency of incidents spiked by 47 percent in the same time period.”4 The surveyed organizations had a global head count of 1,000 or more employees.

Determining how best to store data is one of the most important decisions an organization can make. The cloud is well-suited for long-term, enterprise-level data storage that allows organizations to benefit from massive economies of scale, which translates into lower expenses. And, this feature often makes cloud-based data centers a smarter place to store business-critical information than a stack of servers down the hall. 

Even as the expense of acquiring storage drops, it can be expensive in the long term due to increased business use and the number of personnel managing the storage systems. However, while putting data storage in the hands of third-party service providers can help save money and time, it can also pose serious security challenges and create new levels of risk.

Cloud deployments work on a shared responsibility model between the cloud provider and the consumer. In the case of an IaaS model, the cloud consumer has room to implement data security measures much like what they would normally deploy on premises and exercise tighter controls. 

On the other hand, for SaaS services, cloud consumers for the most part have to rely on the visibility provided by the cloud provider which, in essence, limits their ability to exercise more granular controls. 

It’s important to understand that whatever your deployment model or cloud service type, data security must be a priority. What’s of great concern is that your sensitive data now sits in many places, both within your company’s walls and outside of them. And, your security controls need to go wherever your data goes. 

Keep your sensitive data safe essentially everywhere 

Who has access to sensitive data in your organization? How sure are you that your staff or privileged users haven’t inappropriately accessed sensitive customer data?

Put simply, you can’t protect what you don’t know you have. Simply locking down network access may not serve the purpose; after all, employees rely on the network to access and share data. This access means that the effectiveness of your data security is largely in the hands of your employees, some of whom may no longer work directly for your company but still maintain access. Automated discovery, classification and monitoring of your sensitive data across platforms is crucial to enforce effective, in-context security policies and to help address compliance with regulations. 
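In practice, automated discovery works by scanning data stores for patterns that indicate sensitive fields. The following Python sketch illustrates the idea; the patterns and category names are simplified assumptions for illustration, not a production classifier, which would add checksums, dictionaries and contextual analysis.

```python
import re

# Illustrative patterns for common sensitive data types (assumptions,
# deliberately simplified; real classifiers are far more sophisticated).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> set[str]:
    """Return the set of sensitive-data categories found in a text sample."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

sample = "Contact jane@example.com, SSN 123-45-6789."
print(classify(sample))  # finds both 'email' and 'ssn'
```

A real deployment would run such classification continuously across databases, file shares and cloud stores, feeding the results into access policies and compliance reports.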

Generally, in cloud environments, cloud service providers (CSPs) have the ability to access your sensitive data, which makes CSPs a new frontier in insider threats. Additionally, cybercriminals know that CSPs store vast amounts of important data, making such environments prime targets for attacks. To counteract these threats, sophisticated analytics-based tools that verify authorized and normal access must be utilized.

Consider encryption for cloud storage

With cloud storage, your data may move to a different place, on a different media, than its location today. The same is true of virtualization. Not only cloud-based data, but also cloud-based computing resources might shift rapidly in terms of both location and hardware underpinnings. The shifting nature of the cloud means that your security approach needs to address different kinds of cloud-based storage. Your approach also must account for copies, whether long-term backups or temporary copies, created during data movement. 

To address these challenges, you should deploy cross-platform solutions and employ strong encryption to help ensure that your data is unusable to unauthorized persons in the event that it’s mishandled. 

Even if your data is not primarily stored in the cloud, both the form in which data leaves and returns to your enterprise and the route data takes are important concerns. Data is only as secure as the weakest link in the processing chain. So, even if data is primarily kept encrypted and behind a firewall onsite, if it’s transmitted to an offsite backup or for third-party processing, the data may be exposed.

Malware detection or behavioral analysis that’s designed to spot suspicious activities can help prevent an internal or external data breach—and serve valuable functions in their own right. 

Encryption, however, helps protect data wherever it exists, whether it’s at rest or in motion.

Organizational challenges to your cloud environment

With data growing at an exponential rate, organizations are facing a growing list of data protection laws and regulations. What’s at risk? Customers’ personal information, such as payment card information, addresses, phone numbers and social security numbers, to name a few. To build an effective security solution, organizations should adopt a risk-based approach to protecting customer data across environments. 

Here are five challenges that could impact your organization’s security posture: 

  • Ensuring compliance 
  • Assuring privacy 
  • Improving productivity 
  • Monitoring access controls 
  • Addressing vulnerabilities

IBM Security™ Guardium® data protection platform is designed to help your organization meet these challenges with smarter data protection capabilities across environments. 

Keep up with compliance

The realities of cloud-based storage and computing mean that your sensitive data across hybrid multicloud systems could be subject to industry and government regulations. 

If your data is in a public cloud, you must be aware of how the CSP plans to protect your sensitive data. For example, under the European Union (EU) General Data Protection Regulation (GDPR), information that reveals a person’s racial or ethnic origin is considered sensitive and could be subject to specific processing conditions.5 These requirements apply even to companies located in other regions of the world that hold and access the personal data of EU residents.

Understanding where an organization’s data resides, what types of information it consists of, and how these relate across the enterprise can help business leaders define the right policies for securing and encrypting their data.

Additionally, it could also help with demonstrating compliance with regulations, such as:

  • Sarbanes-Oxley (SOX) 
  • Payment Card Industry Data Security Standard (PCI DSS) 
  • Security Content Automation Protocol (SCAP) 
  • Federal Information Security Management Act (FISMA) 
  • Health Information Technology for Economic and Clinical Health Act (HITECH) 
  • Health Insurance Portability and Accountability Act (HIPAA) 
  • California Consumer Privacy Act (CCPA) 

IBM Security Guardium solutions are designed to monitor and audit data activity across databases, files, cloud deployments, mainframe environments, big data repositories, and containers. The process is streamlined with automation, thus lowering the costs and time of meeting compliance requirements.

Address privacy issues

With the proliferation of smartphones, tablets and smart watches, managing access controls and privacy can become a daunting task. One of the challenges for security administrators is ensuring that only individuals with a valid business reason have access to personal information. For example, physicians should have access to sensitive information, such as a patient’s symptoms and prognosis data, whereas a billing clerk only needs the patient’s insurance number and billing address.
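The physician-versus-billing-clerk example amounts to a role-to-field policy enforced on every read. A minimal Python sketch of the idea follows; the role names, field names and policy shape are illustrative assumptions, not any particular product's access model.

```python
# Illustrative least-privilege policy: each role sees only the patient
# fields it has a valid business reason to access (assumed field names).
POLICY = {
    "physician": {"symptoms", "prognosis"},
    "billing_clerk": {"insurance_id", "billing_address"},
}

def visible_fields(role: str, record: dict) -> dict:
    """Filter a patient record down to the fields the given role may see."""
    allowed = POLICY.get(role, set())  # unknown roles see nothing
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "symptoms": "fatigue",
    "prognosis": "good",
    "insurance_id": "INS-42",
    "billing_address": "10 Main St",
}
print(visible_fields("billing_clerk", record))
# → {'insurance_id': 'INS-42', 'billing_address': '10 Main St'}
```

Defaulting unknown roles to an empty field set keeps the policy fail-closed, which is the safer direction for access-control errors.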

Your customers expect you to make their privacy a priority. Start with developing a privacy policy, describing the information you collect about your customers and what you intend to do with it.

IBM Security Guardium Insights provides security teams with risk-based views and alerts, as well as advanced analytics based on proprietary machine learning (ML) technology to help them uncover hidden threats within large volumes of data across hybrid environments.

Hear from Kevin Baker, Chief Information Security Officer at Westfield, on the data privacy challenges facing his organization, and his approach to addressing them through the necessary insights and automation while scaling to support innovation with IBM Security Guardium Insights. 

Improve productivity

Security and privacy policies should enable and enhance, not interfere with business operations. Policies should be built into everyday operations and work seamlessly within and across all environments—in private, public, on-premises and hybrid environments—without impacting your productivity. For example, when private clouds are deployed to facilitate application testing, consider using encryption or tokenization to mitigate the risk of exposing that sensitive data.
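Tokenization of the kind suggested above can be as simple as replacing each sensitive value with a deterministic keyed digest, so test runs stay repeatable without exposing real values. A hedged sketch follows; the key handling and token format are illustrative assumptions (a real deployment would fetch the key from a key manager, never hard-code it).

```python
import hashlib
import hmac

def tokenize(value: str, key: bytes) -> str:
    """Replace a sensitive value with a deterministic, non-reversible token.

    The same (value, key) pair always yields the same token, so joins and
    test assertions keep working; without the key, the original value
    cannot be recovered from the token.
    """
    digest = hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"tok_{digest[:16]}"  # assumed token format, for illustration

# In production this key would live in a key manager, not in source code.
key = b"demo-key-kept-in-a-real-key-manager"
t1 = tokenize("4111 1111 1111 1111", key)
t2 = tokenize("4111 1111 1111 1111", key)
assert t1 == t2                    # deterministic: same input, same token
assert not t1.startswith("4111")   # original card number is not exposed
```

Because the mapping is deterministic, tokenized test databases preserve referential integrity across tables while keeping the underlying values out of the test environment.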

IBM® Guardium solutions can help your security teams monitor user activity and respond to threats in real time. This process is streamlined with automated and centralized controls, thus reducing the time spent on investigations and empowering database administrators and data privacy specialists to make more informed decisions. 

According to Ponemon Institute, IBM Guardium solutions can help make IT security teams more efficient.7 Prior to deploying the Guardium solution, about 61% of the surveyed IT security teams’ time was spent identifying and remediating data security issues. Post deployment, the average percentage of time spent on such activities was 40%, a decrease of 42%.

Monitor access controls

The lifecycle of a data breach is getting longer, states a study by the Ponemon Institute. In fact, the institute’s research found that 49% of the data breaches studied were due to human error, including system glitches and “inadvertent insiders” who may be compromised by phishing attacks or have their devices infected, lost or stolen. 

Cybercriminals range from individuals to state-sponsored hackers with disruptive intentions. They could be rogue computer scientists trying to show off or make a political statement, or tough, organized intruders. They could be disgruntled employees or even foreign state-sponsored hackers who want to collect intelligence from government organizations.

Breaches can also be accidental, resulting from stolen credentials, human error or misconfigurations, for example, when permissions are set incorrectly on a database table, or when an employee’s credentials are compromised. One way to avoid this issue is to authorize both privileged and ordinary end users with the “least possible privilege” to minimize abuse of privileges and errors. Organizations should protect data from both internal and external attacks in physical, virtual and private cloud environments.

Perimeter defenses are important, but what’s more important is protecting the sensitive data wherever it resides. This way, if the perimeter is breached, sensitive data will remain secure and unusable to a thief. Declining perimeters make protection of data at its source crucial.

A layered data security solution can help administrators examine data access patterns and privileged user behaviors to understand what’s happening inside their private cloud environment. The challenge is to implement security solutions without hampering the business’s ability to grow and adapt, while providing appropriate access and data protections to ensure data is managed on a need-to-know basis, wherever it resides. 

Address vulnerability assessments

When it comes to defending against attackers, what worked in the past may not work today. Many organizations rely on diverse security technologies that could be operating in silos. According to a study by Forrester Consulting, on average, organizations are managing 25 different security products or services from 13 vendors.

The number of data repository vulnerabilities is vast, and criminals can exploit even the smallest window of opportunity. These vulnerabilities include missing patches, misconfigurations, and default system settings that leave exactly the gaps cybercriminals are looking for. This complexity is increasingly difficult to keep track of and manage as data repositories become virtualized. 

Furthermore, companies that move to cloud often struggle to evolve their data security practices in a way that enables them to protect sensitive data while enjoying the benefits of the cloud. The more cloud services your organization uses, the more control you may need to manage the different environments. 

Think about the use of homegrown tools that are in place today for data security. Will the homegrown tools you’re using today work tomorrow? For example, with data-masking routines or database activity monitoring scripts, will there be coding changes required to make them work on a virtual database? Chances are that a significant investment will be required to update these homegrown solutions. In short, organizations need a data-centric approach to security wherein security strategies are built into the fabric of their hybrid, multicloud environments. 

Unlike a point solution, IBM Security Guardium Insights supports heterogeneous integration with other industry-leading security solutions. Guardium data protection also provides best-of-breed integration with IBM Security solutions, such as IBM QRadar® SIEM for proactive data protection.

A smarter data security approach

As cloud matures and scales rapidly, we must realize that effective data security isn’t a sprint, but a marathon—an ongoing process that continues through the life of data.

While there’s no one-size-fits-all approach for data security, it’s crucial that organizations look to centralize data security and protection controls that can work well together. This approach can help security teams improve visibility and control over data across the enterprise and cloud.

What constitutes an effective cloud security strategy?

  • Discover and classify your structured and unstructured sensitive data, online and offline, regardless of where it resides, and identify sensitive IP and data that’s subject to regulations, such as PCI, HIPAA, Lei Geral de Proteção de Dados (LGPD), CCPA, and GDPR.
  • Assess risk with contextual insights and analytics. How is your critical data being protected? Are access entitlements in accordance with industry and regulatory requirements? Is the data vulnerable to unauthorized access and security risks based on a lack of protection controls?
  • Protect sensitive data sources based on a deep understanding of what data you have and who has and should have access to it. Protection controls must accommodate the different data types and user profiles within your environment. Flexible access policies, data encryption and encryption key management should help keep your sensitive data protected.
  • Monitor data access and usage patterns to quickly uncover suspicious activity. Once the appropriate controls are in place, you need to be quickly alerted to suspicious activities and deviations from data access and usage policies. You must also be able to centrally visualize your data security and compliance posture across multiple data environments without relying on multiple, disjointed consoles. 
  • Respond to threats in real time. Once alerted to potential vulnerabilities and risk, you need the ability to respond quickly. Actions can include blocking and quarantining suspicious activity, suspending or shutting down user sessions or data access, and sending actionable alerts to IT security and operations systems. 
  • Simplify compliance and its reporting. You need to be able to demonstrate data security and compliance to both internal and external parties and make appropriate modifications based on results. Demonstrating compliance with regulatory mandates often requires storing and reporting on years’ worth of data security and audit data. Data security and compliance reporting must be comprehensive, accounting for your entire data environment.
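The monitor-and-respond steps above reduce, at their simplest, to comparing observed access against a per-user baseline and alerting on deviations. The Python sketch below shows the shape of such a check; the event format, baseline and threshold factor are illustrative assumptions, and real analytics engines add far richer context (time of day, data sensitivity, peer-group comparison).

```python
from collections import Counter

def flag_anomalies(events, baseline, factor=3):
    """Flag users whose access count exceeds `factor` times their baseline.

    `events` is an iterable of (user, table) access records; `baseline`
    maps user -> typical accesses per period. Users absent from the
    baseline are assumed to have a baseline of 1 (i.e. fail toward alerting).
    """
    counts = Counter(user for user, _ in events)
    return sorted(
        user for user, n in counts.items()
        if n > factor * baseline.get(user, 1)
    )

# mallory reads fifty records in a period where five is typical
events = [("alice", "patients")] * 2 + [("mallory", "patients")] * 50
baseline = {"alice": 5, "mallory": 5}
print(flag_anomalies(events, baseline))  # → ['mallory']
```

An alert produced this way would then feed the response step: blocking or quarantining the session and notifying security operations.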

Encrypt data in hybrid, multicloud environments

Since we can no longer rely on the perimeter to secure an organization’s sensitive data, it’s crucial for today’s business leaders to wrap the data itself in protection. IBM Security Guardium Data Encryption is a suite of modular, integrated and highly scalable encryption, tokenization, access management, and encryption key management solutions that can be deployed essentially across all environments. These solutions encode your sensitive information and provide granular control over who has the ability to decode it.

Strong encryption is a common answer to the challenge of securing sensitive data wherever it resides. However, encryption raises complicated issues of portability and access assurance. Data is only as good as the security and reliability of the keys that protect it. How are keys backed up? Can data be transparently moved among cloud providers, or shared between cloud-based and local storage? 

IBM Security Guardium Key Lifecycle Manager can help customers who require more stringent data protection. The solution offers security-rich, robust key storage, key serving and key lifecycle management for IBM and non-IBM storage solutions using the OASIS Key Management Interoperability Protocol (KMIP). With centralized management of encryption keys, organizations will be able to meet regulations, such as the PCI DSS, SOX and HIPAA.
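At its core, key lifecycle management is bookkeeping over key versions and their states: new keys are generated, older ones are deactivated but retained so existing ciphertext can still be decrypted. The toy Python sketch below illustrates only that bookkeeping; the class, states and API are illustrative assumptions, not the KMIP protocol, and a real key manager adds access control, hardware-backed storage, auditing and backup.

```python
import secrets

class KeyStore:
    """Toy key-lifecycle store: generate, rotate and retire key versions."""

    def __init__(self):
        self.versions = []  # list of (key_bytes, state) tuples

    def rotate(self) -> int:
        """Create a new active key, deactivating the previous one.

        Deactivated keys remain available so data encrypted under them
        can still be decrypted and re-encrypted under the active key.
        """
        if self.versions:
            key, _ = self.versions[-1]
            self.versions[-1] = (key, "deactivated")
        self.versions.append((secrets.token_bytes(32), "active"))
        return len(self.versions) - 1  # version id of the new key

    def active_key(self) -> bytes:
        """Return the key material of the current active version."""
        return self.versions[-1][0]

store = KeyStore()
v0 = store.rotate()
v1 = store.rotate()
assert store.versions[v0][1] == "deactivated"
assert store.versions[v1][1] == "active"
```

Keeping retired versions around is what allows the portability questions above (backups, migration between clouds) to be answered without re-encrypting everything on every rotation.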

IBM Security Guardium platform was named a Leader in the Forrester Wave: Data Security Portfolio Vendors, Q2 2019. According to the report, the Guardium platform is a “good fit for buyers seeking to centrally reduce and manage data risks across disparate database environments.”

Discover a new approach to data security

At the core of protecting a hybrid, multicloud environment is the need for organizations to adopt solutions that offer maximum visibility and business continuity and help them meet compliance requirements and maintain customer trust. 

IBM Security Guardium platform is centered on the overarching value proposition of a “smarter and more adaptive approach” to data security. Further, the solution supports a wide array of cloud environments, including private and public clouds, across PaaS, IaaS, and SaaS environments, for continuous operations and security. 

The Ponemon Institute conducted a survey of organizations that use the Guardium solution to monitor and defend their company’s data and databases. It found that 86% of respondents said the ability to use the Guardium solution to manage data risk across complex IT environments, such as a multicloud or hybrid cloud ecosystem, is very valuable. Similarly, ML and automation are seen as significant benefits in managing data risks across the enterprise.

With the Guardium solution, your security team can choose the system architecture that works for your enterprise. For example, your team can deploy all of the Guardium components in the cloud, or choose to keep some of those components, such as a central manager, on premises. This flexibility allows existing customers to easily extend their data protection strategy to the cloud without impacting existing deployments.