IBM Modern Integration Field Guide

What are IBM Cloud Paks?

Beyond containers and Kubernetes, you need to orchestrate your production topology and provide management, security and governance for your applications. IBM Cloud Paks are enterprise-ready, containerized software solutions that run on Red Hat® OpenShift® on IBM Cloud™ and Red Hat Enterprise Linux. Built on a common integration layer, IBM Cloud Paks include containerized IBM middleware and common software services for development and management.

  • IBM Cloud Pak™ for Applications. Quickly build cloud-native apps by leveraging built-in developer tools and processes, including support for microservices functions and serverless computing. 
  • IBM Cloud Pak™ for Data. Simplify the collection, organization, and analysis of data. Turn data into insights through an integrated catalog of IBM, open source, and third-party microservices add-ons. 
  • IBM Cloud Pak™ for Integration. Achieve the speed, flexibility, security, and scale required for all of your integration and digital transformation initiatives, including API lifecycle, application and data integration, messaging and events, high-speed transfer, and integration security.
  • IBM Cloud Pak™ for Automation. Deploy on your choice of clouds, with low-code tools for business users and real-time performance visibility for business managers. Migrate your automation runtimes without application changes or data migration. Automate at scale without vendor lock-in.
  • IBM Cloud Pak™ for Multicloud Management. Gain consistent visibility, automation, and governance across a wide range of hybrid, multicloud management capabilities, including integration with existing tools and processes.
  • IBM Cloud Pak™ for Security. Integrate security tools to gain insights into threats across hybrid, multicloud environments.

IBM Cloud Pak for Integration 

Building integrated solutions requires you to use more than one integration pattern at a time. Simplify the management of your integration architecture and reduce cost. Running on Red Hat OpenShift, IBM Cloud Pak for Integration gives you the agility to deploy workloads on-premises and on private and public clouds. 

  • API lifecycle management. Create, secure, manage, share, and monetize APIs across clouds while you maintain continuous availability. 
  • Application and data integration. Integrate your business data and applications quickly and easily across any cloud system.
  • Enterprise messaging. Simplify, accelerate, and facilitate the reliable exchange of data with a trusted, flexible, and security-rich messaging solution. 
  • Event streaming. Use Apache Kafka to deliver messages more easily and reliably and to react to events in real time. 
  • High-speed data transfer. Reliably send, share, stream, and sync large files and data sets at maximum speed. 
  • Platform-level security, automation, and monitoring. Quickly set up and manage gateways, control access on a per resource basis, deploy your integration flows, and monitor all of your traffic.

API lifecycle management 

Bridge the gap between cloud and on-premises applications quickly and easily by abstracting your back-end implementation as APIs. One of the best ways to do this is to expose services as APIs for external consumption and let the consuming applications compose the integration logic.

  • Expand. Provide a standard API interface. Include global API discovery to access key business functions as fine-grained services. Encourage data reuse and mashups driven by innovative transformation use cases.
  • Integrate. Create a significant impact on your business goals by exposing core services through managed APIs. Enable projects to integrate with one another and discover the benefits of synergy across the enterprise.
  • Scale. Be prepared to scale dynamically based on the demands of your expanding ecosystem and other usage metrics.
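To make the pattern above concrete, here is a minimal, hedged sketch (in Python with Flask, which this guide does not prescribe; the order-lookup data and endpoint path are invented for illustration) of a back-end business function exposed as a fine-grained REST API that an API management layer could then secure, share, and monetize:

```python
# Minimal sketch: expose a back-end business function as a REST API.
# Flask, the endpoint path, and the order data are illustrative assumptions.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for a call to a back-end system of record.
_ORDERS = {"1001": {"id": "1001", "status": "shipped", "total": 42.50}}

@app.route("/api/v1/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    """Return a single order as a fine-grained, reusable API resource."""
    order = _ORDERS.get(order_id)
    if order is None:
        abort(404)
    return jsonify(order)

if __name__ == "__main__":
    app.run(port=8080)
```

A consuming application can then compose its own integration logic by calling this API alongside others, rather than waiting on a central integration team to build the composition for it.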

Application & data integration

Integrate all your business data and applications across any cloud more quickly and easily using open standards. From the simplest SaaS application to the most complex legacy systems, this pattern alleviates the concern about mismatched sources, formats, or standards. 

  • Integrate applications. Connect applications and data sources on-premises or across multiple clouds to coordinate the exchange of business information as a coarse-grained service so that core data and transactions maintain their integrity. In contrast to the API management pattern, this pattern is best suited for coarse-grained services. 
  • Integrate data. In near real time, synchronize data across multiple endpoints in the integration landscape to achieve a cohesive view of data, gathered from legacy back ends to SaaS applications, to DBaaS repositories, to analytics cloud services.
  • Incorporate agile integration. Unify cross-enterprise capabilities. Enforce the use of core enterprise services and business processes. Include cognitive augmentation within your integration logic. Set up agile organizational models and governance practices.
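As a simple illustration of the application and data integration pattern described above, the hedged Python sketch below pulls records from a hypothetical SaaS REST endpoint, maps them to a target format, and pushes them to a second, equally hypothetical back-end API; a production flow would add error recovery, richer transformation mapping, and monitoring:

```python
# Illustrative only: both endpoints and field names are hypothetical.
import requests

SOURCE_URL = "https://saas.example.com/api/customers"   # SaaS application
TARGET_URL = "https://erp.example.com/api/parties"      # back-end system

def transform(customer: dict) -> dict:
    """Map the SaaS record format to the target system's format."""
    return {
        "partyId": customer["id"],
        "displayName": f'{customer["firstName"]} {customer["lastName"]}',
        "email": customer.get("email"),
    }

def sync_customers() -> None:
    """Coordinate the exchange as a coarse-grained, near-real-time sync."""
    source = requests.get(SOURCE_URL, timeout=10)
    source.raise_for_status()
    for customer in source.json():
        response = requests.post(TARGET_URL, json=transform(customer), timeout=10)
        response.raise_for_status()

if __name__ == "__main__":
    sync_customers()
```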

Enterprise messaging

Simplify, accelerate, and facilitate the reliable exchange of data with a flexible, security-rich messaging solution. Extend traditional messaging capabilities so that modern applications can communicate with new technologies such as AI, IoT devices, and other digital channels.

  • Ensure secure and reliable messaging. Preserve message integrity throughout the network, protect data, and ensure regulatory compliance with security-rich functions. Provide reliable delivery without message loss, duplication, or complex recovery.
  • Unify your enterprise. More easily integrate heterogeneous application platforms using industry-standard JMS messaging protocols, scalable publish-subscribe, and a choice of APIs.
  • Expect high performance and scalable message transfer. Your apps can rely on a highly available, low-latency solution with fully automated failover, dynamically distributed messaging workloads, and high throughput.
  • Simplify management and control. Use a dashboard to gain insights with visibility to message and file tracking. Audit data movement and transaction completion.
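For a flavor of how an application hands a message to a queue manager for reliable delivery, here is a minimal sketch using pymqi, an open source Python client for IBM MQ; the queue manager, channel, and queue names are placeholder values borrowed from typical developer-edition defaults, not a prescribed configuration:

```python
import pymqi  # open source Python bindings for IBM MQ

# Placeholder connection details; adjust for your own queue manager.
queue_manager = "QM1"
channel = "DEV.APP.SVRCONN"
conn_info = "localhost(1414)"

qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, "DEV.QUEUE.1")

# The queue manager, not the application, now carries responsibility for
# delivering this message without loss or duplication.
queue.put(b"order-created: 1001")

queue.close()
qmgr.disconnect()
```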

Event streaming

Take advantage of event streams to build adaptive solutions with engaging, more personalized user experiences by responding to events before the moment passes. By design, events occur in a continuous stream from a multitude of sources in a low-latency, high-velocity manner. 

  • Decrease system complexity. Loose coupling allows event producers to emit events without any knowledge about who is going to consume those events. Likewise, event consumers don’t need to be aware of the event emitters. 
  • Simplify the interface. One event producer can reach multiple endpoints with a single call.
  • React to events as they happen. Enable the following scenarios: IoT devices, streaming analytics, real-time back-end transactions, geolocation tracking, and auditing.
  • Facilitate machine learning. Improve predictive analytics by moving from batch processing to real-time event streaming.
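To illustrate the loose coupling described in this section, the sketch below uses the community kafka-python client against a placeholder local broker and topic: the producer emits events with no knowledge of its consumers, and a consumer reacts to them as they arrive:

```python
from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python

BROKER = "localhost:9092"   # placeholder broker address
TOPIC = "customer-events"   # placeholder topic name

# Producer: emits events without knowing who (if anyone) consumes them.
producer = KafkaProducer(bootstrap_servers=BROKER)
producer.send(TOPIC, b'{"event": "order-created", "orderId": "1001"}')
producer.flush()

# Consumer: reacts to events as they happen, independently of the producer.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating once no new events arrive
)
for record in consumer:
    print(record.value.decode("utf-8"))
```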

High-speed file transfer

Enterprises need reliable, fast, and secure data transfer and synchronization across hybrid and multicloud environments. An integration platform makes it possible to securely transfer data across geographies faster than traditional tools, between any kind of storage, whether it’s on-premises, in the cloud, or across diverse cloud vendors.

  • Integrate application data. Coordinate the exchange of business information so that data is available when and where it is needed.
  • Transform data for analytics. Access, cleanse, and prepare data to create a consistent view of your business within a data warehouse or data lake.
  • Enrich enterprise data. Augment DBaaS content with data from enterprise back-end systems for a 360-degree view of the user. Allow partner and in-house data sources to sync and complement each other’s updates.
  • Transfer data. Move huge amounts of data between on-premises and cloud or from cloud to cloud rapidly and predictably with enhanced levels of security. Speed the adoption of cloud platforms when data is very large and needs to be exchanged across long distances.

IBM Garage: Accelerate your journey

Modernization comes in many flavors, and rewriting your entire estate is not feasible. Big bang modernization efforts are risky, so it is best to break large initiatives into smaller projects with measurable impact. Your goal is to accelerate value, deliver frequently, and reduce risk. IBM Garage experts can help.

  • Co-create. Identify a business modernization opportunity. Define and build the MVP with your squad, get feedback, and co-create a solution.
  • Co-execute. Manage risk by choosing the right approach to modernize your current estate. Accelerate your journey through automation and technology.
  • Co-operate. Harden for production, standardize operations, and improve DevOps efficiency across your application estate. 

Agile Integration: Container-based and microservices-aligned lightweight integration runtimes

Integration Has Changed

IDC estimates that spending on digital transformation initiatives will represent a $20 trillion market opportunity over the next 5 years. What’s behind this staggering explosion of spending? The ever-present, ever-growing need to build new customer experiences through connected experiences across a network of applications that leverage data of all types.

That’s no easy task – bringing together processes and information sources at the right time and in the right context is difficult at best, particularly when you consider the aggressive adoption of SaaS business applications. New data sources need to be injected into business processes to create competitive differentiation.

The Value of Application Integration for Digital Transformation

When you consider your agenda for building new customer experiences and focus on how data is accessed and made available for the services and APIs that power these initiatives, you can see several significant benefits that application integration brings to the table:

  • Effectively addressing disparity – Access data from any system in any format and build homogeneity from it, no matter how diverse your multicloud landscape grows.
  • Expertise of the endpoints – Modern integration includes smarts around complex protocols and data formats, but it also incorporates intelligence about the actual business objects and functions within the end systems.
  • Innovation through data – Applications owe much of their innovation to their opportunity to combine data beyond their boundaries and create meaning from it, a trait particularly visible in microservices architecture.
  • Enterprise-grade artifacts – Integration flows inherit a tremendous amount of value from the runtime, which includes enterprise-grade features for error recovery, fault tolerance, log capture, performance analysis, and much more.

The integration landscape is changing to keep up with enterprise and marketplace computing demands, but how did we get from SOA and ESBs to a modern, containerized, agile approach to integration?

The Journey So Far – SOA and the ESB pattern

Before we can look forward to the future of agile integration, we need to understand what came before. SOA (service oriented architecture) patterns emerged at the start of the millennium, and at first the wide acceptance of the standards SOA was built upon heralded a bright future where every system could discover and talk to any other system via synchronous exposure patterns.

This was typically implemented in the form of the ESB (enterprise service bus) – an architectural pattern aimed at providing synchronous connectivity to back-end systems, typically over web services. While many enterprises successfully implemented the ESB pattern, it became something of a victim of its own success.

  • ESB patterns often formed a single infrastructure for the whole enterprise, with tens or hundreds of integrations installed on a production server cluster. Although heavy centralization isn’t required by the ESB pattern, the implemented topologies almost always fell prey to it. 
  • Centralized ESB patterns often failed to deliver the significant savings companies were hoping for. Few interfaces could be re-used from one project to another, yet the creation and maintenance of interfaces was prohibitively expensive for any one project to take on. 
  • SOA was more complex than just the implementation of an ESB, particularly around who would fund an enterprise-wide program. Cross-enterprise initiatives like SOA and its underlying ESB struggled to find funding, and often that funding only applied to services that would be reusable enough to cover their creation cost.

The result was that creation of services by this specialist SOA team sometimes became a bottleneck for projects rather than the enabler that it was intended to be. This typically gave the centralized ESB pattern a bad name by association.

All that said, the centralized ESB pattern does bring some benefits, especially for organizations that have a highly skilled integration team with a low attrition rate and a predictable, manageable number of new integration requirements. A single, centralized ESB certainly simplifies consistency and governance of implementation. However, many organizations have more fluid and dynamic requirements to manage, and are also under pressure to implement integration using the same cloud-native technologies and agile methods being used in other parts of the organization. A case in point is the move to microservices architecture typically found in the application development space.

Service oriented architecture (SOA) vs microservice architecture

SOA and microservices architecture share many words in common, but they are in fact completely separate concepts.

Service-oriented architecture and the associated ESB pattern is an enterprise-wide initiative to make the data and functions in systems of record readily available to new applications. We create re-usable, synchronous interfaces such as web services and RESTful APIs to expose the systems of record, such that new innovative applications can be created more quickly by incorporating data from multiple systems in real time.

Microservices architecture, on the other hand, is a way of writing an individual application as a set of smaller (microservice) components in a way that makes that application more agile, scalable, and resilient. So in summary, service oriented architecture is about real-time integration between applications, whereas microservices architecture is about how we build the internals of applications themselves. 

The Case for Agile Integration

Why have microservices concepts become so popular in the application space? They represent an alternative approach to structuring applications. Rather than an application being a large silo of code running on the same server, the application is designed as a collection of smaller, completely independently-running components.

Microservices architecture enables three critical benefits:

  • Greater agility – Microservices are small enough to be understood completely in isolation and changed independently. 
  • Elastic scalability – Their resource usage can be fully tied into the business model.
  • Discrete resilience – With suitable decoupling, changes to one microservice do not affect others at runtime.

With those benefits in mind, what would it look like if we re-imagined integration, which is typically deployed in centralized silos, with a new perspective based on microservices architecture? That’s what we call an “Agile Integration.”

There are three related, but separate aspects to agile integration:

Aspect 1: Fine-grained integration deployment. What might we gain by breaking out the integrations in the siloed ESB into separate runtimes that could be maintained and scaled independently? What is the simplest way that these discrete integrations can be built and deployed?

Aspect 2: Decentralized integration ownership. How should we adjust the organizational structure to better leverage a more autonomous approach, giving application teams more control over the creation and exposure of their own integrations?

Aspect 3: Cloud-native integration infrastructure. How can we best leverage the container-based infrastructure that underpins cloud-native applications to provide productivity, operational consistency, and portability for both applications and integrations across a hybrid and multicloud landscape?

 

Aspect 1: Fine-grained Integration Deployment

Traditional integration is characterized by the heavily centralized deployment of integrations in the ESB pattern. Deploying all integrations to a single, heavily nurtured, highly available (HA) pair of integration servers has been shown to introduce a bottleneck for projects. Any deployment to the shared servers runs the risk of destabilizing existing critical interfaces. No individual project can choose to upgrade the version of the integration middleware to gain access to new features.

Using the same concepts as microservice architecture, we could break up the enterprise-wide ESB into smaller, more manageable and dedicated pieces. Perhaps in some cases we can even get down to one runtime for each interface we expose. These “fine-grained integration deployment” patterns provide specialized, right-sized containers, offering improved agility, scalability and resilience, and look very different to the centralized ESB patterns of the past. Figure 1 demonstrates in simple terms how a centralized ESB differs from fine-grained integration deployment.

Fine-grained integration deployment draws on the benefits of a microservices architecture. Let’s revisit what we listed as microservices benefits in light of fine-grained integration deployment:

  • Agility: Different teams can work on integrations independently without deferring to a centralized group or infrastructure that can quickly become a bottleneck. Individual integration flows can be changed, rebuilt, and deployed independently of other flows, enabling safer application of changes and maximizing speed to production.
  • Scalability: Individual flows can be scaled on their own, allowing you to take advantage of efficient elastic scaling of cloud infrastructures.
  • Resilience: Isolated integration flows that are deployed in separate containers cannot affect one another by stealing shared resources, such as memory, connections, or CPU.

Aspect 2: Decentralized integration ownership

A significant challenge faced by service-oriented architecture was the way that it tended to force the creation of centralized integration teams and infrastructure to implement the service layer. 

This created ongoing friction in the pace at which projects could run since they always had the central integration team as a dependency. The central team knew their integration technology well, but often didn’t understand the applications they were integrating, so translating requirements could be slow and error prone. 

Many organizations would have preferred the application teams own the creation of their own services, but the technology and infrastructure of the time didn’t enable that. 

The move to fine-grained integration deployment opens a door such that ownership of the creation and maintenance of integrations can also be distributed out to the application teams. It’s not unreasonable for business application teams to take on integration work, streamlining the implementation of new integrations. 

Furthermore, API management has matured to the point where application teams can easily manage the exposure of their own APIs, again without resorting to a separate centralized integration team. 

Microservices design patterns often prefer to increase decoupling by receiving event streams of data and building localized data representations rather than always going via API calls to retrieve data in real time. Agile integration also considers how best to enable teams to publish and consume event streams both within and beyond application boundaries.

Aspect 3: Cloud-native integration infrastructure

Integration runtimes have changed dramatically in recent years, so much so that these lightweight runtimes can be used in truly cloud-native ways. By this we are referring to their ability to hand off the burden of many of their previously proprietary mechanisms for cluster management, scaling, and availability to the cloud platform in which they are running.

This entails a lot more than just running them in a containerized environment. It means they have to be able to function as “cattle not pets,” making best use of the orchestration capabilities such as Kubernetes and many other common cloud standard frameworks. 

Clearly, Agile Integration requires that the integration topology be deployed very differently. A key aspect of that is a modern integration runtime that can be run in a container-based environment and is well suited to cloud-native deployment techniques. Modern integration runtimes are almost unrecognizable from their historical peers. Let’s have a look at some of those differences:

  • Fast lightweight runtime: They run in containers such as Docker and are sufficiently lightweight that they can be started and stopped in seconds and can be easily administered by orchestration frameworks such as Kubernetes.
  • Dependency free: They no longer require databases or message queues, although obviously, they are very adept at connecting to them if they need to. 
  • File system based installation: They can be installed simply by laying their binaries out on a file system and starting them up—ideal for the layered file systems of Docker images. 
  • DevOps tooling support: The runtime should be ready for continuous integration and deployment. Script- and property-file-based install, build, deploy, and configuration enable "infrastructure as code" practices. Template scripts for standard build and deploy tools should be provided to accelerate inclusion into DevOps pipelines.
  • API-first: The primary communication protocol should be RESTful APIs. Exposing integrations as RESTful APIs should be trivial and based upon common conventions such as the OpenAPI Specification. Calling downstream RESTful APIs should be equally trivial, including discovery via definition files (a minimal sketch follows this list).
  • Digital connectivity: In addition to the rich enterprise connectivity that has always been provided by integration runtimes, they must also connect to modern resources, for example, NoSQL databases (MongoDB, Cloudant, and so on) and messaging services such as Kafka. Furthermore, they need access to a rich catalogue of application intelligent connectors for SaaS (software as a service) applications such as Salesforce.
  • Continuous delivery: Continuous delivery is enabled by command line interfaces and template scripts that mesh into standard DevOps pipeline tools. This further reduces the knowledge required to implement interfaces and increases the pace of delivery.
  • Enhanced tooling: Enhanced tooling for integration means most interfaces can be built by configuration alone, often by individuals with no integration background. With the addition of templates for common integration patterns, integration best practices are burned into the tooling, further simplifying the tasks. Deep integration specialists are less often required, and some integration can potentially be taken on by application teams as we will see in the next section on decentralized integration. 
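As a rough illustration of the fast, lightweight, dependency-free, API-first qualities listed above, the sketch promised in the API-first item follows: a stateless integration stub built only on the Python standard library. It starts in well under a second, holds no local state, and can therefore be replicated or replaced freely by an orchestrator such as Kubernetes ("cattle not pets"); the endpoint paths and payloads are illustrative assumptions:

```python
# A stateless, dependency-free integration stub using only the standard library.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class IntegrationHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # Liveness/readiness probe for an orchestrator such as Kubernetes.
            body = json.dumps({"status": "ok"}).encode("utf-8")
        elif self.path == "/api/v1/inventory/1001":
            # A single, fine-grained integration flow exposed as a REST API.
            body = json.dumps({"sku": "1001", "onHand": 7}).encode("utf-8")
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Starts in well under a second; safe to stop and replace at any time.
    HTTPServer(("0.0.0.0", 8080), IntegrationHandler).serve_forever()
```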

Modern integration runtimes are well suited to the three aspects of an agile integration methodology: fine-grained deployment, decentralized ownership, and true cloud-native infrastructure. 

Along with integration runtimes becoming more lightweight and container friendly, we also see API management and messaging/eventing infrastructure moving to container-based deployment. This is generally in order to benefit from the operational consistency provided by orchestration platforms such as Kubernetes, which provide auto-scaling, load balancing, deployment, internal routing, reinstatement, and more in a standardized way, significantly simplifying the administration of the platform.

Agile Integration for the Integration Platform

Throughout this paper, we have been focused on the application integration features as deployed in an agile integration architecture. However, many enterprise solutions can only be solved by applying several critical integration capabilities. An integration platform (or what some analysts refer to as a “hybrid integration platform”) brings together these capabilities so that organizations can build business solutions in a more efficient and consistent way. 

Many industry specialists agree on the value of this integration platform. Gartner notes:

The hybrid integration platform (HIP) is a framework of on-premises and cloud-based integration and governance capabilities that enables differently skilled personas (integration specialists and non-specialists) to support a wide range of integration use cases.… Application leaders responsible for integration should leverage the HIP capabilities framework to modernize their integration strategies and infrastructure, so they can address the emerging use cases for digital business. 

One of the key things that Gartner notes is that the integration platform allows multiple people from across the organization to work in user experiences that best fit their needs. This means that business users can be productive in a simpler experience that guides them through solving straightforward problems, while IT specialists have expert levels of control to deal with the more complex enterprise scenarios. These users can then work together through reuse of the assets that have been shared, while preserving governance across the whole.

Satisfying the emerging use cases of the digital transformation is as important as supporting the various user communities. The bulk of this paper will explore these emerging use cases, but first we should further elaborate on the key capabilities that must be part of the integration platform.

IBM Cloud Pak for Integration

IBM Cloud Integration brings together the key set of integration capabilities into a coherent platform that is simple, fast and trusted. It allows you to easily build powerful integrations and APIs in minutes, provides leading performance and scalability, and offers unmatched end-to-end capabilities with enterprise-grade security. IBM Cloud Pak for Integration is built on the open source Kubernetes platform for container orchestration. 

IBM Cloud Pak for Integration is the most complete hybrid integration platform in the industry including all of the key integration capabilities your team needs:

Application and Data Integration: Connects applications and data sources on-premises or in the cloud to coordinate the exchange of business information so that data is available when and where it is needed.

API Lifecycle: Exposes and manages business services as reusable APIs for select developer communities both internal and external to your organization. Organizations adopt an API strategy to accelerate how effectively they can share their unique data and services assets to then fuel new applications and new business opportunities.

Enterprise Messaging: Ensures real-time information is available from anywhere at any time by providing reliable message delivery without message loss, duplication, or complex recovery in the event of a system or network issue.

High Speed Data Transfer: Move huge amounts of data between on-premises and cloud or cloud-to-cloud rapidly and predictably with enhanced levels of security. Facilitates how quickly organizations can adopt cloud platforms when data is very large.

Secure Gateway: Extends connectivity and integration beyond the enterprise with DMZ-ready edge capabilities that protect APIs, the data they move, and the systems behind them.

Hybrid Integration Platforms: Digital Business Calls for Integration Modernization and Greater Agility

Integration is the lifeblood of today’s digital economy. Hybrid integration is a key business imperative for most enterprises, as digitalization has led to a proliferation of applications, services, APIs, and data stores that need to be connected to realize end-to-end functionality and, in many cases, an entirely new digital business proposition. A hybrid integration platform caters to a range of integration needs, including on-premises app integration, cloud application integration, messaging, event streaming, rapid API creation and lifecycle management, B2B/EDI integration, mobile application/back-end integration, and file transfer. User productivity tools and deployment flexibility are key characteristics of a hybrid integration platform that helps enterprises respond faster to evolving digital business requirements. 

Ovum view

Ovum ICT Enterprise Insights 2018 survey results indicate a strong inclination on the part of IT leaders to invest in integration infrastructure modernization, including the adoption of new integration platforms. Enterprises continue to struggle to meet new application and data integration requirements driven by digitalization and changing customer expectations. Line-of-business (LoB) leaders are no longer willing to wait for months for the delivery of integration capabilities that are mission-critical for specific business initiatives. Furthermore, integration competency centers (ICCs) or integration centers of excellence (COEs) are being pushed hard to look for alternatives that significantly reduce time to value without prolonged procurement cycles.

Digital business calls for flexible integration capabilities that connect diverse applications, services, APIs, and data stores; hybrid integration continues to be a complex IT issue. The current enterprise IT agenda gives top priority to connecting an ever-increasing number of endpoints and mitigating islands of IT infrastructure and information silos that make the vision of a “connected enterprise” difficult to achieve. Hybrid integration, which involves disparate applications, data formats, deployment models, and transactions, is a multifaceted problem for which there is no simple solution. For example, while an enterprise service bus (ESB) can be appropriate for data/protocol transformation and on-premises application integration, integration PaaS (iPaaS) is clearly a popular solution for SaaS-to-on-premises and SaaS-to-SaaS integration.

The center-of-gravity hypothesis applies to integration architecture. There is a greater inclination to deploy integration platforms closer to applications and data sources. APIs continue to gain prominence as flexible interfaces to digital business services and enablers for enterprises looking to innovate and participate in the wider digital economy. The unrelenting drive toward SaaS is leading to a rapid shift of integration processes to the cloud. A combination of these trends is driving the emergence of a new agile hybrid integration paradigm, with cloud-based integration platforms used for cloud, mobile application/back-end, B2B/EDI, and data integration. This integration paradigm or pattern is gaining popularity as enterprises do not have the luxury of executing dedicated, cost intensive and time-consuming integration projects to meet digitalization-led, hybrid integration requirements. Enterprise IT leaders realize that existing legacy integration infrastructure offers less flexibility and is difficult to maintain, so they are now more open to new integration approaches or platforms that improve developer productivity and allow them to “do more with less.” Moreover, traditional, heavyweight middleware is a barrier for enterprises looking to achieve agile hybrid integration to meet critical digital business requirements.

Agile hybrid integration calls for modular solutions that integrate well with each other and offer a uniform user experience (UX) and developer productivity tools to reduce time to integration and cost of ownership. For example, enterprises need to achieve integration within a few days of subscribing to a new set of SaaS applications, and frequently need to expose SaaS applications via representational state transfer (REST) APIs for consumption by mobile applications. They may also need to design and manage a new set of APIs for externalization of the enterprise or monetization of new applications and enterprise data assets. A hybrid integration platform can meet all these requirements, with modular integration solutions deployed on-premises, in the cloud, or on software containers according to the requirements of specific use cases. 

In the background of changing digital business requirements, IT leaders need to focus on revamping their enterprise integration strategy, which invariably will involve adoption of a hybrid integration platform that offers deployment and operational flexibility and greater agility at a lower cost of ownership to meet multifaceted hybrid integration requirements. Integration modernization initiatives aim to use new integration patterns, development and cultural practices, and flexible deployment options to drive business agility and reduce costs. It is important to identify a strategic partner (and not just a software vendor with systems integration capabilities) that can provide essential advice and best practices based on years of practical experience to ensure that integration modernization initiatives stay on track and deliver desired outcomes. 

Recommendations

  • Enterprise IT leaders should focus on developing a forward-looking strategy for hybrid integration using the best of existing on-premises middleware and specific cloud-based integration services (i.e., PaaS products for hybrid integration). For all practical purposes, and in most cases, it would make sense to opt for a hybrid integration platform. This does not imply a complete "rip and replace" strategy for deciding the future of existing on-premises middleware. With DevOps practices, microservices, and containerized applications gaining popularity, IT leaders should evaluate the option of deploying middleware (integration platforms) on software containers as a means of driving operational agility and deployment flexibility.
  • With several middleware vendors focusing on developing a substantial proposition for hybrid integration, it would be better to exploit a more cohesive set of integration capabilities provided by the same vendor. A "do it yourself" approach to integration or federation between middleware products offered by different vendors is rarely easy, and it is of course easier to train users on a hybrid integration platform offering a uniform UX.
  • Integration is still predominantly carried out by IT practitioners; however, IT leaders should consider "ease of use" for both integration practitioners and less skilled, non-technical users (e.g., power users) when selecting integration platforms for a range of hybrid integration use cases.

Integration modernization is a recurring theme driven by digitalization and the need for greater agility 

Hybrid integration complexity continues to drive integration modernization

Over the last couple of years, "integration modernization" has regularly featured in Ovum's conversations with enterprise IT leaders. Digitalization has led to an almost unrelenting need to expose and consume APIs and to exploit digital assets to cater to ever-changing customer requirements and drive growth via new digital business models. Digital business initiatives call for more open, agile, and API-led integration capabilities, reducing time to integration. Enterprises need to develop customer-centric and more flexible business processes that can easily be extended via APIs to a range of access channels. The business side is asking some tough questions, including how fast and at what cost IT can deliver the desired integration capabilities. Ovum ICT Enterprise Insights 2018 survey results show that over 60% of respondent enterprises are planning substantial investment in iPaaS solutions (including strategic investment in new iPaaS solutions) over the next 18-month period. The survey results indicate that about 58% of respondent enterprises are planning substantial investment in API platforms over the same period. These figures clearly indicate enterprise interest in investing in new integration platforms to tackle hybrid integration challenges.

Hybrid integration platform

Hybrid integration involves a mix of on-premises, cloud, B2B/EDI, mobile application/back-end integration, rapid API creation and lifecycle management, messaging, events, and file transfer use case scenarios of varying complexity (see Figure 1). Owing to specific business-IT requirements, enterprises may not have the flexibility to use “on-premises only” middleware or only cloud-based integration platforms. In certain cases, even the same integration capabilities (e.g., API management) need to be used both as on-premises middleware and as a cloud service (i.e., PaaS).  

An important aspect of hybrid integration requirements driven by digitalization is the need to support a range of user personas, including application developers, integration practitioners, enterprise/solution architects, and less skilled business users (i.e., non-technical users). Given the persistent time and budget constraints, enterprises often do not have the luxury of deploying only technical resources for hybrid integration initiatives and ICCs/integration COEs are not always in the driver’s seat. Simplified and uniform UX, self-service integration capabilities, and developer productivity tools are therefore critical in meeting hybrid integration requirements.

Ovum defines a hybrid integration platform as a cohesive set of integration software (middleware) products that enable users to develop, secure, and govern integration flows, connecting diverse applications, systems, services, and data stores, as well as enabling rapid API creation/composition and lifecycle management to meet the requirements of a range of hybrid integration use cases. A hybrid integration platform is “deployment model agnostic” in terms of delivering requisite integration capabilities, be it on-premises and cloud deployments or containerized middleware.

The key characteristics of a hybrid integration platform include:

  • support for a range of application, service, and data integration use cases, with an API-led, agile approach to integration, reducing development effort and costs 
  • uniformity in UX across different integration products or use cases and for a specific user persona 
  • uniformity in underlying infrastructure resources and enabling technologies 
  • flexible integration at a product or component API level 
  • self-service capabilities for enabling less skilled, non-technical users 
  • the flexibility to rapidly provision various combinations of cloud-based integration services based on specific requirements 
  • openness to federation with external, traditional on-premises middleware platforms 
  • support for embedding integration capabilities (via APIs) into a range of applications or solutions
  • developer productivity tools (e.g., a “drag-and-drop” approach to integration flow development and pre-built connectors and templates) and their extension to a broader set of integration capabilities 
  • flexible deployment options: on-premises deployment, public, private, and hybrid cloud deployment, and containerization 
  • centralization of administration and governance capabilities.

Specific features and capabilities of hybrid integration platforms vary from vendor to vendor, and certain hybrid integration platforms may not offer some of the above-specified capabilities. It is noteworthy that the evolution from traditional middleware and PaaS for specific integration use cases (e.g., iPaaS for SaaS integration) to a hybrid integration platform is a work in progress for a majority of middleware vendors. 

iPaaS is now a default option for SaaS integration, and the iPaaS model for delivery of cloud integration capabilities is no longer about only offering dozens or hundreds of connectors and pre-built integration templates. It is important for iPaaS vendors to target new user personas and a broader set of integration use cases. In this context, we see two key developments: self-service integration capabilities for less skilled, non-technical user enablement, and artificial intelligence (AI)/machine learning (ML) capabilities simplifying development of integration flows.

Hybrid environments call for a cloud-native integration paradigm that readily supports DevOps practices and drives operational agility by reducing the burden associated with cluster management, scaling, and availability. Under such a cloud-native integration paradigm, integration runtimes run on software containers, are continuous integration and continuous delivery/deployment (CI/CD) ready, and are lightweight and responsive enough to start and stop within a few seconds. Many enterprises have made substantial progress in containerizing applications to benefit from a microservices architecture and portability across public, private, and hybrid cloud environments. Containerized applications and middleware represent a good combination; in cases where an application and a runtime are packaged and deployed together, developers can benefit from the container portability and ease of use offered by the application and middleware combination.

It also makes sense for applications and middleware to share a common architecture, as DevOps teams can then avoid the overhead and complexity associated with the proposition of running containerized applications on different hardware and following different processes to the existing ones with traditional middleware. This is true even in cases that do not involve much re-architecting of the applications; DevOps teams can still develop and deploy faster using fewer resources.

Developers are increasingly building APIs that support new applications that use loosely coupled microservices. Each microservice has a particular function that can be independently scaled or maintained without impacting other loosely coupled services. A microservices architecture can involve both internal and external APIs, with internal APIs invoked for inter-service communication and external API calls initiated by API consumers. IT leaders must realize that microservices management is different in scope from API management and focus on effectively meeting both requirements.

Good API design and operations principles (i.e., API- and design-first principles) are gaining ground in enterprises that have previous experience of experimenting with enterprise API initiatives linked to new digital business services. Consequently, API platforms are gaining traction. Multicloud API management and deployment on software containers are areas of significant interest to large enterprises. An API platform enables users to develop, run, manage, and secure APIs and microservices, and offers a superset of capabilities in comparison to those provided by API lifecycle management solutions. As the graphical approach to integration flows provided by application integration capabilities can now be deployed as microservices, these technologies jointly provide a holistic approach to the rapid creation/composition of APIs and the subsequent management of their lifecycle and operations. A key benefit of an API platform is the ability to create, test, and implement an API rapidly and reiterate the cycle to create a new version of it based on user feedback (i.e., the application of DevOps-style techniques to API lifecycle and operations).  

Internet of Things (IoT) integration use cases call for message-oriented middleware (MoM) that offers standards-based message queue (MQ) middleware to ease integration with enterprise applications and data stores. It is particularly suitable for heterogeneous environments, as any type of data can be transported as messages; MQ middleware is frequently used in mainframe, cloud, mobile, and IoT integration use case scenarios. A hybrid integration platform should support the integration requirements of such use cases.

A lot of data is generated in the form of streams of events, with publishers creating events and subscribers consuming these events in different ways or via different means. Event-driven applications can deliver better customer experiences. For example, this could be in the form of adding context to ML models to obtain real-time recommendations that evolve continually as per the requirements of a specific use case. Embedding real-time intelligence into applications and real-time reaction or responsiveness to events are key capabilities in this regard.

For distributed applications using microservices, developers can opt for asynchronous event-driven integration, in addition to the use of synchronous integration and APIs. Apache Kafka, an open source stream-processing platform, is a good option for such use cases that require high throughput and scalability. Kubernetes can be used as a scalable platform for hosting Apache Kafka applications. As Apache Kafka reduces the need for point-to-point integration for data sharing, it can reduce latency to just a few milliseconds, thereby enabling faster delivery of data to the users. A hybrid integration platform should cater to the integration requirements of event-driven applications.

A hybrid integration platform with simplified UX, scalable architecture, and flexible deployment options

Key attributes at architectural and operational levels simplify hybrid integration and drive developer productivity and cost savings

The IBM Cloud Pak for Integration (shown in Figure 2) solves a range of hybrid integration requirements, including on-premises and SaaS application and data integration, rapid API creation/composition and lifecycle management, API security and API monetization, messaging, event streaming, and high-speed transfer. IBM offers a holistic integration platform exploiting a container-based, portable architecture for a range of hybrid integration use cases, as well as providing essential advice and support to help enterprises succeed with their integration modernization initiatives.

IBM Cloud Pak for Integration was built for deployment on containers and provides a modern architecture that includes the management of containerized applications and Kubernetes, an open source container orchestration system. An interesting trend is the adoption of DevOps culture, microservices, and PaaS for responsiveness to changes driven by digital business requirements. With IBM Cloud Pak for Integration’s container-based architecture, users have the flexibility to deploy on any environment that has Kubernetes infrastructure, as well as exploit a self-service approach to integration. IBM Cloud Pak for Integration enables simplified creation and reuse of integrations, their deployment close to the source, and self-service integration to deliver faster time to integration at lower cost. It offers the benefit of a unified UX for developing and sharing integrations, which promotes integration asset reuse to improve developer productivity.

With IBM Cloud Pak for Integration, users can deploy integration capabilities easily onto a Kubernetes environment. This provision helps achieve faster time to value for integration modernization initiatives by integrating the monitoring, logging, and IT security systems of a private cloud environment to ensure uniformity across a cloud integration platform deployment. Containerization fosters the flexibility of cloud private architecture, thereby helping users meet performance and scalability requirements as specified in the service-level agreements (SLAs) of their business applications. Another benefit is common administration and governance enabled via a single point of accessibility. This mitigates the need for logging in to multiple tools and better supports access management across different teams. In terms of deployment flexibility, IBM supports deployment on any cloud or on-premises deployment.

IBM espouses an approach that differentiates API management from microservices management but also combines the two to offer more than the sum of the parts. Istio running on Kubernetes allows users to manage the interactions between microservices running in containers. Integration between Security Gateway and Istio service mesh (involving security, application resiliency, and dynamic routing between microservices) can offer a good solution to end-to-end routing. IBM has optimized the gateway for cloud-native workloads. An interesting trend is the growth in the number of API providers offering additional endpoints to adapt to emerging architectural styles, such as GraphQL. GraphQL APIs have the ability to use a single query to fetch required data from multiple resources. IBM is extending its integration platform’s API capabilities to provide support for GraphQL management, and this approach decouples GraphQL management from GraphQL server implementation. 
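To show why a single GraphQL query can replace several REST round trips, here is a hedged sketch that posts one query to a hypothetical GraphQL endpoint using Python's requests library; the endpoint URL, schema, and field names are invented for illustration and are not part of any IBM product API:

```python
import requests

# Hypothetical GraphQL endpoint; one query fetches fields that would
# otherwise require several separate REST calls.
GRAPHQL_URL = "https://api.example.com/graphql"

QUERY = """
query CustomerOverview($id: ID!) {
  customer(id: $id) {
    name
    orders(last: 5) { id total }
    openTickets { id subject }
  }
}
"""

response = requests.post(
    GRAPHQL_URL,
    json={"query": QUERY, "variables": {"id": "42"}},
    timeout=10,
)
response.raise_for_status()
print(response.json()["data"]["customer"])
```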

IBM’s Agile Integration methodology

Agile integration focuses on delivering business agility as part of an integration modernization initiative. It espouses the transition of integration ownership from centralized integration teams to application teams, as supported by the operational consistency achieved via containerization. On the operational agility side, cloud-native infrastructure offers dynamic scalability and resilience. 

A good case in point is a fine-grained integration deployment pattern involving specialized, right-sized containers that deliver improved agility, scalability, and resilience. This is quite different from traditional, centralized ESB patterns, which is why IBM redesigned each of these capabilities, including the application integration features, to be deployed in a microservices-aligned manner. With a fine-grained deployment pattern, enterprises can improve build independence and production speed to drive deployment agility. In a nutshell, as part of integration modernization initiatives, “agile integration” caters to people, processes, and technology aspects to provide necessary advice and guidance to help enterprises achieve faster time to value across diverse deployment environments.

 

Infrastructure as a Service: A Cost-Effective Path to Agile and Competitive IT

The Time to Move to Cloud is Now

Traditional on-premises infrastructure with high upfront costs and weeks long scaling lead times is no longer sufficient to effectively support today’s needs and required responsiveness. IT is increasingly moving to a direct revenue-supporting position within the enterprise. Applications may require scaling from hundreds to tens of thousands of users, or go from one geographic location to multiple locations, in a matter of days.

Not being able to do this has a direct revenue impact. Responding to this high velocity of change requires an IT infrastructure layer with comparable flexibility and scalability. Likewise, built-in resilience at the IT infrastructure layer is a necessity to move forward confidently with the digital transformation of the business.

Cloud infrastructure, or infrastructure as a service (IaaS), is designed to deliver scalable, automated capabilities on a utility financial model. IaaS services are consumed on a pay-as-you-go basis, with no upfront costs and on-demand scalability. In addition, IaaS services from the major providers are delivered from a globally distributed set of data centers and are designed for immediate availability, resilience, and lower upfront investment.

From a broader perspective, IaaS and cloud technologies bring to enterprises three key capabilities.

» Low upfront investments. Get started on initiatives at no cost and in turn achieve faster launch and faster time to market for new initiatives. This is important as organizations ramp up their digital assets and experiment with the best ways to leverage technologies—shifting away from a costly capex model to a more beneficial opex one.

» Rapid scaling and resilience. From a capacity and a geographic footprint perspective, cloud technologies allow successful initiatives to be quickly scaled up and replicated across physical locations as needed, allowing solutions to address availability, expansion, and scaling needs at any time without customer disruptions.

» Access to a broad ecosystem of higher-layer services and partners. This includes access to faster and more cost-effective development tools and databases, advanced analytics capabilities, and technologies like AI/ML. These can jumpstart projects, lead to faster application development and deployment, and avoid upfront investment to build these platform capabilities in-house.

Most Common Entry Points and Use Cases for Cloud Infrastructure

With the increase in familiarity and acceptance, cloud IaaS is gaining adoption across nearly all types of enterprise IT use cases and organizations are moving applications to cloud through a variety of the following paths and entry points:

» IT data center consolidation and expansion: Legacy technology infrastructure can be rigid and limited in use and management. Often it requires manual input and resources to maintain applications and services, and it does not scale quickly or easily to suit business needs without major expenditure and potential downtime. Cloud technology offers increased agility, automation, and intelligent services for all aspects of the data center. It enables quick scalability, reduces resource demands and costs, and can improve ROI by expanding services on a global scale.

» Business continuity and disaster recovery: Improving IT resiliency and maintaining business continuity are more important than ever for any enterprise. The flexibility and agility of cloud makes it an optimal solution to mitigate risks and maintain business continuity. In fact, the cloud often improves uptime, performance, and availability of applications, data, and workloads from traditional on-premises environments. In the cloud, organizations can recover applications, data, and workloads completely and quickly.

» Application modernization and migration: Another approach is the application modernization and migration path to cloud, where an application is first re-architected to take advantage of the native capabilities available on cloud such as containers, scale-out capability, and other readily available services. The specific path selected is typically determined by the workload itself and the level of technical capability available to move that workload to cloud.

» Virtual machine migration: One commonly seen path to cloud adoption of enterprise applications is the "lift and shift" migration of virtual machines (VMs) into cloud environments. This involves moving the applications on a VM into an identical or near-identical VM in an IaaS environment. While this may still require minor configuration changes in the application or deployment scripts, it reduces the rework required on the application before moving it to cloud.

» Regulated workloads: With the maturity of cloud services and the expansion of cloud capabilities, cloud infrastructure is also seeing adoption for regulated workloads and highly secure sensitive workloads. These have been enabled by specific capabilities that allow such workloads to run in the cloud such as dedicated bare metal services and built-in security capabilities.

Security Concerns, Skill Sets, and Migration are Top Challenges with Cloud

While cloud IaaS is gaining traction across enterprises, cloud adoption is not without its own challenges. One key challenge reported with cloud adoption continues to be security. Security concerns can be broadly broken into the following three types:

» The ability of the cloud provider to secure its platform sufficiently. The last decade has helped demonstrate to the enterprise IT world that cloud providers’ investments in security often exceed what is possible by enterprises, and that public cloud IaaS offers comparable and often better security than possible on-premises safeguards. 

» The ability of customers to secure their applications running on the cloud platform. This is critical given the shared security model of public cloud. Cloud providers are responsible for security of the infrastructure stack, while customers are responsible for the security of the applications that run on the cloud platform. This often requires use of proprietary tools from the cloud provider and a good understanding of the platform’s security framework. Protecting applications and data by using the cloud provider’s security framework correctly continues to be a challenge for enterprises. With increased familiarity and skill set availability, this is a challenge that will be resolved in time. 

» Cloud adoption can bring forward resource limitations. These include the availability of cloud skill sets, lack of clarity around cloud adoption planning, and execution of application transformation and migration. The typical workaround, seen particularly among large enterprises with IT applications that are designed to support thousands of users, is to engage managed services or professional services to assist in this adoption. Availability of a strong service partner with an extensive ecosystem of experts and partners has thus emerged as an important enabler for organizations looking to migrate and transform their businesses in the cloud. 

Advantages of Infrastructure as a Service (IaaS) and Cloud Adoption

IaaS empowers IT service organizations with a foundation for agility—the ability to make IT changes easily, quickly, and cost effectively—in the infrastructure layer. Early adopters are seeing benefits in business metrics such as operational efficiency and customer retention. Key business benefits customers report include the following: 

» Business agility – enabled by the rapid scalability of IaaS. Organizations can easily scale their IT footprint as needed depending on business needs. IaaS enables faster time to launch for initiatives, swift time to market for new offerings, and rapid iterations to stay current with market needs.

» Improved customer experience – delivered by high availability architectures built on a resilient public cloud IaaS platform. Organizations that build their services on the cloud can maintain availability through critical phases such as outages, periods of growth or high utilization of services, thus increasing customer satisfaction and loyalty with the solution. This leads to smooth customer base expansion during growth periods.

» Total cost of ownership (TCO) benefits – possible because the “pay as you use” cost model for infrastructure minimizes the need for large upfront investments and over-provisioning. IaaS compute, storage, and networking resources can be provisioned and used within minutes when needed and terminated when not needed, allowing instantaneous on-demand access to resources.

» Geographic reach – enabled by a globally distributed set of data centers, all of which deliver a consistent infrastructure environment close to their respective geographies. A solution that is successful in one region can be easily replicated on the IaaS service in other geographies with minimal additional qualification or contract renegotiation, allowing shorter lead times for regional expansion. This allows a cloud-based solution to rapidly expand beyond physical boundaries and reach customers and markets across the globe as needed. 

» Easy access to new technologies and services – through the cloud ecosystem of higher layer services and partners. This broader cloud ecosystem has emerged as a major source of benefits for IaaS customers. Nearly a third of the respondents to IDC’s IaaSView 2019 report indicate this ecosystem is a top driver of their decision to adopt cloud.

Recent Trends in Enterprise IaaS Usage: Multicloud and Hybrid Cloud Patterns

Two commonly seen cloud adoption patterns in enterprise IT today are “multicloud” and “hybrid cloud” environments: 

Hybrid Cloud. IDC defines hybrid cloud as the usage of IT services (including IaaS, PaaS, SaaS apps, and SaaS-SIS cloud services) across one or more deployment models using a unified framework. The cloud services used leverage more than one cloud deployment model across public cloud and private cloud deployments. Customers sometimes also include cloud and noncloud combinations when they describe an environment as hybrid cloud (sometimes referred to also as hybrid IT). 

This model is rapidly gaining adoption among enterprise IT organizations (see Figure 2). Factors driving the adoption of hybrid cloud include the desire to retain a higher level of control on certain data sets or workloads, as well as proximity and latency requirements requiring certain workloads to stay on premises. 

Multicloud. IDC defines multicloud as an organizational strategy or architectural approach to the design of a complex digital service (or IT environment) that involves consumption of cloud services from more than one cloud service provider. These may be directly competing cloud services, such as hosted private cloud versus public cloud compute services, public object storage from more than one public cloud service provider, or IaaS and SaaS from one or more cloud service providers. Multicloud encompasses a larger universe than hybrid cloud. 

Factors driving multicloud usage include organic reasons such as independent projects scaling in different parts of the organizations on different cloud platforms, as well as intentional reasons such as a desire to leverage specific cloud platforms for specific unique capabilities. A major factor gating the adoption of multicloud more broadly is the cost/complexity associated with enabling consistent management/governance of many different cloud options.

The Benefits and Differentiators of IBM Cloud IaaS

IBM Cloud Infrastructure as a Service (IaaS) forms the foundation layer of the IBM Cloud portfolio, and delivers the compute, storage, and network functionality, as well as the required virtualization, for customers to build their IT infrastructure environments on these services. The customer continues to be responsible for management of the higher layers of the stack operated on the IaaS platform. Figure 3 shows the functionality delivered in the different layers of the IBM Cloud portfolio and the split of management responsibilities between IBM and the customer in each of these layers. 

IBM Cloud IaaS and the broader IBM Cloud ecosystem bring to customers all the business benefits of cloud IaaS adoption discussed earlier. These are delivered through a combination of IBM’s global data center footprint and its resilient, scalable, and broad IaaS portfolio, complemented by a rich ecosystem of cloud services and partners, including access to the latest technology capabilities such as artificial intelligence and quantum computing. In addition, IBM is in a unique position as a trusted long-time enterprise technology partner, and brings the following differentiated strengths and capabilities to businesses:

» Security and trust – IBM Cloud is built to deliver security across all its services, integrated through the service and delivered as a service. This includes audit compliance and the ability to support standards such as PCI 3.0, HIPAA, and GDPR, which are often challenging and expensive for enterprises to meet in house with on-premises infrastructure. It also includes specific security capabilities like IBM Cloud Pak for Security and the IBM Data Security Services, as well as the IBM Cloud Hyper Protect Services with built-in data-in-motion and data-at-rest protection and Keep Your Own Key (KYOK) capability for the most security-sensitive use cases. These are further enhanced by IBM’s long track record as a security-conscious technology company and a trusted partner to enterprises, alleviating concerns of misuse of customer data. These strengths have been instrumental in recent large customer wins in the U.S. from some of the largest and most well-known enterprise brands.

» Offerings for specific enterprise needs, such as SAP, VMware, and bare metal – IDC research shows that most enterprise adoption of cloud IaaS for production use starts with existing applications. To offer consistency, IBM Cloud provides specialized, qualified IaaS offerings for common enterprise IT needs. These include bare metal offerings specifically qualified to run SAP and VMware solutions, as well as a mature and broad range of configurable bare metal offerings. These allow customers to configure their cloud IaaS environment to closely match the existing environment for business-critical applications, minimize migration risk, and enjoy the agility and broader benefits of moving to cloud IaaS.

» Access to services and expertise across the globe – The rapid adoption of cloud has outpaced skill set evolution. IBM’s services divisions, Global Technology Services and Global Business Services, act as an effective delivery arm for IBM’s technology offerings and can assist customers with their cloud adoption and capability building, bringing professional expertise across containerization, microservices, and DevOps.

» Hybrid cloud and multicloud enablers – The 2019 acquisition of Red Hat brought to IBM Cloud a strong suite of cloud-native software, including the Red Hat OpenShift platform, a compelling cloud-native platform that can be delivered both on customer premises and on multiple public clouds. These complement IBM’s Cloud Paks product portfolio, which is also designed to deliver a consistent experience for specific enterprise use cases on customer premises and public cloud platforms. IBM Cloud Paks and the Red Hat OpenShift platform are designed with the intent of offering a unified customer experience across public cloud and customer premises infrastructure. These products address one of the top challenges reported by enterprises using cloud today: the lack of consistency across clouds and premises, which limits the ability to effectively build a multicloud or hybrid cloud environment. The Red Hat OpenShift platform also offers compatibility with open source frameworks like Kubernetes and Knative, allowing portability and reducing the risk of lock-in for customers. These recent additions to the portfolio are complemented by IBM’s long track record of building and operating complex private cloud platforms for enterprise customers.

Conclusion

The cloud value propositions of flexibility and scalability were ideally suited for the initial use cases that deployed on cloud IaaS, such as startup and hobbyist/shadow-IT workloads. While these value propositions continue to be important, enterprise use cases require more from their IT stack, including end-to-end security, flexibility to operate across multiple premises and platforms, and partners to support the enterprise’s vertical-specific needs. IBM Cloud offers an expansive global cloud infrastructure service, inclusive of open hybrid and multicloud enablers and the broad IBM ecosystem of technology and service partners, designed to address these needs. With these capabilities and its strong technology portfolio, IBM is well poised to be a trusted cloud partner to enterprises as they transition their IT to the cloud.

 

IBM Cloud for Financial Services

Today your business model and your technology are under significant strain.

External conditions such as COVID-19 are driving extreme volatility in channel usage, transaction volumes, and product demand. Your legacy systems may lack the resiliency needed to handle these challenges. Current customer behaviors and workloads are likely to shift quickly and dramatically again, placing your systems, your costs, and your people under perpetual strain. You are faced with infrastructure that is slow and expensive. Additionally, different executives, each with their own set of concerns, can make moving to the public cloud seem daunting. 

These limitations and concerns are why banks have moved fewer than 20% of all workloads to the cloud, and virtually no complex workloads or those involving sensitive data. Until you find a way to safely and securely migrate and manage substantially greater workloads on the cloud, you will operate at this disadvantage. But it doesn’t have to be this way: it is possible for banks to benefit fully from public cloud. 

Introducing IBM Cloud for Financial Services

To help financial institutions transform, IBM developed IBM Cloud for Financial Services, built on IBM Cloud. By working with Bank of America to develop industry-informed security control requirements and leveraging IBM Promontory, the global leader in financial services regulatory compliance, IBM Cloud for Financial Services provides the level of data security and regulatory compliance financial institutions are mandated to adhere to, along with the public cloud scale and innovation they want. With this comes the introduction of the IBM Cloud Policy Framework for Financial Services, which is exclusively available and deploys a shared-responsibility model for implementing controls. It is designed to enable financial institutions and their ecosystem partners to confidently host apps and workloads in the cloud and demonstrate regulatory compliance significantly faster and more efficiently than they can today.

Workloads will be run on IBM Cloud for VMware Regulated Workloads, a secure, automated reference architecture that enhances VMware vCenter Server on IBM Cloud to deliver a security-rich, high-performance platform for VMware workloads in regulated industries. Designed to enable a zero-trust model, this architecture offers our clients in regulated industries a strategic approach to securely extend and scale their VMware IT operations into the IBM Cloud while maintaining compliance.

With nearly thirty ISVs and partners, procurement, contracting and onboarding within the ecosystem can be streamlined, leading to increased revenues and reduced time to market for all parties.

IBM Cloud for your workloads

IBM Cloud for Financial Services is exclusively available in North America, but you can still take advantage of all the products and services IBM Cloud has to offer in our 60-plus global data centers. 

IBM can help you build a strategy for global, regional, industry and government compliance

  • IBM Promontory® for financial services sector (FSS) workloads—operating at the intersection of strategy, risk management, technology and regulation 
  • Strong commitment to our European clients (PCI-DSS and EBA briefing) 

Maintain control of your cloud environment and your data

  • Client-key management (BYOK and KYOK) 
  • Visibility and auditability with physical-asset management and logging and monitoring 
  • Full control of the stack, with transparency for audit purposes, right down to the serial number of the server

Security leadership with market-leading data protection

  • Clients can keep their own key that no one else can see—so not even IBM operators can access the key or the data it protects, unlike other cloud vendors. IBM Cloud Hyper Protect Crypto Services is designed to give clients the control of the cloud data-encryption keys and cloud hardware-security module (HSM)—the only service in the industry with FIPS 140-2 Level 4 certification. 
  • Each workload requires various access and security rules; IBM enables organizations to define and enforce such guidelines by way of integrated container security and DevSecOps for cloud-native applications with IBM Cloud Kubernetes Service. 
  • IBM Cloud Security Advisor detects security misconfigurations so organizations can better assess their security postures and take corrective actions.

Reduce complexity and speed innovation

  • IBM Garage™ for quick creation and scaling of new ideas that can dramatically impact your business 
  • With IBM’s vast ISV and partner ecosystem, banks can reduce the overhead, time, and effort required to ensure compliance of third-party vendors and spend more time delivering new services.

 

“We received the best of both worlds: the innovation and speed benefits of the IBM public cloud with the high security of a private cloud.” — Bernard Gavgani, Global Chief Information Officer, BNP Paribas

 

Why IBM?

Built on a foundation of open source software, security leadership, and enterprise-grade hardware, IBM Cloud provides the flexibility needed to relieve the headaches often associated with managing workloads and moving to the cloud. IBM Cloud offers the lowest cloud vendor costs and the broadest portfolio of secure-compute choices, with a wide array of enterprise-grade security services and products to help those in regulated industries. And most recently, IBM Cloud has been recognized as a 2019 Gartner Peer Insights Customers’ Choice for Cloud Infrastructure as a Service, Worldwide. The vendors with this distinction have been highly rated by their customers. Read the announcement to learn more.

Top 10 Facts Tech Leaders Should Know About Cloud Migration

Cloud Migration Is A Harder Form Of Cloud Adoption

Cloud migration gained much popularity after Amazon Web Services (AWS) re:Invent in 2015 and a revolutionary speech by General Electric’s (GE’s) CIO, Jim Fowler.1 Rather than focusing public cloud adoption on building new apps, Fowler referred to AWS as a preferred outsourcing option to host GE’s existing applications. Prior to this, I&O leaders had dismissed cloud migration as hard, expensive, and detrimental to the performance of applications. The new storyline highlighted megacloud ecosystem benefits, reinforced outsourcing messaging, and, more importantly, promised that cheaper migration methods were no longer problematic and that careful planning could mitigate the performance issues.

Decide Whether Migration Is An App Strategy Or A Data Center Strategy

After collecting hundreds of cloud migration stories, Forrester recognizes that enterprises view cloud migration from two vastly different points of view: 1) an application sourcing strategy or 2) a data center strategy. Depending on which lens they’re using, enterprises build their business cases around different timelines, drivers, goals, and expectations (see Figure 1). Organizations may view cloud migration as:

An app sourcing strategy. The goal is to optimize sourcing decisions for a full app portfolio. Typically, the scope of migration is limited to large packaged app hubs, subsets of apps with certain characteristics, or apps with location-based performance challenges. Major enterprise applications (e.g., SAP S/4HANA) commonly move to public cloud platforms with ongoing supplemental managed services support.2 Business cases usually outline mitigated latency, improved experience, or lower operational costs to maintain the migrated workloads.

A data center strategy. The goal is outsourcing as many apps as possible. The scale for this approach is large and usually tied to a “moment of change” (e.g., new executives, a data center refresh, a data center closing, or a contract ending). With such massive scale, these enterprises opt for less expensive migration paths and are more forgiving of performance drops that may occur during the initial migration. Data center strategists rarely complete migrations without the support of consultancies and tooling. Business cases usually rely on classic outsourcing benefits, cost avoidance, and reduced staffing (often through attrition) to justify the expense.

 

Forrester’s Top 10 Cloud Migration Facts

Today, 76% of North American and European enterprise infrastructure decision makers consider migrating existing applications to the cloud as part of their cloud strategy.3 This shockingly high figure is supported with powerful enterprise examples, including Allscripts, BP, Brinks Home Security, Brooks Brothers, Capital One, Chevron, The Coca-Cola Company, Dairy Farmers of America (DFA), GE, Hess, J.B. Hunt Transport, Kellogg, Land O’Lakes, and McDonald’s.4 Despite cloud’s popularity, migration is still hard. It’s still expensive. And it still requires due diligence to mitigate these factors. Here are Forrester’s top 10 facts that I&O leaders should know about cloud migration:

  1. Cloud migration won’t have the same benefits as SaaS migration. When you adopt a software as-a-service (SaaS) technology, you’re using a new app designed specifically for a cloud platform. An app specialist is managing and updating that app. The new app has a new interface that your business users access and recognize as different. When you’re migrating an app to a cloud platform, none of that is true. You’re placing the same app in a generic cloud platform without the support of an app specialist. Any redesign requires your time, and the business user ultimately experiences the same app and interface. The best-case scenario is that performance stays the same and your business users don’t notice. That’s a lot less compelling than the case for SaaS.5 Don’t equate the two migration terms.
  2. Business users don’t care about cloud migration. If all goes well, your business users will experience the same app with no decline in performance. That isn’t a very compelling story for business users. If your cloud strategy is supposed to inspire, don’t focus your marketing on migration. Instead, focus on the elements of your cloud IT strategy that deliver new capabilities. Although its potential is powerful — in that cloud migration can clean up inefficiencies or release spend that might help fund new investments — the migration itself isn’t inspiring. For enterprises with “cloud first” policies, cloud migration may involve a corporatewide awareness that requires technology professionals to engage with the business to help ensure a smooth transition
  3. Cloud migration is hard. Cloud platforms differ in a few fundamental ways from enterprise data centers; they use commodity infrastructure, extremely high average-sustained utilization levels, and minimal operational time per virtual machine (VM).6 Consumers also get a financial reward if their apps vary resource usage as their traffic varies. Knowing this, enterprises have accordingly designed new apps to mitigate cost and obtain high performance. But for existing apps — as highlighted in cloud migration — this is much more difficult. Redesign or modernization, although ideal, is costly. Organizations can systematically solve these challenges, but learning these best practices can be painful. For critical workloads, the tolerance for mistakes can be low, especially when the advantages of the migration itself are less apparent to business users.
  4. Cloud readiness means scalable, resilient, and dependency-aware. To ready existing applications for cloud, enterprises look at basic improvements that can make a big difference in a public cloud. They ensure financial alignment by making their apps scale, consuming fewer resources when they’re less busy. Dependency mapping is another key step toward readiness, eliminating low-value dependencies and grouping applications into ecosystems to inform sets for the migration plan. More-thorough approaches break apps into services to increase application resiliency by eliminating dependencies within a single application. Migration discovery tools provide some readiness findings, including version updates, dependencies, financial implications, minimal application code and architectural feedback, and grouping suggestions
  5. Mass migrations typically align to a moment of change. Rightsourcing decisions explore characteristics that favor cloud.8 Mass migration (e.g., the migration of an entire app portfolio or a substantial number of apps) usually aligns to a “moment of change.” This includes executive changes; acquisitions or divestitures; the end of colocation contracts; infrastructure refreshes; IT security incidents; drastic changes in sourcing; and fear of, or experienced, disruption, any of which can motivate significant and costly action at a specific point in time. Aligning to beneficial timing can make it easier to gain support, overcome barriers, or justify the economics behind a costly change. Almost all mass migrations align to one of these moments.
  6. Four paths exist for cloud migration. You may hear references to “the six R’s of migration” — rehost, replatform, repurchase, refactor, retire, and retain.9 Occasionally, other favored “R” terms are mixed in — redesign, rebuild, refresh, etc. Forrester highlights four key paths to cloud migration: 1) lift-and-shift (minimal change and moved through replication technology); 2) lift-and-extend (rehosting the app while making significant changes after the move); 3) hybrid extension (not moving existing parts of an app but rather building new parts in a public cloud); and 4) full replacement (complete or major rewrites to the application).10 Each company uses multiple methods for migration. Lift-and-shift is less resource-intensive, as it involves little change; however, this may cause performance decline. Full replacement requires significant change and resources
  7. Creating a cloud migration business case isn’t easy. Cost savings are hard to come by in cloud migration. Certain characteristics may make it easier to cut costs, such as shutting down data centers, eliminating painful inefficiencies, making minimal changes, and relying on minimal support for the migration. These may not be plausible or even recommended. Some of the more compelling business cases rely on cost avoidance, not cost savings (e.g., not buying new infrastructure). Creating your business case means cost, benefits, and future enablers, as defined by Forrester’s Total Economic Impact™ (TEI) model.11 Although you can support your documentation with any of the case studies noted above, it’s impossible to create your business case before you’ve defined the scope of your migration or gathered data about the specifics of your applications.
  8. Native platforms, consultancies, MSPs, and tools aid migration. Cloud migration is a massive revenue opportunity for cloud platforms. As a result, major public cloud platforms have eagerly built out migration support services, tooling, and certifications. Consultancies provide dedicated assistance to evaluate, plan, and migrate workloads, especially for massive migrations. MSPs also assist in migration but largely focus on the ongoing management after the migration. Standalone discovery and replication software assist both self-run and supported migrations. If you’re looking for support, it’s easy to come by.
  9. Hosted private cloud can be a less painful incremental step. Hosted private cloud isn’t the flashiest cloud technology. In fact, it falls short of public cloud capabilities and expectations in almost every way. However, it has three characteristics that deliver a practical solution for many use cases: 1) It’s often built on VMware products; 2) it has dedicated options; and 3) it’s managed by a service provider. For cloud migrators, it’s far easier to migrate a portfolio of applications to a VMware-based cloud environment that is isolated from other clients and partially managed to the OS or app, so they can more realistically meet aggressive deadlines and maintain stable performance. This approach can help control costs, avoid performance issues, and provide migration support to the public cloud, with the help of your hosted private cloud provider.
  10. Repatriation happens, but it’s an app-level decision. Applications occasionally go in the other direction. The term repatriation started with cloud-negative origins, used to save reputation when an ill-advised cloud migration occurred prior to market maturity. More recently, it reflects a one-off sourcing change for an app when its characteristics change during the life of that workload and are no longer acceptable on a public cloud platform. Organizations undertake this effort only when the current state is painful — not simply inconvenient or slightly more expensive. Usually, it’s regulation or significant cost escalation that drives such a drastic change for an app; AI/ML is a common cost example. Regulation-driven repatriation can mean that the scope of the application has changed, the regulation has changed, or the company’s approach to complying with regulation has evolved. Very rarely do we see complete strategywide repatriation, but when it occurs, it’s large technology footprints or ASIC requirements (e.g., Dropbox) that drive this decision.

Prepare Yourself For Your Migration Strategy

Our team of IT Service Professionals in Orange County can start your cloud migration strategy off by educating your migration team, executives, and business users about how cloud migration fits into your larger cloud strategy. I&O professionals should use this report to help outline the key concepts to ensure better communication and accurate expectations. Moving forward, here are the steps you’ll need to tackle:

Identify the best-fit scope. Before jumping into cloud migration, first determine whether you’re seeking gains at the application level or the data center level. This is the first stage of determining scope. For those seeking app-level gains, start with your application portfolio. Create your own sourcing framework. This may include cloud readiness, variability, scalability, location challenges, dependencies, compliance requirements, data types, need for additional support, expected lifetime, and app satisfaction. For those seeking gains at the data center level, the framework will be similar but the results will heavily skew in favor of public cloud or SaaS migration as the preferred options. The framework itself may ask “why not” host in a certain solution rather than whether it’s the best fit or optimized in that platform. Rather than app-level optimization, the goal is system-level optimization, where the enterprise data center is seen as a source of inefficiency

Determine (and find) the support you need. Support is expensive but valuable, depending on your scope, experience, and executive sponsorship. Most migrators leverage some level of support, whether it’s tools, workshops, best practices, early guidance, or full migration support. After determining the right level of support, you’ll need to decide the type of provider that will deliver it and which set of partners meets your needs

Obtain real estimates based on your own numbers. The most common cloud migration inquiry question — “How much will I save from cloud migration?” — is impossible to answer accurately without inputs from your own estate. Your scope, current configurations, trust in autoscaling, anticipated changes, use of consultancies, cost avoidance, and team skill sets will all determine this figure. Each major cloud provider offers calculators. Each consultancy gives its own estimates. Before making definitive claims in your business case, get some real estimates and determine which costs won’t be going away

Five common data security pitfalls to avoid

Data security should be a top priority for enterprises, and for good reason

Even as the IT landscape becomes increasingly decentralized and complex, it’s important to understand that many security breaches are preventable. While individual security challenges and goals may differ from company to company, often organizations make the same widespread mistakes as they begin to tackle data security. What’s more, many enterprise leaders often accept these errors as normal business practice.

There are several internal and external factors that can lead to successful cyberattacks, including:

  •  Erosion of network perimeters 
  •  Increased attack surfaces offered by more complex IT environments 
  •  Growing demands that cloud services place on security practices 
  •  Increasingly sophisticated nature of cyber crimes 
  •  Persistent cybersecurity skills shortage 
  •  Lack of employee awareness surrounding data security risks

How strong is your data security practice?

Let’s look at five of the most prevalent—and avoidable—data security missteps that make organizations vulnerable to potential attacks, and how you can avoid them.

Pitfall 1

Failure to move beyond compliance

Compliance doesn’t necessarily equal security. Organizations that focus their security resources solely on complying with an audit or certification can become complacent. Many large data breaches have happened in organizations that were fully compliant on paper. The following examples show how focusing solely on compliance can diminish effective security:

Incomplete coverage

Enterprises often scramble to address database misconfigurations and outdated access policies prior to an annual audit. Vulnerability and risk assessments should be ongoing activities.

Minimal effort

Many businesses adopt data security solutions just to fulfill legal or business partner requirements. This mindset of “let’s implement a minimum standard and get back to business” can work against good security practices. Effective data security is a marathon, not a sprint.

Fading urgency

Businesses can become complacent about managing controls when regulations, such as the Sarbanes-Oxley Act (SOX) and the General Data Protection Regulation (GDPR), mature. While, over time, leaders can become less attentive to the privacy, security, and protection of regulated data, the risks and costs associated with noncompliance remain.

Omission of unregulated data

Assets, such as intellectual property, can put your organization at risk if lost or shared with unauthorized personnel. Focusing solely on compliance can result in security organizations overlooking and under protecting valuable data.

Solution

Recognize and accept that compliance is a starting point, not the goal

Data security organizations must establish strategic programs that consistently protect their business’ critical data, as opposed to simply responding to compliance requirements

Data security and protection programs should include these core practices:

  • Discover and classify your sensitive data across on-premises and cloud data stores. 
  • Assess risk with contextual insights and analytics. 
  • Protect sensitive data through encryption and flexible access policies. 
  • Monitor data access and usage patterns to quickly uncover suspicious activity. 
  • Respond to threats in real time.
  • Simplify compliance and its reporting

The final element can include legal liabilities related to regulatory compliance, possible losses a business can suffer and the potential costs of those losses beyond noncompliance fines.

Ultimately, you should think holistically about the risk and value of the data you seek to secure. 
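To make the first of these practices concrete, the sketch below shows one way to begin discovering and classifying sensitive data. It is a minimal Python illustration with hypothetical field names and detection patterns; a production program would rely on a dedicated discovery and classification tool with far richer detection logic.

    import re

    # Hypothetical patterns for illustration only; real discovery tools use
    # richer detection (checksums, context, and proximity analysis).
    PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def classify(record: dict) -> dict:
        """Return a mapping of field name -> list of sensitive-data labels."""
        findings = {}
        for field, value in record.items():
            labels = [name for name, rx in PATTERNS.items() if rx.search(str(value))]
            if labels:
                findings[field] = labels
        return findings

    sample = {"note": "Card 4111 1111 1111 1111 on file", "contact": "user@example.com"}
    print(classify(sample))  # {'note': ['credit_card'], 'contact': ['email']}

Once fields are labeled this way, they can be routed to the appropriate protection controls, such as encryption or masking, and fed into monitoring and compliance reporting.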

Pitfall 2

Failure to recognize the need for centralized data security

Without broader compliance mandates that cover data privacy and security, organization leaders can lose sight of the need for consistent, enterprise-wide data security. 

For enterprises with hybrid multicloud environments, which constantly change and grow, new types of data sources can appear weekly or daily and greatly disperse sensitive data.

Leaders of companies that are growing and expanding their IT infrastructures can fail to recognize the risk that their changing attack surface poses. They can lack adequate visibility and control as their sensitive data moves around an increasingly complex and disparate IT environment. Failure to adopt end-to-end data privacy, security and protection controls—especially within complex environments—can prove to be a very costly oversight.

Operating security solutions in silos can cause additional problems. For example, organizations with a security operations center (SOC) and security information and event management (SIEM) solution can neglect to feed those systems with insights gleaned from their data security solution. Likewise, a lack of interoperability between security people, processes and tools can hinder the success of any security program.

Solution

Know where your sensitive data resides, including on-premises and cloud hosted repositories

Securing sensitive data should occur in conjunction with your broader security efforts. In addition to understanding where your sensitive data is stored, you need to know when and how it’s being accessed, as well—even as this information rapidly changes. Additionally, you should work to integrate data security and protection insights and policies with your overall security program to enable tightly aligned communication between technologies. A data security solution that operates across disparate environments and platforms can help in this process.

So, when is the right time to integrate data security with other security controls as part of a more holistic security practice? Here are a few signs that suggest your organization may be ready to take this next step: 

Risk of losing valuable data 

The value of your organization’s personal, sensitive and proprietary data is so significant that its loss would cause significant damage to the viability of your business.

Regulatory implications 

Your organization collects and stores data with legal requirements, such as credit card numbers, other payment information or personal data.

Lack of security oversight 

Your organization has grown to a point where it’s difficult to track and secure all the network endpoints, including cloud instances. For example, do you have a clear idea of where, when and how data is being stored, shared and accessed across your on-premises and cloud data stores?

Inadequate assessment 

Your organization has adopted a fragmented approach where no clear understanding exists of exactly what’s being spent across all your security activities. For example, do you have processes in place to measure accurately your return on investment (ROI) in terms of the resources being allocated to reduce data security risk?

If any of these situations apply to your organization, you should consider acquiring the security skills and solutions needed to integrate data security into your broader existing security practice.

Pitfall 3

Failure to define who owns responsibility for the data

Even when aware of the need for data security, many companies have no one specifically responsible for protecting sensitive data. This situation often becomes apparent during a data security or audit incident when the organization is under pressure to find out who is actually responsible.

Top executives may turn to the chief information officer (CIO), who might say, “Our job is to keep key systems running. Go talk to someone in my IT staff.” Those IT employees may be responsible for several databases in which sensitive data resides and yet lack a security budget. 

Typically, members of the chief information security officer (CISO) organization aren’t directly responsible for the data that’s flowing through the overall business. They may give advice to the different line-of-business (LOB) managers within an enterprise, but, in many companies, nobody is explicitly responsible for the data itself. For an organization, data is one of its most valuable assets. Yet, without ownership responsibility, properly securing sensitive data becomes a challenge.

Solution 

Hire a CDO or DPO dedicated to the well-being and security of sensitive and critical data assets

A chief data officer (CDO) or data protection officer (DPO) can handle these duties. In fact, companies based in Europe or doing business with European Union data subjects face GDPR mandates that require them to have a DPO. This prerequisite recognizes that sensitive data—in this case personal information—has value that extends beyond the LOB that uses that data. Additionally, the requirement emphasizes that enterprises have a role specifically designed to be responsible for data assets. Consider the following objectives and responsibilities when choosing a CDO or DPO:

Technical knowledge and business sense 

Assess risk and make a practical business case that nontechnical business leaders can understand regarding appropriate security investments

Strategic implementation 

Direct a plan at a technical level that applies detection, response and data security controls to provide protections.

Compliance leadership 

Understand compliance requirements and know how to map those requirements to data security controls so that your business is compliant.

Monitoring and assessment 

Monitor the threat landscape and measure the effectiveness of your data security program

Flexibility and scaling 

Know when and how to adjust the data security strategy, such as by expanding data access and usage policies across new environments or integrating more advanced tools.

Division of labor 

Set expectations with cloud service providers regarding service-level agreements (SLAs) and the responsibilities associated with data security risk and remediation.

Data breach response plan 

Finally, be ready to play a key role in devising a strategic breach mitigation and response plan.

Ultimately, the CDO or DPO should lead in fostering data security collaboration across teams and throughout your enterprise, as everyone needs to work together to effectively secure corporate data. This collaboration can help the CDO or DPO oversee the programs and protections your organization needs to help secure its sensitive data.

Pitfall 4

Failure to address known vulnerabilities

High-profile breaches in enterprises have often resulted from known vulnerabilities that went unpatched even after the release of patches. Failure to quickly patch known vulnerabilities puts your organization’s data at risk because cybercriminals actively seek these easy points of entry. 

However, many businesses find it challenging to implement patches quickly because of the level of coordination needed between IT, security, and operational groups. Furthermore, patches often require testing to confirm they don’t break a process or introduce a new vulnerability. 

In cloud environments, sometimes it’s difficult to know if a contracted service or application component should be patched. Even if a vulnerability is found in a service, its users often lack control over the service provider’s remediation process.

Solution

Establish an effective vulnerability management program with the appropriate technology to support its growth

Vulnerability management typically involves some of the following levels of activity:

  • Maintain an accurate inventory and baseline state for your data assets. 
  • Conduct frequent vulnerability scans and assessments across your entire infrastructure, including cloud assets. 
  • Prioritize vulnerability remediation that considers the likelihood of the vulnerability being exploited and the impact that event would have on your business. 
  • Include vulnerability management and responsiveness as part of the SLA with third-party service providers. 
  • Obfuscate sensitive or personal data whenever possible. Encryption, tokenization and redaction are three options for achieving this end. 
  • Employ proper encryption key management, ensuring that encryption keys are stored securely and cycled properly to keep your encrypted data safe.

Even within a mature vulnerability management program, no system can be made perfect. Assuming intrusions can happen even in the best protected environments, your data requires another level of protection. The right set of data encryption techniques and capabilities can help protect your data against new and emerging threats.
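To illustrate the obfuscation options mentioned above, the sketch below shows simple redaction and tokenization of a card number in Python. The secret and formats are hypothetical; real tokenization services use vaulted mappings or format-preserving encryption, and keys belong in a key management system rather than in code.

    import hashlib
    import hmac

    SECRET = b"replace-with-a-managed-secret"  # hypothetical; keep real secrets in a vault

    def redact_card(value: str) -> str:
        """Redaction: keep only the last four digits of a card number."""
        digits = "".join(ch for ch in value if ch.isdigit())
        return "*" * (len(digits) - 4) + digits[-4:]

    def tokenize(value: str) -> str:
        """Tokenization (illustrative): replace the value with a keyed, non-reversible token."""
        return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]

    card = "4111 1111 1111 1111"
    print(redact_card(card))  # ************1111
    print(tokenize(card))     # deterministic token that can still act as a join key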

 

Pitfall 5

Failure to prioritize and leverage data activity monitoring

Monitoring data access and use is an essential part of any data security strategy. An organization leader needs to know who, how and when people are accessing data. This monitoring should encompass whether these people should have access, if that access level is correct and if it represents an elevated risk for the enterprise. 

Privileged user identifications are common culprits in insider threats.5 A data protection plan should include real-time monitoring to detect privileged user accounts being used for suspicious or unauthorized activities. To prevent possible malicious activity, a solution must perform the following tasks: 

  • Block and quarantine suspicious activity based on policy violations.
  • Suspend or shut down sessions based on anomalous behavior. 
  • Use predefined regulation-specific workflows across data environments. 
  • Send actionable alerts to IT security and operations systems.

 Accounting for data security and compliance-related information and knowing when and how to respond to potential threats can be difficult. With authorized users accessing multiple data sources, including databases, file systems, mainframe environments and cloud environments, monitoring and saving data from all these interactions can seem overwhelming. The challenge lies in effectively monitoring, capturing, filtering, processing and responding to a huge volume of data activity. Without a proper plan in place, your organization can have more activity information than it can reasonably process and, in turn, diminish the value of data activity monitoring.

Solution

Develop a comprehensive data detection and protection strategy

TeraPixels Systems and our security and IT services professionals in Orange County are typically tasked with securing a wide variety of businesses. To that end, when starting on a data security journey, you need to size and scope your monitoring efforts to properly address the requirements and risks. This activity often involves adopting a phased approach that enables developing and scaling best practices across your enterprise. Moreover, it’s critical to have conversations with key business and IT stakeholders early in the process to understand short-term and long-term business objectives.

These conversations should also capture the technology that will be required to support key initiatives. For instance, if the business is planning to set up offices in a new geography using a mix of on-premises and cloud-hosted data repositories, your data security strategy should assess how that plan will impact the organization’s data security and compliance posture. The company-owned data may, for example, become subject to new data security and compliance requirements, such as the GDPR, the California Consumer Privacy Act (CCPA), or Brazil’s Lei Geral de Proteção de Dados (LGPD).

You should also prioritize and focus on one or two sources that likely have the most sensitive data. Make sure your data security policies are clear and detailed for these sources before extending these practices to the rest of your infrastructure. 

You should look for an automated data or file activity monitoring solution with rich analytics that can focus on key risks and unusual behaviors by privileged users. Although it’s essential to receive automated alerts when a data or file activity monitoring solution detects abnormal behavior, you must also be able to take fast action when anomalies or deviations from your data access policies are discovered. Protection actions should include dynamic data masking or blocking.
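To give a sense of the kind of rule such a solution applies, the sketch below flags privileged-account activity that exceeds a simple per-hour baseline or happens off-hours. The event format, baselines, and thresholds are hypothetical; commercial data activity monitoring tools apply far richer analytics and can respond in real time with masking or blocking.

    from collections import Counter
    from datetime import datetime

    # Hypothetical access events: (user, table, timestamp)
    events = [
        ("dba_admin", "customers", datetime(2023, 5, 1, 2, 14)),
        ("dba_admin", "customers", datetime(2023, 5, 1, 2, 15)),
        ("app_svc", "orders", datetime(2023, 5, 1, 10, 3)),
    ]

    BASELINE_PER_HOUR = {"dba_admin": 1, "app_svc": 500}  # learned from history in practice
    OFF_HOURS = range(0, 6)  # 00:00-05:59 treated as unusual for interactive accounts

    def detect_anomalies(events):
        alerts = []
        per_user_hour = Counter((user, ts.replace(minute=0, second=0)) for user, _, ts in events)
        for (user, hour), count in per_user_hour.items():
            if count > BASELINE_PER_HOUR.get(user, 0):
                alerts.append(f"{user}: {count} accesses in hour {hour} exceed the baseline")
        for user, table, ts in events:
            if ts.hour in OFF_HOURS and user.startswith("dba_"):
                alerts.append(f"{user} touched {table} at {ts} (off-hours privileged access)")
        return alerts

    for alert in detect_anomalies(events):
        print(alert)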

 

A guide to securing cloud platforms

Rethink security for cloud-based applications

As more organizations move to a cloud-native model for developing apps and managing workloads, cloud computing platforms are rapidly limiting the effectiveness of the traditional perimeter-based security model. While still necessary, perimeter security is by itself insufficient. Because data and applications in the cloud are outside the old enterprise boundaries, they must be protected in new ways. 

Organizations transitioning to a cloud-native model or planning hybrid cloud app deployments must supplement traditional perimeter-based network security with technologies that protect cloud-based workloads. Enterprises must have confidence in how a cloud service provider secures their stack from the infrastructure up. Establishing trust in platform security has become fundamental in selecting a provider

Cloud security drivers 

Data protection and regulatory compliance are among the main drivers of cloud security investment, and they’re also inhibitors of cloud adoption. Addressing these concerns extends to all aspects of development and operations. With cloud-native applications, data may be spread across object stores, data services and clouds, which creates multiple fronts for potential attacks. And attacks are not just coming from sophisticated cybergangs and external sources; according to a recent survey, 53 percent of respondents confirmed insider attacks in the previous 12 months.

Five fundamentals of cloud security 

As organizations address the specialized security needs of using cloud platforms, they need and expect their providers to become trusted technology partners. In fact, an organization should evaluate cloud providers based on these five aspects of security as they relate to the organization’s own specific requirements: 

  1. Identity and access management: Authentication, identity and access controls 
  2. Network security: Protection, isolation and segmentation 
  3. Data protection: Data encryption and key management 
  4. Application security and DevSecOps: Including security testing and container security 
  5. Visibility and intelligence: Monitoring and analyzing logs, flows and events for patterns

Verify identity and manage access on a cloud platform

Any interaction with a cloud platform starts with verifying identity, establishing who or what is doing the interacting—an administrator, a user or even a service. In the API economy, services take on their own identity, so the ability to accurately and safely make an API call to a service based on this identity is essential to successfully running cloud-native apps. 

Look for providers that offer a consistent way to authenticate an identity for API access and service calls. You also need a way to identify and authenticate end users who access applications hosted in the cloud. As an example, IBM® Cloud uses App ID as a way for developers to integrate authentication into their mobile and web apps.

Strong authentication keeps unauthorized users from accessing cloud systems. Since platform identity and access management (IAM) is so fundamental, organizations that have an existing system should expect cloud providers to integrate their company’s identity management system. This is often supported through identity federation technology that links an individual’s ID and attributes across multiple systems.

Ask prospective cloud providers to prove that their IAM architecture and systems cover all the bases. In the IBM Cloud, for example, identity and access management is based on several key features:

Identity

  • Each user has a unique identifier 
  • Services and applications are identified by their service IDs 
  • Resources are identified and addressed by the cloud resource name (CRN) 
  • Users and services are authenticated and issued tokens with their identities

Access management

  • As users and services attempt to access resources, an IAM system determines whether access and actions are allowed or denied 
  • Services define actions, resources and roles 
  • Administrators define policies that assign users roles and permissions on various resources 
  • Protection extends to APIs, cloud functions and back-end resources hosted on the cloud

As you evaluate a cloud provider’s IAM capabilities, look for access control lists together with common resource names that enable you to limit users not only to certain resources, but also to certain operations on those resources. These capabilities help ensure that your data is protected from both unauthorized external and internal access.
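The sketch below models this pattern: subjects, roles, and CRN-style resource names combined into policies that an access decision function evaluates. The names and policy structure are hypothetical and simplified; they are not the IBM Cloud IAM API.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Policy:
        subject: str   # user or service ID
        role: str      # e.g., "Reader" or "Writer"
        resource: str  # CRN-like resource name prefix

    # Roles map to the concrete actions they allow (simplified).
    ROLE_ACTIONS = {
        "Reader": {"get", "list"},
        "Writer": {"get", "list", "put", "delete"},
    }

    POLICIES = [
        Policy("user:alice", "Reader", "crn:v1:cloud:object-storage:bucket:reports"),
        Policy("serviceid:billing-app", "Writer", "crn:v1:cloud:object-storage:bucket:invoices"),
    ]

    def is_allowed(subject: str, action: str, resource: str) -> bool:
        """Allow the request only if a policy grants a role containing the action
        on a resource prefix that covers the target resource."""
        return any(
            p.subject == subject
            and resource.startswith(p.resource)
            and action in ROLE_ACTIONS.get(p.role, set())
            for p in POLICIES
        )

    print(is_allowed("user:alice", "get", "crn:v1:cloud:object-storage:bucket:reports/q1.csv"))     # True
    print(is_allowed("user:alice", "delete", "crn:v1:cloud:object-storage:bucket:reports/q1.csv"))  # False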

Extending your own Enterprise Identity Provider (Enterprise IdP) to the cloud is particularly useful when you build a cloud-native app on top of an existing enterprise application that uses the Enterprise IdP. Your users can smoothly log in to both the cloud-native and underlying applications without having to use multiple systems or IDs. Reducing complexity is always a worthy goal.

Redefine network isolation and protection

Many cloud providers use network segmentation to limit access to devices and servers in the same network. Additionally, providers create virtual isolated networks on top of the physical infrastructure and automatically limit users or services to a specific isolated network. These and other basic network security technologies are table stakes for establishing trust in a cloud platform. 

Cloud providers offer protection technologies—from web application firewalls to virtual private networks and denial-of-service mitigation—as services for software-defined network security and charge per usage. Consider the following technologies as crucial network security in the cloud computing era.

Security groups and firewalls 

Cloud customers often insert network firewalls for perimeter protection (virtual private cloud/subnetlevel network access) and create network security groups for instance-level access. Security groups are a good first line of defense for assigning access to cloud resources. You can use these groups to easily add instance-level network security to manage incoming and outgoing traffic on both public and private networks. 
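As a rough illustration of how instance-level rules work, the sketch below evaluates inbound traffic against a default-deny security group. The rule fields are hypothetical and not tied to any particular provider’s API.

    from ipaddress import ip_address, ip_network

    # Hypothetical security group: default-deny inbound, allow web and admin traffic.
    INBOUND_RULES = [
        {"proto": "tcp", "port": 443, "source": "0.0.0.0/0"},   # HTTPS from anywhere
        {"proto": "tcp", "port": 22, "source": "10.0.0.0/8"},   # SSH only from the private network
    ]

    def inbound_allowed(proto: str, port: int, source_ip: str) -> bool:
        return any(
            rule["proto"] == proto
            and rule["port"] == port
            and ip_address(source_ip) in ip_network(rule["source"])
            for rule in INBOUND_RULES
        )

    print(inbound_allowed("tcp", 443, "203.0.113.7"))  # True: public HTTPS
    print(inbound_allowed("tcp", 22, "203.0.113.7"))   # False: SSH blocked from the internet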

Many customers require perimeter control to secure the perimeter network and subnets, and virtual firewalls are an easily deployable way to meet this need. Firewalls are designed to prevent unwanted traffic from hitting servers and to reduce the attack surface. Expect cloud providers to offer both virtual and hardware firewalls that allow you to configure permission-based rules for the entire network or for subnets. 

VPNs, of course, provide secure connections from the cloud back to your on-premises resources. They are a must-have if you are running a hybrid cloud environment. 

Micro-segmentation 

Developing applications cloud-natively as a set of small services offers the security advantage of being able to isolate those services using network segments. Look for a cloud platform that implements micro-segmentation through the automation of network configuration and network provisioning. Containerized applications architected on the microservices model are fast becoming the norm to support workload isolation that scales. 

Protect data with encryption and key management

Reliably protecting data is a security fundamental for any digital business—especially those in highly regulated industries such as financial services and healthcare. 

Data associated with cloud-native applications may be spread across object stores, data services and clouds. Traditional applications may have their own database, their own VM and sensitive data located in files. In these cases, encryption of sensitive data both at rest and in motion becomes critical. 

Keep your own key (KYOK)

To implement data security that remains 100% private within the public cloud, IBM exclusively offers a solution that enables you to be the sole custodian of your encryption key. As the only service in the industry built on FIPS 140-2 Level 4-certified hardware, IBM Cloud Hyper Protect Crypto Services provides key management and a cloud hardware security module (HSM).

Businesses are right to worry about cloud operators or other unauthorized users accessing their data without their knowledge, and to expect complete visibility into data access. Controlling access to data with encryption and also controlling access to encryption keys are becoming expected safeguards. As a result, a bring-your-own-keys (BYOK) model is now a cloud security requirement. It allows you to manage encryption keys in a central place, provides assurance that root keys never leave the boundaries of the key management system and enables you to audit all key management lifecycle activities (Figure 2).
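The root-key and data-key relationship behind BYOK can be sketched in a few lines. The example below uses the Python cryptography package purely to illustrate key wrapping; in a real deployment the root key is generated on premises, imported into the provider’s key management service or HSM, and never appears in application code.

    from cryptography.fernet import Fernet

    # Customer-provided root key (illustrative only; with BYOK it stays inside
    # the key management system and never leaves that boundary).
    root_key = Fernet.generate_key()
    root = Fernet(root_key)

    # A data encryption key (DEK) protects the actual data...
    dek = Fernet.generate_key()
    ciphertext = Fernet(dek).encrypt(b"account: 12345, balance: 100.00")

    # ...and the root key "wraps" the DEK, so only the wrapped form is stored
    # alongside the data.
    wrapped_dek = root.encrypt(dek)

    # To read the data, unwrap the DEK with the root key, then decrypt.
    plaintext = Fernet(root.decrypt(wrapped_dek)).decrypt(ciphertext)
    print(plaintext)

Because every data key is wrapped by the root key, rotating or revoking the root key in the key management system controls access to all the data keys beneath it, which is what enables the lifecycle auditing described above.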

Trusted compute hosts

It comes down to hardware: nobody wants to deploy valuable data and applications on an untrusted host. Cloud platform providers that offer hardware with measure-verify-launch protocols give you highly secure hosts for applications deployed within the container orchestration system.

Intel Trusted Execution Technology (Intel TXT) and the Trusted Platform Module (TPM) are examples of host-level technologies that enable trust for cloud platforms. Intel TXT defends against software-based attacks aimed at stealing sensitive information by corrupting system or BIOS code, or by modifying the platform’s configuration. The TPM is a hardware-based security device that helps protect the system startup process by ensuring it is tamper-free before releasing system control to the operating system.

Data protection at rest and in transit

Built-in encryption with BYOK lets you maintain control of your data, whether it’s based on premises or in the cloud. It’s an excellent way to control access to data in cloud-native application deployments. In this approach, the customer’s key management system generates a key on premises and passes it to the provider’s key management service. This approach encompasses data-at-rest encryption across storage types such as block, object and data services. 

For data in transit, secure communication and transfer take place over Transport Layer Security/Secure Sockets Layer (TLS/SSL). TLS/SSL encryption also allows you to demonstrate compliance, security and governance without requiring administrative control over the cryptosystem or infrastructure. The ability to manage SSL certificates is a requirement for trust in a cloud platform.
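For illustration, a minimal client-side sketch using Python's standard ssl module shows what TLS with certificate verification looks like; example.com is a placeholder host.

```python
# Minimal data-in-transit sketch using Python's standard library.
# create_default_context() verifies the server certificate chain and host name.
import socket
import ssl

context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())                 # e.g. "TLSv1.3"
        print(tls.getpeercert()["subject"])  # inspect the certificate you are trusting
```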

Meeting audit and compliance needs 

Providing your own encryption keys and keeping them in the cloud—with no service provider access—gives you the visibility and control of information required for CISO compliance audits.

Automate security for DevOps

As DevOps teams build cloud-native services and work with container technologies, they need a way to integrate security checks within an increasingly automated pipeline. Because sites such as Docker Hub promote open exchange, developers can easily save image preparation time by simply downloading what they need. But with that flexibility comes the need to routinely inspect all container images placed in a registry before they are deployed. 

An automated scanning system helps ensure trust by searching for potential vulnerabilities in your images before you start running them. Ask platform vendors if they allow your organization to create policies (such as “do not deploy images that have vulnerabilities” or “warn me prior to deploying these images into production”) as part of DevOps pipeline security.
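As an illustration of this kind of policy gate, the sketch below evaluates scan findings against deployment rules. The findings format and severity thresholds are assumptions, not the schema of any particular scanner.

```python
# Minimal sketch of a pipeline policy gate over image-scan results.
BLOCKING_SEVERITIES = {"critical", "high"}

def evaluate_image(findings: list[dict]) -> str:
    """Return 'deploy', 'warn', or 'block' for a scanned image."""
    severities = {f["severity"].lower() for f in findings}
    if severities & BLOCKING_SEVERITIES:
        return "block"   # "do not deploy images that have vulnerabilities"
    if severities:
        return "warn"    # "warn me prior to deploying these images"
    return "deploy"

report = [
    {"id": "CVE-2024-0001", "severity": "High"},
    {"id": "CVE-2024-0002", "severity": "Low"},
]
print(evaluate_image(report))   # block
```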

IBM Cloud Container Service, for example, offers a Vulnerability Advisor (VA) system to provide both static and live container scanning. VA inspects every layer of every image in a cloud customer’s private registry to detect vulnerabilities or malware before image deployment. Because simply scanning registry images can miss problems such as drift from static image to deployed containers, VA also scans running containers for anomalies. It also provides recommendations in the form of tiered alerts. Other VA features that help automate security in the DevOps pipeline include:

  • Policy violation settings: With VA, administrators can set image deployment policies based on three types of image failure situations: installed packages with known vulnerabilities; remote logins enabled; and remote monitoring management and remote logins enabled with some users who have easily guessed passwords. 
  • Best practices: VA currently checks 26 rules based on ISO 27000, including settings such as password minimum age and minimum password length. 
  • Security misconfiguration detection: VA flags each misconfiguration issue, provides a description of it and recommends a course of action to remediate it. 
  • Integration with IBM X-Force®: VA pulls in security intelligence from five third-party sources and uses criteria such as attack vector, complexity and availability of a known fix to rate each vulnerability. The rating system (critical, high, moderate or low) helps administrators quickly understand the severity of vulnerabilities and prioritize remediation.

When it comes to remediation, VA does not interrupt running images for patching. Instead, IBM remediates the “golden” image in the registry and deploys a new image to the container. This approach helps ensure that all future instantiations of that image will have the same fix in place. VMs can still be handled traditionally, using an endpoint security service to patch VMs and fix Linux security vulnerabilities.

Create a security immune system through intelligent monitoring

When moving to the cloud, CISOs often worry about low visibility and loss of control. Since the organization’s entire cloud may go down if a particular key is deleted or a configuration change inadvertently severs a connection back to on-premises resources or an enterprise security operations center (SOC), why shouldn’t the operations engineers expect full visibility into cloud-based workloads, APIs, microservices—everything?

Access trails and audit logs 

All user and administrative access, whether by the cloud provider or your organization, should be logged automatically. A built-in cloud activity tracker can create a trail of all access to the platform and services, including API, web and mobile access. Your organization should be able to consume these logs and integrate them into your enterprise SOC.
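As a small illustration of consuming such audit events, the sketch below normalizes a JSON activity record for an SIEM pipeline; the field names are assumptions about a generic event format, not a specific provider's schema.

```python
# Minimal sketch: normalize one audit event into the shape a SOC/SIEM
# pipeline expects. Field names ("initiator", "action", "target") are
# hypothetical.
import json

def to_soc_record(raw_event: str) -> dict:
    event = json.loads(raw_event)
    return {
        "who": event.get("initiator", "unknown"),
        "what": event.get("action", "unknown"),
        "where": event.get("target", "unknown"),
        "when": event.get("timestamp"),
    }

sample = '{"initiator": "api-key-1", "action": "kms.keys.delete", "target": "root-key-42", "timestamp": "2024-05-01T12:00:00Z"}'
print(to_soc_record(sample))
```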

Enterprise security intelligence 

Make sure you have the option of integrating all logs and events into your on-premises security information and event management (SIEM) system (Figure 3). Some cloud service providers also offer security monitoring with incident management and reporting, real-time analysis of security alerts and an integrated view across hybrid deployments. IBM QRadar®, for example, is a comprehensive SIEM solution offering a set of security intelligence solutions that can grow with an organization’s needs. Its machine learning capabilities train on threat patterns in a way that builds up a predictive security immune system.

Managed security with expertise 

If your organization does not have significant security expertise, explore providers that can manage security for you. Some providers can monitor your security incidents, apply threat intelligence from a variety of industries and correlate this information to take action. Ask if they can also deliver a single pane of glass that integrates in-house and managed security services.

Security that promotes business success

With cloud technology becoming a larger and more important part of running a digital business, it literally pays to look for a cloud provider that offers the right set of capabilities and controls to protect your data, applications and the cloud infrastructure on which customer-facing applications depend. Expect the platform security solution to cover the five key cloud security focus areas: identity and access; network security; security surveillance and data protection; application security; and visibility and intelligence. The goal is to worry less about technology and focus more on your core business. A well-secured cloud provides significant business and IT advantages, including:

  • Reduced time to value: Since security is already installed and configured, teams can easily provision resources and rapidly prototype user experiences, evaluate results and iterate as needed. 
  • Reduced capital expenditure: Using security services in the cloud can eliminate many up-front costs, including servers, software licenses and appliances. 
  • Reduced administrative burden: By successfully establishing and maintaining trust in the cloud platform, the provider with the right security offerings assumes the greatest burden of administration, reducing your costs in reporting and resource maintenance.

Encryption: Protect your most critical data

Encryption is all around us. Our emails can be encrypted. Our video conferences can be encrypted. Even our phone calls can be encrypted. It’s only natural then to assume our most sensitive business data should also be encrypted. Yet according to Ponemon Institute’s 2019 Global Encryption Trends Study, the average rate of adoption of an enterprise encryption strategy is only 45 percent among the organizations surveyed.

How can you be sure that all your sensitive data is encrypted? First, you need to know where it is located. With siloed databases, cloud storage and personal devices in the mix, there’s a good chance that at least some of your sensitive data is exposed. A data breach could lead to the worst kind of exposure — the kind where you notify millions of customers that you failed to protect their privacy and their personal information.

But that doesn’t have to be your reality. The right encryption strategy will not only help protect your data, it can help strengthen your compliance posture. IBM Security Guardium helps identify your sensitive data — on premises and across hybrid multicloud — and helps to protect it with robust encryption and key management solutions. Plus, IBM Security’s strategic consulting can work with you to align your encryption strategy with business goals.

Encryption for a world in motion

The most successful businesses are driven by data and analytics. A recent study from Forrester found that such businesses, on average, grow at least seven times faster than global GDP2 — and driving implies movement. Your data can move between clients and servers. It can move over secure and non-secure networks. It can move between databases in your network. It can move between clouds. Safeguarding your sensitive data on these journeys is critical. Customers expect it and many regulatory agencies require it. So why doesn’t every business do it?

Many organizations simply don’t have the skills and the resources needed to effectively protect all the critical data in their business. Maybe they have a general security strategy, or rely on embedded on-site IT services, but have not dedicated the time and effort to creating a data encryption strategy. It’s a common problem, and one that cybercriminals prey upon by extracting unencrypted data and gaining unauthorized access to under-protected encryption keys. 

What can you do to help protect your business? You can start by encrypting your sensitive data, implementing strong access controls, managing your encryption keys securely and aligning your encryption efforts with the latest compliance requirements. Without these safeguards in place, your data might not be as protected as it could be.

Is your critical data protected?

Security and IT service professionals in San Diego, typically tasked with preventing data breaches, stolen passwords and internal espionage, should be concerned about the level of protection of their data, since data is the lifeblood of their businesses. Encryption can help to make data unusable in the event it is hacked or stolen. Think of it as the first and last line of defense that can help protect your data from full exposure.

There are steps you can take to protect your organization’s data. A good place to start is identifying what data needs to be protected and where it is located. (The answer: more data than you realize and in more places than you expect.) Customer and financial data are obvious choices for encryption, but many companies fail to realize that even older, seemingly non-critical data can contain sensitive information, partly because the definition of what constitutes personally identifiable information (PII) has broadened considerably in the last decade.

Controlling and monitoring data access represents an important part of any data encryption strategy. It’s something that organizations need to balance with frictionless access to data. You want to make sure the right people have quick access to the data they need, while blocking the access privileges of unauthorized users. This is where security best practices can be invaluable:

  • Keep your encryption keys stored in a safe and separate location from your data 
  • Rotate your encryption keys frequently, in line with your industry’s best practices for key rotation 
  • Always use self-encrypting media to help protect data on your devices 
  • Layer file and database encryption on top of media encryption to provide granular control over access and cryptographic erasure 
  • Use techniques such as data masking and tokenization to anonymize PII data that you share with outside parties (a brief sketch follows this list)
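The last practice can be sketched in a few lines of Python. The in-memory token vault and helper names are purely illustrative; production systems keep the vault (or a format-preserving scheme) in a separate, access-controlled service.

```python
# Minimal sketch of masking and tokenization for PII before sharing data.
import secrets

token_vault: dict[str, str] = {}   # token -> original value, stored apart from the shared data

def tokenize(value: str) -> str:
    """Replace a sensitive value with a random token, reversible only via the vault."""
    token = "tok_" + secrets.token_hex(8)
    token_vault[token] = value
    return token

def mask_card(pan: str) -> str:
    """Irreversibly mask a card number, keeping only the last four digits."""
    return "*" * (len(pan) - 4) + pan[-4:]

print(tokenize("jane.doe@example.com"))   # e.g. tok_1a2b3c4d5e6f7a8b
print(mask_card("4111111111111111"))      # ************1111
```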

Use encryption to defend against threats

Most security professionals can add firewall protection services to their IT service package, and all are aware of the threats of data breaches and ransomware. They’re on the news, they’re on their minds and stopping them is at the top of most companies’ strategic imperatives. So why do data breaches still occur? Because, for cybercriminals, data breaches and ransomware attacks still work.

Ransomware attacks and data breaches are on the rise, so businesses should be prepared for these types of threats.2 It’s important to note that preparation is different from protection. You can try to protect against network attacks and insider threats 100 percent of the time, but you won’t always be successful. There are simply too many variables, too many chances for human error and too many cybercriminals looking to exploit those vulnerabilities to stop everything. This is why preparation is important — because you actually can encrypt your most sensitive data and render it useless in the event of a breach.

Encryption should be your first and last line of defense against attacks. It protects your data and your organization against internal and external threats and helps safeguard sensitive customer data. But encryption isn’t your only line of defense. Secure and consistent access controls across all your environments — on premises and in the cloud — as well as secure key management are important for keeping sensitive information out of the wrong hands.

Use encryption to help address compliance

TeraPixels Systems and our security and IT services professionals in Orange County aren’t the only ones concerned with data protection. Countries, states and industry consortiums are entering the privacy picture with increasing frequency. For example, in 2018 and 2020 respectively, Europe’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) introduced new security requirements that can levy heavy fines for non-compliance.

Keeping up with regulations can be difficult work. Understanding what data is impacted by specific regulations in each jurisdiction, the reporting requirements and even the penalties for non-compliance can be a full-time job. And in a world where full-time compliance experts are in scarce supply, many organizations have much to do before achieving compliance readiness.

Encryption, to borrow an expression, can cover a multitude of security sins. It can help to make your critical and sensitive data — what cybercriminals desire — worthless to would-be thieves. In many cases, compliance regulations mandate data encryption on some level. But beyond basic encryption, there are additional measures that every organization can take to protect their data. For example, using pseudo-anonymization strategies such as data masking and tokenization to selectively hide sensitive data as it’s being shared with partners can help make your data productive and protected. Using self-encrypting media on any device that stores data is another important safeguard that can help to prevent unauthorized parties from gaining access to data on stolen or salvaged devices.

How IBM Security Guardium can help protect your data

IBM Security Guardium can provide you with advanced and integrated solutions that help your organization identify, encrypt and securely access your most sensitive data. In addition, IBM Security offers security services and expertise to help your organization develop effective, efficient data protection strategies. At the heart of our encryption solutions are the IBM Security Guardium Data Encryption family of products and IBM Security Guardium Key Lifecycle Manager (GKLM).

IBM Security Guardium Data Encryption (GDE) helps protect critical data across all your data environments, helping to address compliance with industry and government regulations. The integrated family of products that make up GDE feature encryption for files, databases, applications and containers, as well as centralized key and policy management. GDE also provides data masking and tokenization, in addition to integration with third-party hardware security modules.

IBM Security Guardium Key Lifecycle Manager (GKLM)* helps deliver a secured, centrally managed encryption key management solution that supports the Key Management Interoperability Protocol (KMIP) — the standard for encryption key management — and features multi-master clustering for high availability and resiliency. GKLM can help organizations follow industry best practices for encryption key storage, access, security and reliability. GKLM simplifies encryption key management, synchronizes encryption keys between on-premises and cloud environments and automates many encryption functions, including self-encryption for storage media.

What does an IT Service Company Do in Orange County? | TeraPixels

A local IT service company in Orange County will service your small to medium-sized branch offices. These IT service companies will maintain your wireless networks, your access control systems, desktops and laptops, and all of the applications that your employees use.

An IT Service company is your outsourced front line contact for all things IT in your branch office or main office. Instead of hiring a full time tech team, an IT company can provide on demand (break-fix) or proactive maintenance service to keep your services up and running.

Here are some examples of things that require professional maintenance:

  • Microsoft Office: Office365
  • Roaming laptops
  • Desktops
  • Computer security
  • Application support
  • Remote-work-at-home support for VPN users.

For example, at one medium-sized business that uses TeraPixels IT services, there are many third-party services and applications that are unreliable and complicated to manage. These applications include Office365, with its organizational, licensing and software update issues. In addition, there are countless other licensed applications, such as Adobe Cloud services, that need periodic maintenance.

TeraPixels is an expert at software licensing and at resolving the compliance issues that come with these applications. Using an Orange County based company guarantees rapid response from local IT experts.

Other IT companies are known to “offshore” their tech people to the Philippines and India. TeraPixels is one of the few companies in Orange County that maintains local experts within Southern California.

Our people speak clear English and have the skills to interact competently with your company’s employees.

Call us today to request a free evaluation of your IT infrastructure.