
IBM Cloud for Financial Services

Today your business model and your technology are under significant strain.

External conditions such as COVID-19 are driving extreme volatility in channel usage, transaction volumes and product demand. Your legacy systems may lack the resiliency needed to handle these challenges. Current customer behaviors and workloads are likely to shift quickly and dramatically again, placing your systems, your costs and your people under perpetual strain. You are faced with infrastructure that is slow and expensive. Additionally, different executives, each with their own set of concerns, can make moving to the public cloud seem daunting. 

These limitations and concerns are why banks have moved fewer than 20% of all workloads to the cloud, and virtually none of the complex workloads or those involving sensitive data. Until you find a way to safely and securely migrate and manage substantially greater workloads on the cloud, you will operate at this disadvantage. But it doesn't have to be this way: it is possible for banks to benefit fully from the public cloud. 

Introducing IBM Cloud for Financial Services

To help financial institutions transform, IBM developed IBM Cloud for Financial Services, built on the IBM Cloud. By working with Bank of America to develop industry-informed security control requirements, and by leveraging IBM Promontory, the global leader in financial services regulatory compliance, IBM Cloud for Financial Services provides the level of data security and regulatory compliance financial institutions are mandated to adhere to, along with the public cloud scale and innovation they want. With this comes the introduction of the exclusively available IBM Cloud Policy Framework for Financial Services, which deploys a shared-responsibility model for implementing controls. It is designed to enable financial institutions and their ecosystem partners to confidently host apps and workloads in the cloud and to demonstrate regulatory compliance significantly faster and more efficiently than they can today.

Workloads will run on IBM Cloud for VMware Regulated Workloads, a secure, automated reference architecture that enhances VMware vCenter Server on IBM Cloud to deliver a security-rich, high-performance platform for VMware workloads in regulated industries. Designed to enable a zero-trust model, this architecture offers clients in regulated industries a strategic approach to securely extend and scale their VMware IT operations into the IBM Cloud while maintaining compliance.

With nearly thirty ISVs and partners, procurement, contracting and onboarding within the ecosystem can be streamlined, leading to increased revenues and reduced time to market for all parties.

IBM Cloud for your workloads

IBM Cloud for Financial Services is exclusively available in North America, but you can still take advantage of all the products and services IBM Cloud has to offer in our 60-plus global data centers. 

IBM can help you build a strategy for global, regional, industry and government compliance

  • IBM Promontory® for financial services sector (FSS) workloads—operating at the intersection of strategy, risk management, technology and regulation 
  • Strong commitment to our European clients (PCI-DSS and EBA briefing) 

Maintain control of your cloud environment and your data

  • Client-key management (BYOK and KYOK) 
  • Visibility and auditability with physical-asset management and logging and monitoring 
  • Full control of the stack, with transparency for audit purposes, right down to the serial number of the server

Security leadership with market-leading data protection

  • Clients can keep their own key that no one else can see, so not even IBM operators can access the key or the data it protects, unlike with other cloud vendors. IBM Cloud Hyper Protect Crypto Services is designed to give clients control of their cloud data-encryption keys and the cloud hardware security module (HSM), and it is the only service in the industry with FIPS 140-2 Level 4 certification. 
  • Each workload requires its own access and security rules; IBM enables organizations to define and enforce such guidelines by way of integrated container security and DevSecOps for cloud-native applications with IBM Cloud Kubernetes Service. 
  • IBM Cloud Security Advisor detects security misconfigurations so organizations can better assess their security posture and take corrective action.

Reduce complexity and speed innovation

  • IBM Garage™ for quick creation and scaling of new ideas that can dramatically impact your business 
  • With IBM's vast ISV and partner ecosystem, banks can reduce overhead and the time and effort needed to ensure compliance of third-party vendors, leaving more time for delivering new services  

 

“We received the best of both worlds: the innovation and speed benefits of the IBM public cloud with the high security of a private cloud.” — Bernard Gavgani, Global Chief Information Officer, BNP Paribas

 

Why IBM?

Built on a foundation of open source software, security leadership and enterprise-grade hardware, IBM Cloud provides the flexibility needed to relieve the workload-management headaches often associated with moving to the cloud. IBM Cloud offers the lowest cloud vendor costs and the broadest portfolio of secure-compute choices, with a wide array of enterprise-grade security services and products to help those in regulated industries. And most recently, IBM Cloud was recognized as a 2019 Gartner Peer Insights Customers' Choice for Cloud Infrastructure as a Service, Worldwide. The vendors with this distinction have been highly rated by their customers. Read the announcement to learn more.

Top 10 Facts Tech Leaders Should Know About Cloud Migration

Cloud Migration Is A Harder Form Of Cloud Adoption

Cloud migration gained much popularity after Amazon Web Services (AWS) re:Invent in 2015 and a revolutionary speech by General Electric's (GE's) CIO, Jim Fowler.1 Rather than focusing public cloud adoption on building new apps, Fowler referred to AWS as a preferred outsourcing option to host its existing applications. Prior to this, I&O leaders had disregarded cloud migration as hard, expensive, and detrimental to the performance of applications. The new storyline highlighted megacloud ecosystem benefits, reinforced outsourcing messaging, and, more importantly, promised that cheaper migration methods were no longer problematic and that careful planning could mitigate the performance issues.

Decide Whether Migration Is An App Strategy Or A Data Center Strategy

After collecting hundreds of cloud migration stories, Forrester recognizes that enterprises view cloud migration from two vastly different points of view: 1) an application sourcing strategy or 2) a data center strategy. Depending on which lens they’re using, enterprises build their business cases around different timelines, drivers, goals, and expectations (see Figure 1). Organizations may view cloud migration as:

An app sourcing strategy. The goal is to optimize sourcing decisions for a full app portfolio. Typically, the scope of migration is limited to large packaged app hubs, subsets of apps with certain characteristics, or apps with location-based performance challenges. Major enterprise applications, e.g., SAP S/4HANA, commonly move to public cloud platforms with ongoing supplemental managed services support.2 Business cases usually outline mitigated latency, improved experience, or lower operational costs to maintain the migrated workloads.

A data center strategy. The goal is outsourcing as many apps as possible. The scale for this approach is large and usually tied to a “moment of change” (e.g., new executives, a data center refresh, a data center closing, or a contract ending). With such massive scale, these enterprises opt for less expensive migration paths and are more forgiving of performance drops that may occur during the initial migration. Data center strategists rarely complete migrations without the support of consultancies and tooling. Business cases usually rely on classic outsourcing benefits, cost avoidance, and reduced staffing (often through attrition) to justify the expense.

 

Forrester's Top 10 Cloud Migration Facts

Today, 76% of North American and European enterprise infrastructure decision makers consider migrating existing applications to the cloud as part of their cloud strategy.3 This shockingly high figure is supported with powerful enterprise examples, including Allscripts, BP, Brinks Home Security, Brooks Brothers, Capital One, Chevron, The Coca-Cola Company, Dairy Farmers of America (DFA), GE, Hess, J.B. Hunt Transport, Kellogg, Land O’Lakes, and McDonald’s.4 Despite cloud’s popularity, migration is still hard. It’s still expensive. And it still requires due diligence to mitigate these factors. Here are Forrester’s top 10 facts that I&O leaders should know about cloud migration:

  1. Cloud migration won't have the same benefits as SaaS migration. When you adopt a software-as-a-service (SaaS) technology, you're using a new app designed specifically for a cloud platform. An app specialist is managing and updating that app. The new app has a new interface that your business users access and recognize as different. When you're migrating an app to a cloud platform, none of that is true. You're placing the same app in a generic cloud platform without the support of an app specialist. Any redesign requires your time, and the business user ultimately experiences the same app and interface. The best-case scenario is that performance stays the same and your business users don't notice. That's a lot less compelling than the case for SaaS.5 Don't equate the two migration terms.
  2. Business users don't care about cloud migration. If all goes well, your business users will experience the same app with no decline in performance. That isn't a very compelling story for business users. If your cloud strategy is supposed to inspire, don't focus your marketing on migration. Instead, focus on the elements of your cloud IT strategy that deliver new capabilities. Although its potential is powerful — in that cloud migration can clean up inefficiencies or release spend that might help fund new investments — the migration itself isn't inspiring. For enterprises with "cloud first" policies, cloud migration may involve a corporatewide awareness that requires technology professionals to engage with the business to help ensure a smooth transition.
  3. Cloud migration is hard. Cloud platforms differ in a few fundamental ways from enterprise data centers; they use commodity infrastructure, extremely high average-sustained utilization levels, and minimal operational time per virtual machine (VM).6 Consumers also get a financial reward if their apps vary resource usage as their traffic varies. Knowing this, enterprises have accordingly designed new apps to mitigate cost and obtain high performance. But for existing apps — as highlighted in cloud migration — this is much more difficult. Redesign or modernization, although ideal, is costly. Organizations can systematically solve these challenges, but learning these best practices can be painful. For critical workloads, the tolerance for mistakes can be low, especially when the advantages of the migration itself are less apparent to business users.
  4. Cloud readiness means scalable, resilient, and dependency-aware. To ready existing applications for cloud, enterprises look at basic improvements that can make a big difference in a public cloud. They ensure financial alignment by making their apps scale, consuming fewer resources when they're less busy. Dependency mapping is another key step toward readiness, eliminating low-value dependencies and grouping applications into ecosystems to inform sets for the migration plan. More-thorough approaches break apps into services to increase application resiliency by eliminating dependencies within a single application. Migration discovery tools provide some readiness findings, including version updates, dependencies, financial implications, minimal application code and architectural feedback, and grouping suggestions (see the dependency-grouping sketch after this list).
  5. Mass migrations typically align to a moment of change. Rightsourcing decisions explore characteristics that favor cloud.8 Mass migration (e.g., the migration of an entire app portfolio or a substantial number of apps) usually aligns to a "moment of change." This includes executive changes; acquisitions or divestitures; the end of colocation contracts; infrastructure refreshes; security incidents; drastic changes in sourcing; and fear of, or experienced, disruption, any of which can motivate significant and costly action at a specific point in time. Aligning to beneficial timing can make it easier to gain support, overcome barriers, or justify the economics behind a costly change. Almost all mass migrations align to one of these moments.
  6. Four paths exist for cloud migration. You may hear references to "the six R's of migration" — rehost, replatform, repurchase, refactor, retire, and retain.9 Occasionally, other favored "R" terms are mixed in — redesign, rebuild, refresh, etc. Forrester highlights four key paths to cloud migration: 1) lift-and-shift (minimal change and moved through replication technology); 2) lift-and-extend (rehosting the app while making significant changes after the move); 3) hybrid extension (not moving existing parts of an app but rather building new parts in a public cloud); and 4) full replacement (complete or major rewrites to the application).10 Each company uses multiple methods for migration. Lift-and-shift is less resource-intensive, as it involves little change; however, it may cause performance decline. Full replacement requires significant change and resources.
  7. Creating a cloud migration business case isn’t easy. Cost savings are hard to come by in cloud migration. Certain characteristics may make it easier to cut costs, such as shutting down data centers, eliminating painful inefficiencies, making minimal changes, and relying on minimal support for the migration. These may not be plausible or even recommended. Some of the more compelling business cases rely on cost avoidance, not cost savings (e.g., not buying new infrastructure). Creating your business case means cost, benefits, and future enablers, as defined by Forrester’s Total Economic Impact™ (TEI) model.11 Although you can support your documentation with any of the case studies noted above, it’s impossible to create your business case before you’ve defined the scope of your migration or gathered data about the specifics of your applications.
  8. Native platforms, consultancies, MSPs, and tools aid migration. Cloud migration is a massive revenue opportunity for cloud platforms. As a result, major public cloud platforms have eagerly built out migration support services, tooling, and certifications. Consultancies provide dedicated assistance to evaluate, plan, and migrate workloads, especially for massive migrations. MSPs also assist in migration but largely focus on the ongoing management after the migration. Standalone discovery and replication software assist both self-run and supported migrations. If you’re looking for support, it’s easy to come by.
  9. Hosted private cloud can be a less painful incremental step. Hosted private cloud isn't the flashiest cloud technology. In fact, it falls short of public cloud capabilities and expectations in almost every way. However, it has three characteristics that deliver a practical solution for many use cases: 1) It's often built on VMware products; 2) it has dedicated options; and 3) it's managed by a service provider. For cloud migrators, it's far easier to migrate a portfolio of applications to a VMware-based cloud environment that is isolated from other clients and partially managed to the OS or app, so they can more realistically meet aggressive deadlines and maintain stable performance. This approach can help control costs, avoid performance issues, and provide migration support to the public cloud, with the help of your hosted private cloud provider.
  10. Repatriation happens, but it's an app-level decision. Applications occasionally go in the other direction. The term repatriation started with cloud-negative origins, used to save reputation when an ill-advised cloud migration occurred prior to market maturity. More recently, it reflects a one-off sourcing change for an app when its characteristics change during the life of that workload and are no longer acceptable on a public cloud platform. Organizations undertake this effort only when the current state is painful — not simply inconvenient or slightly more expensive. Usually, it's regulation or significant cost escalation that drives such a drastic change for an app. AI/ML is a common cost example. Regulation-driven repatriation can mean that the scope of the application has changed, the regulation has changed, or the company's approach to complying with regulation has evolved. Very rarely do we see complete strategywide repatriation, but when it occurs, it's large technology footprints or ASIC requirements (e.g., Dropbox) that drive this decision.
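To make fact 4's dependency grouping concrete, here is a minimal sketch, assuming a dependency map of the kind a discovery tool might produce (the app names are hypothetical): applications are grouped into migration sets by finding the connected components of the dependency graph.

```python
from collections import defaultdict

# Hypothetical app-to-app dependency map discovered by a migration tool.
deps = {
    "billing": ["erp", "payments"],
    "payments": ["erp"],
    "hr-portal": ["hr-db"],
    "reporting": ["erp"],
}

def migration_groups(deps):
    """Group apps into ecosystems: connected components of the dependency graph."""
    graph = defaultdict(set)
    for app, targets in deps.items():
        for t in targets:
            graph[app].add(t)
            graph[t].add(app)  # treat dependencies as undirected for grouping
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            group.add(n)
            stack.extend(graph[n] - seen)
        groups.append(sorted(group))
    return groups

print(migration_groups(deps))
# [['billing', 'erp', 'payments', 'reporting'], ['hr-db', 'hr-portal']]
```

Apps that land in the same group generally need to move in the same migration wave, or the dependency between them must be deliberately broken first.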

Prepare Yourself For Your Migration Strategy

Our team of IT service professionals in Orange County can start your cloud migration strategy by educating your migration team, executives, and business users about how cloud migration fits into your larger cloud strategy. I&O professionals should use this report to help outline the key concepts to ensure better communication and accurate expectations. Moving forward, here are the steps you'll need to tackle:

Identify the best-fit scope. Before jumping into cloud migration, first determine whether you're seeking gains at the application level or the data center level. This is the first stage of determining scope. For those seeking app-level gains, start with your application portfolio. Create your own sourcing framework. This may include cloud readiness, variability, scalability, location challenges, dependencies, compliance requirements, data types, need for additional support, expected lifetime, and app satisfaction. For those seeking gains at the data center level, the framework will be similar, but the results will heavily skew in favor of public cloud or SaaS migration as the preferred options. The framework itself may ask "why not host in a certain solution" rather than whether an app is the best fit or optimized for that platform. Rather than app-level optimization, the goal is system-level optimization, where the enterprise data center is seen as a source of inefficiency.

Determine (and find) the support you need. Support is expensive but valuable, depending on your scope, experience, and executive sponsorship. Most migrators leverage some level of support, whether it's tools, workshops, best practices, early guidance, or full migration support. After determining the right level of support, you'll need to decide the type of provider that will deliver it and which set of partners meets your needs.

Obtain real estimates based on your own numbers. The most common cloud migration inquiry question, "How much will I save from cloud migration?", is impossible to answer accurately without inputs from your own estate. Your scope, current configurations, trust in autoscaling, anticipated changes, use of consultancies, cost avoidance, and team skill sets will all determine this figure. Each major cloud provider offers calculators. Each consultancy gives its own estimates. Before making definitive claims in your business case, get some real estimates and determine which costs won't be going away.

Five common data security pitfalls to avoid

Data security should be a top priority for enterprises, and for good reason

Even as the IT landscape becomes increasingly decentralized and complex, it’s important to understand that many security breaches are preventable. While individual security challenges and goals may differ from company to company, often organizations make the same widespread mistakes as they begin to tackle data security. What’s more, many enterprise leaders often accept these errors as normal business practice.

There are several internal and external factors that can lead to successful cyberattacks, including:

  •  Erosion of network perimeters 
  •  Increased attack surfaces offered by more complex IT environments 
  •  Growing demands that cloud services place on security practices 
  •  Increasingly sophisticated nature of cyber crimes 
  •  Persistent cybersecurity skills shortage 
  •  Lack of employee awareness surrounding data security risks

How strong is your data security practice?

Let’s look at five of the most prevalent—and avoidable—data security missteps that make organizations vulnerable to potential attacks, and how you can avoid them.

Pitfall 1

Failure to move beyond compliance

Compliance doesn't necessarily equal security. As TeraPixels Systems' team of IT service professionals in San Diego has seen, organizations that focus their security resources on passing an audit or earning a certification can become complacent. Many large data breaches have happened in organizations that were fully compliant on paper. The following examples show how focusing solely on compliance can diminish effective security:

Incomplete coverage

Enterprises often scramble to address database misconfigurations and outdated access policies prior to an annual audit. Vulnerability and risk assessments should be ongoing activities.

Minimal effort

Many businesses adopt data security solutions just to fulfill legal or business partner requirements. This mindset of "let's implement a minimum standard and get back to business" can work against good security practices. Effective data security is a marathon, not a sprint.

Fading urgency

Businesses can become complacent about managing controls as regulations, such as the Sarbanes-Oxley Act (SOX) and the General Data Protection Regulation (GDPR), mature. While, over time, leaders can become less attentive to the privacy, security and protection of regulated data, the risks and costs associated with noncompliance remain.

Omission of unregulated data

Assets, such as intellectual property, can put your organization at risk if lost or shared with unauthorized personnel. Focusing solely on compliance can result in security organizations overlooking and under protecting valuable data.

Solution

Recognize and accept that compliance is a starting point, not the goal

Data security organizations must establish strategic programs that consistently protect their business's critical data, as opposed to simply responding to compliance requirements.

Data security and protection programs should include these core practices:

  • Discover and classify your sensitive data across on-premises and cloud data stores. 
  • Assess risk with contextual insights and analytics. 
  • Protect sensitive data through encryption and flexible access policies. 
  • Monitor data access and usage patterns to quickly uncover suspicious activity. 
  • Respond to threats in real time.
  • Simplify compliance and its reporting.

The final practice can include weighing legal liabilities related to regulatory compliance, the possible losses a business can suffer and the potential costs of those losses beyond noncompliance fines.

Ultimately, you should think holistically about the risk and value of the data you seek to secure. 
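As a concrete starting point for the first practice, discovery and classification, the sketch below flags credit-card-like numbers in text using a pattern match plus a Luhn checksum. It is illustrative only; real discovery tools scan databases, file shares and object stores and recognize many more data types than this one pattern.

```python
import re

# Candidate pattern: 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: filters out most random digit runs that aren't card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_candidate_cards(text: str):
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_ok(digits):
            yield digits

print(list(find_candidate_cards("order ref 4111 1111 1111 1111, qty 12")))
```

Hits like these would then feed the later practices: classifying the data store, tightening its access policies and monitoring who reads from it.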

Pitfall 2

Failure to recognize the need for centralized data security

Without broader compliance mandates that cover data privacy and security, organization leaders can lose sight of the need for consistent, enterprise-wide data security. 

For enterprises with hybrid multicloud environments, which constantly change and grow, new types of data sources can appear weekly or daily and greatly disperse sensitive data.

Leaders of companies that are growing and expanding their IT infrastructures can fail to recognize the risk that their changing attack surface poses. They can lack adequate visibility and control as their sensitive data moves around an increasingly complex and disparate IT environment. Failure to adopt end-to-end data privacy, security and protection controls—especially within complex environments—can prove to be a very costly oversight.

Operating security solutions in silos can cause additional problems. For example, organizations with a security operations center (SOC) and security information and event management (SIEM) solution can neglect to feed those systems with insights gleaned from their data security solution. Likewise, a lack of interoperability between security people, processes and tools can hinder the success of any security program.

Solution

Know where your sensitive data resides, including in on-premises and cloud-hosted repositories

Securing sensitive data should occur in conjunction with your broader security efforts. In addition to understanding where your sensitive data is stored, you need to know when and how it’s being accessed, as well—even as this information rapidly changes. Additionally, you should work to integrate data security and protection insights and policies with your overall security program to enable tightly aligned communication between technologies. A data security solution that operates across disparate environments and platforms can help in this process.

So, when is the right time to integrate data security with other security controls as part of a more holistic security practice? Here are a few signs that suggest your organization may be ready to take this next step: 

Risk of losing valuable data 

The value of your organization’s personal, sensitive and proprietary data is so significant that its loss would cause significant damage to the viability of your business.

Regulatory implications 

Your organization collects and stores data subject to legal requirements, such as credit card numbers, other payment information or personal data.

Lack of security oversight 

Your organization has grown to a point where it's difficult to track and secure all the network endpoints, including cloud instances. For example, do you have a clear idea of where, when and how data is being stored, shared and accessed across your on-premises and cloud data stores?

Inadequate assessment 

Your organization has adopted a fragmented approach where no clear understanding exists of exactly what's being spent across all your security activities. For example, do you have processes in place to accurately measure your return on investment (ROI) in terms of the resources being allocated to reduce data security risk?

If any of these situations apply to your organization, you should consider acquiring the security skills and solutions needed to integrate data security into your broader existing security practice.

Pitfall 3

Failure to define who owns responsibility for the data

Even when aware of the need for data security, many companies have no one specifically responsible for protecting sensitive data. This situation often becomes apparent during a data security or audit incident when the organization is under pressure to find out who is actually responsible.

Top executives may turn to the chief information officer (CIO), who might say, “Our job is to keep key systems running. Go talk to someone in my IT staff.” Those IT employees may be responsible for several databases in which sensitive data resides and yet lack a security budget. 

Typically, members of the chief information security officer (CISO) organization aren’t directly responsible for the data that’s flowing through the overall business. They may give advice to the different line-of-business (LOB) managers within an enterprise, but, in many companies, nobody is explicitly responsible for the data itself. For an organization, data is one of its most valuable assets. Yet, without ownership responsibility, properly securing sensitive data becomes a challenge.

Solution 

Hire a CDO or DPO dedicated to the well-being and security of sensitive and critical data assets

A chief data officer (CDO) or data protection officer (DPO) can handle these duties. In fact, companies based in Europe or doing business with European Union data subjects face GDPR mandates that require them to have a DPO. This prerequisite recognizes that sensitive data—in this case personal information—has value that extends beyond the LOB that uses that data. Additionally, the requirement emphasizes that enterprises have a role specifically designed to be responsible for data assets. Consider the following objectives and responsibilities when choosing a CDO or DPO:

Technical knowledge and business sense 

Assess risk and make a practical business case that nontechnical business leaders can understand regarding appropriate security investments.

Strategic implementation 

Direct a plan at a technical level that applies detection, response and data security controls to provide protections.

Compliance leadership 

Understand compliance requirements and know how to map those requirements to data security controls so that your business is compliant.

Monitoring and assessment 

Monitor the threat landscape and measure the effectiveness of your data security program.

Flexibility and scaling 

Know when and how to adjust the data security strategy, such as by expanding data access and usage policies across new environments or integrating more advanced tools.

Division of labor 

Set expectations with cloud service providers regarding service-level agreements (SLAs) and the responsibilities associated with data security risk and remediation.

Data breach response plan 

Finally, be ready to play a key role in devising a strategic breach mitigation and response plan.

Ultimately, the CDO or DPO should lead in fostering data security collaboration across teams and throughout your enterprise, as everyone needs to work together to effectively secure corporate data. This collaboration can help the CDO or DPO oversee the programs and protections your organization needs to help secure its sensitive data.

Pitfall 4

Failure to address known vulnerabilities

High-profile breaches in enterprises have often resulted from known vulnerabilities that went unpatched even after the release of patches. Failure to quickly patch known vulnerabilities puts your organization’s data at risk because cybercriminals actively seek these easy points of entry. 

However, many businesses find it challenging to quickly implement patches because of the level of coordination needed between IT, security and operational groups. Furthermore, patches often require testing to ensure they don't break a process or introduce a new vulnerability. 

In cloud environments, sometimes it’s difficult to know if a contracted service or application component should be patched. Even if a vulnerability is found in a service, its users often lack control over the service provider’s remediation process.

Solution

Establish an effective vulnerability management program with the appropriate technology to support its growth

Vulnerability management typically involves some of the following levels of activity:

  • Maintain an accurate inventory and baseline state for your data assets. 
  • Conduct frequent vulnerability scans and assessments across your entire infrastructure, including cloud assets. 
  • Prioritize vulnerability remediation by weighing the likelihood of a vulnerability being exploited against the impact that event would have on your business (see the sketch after this list). 
  • Include vulnerability management and responsiveness as part of the SLA with third-party service providers. 
  • Obfuscate sensitive or personal data whenever possible. Encryption, tokenization and redaction are three options for achieving this end. 
  • Employ proper encryption key management, ensuring that encryption keys are stored securely and cycled properly to keep your encrypted data safe.
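The risk-based prioritization above can be as simple as a scoring function. The sketch below is a hypothetical example; the field names, weights and findings are illustrative, not the output of any particular scanner.

```python
# Illustrative findings: base severity (CVSS), whether an exploit has been
# seen in the wild, and how critical the affected asset is (1-5).
findings = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "exploit_seen": True,  "asset_criticality": 3},
    {"cve": "CVE-2023-1234", "cvss": 6.5, "exploit_seen": False, "asset_criticality": 5},
    {"cve": "CVE-2022-9999", "cvss": 4.0, "exploit_seen": False, "asset_criticality": 1},
]

def risk_score(f):
    # Double the weight when an exploit is already circulating.
    likelihood = 2.0 if f["exploit_seen"] else 1.0
    return f["cvss"] * likelihood * f["asset_criticality"]

# Remediate in descending risk order, not simply by raw severity.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f["cve"], round(risk_score(f), 1))
```

The point of the exercise is the ordering: a moderate flaw on a crown-jewel database can outrank a critical flaw on a throwaway test server.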

Even within a mature vulnerability management program, no system can be made perfect. Assuming intrusions can happen even in the best protected environments, your data requires another level of protection. The right set of data encryption techniques and capabilities can help protect your data against new and emerging threats.

 

Pitfall 5

Failure to prioritize and leverage data activity monitoring

Monitoring data access and use is an essential part of any data security strategy. An organization leader needs to know who, how and when people are accessing data. This monitoring should encompass whether these people should have access, if that access level is correct and if it represents an elevated risk for the enterprise. 

Privileged user identifications are common culprits in insider threats.5 A data protection plan should include real-time monitoring to detect privileged user accounts being used for suspicious or unauthorized activities. To prevent possible malicious activity, a solution must perform the following tasks: 

  • Block and quarantine suspicious activity based on policy violations.
  • Suspend or shut down sessions based on anomalous behavior. 
  • Use predefined regulation-specific workflows across data environments. 
  • Send actionable alerts to IT security and operations systems.

 Accounting for data security and compliance-related information and knowing when and how to respond to potential threats can be difficult. With authorized users accessing multiple data sources, including databases, file systems, mainframe environments and cloud environments, monitoring and saving data from all these interactions can seem overwhelming. The challenge lies in effectively monitoring, capturing, filtering, processing and responding to a huge volume of data activity. Without a proper plan in place, your organization can have more activity information than it can reasonably process and, in turn, diminish the value of data activity monitoring.
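As a simplified illustration of the monitoring tasks above, the sketch below applies one baseline rule to a single access event. The event shape, account name and thresholds are all hypothetical; a real data activity monitoring product evaluates many such policies continuously across data sources.

```python
from datetime import datetime

# Hypothetical baseline profile for a privileged account.
BASELINE = {"dba_admin": {"hours": range(8, 19), "max_rows": 10_000}}

def check_event(event):
    """Return an alert string when a privileged access breaks its baseline."""
    profile = BASELINE.get(event["user"])
    if profile is None:
        return None  # not a monitored privileged account
    ts = datetime.fromisoformat(event["time"])
    if ts.hour not in profile["hours"]:
        return "ALERT: access outside baseline hours"
    if event["rows_read"] > profile["max_rows"]:
        return "ALERT: anomalous read volume"
    return None

print(check_event({"user": "dba_admin", "time": "2021-03-02T02:14:00",
                   "rows_read": 250_000}))
```

In production, an alert like this would feed the blocking, session-suspension and SIEM workflows listed above rather than a print statement.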

Solution

Develop a comprehensive data detection and protection strategy

TeraPixels Systems and our security and IT services professionals in Orange County are typically tasked with securing a variety of businesses. To that end, when starting on a data security journey, you need to size and scope your monitoring efforts to properly address the requirements and risks. This activity often involves adopting a phased approach that enables developing and scaling best practices across your enterprise. Moreover, it's critical to have conversations with key business and IT stakeholders early in the process to understand short-term and long-term business objectives.

These conversations should also capture the technology that will be required to support key initiatives. For instance, if the business is planning to set up offices in a new geography using a mix of on-premises and cloud-hosted data repositories, your data security strategy should assess how that plan will impact the organization's data security and compliance posture. The company's data may, for example, become subject to new data security and compliance requirements, such as the GDPR, the California Consumer Privacy Act (CCPA) or Brazil's Lei Geral de Proteção de Dados (LGPD).

You should also prioritize and focus on one or two sources that likely have the most sensitive data. Make sure your data security policies are clear and detailed for these sources before extending these practices to the rest of your infrastructure. 

You should look for an automated data or file activity monitoring solution with rich analytics that can focus on key risks and unusual behaviors by privileged users. Although it’s essential to receive automated alerts when a data or file activity monitoring solution detects abnormal behavior, you must also be able to take fast action when anomalies or deviations from your data access policies are discovered. Protection actions should include dynamic data masking or blocking.

 

A guide to securing cloud platforms

Rethink security for cloud-based applications

As more organizations move to a cloud-native model for developing apps and managing workloads, cloud computing platforms are rapidly limiting the effectiveness of the traditional perimeter-based security model. While still necessary, perimeter security is by itself insufficient. Because data and applications in the cloud are outside the old enterprise boundaries, they must be protected in new ways. 

Organizations transitioning to a cloud-native model or planning hybrid cloud app deployments must supplement traditional perimeter-based network security with technologies that protect cloud-based workloads. Enterprises must have confidence in how a cloud service provider secures their stack from the infrastructure up. Establishing trust in platform security has become fundamental in selecting a provider.

Cloud security drivers 

Data protection and regulatory compliance are among the main drivers of cloud security investments, and they're also inhibitors of cloud adoption. Addressing these concerns extends to all aspects of development and operations. With cloud-native applications, data may be spread across object stores, data services and clouds, which creates multiple fronts for potential attacks. And attacks are not just coming from sophisticated cybergangs and external sources; according to a recent survey, 53 percent of respondents confirmed insider attacks in the previous 12 months.

Five fundamentals of cloud security 

As organizations address the specialized security needs of using cloud platforms, they need and expect their providers to become trusted technology partners. In fact, an organization should evaluate cloud providers based on these five aspects of security as they relate to the organization’s own specific requirements: 

  1. Identity and access management: Authentication, identity and access controls 
  2. Network security: Protection, isolation and segmentation 
  3. Data protection: Data encryption and key management 
  4. Application security and DevSecOps: Including security testing and container security 
  5. Visibility and intelligence: Monitoring and analyzing logs, flows and events for patterns

Verify identity and manage access on a cloud platform

Any interaction with a cloud platform starts with verifying identity, establishing who or what is doing the interacting—an administrator, a user or even a service. In the API economy, services take on their own identity, so the ability to accurately and safely make an API call to a service based on this identity is essential to successfully running cloud-native apps. 

Look for providers that offer a consistent way to authenticate an identity for API access and service calls. You also need a way to identify and authenticate end users who access applications hosted in the cloud. As an example, IBM® Cloud uses App ID as a way for developers to integrate authentication into their mobile and web apps.
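On the backend side, verifying the tokens such a service issues typically looks like the hedged sketch below, written with the PyJWT library. The JWKS URL and audience are placeholders, not values from any specific tenant; the exact endpoint depends on your App ID (or other identity provider) instance.

```python
import jwt  # PyJWT
from jwt import PyJWKClient

# Hypothetical placeholders: the identity provider publishes its signing
# keys at a JWKS endpoint, and the backend verifies each bearer token
# before serving the request.
JWKS_URL = "https://<region>.appid.cloud.ibm.com/oauth/v4/<tenant-id>/publickeys"
AUDIENCE = "<your-client-id>"

def verify_access_token(token: str) -> dict:
    """Check the token's signature, algorithm and audience; return its claims."""
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(token, signing_key.key, algorithms=["RS256"],
                      audience=AUDIENCE)
```

A request whose token fails any of these checks is rejected before it ever reaches application logic.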

Strong authentication keeps unauthorized users from accessing cloud systems. Since platform identity and access management (IAM) is so fundamental, organizations that have an existing system should expect cloud providers to integrate with their company's identity management system. This is often supported through identity federation technology that links an individual's ID and attributes across multiple systems.

Ask prospective cloud providers to prove that their IAM architecture and systems cover all the bases. In the IBM Cloud, for example, identity and access management is based on several key features (a toy policy-evaluation sketch follows the lists below):

Identity

  • Each user has a unique identifier 
  • Services and applications are identified by their service IDs 
  • Resources are identified and addressed by the cloud resource name (CRN) 
  • Users and services are authenticated and issued tokens with their identities

Access management

  • As users and services attempt to access resources, an IAM system determines whether access and actions are allowed or denied 
  • Services define actions, resources and roles 
  • Administrators define policies that assign users roles and permissions on various resources 
  • Protection extends to APIs, cloud functions and back-end resources hosted on the cloud
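A toy policy evaluation, assuming made-up role, subject and resource names rather than IBM Cloud's actual policy schema, shows how these pieces fit together: services define actions and roles, administrators bind subjects to roles on resources, and IAM answers allow-or-deny per request.

```python
# Roles map to the actions a service defines (illustrative names only).
ROLE_ACTIONS = {
    "viewer": {"read"},
    "editor": {"read", "update"},
    "administrator": {"read", "update", "delete", "grant"},
}

# Policies bind a subject (user or service ID) to a role on a resource.
policies = [
    {"subject": "user:alice", "role": "editor", "resource": "crn:db:orders"},
    {"subject": "service:reporting", "role": "viewer", "resource": "crn:db:orders"},
]

def is_allowed(subject, action, resource):
    return any(
        p["subject"] == subject
        and p["resource"] == resource
        and action in ROLE_ACTIONS[p["role"]]
        for p in policies
    )

print(is_allowed("user:alice", "update", "crn:db:orders"))        # True
print(is_allowed("service:reporting", "delete", "crn:db:orders")) # False
```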

As you evaluate a cloud provider's IAM solutions, look for access control lists together with cloud resource names that enable you to limit users not only to certain resources, but also to certain operations on those resources. These capabilities help ensure that your data is protected from both unauthorized external and internal access.

Extending your own Enterprise Identity Provider (Enterprise IdP) to the cloud is particularly useful when you build a cloud-native app on top of an existing enterprise application that uses the Enterprise IdP. Your users can smoothly log in to both the cloud-native and underlying applications without having to use multiple systems or IDs. Reducing complexity is always a worthy goal.

Redefine network isolation and protection

Many cloud providers use network segmentation to limit access to devices and servers in the same network. Additionally, providers create virtual isolated networks on top of the physical infrastructure and automatically limit users or services to a specific isolated network. These and other basic network security technologies are table stakes for establishing trust in a cloud platform. 

Cloud providers offer protection technologies—from web application firewalls to virtual private networks and denial-of-service mitigation—as services for software-defined network security and charge per usage. Consider the following technologies crucial for network security in the cloud computing era.

Security groups and firewalls 

Cloud customers often insert network firewalls for perimeter protection (virtual private cloud/subnet-level network access) and create network security groups for instance-level access. Security groups are a good first line of defense for assigning access to cloud resources. You can use these groups to easily add instance-level network security to manage incoming and outgoing traffic on both public and private networks. 
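Conceptually, a security group is a default-deny allowlist evaluated per connection. The sketch below shows that logic with illustrative rules; real cloud security groups are configured through the provider's API or console, not application code.

```python
import ipaddress

# Default-deny inbound, with explicit allows per protocol/port/CIDR.
INBOUND_RULES = [
    {"proto": "tcp", "port": 443, "cidr": "0.0.0.0/0"},   # public HTTPS
    {"proto": "tcp", "port": 22,  "cidr": "10.0.0.0/8"},  # SSH from private net only
]

def inbound_allowed(proto, port, src_ip):
    """Allow a connection only if some rule explicitly matches it."""
    src = ipaddress.ip_address(src_ip)
    return any(
        r["proto"] == proto and r["port"] == port
        and src in ipaddress.ip_network(r["cidr"])
        for r in INBOUND_RULES
    )

print(inbound_allowed("tcp", 22, "203.0.113.9"))   # False: SSH blocked from internet
print(inbound_allowed("tcp", 443, "203.0.113.9"))  # True
```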

Many customers require perimeter control to secure perimeter network and subnets, and virtual firewalls are an easily deployable way to meet this need. Firewalls are designed to prevent unwanted traffic from hitting servers and to reduce the attack surface. Expect cloud providers to offer both virtual and hardware firewalls that allow you to configure permission-based rules for the entire network or subnets. 

VPNs, of course, provide secure connections from the cloud back to your on-premises resources. They are a must-have if you are running a hybrid cloud environment. 

Micro-segmentation 

Developing applications cloud-natively as a set of small services offers a security advantage: the services can be isolated from one another using network segments. Look for a cloud platform that implements micro-segmentation through the automation of network configuration and network provisioning. Containerized applications architected on the microservices model are fast becoming the norm to support workload isolation that scales. 

Protect data with encryption and key management

Reliably protecting data is a security fundamental for any digital business—especially those in highly regulated industries such as financial services and healthcare. 

Data associated with cloud-native applications may be spread across object stores, data services and clouds. Traditional applications may have their own database, their own VM and sensitive data located in files. In these cases, encryption of sensitive data both at rest and in motion becomes critical. 

Keep your own key (KYOK)

To implement data security that remains 100% private within the public cloud, IBM exclusively offers a solution that enables you to be the sole custodian of your encryption key. As the only service in the industry built on FIPS 140-2 Level 4-certified hardware, IBM Cloud Hyper Protect Crypto Services provides key management and a cloud hardware security module (HSM).

Businesses are right to worry about cloud operators or other unauthorized users accessing their data without their knowledge, and to expect complete visibility into data access. Controlling access to data with encryption and also controlling access to encryption keys are becoming expected safeguards. As a result, a bring-your-own-keys (BYOK) model is now a cloud security requirement. It allows you to manage encryption keys in a central place, provides assurance that root keys never leave the boundaries of the key management system and enables you to audit all key management lifecycle activities (Figure 2).

Trusted compute hosts

It comes down to hardware: nobody wants to deploy valuable data and applications on an untrusted host. Cloud platform providers that offer hardware with measure-verify-launch protocols give you highly secure hosts for applications deployed within the container orchestration system.

Intel Trusted Execution Technology (Intel TXT) and Trusted Platform Module (TPM) are examples of host-level technologies that enable trust for cloud platforms. Intel TXT defends against software-based attacks aimed at stealing sensitive information by corrupting system or BIOS code, or by modifying the platform's configuration. A TPM is a hardware-based security device that helps protect the system startup process by ensuring that it is tamper-free before releasing system control to the operating system.

Data protection at rest and in transit

Built-in encryption with BYOK lets you maintain control of your data, whether it’s based on premises or in the cloud. It’s an excellent way to control access to data in cloud-native application deployments. In this approach, the customer’s key management system generates a key on premises and passes it to the provider’s key management service. This approach encompasses data-at-rest encryption across storage types such as block, object and data services. 
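The key hierarchy behind this approach is often called envelope encryption: a root key wraps the data keys that actually encrypt content. Here is a minimal sketch using the Python cryptography package; in a real BYOK deployment the root key would live inside the key management system, never in application memory as it does here.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Under BYOK, the root key originates in the customer's key management
# system; it is shown inline here only to keep the sketch self-contained.
root_key = AESGCM.generate_key(bit_length=256)

def encrypt_record(plaintext: bytes):
    data_key = AESGCM.generate_key(bit_length=256)   # fresh key per record
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    # Wrap the data key with the root key; only the wrapped form is stored.
    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(root_key).encrypt(wrap_nonce, data_key, None)
    return {"nonce": nonce, "ciphertext": ciphertext,
            "wrap_nonce": wrap_nonce, "wrapped_key": wrapped_key}

def decrypt_record(rec):
    data_key = AESGCM(root_key).decrypt(rec["wrap_nonce"], rec["wrapped_key"], None)
    return AESGCM(data_key).decrypt(rec["nonce"], rec["ciphertext"], None)

rec = encrypt_record(b"account record: balance 1200")
print(decrypt_record(rec))
```

Because only wrapped data keys are stored alongside the ciphertext, revoking or rotating the root key in the KMS cuts off access to everything it protects.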

For data in transit, secure communication and transfer take place over Transport Layer Security/Secure Sockets Layer (TLS/SSL). TLS/SSL encryption also allows you to demonstrate compliance, security and governance without requiring administrative control over the cryptosystem or infrastructure. The ability to manage SSL certificates is a requirement for trust in a cloud platform.
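In practice, protecting data in transit means every client connection verifies the server's certificate and hostname before exchanging data, as in this small standard-library sketch (the hostname is just an example):

```python
import socket, ssl

# The default context requires a valid certificate chain and a matching
# hostname before any application data is exchanged.
ctx = ssl.create_default_context()

with socket.create_connection(("www.ibm.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.ibm.com") as tls:
        print(tls.version())                 # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])  # verified server identity
```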

Meeting audit and compliance needs 

Providing your own encryption keys and keeping them in the cloud—with no service provider access—gives you the visibility and control of information required for CISO compliance audits.

Automate security for DevOps

As DevOps teams build cloud-native services and work with container technologies, they need a way to integrate security checks within an increasingly automated pipeline. Because sites such as Docker Hub promote open exchange, developers can easily save image preparation time by simply downloading what they need. But with that flexibility comes the need to routinely inspect all container images placed in a registry before they are deployed. 

An automated scanning system helps ensure trust by searching for potential vulnerabilities in your images before you start running them. Ask platform vendors if they allow your organization to create policies (such as “do not deploy images that have vulnerabilities” or “warn me prior to deploying these images into production”) as part of DevOps pipeline security.
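Such a policy gate can be expressed in a few lines. The sketch below is hypothetical; the scan-report format and severity names are illustrative rather than any specific vendor's schema.

```python
# Deploy gate in the spirit of "do not deploy images that have
# vulnerabilities" / "warn me prior to deploying".
POLICY = {"block_severities": {"critical", "high"},
          "warn_severities": {"moderate"}}

def deploy_decision(scan_report):
    severities = {v["severity"] for v in scan_report["vulnerabilities"]}
    if severities & POLICY["block_severities"]:
        return "BLOCK"
    if severities & POLICY["warn_severities"]:
        return "WARN"
    return "ALLOW"

report = {"image": "registry.example.com/app:1.4",
          "vulnerabilities": [{"id": "CVE-2024-0001", "severity": "high"}]}
print(deploy_decision(report))  # BLOCK
```

Wired into the pipeline, a BLOCK result stops the rollout before the image ever reaches a cluster.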

IBM Cloud Container Service, for example, offers a Vulnerability Advisor (VA) system to provide both static and live container scanning. VA inspects every layer of every image in a cloud customer’s private registry to detect vulnerabilities or malware before image deployment. Because simply scanning registry images can miss problems such as drift from static image to deployed containers, VA also scans running containers for anomalies. It also provides recommendations in the form of tiered alerts. Other VA features that help automate security in the DevOps pipeline include:

Policy violation settings: With VA, administrators can set image deployment policies based on three types of image failure situations: installed packages with known vulnerabilities; remote logins enabled; and remote monitoring management and remote logins enabled with some users who have easily guessed passwords. 

Best practices: VA currently checks 26 rules based on ISO 27000, including settings such as password minimum age and minimum password length. 

Security misconfiguration detection: VA flags each misconfiguration issue, provides a description of it and recommends a course of action to remediate it. 

Integration with IBM X-Force®: VA pulls in security intelligence from five third-party sources and uses criteria such as attack vector, complexity and availability of a known fix to rate each vulnerability. The rating system (critical, high, moderate or low) helps administrators quickly understand the severity of vulnerabilities and prioritize remediation.

 

When it comes to remediation, VA does not interrupt running images for patching. Instead, IBM remediates the “golden” image in the registry and deploys a new image to the container. This approach helps ensure that all future instantiations of that image will have the same fix in place. VMs can still be handled traditionally, using an endpoint security service to patch VMs and fix Linux security vulnerabilities.

Create a security immune system through intelligent monitoring

When moving to the cloud, CISOs often worry about low visibility and loss of control. Since the organization’s entire cloud may go down if a particular key is deleted or a configuration change inadvertently severs a connection back to on-premises resources or an enterprise security operations center (SOC), why shouldn’t the operations engineers expect full visibility into cloud-based workloads, APIs, microservices—everything?

Access trails and audit logs 

All user and administrative access, whether by the cloud provider or your organization, should be logged automatically. A built-in cloud activity tracker can create a trail of all access to the platform and services, including API, web and mobile access. Your organization should be able to consume these logs and integrate them into your enterprise SOC.
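The shape of that integration is simple to sketch: parse each activity record and forward the security-relevant ones to the SOC pipeline. The event fields and action names below are invented for illustration; real activity trackers define their own schemas.

```python
import json

# Hypothetical set of actions worth escalating to the SIEM.
SECURITY_ACTIONS = {"iam.policy.update", "kms.key.delete", "login.failure"}

def forward_to_siem(raw_line, siem):
    event = json.loads(raw_line)
    if event.get("action") in SECURITY_ACTIONS:
        siem.append(event)  # stand-in for a syslog/HTTPS forwarder

siem = []
forward_to_siem('{"action": "kms.key.delete", "initiator": "user:alice"}', siem)
print(siem)
```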

Enterprise security intelligence 

Make sure you have the option of integrating all logs and events into your on-premises security information and event management (SIEM) system (Figure 3). Some cloud service providers also offer security monitoring with incident management and reporting, real-time analysis of security alerts and an integrated view across hybrid deployments. IBM QRadar®, for example, is a comprehensive SIEM solution offering a set of security intelligence solutions that can grow with an organization’s needs. Its machine learning capabilities train on threat patterns in a way that builds up a predictive security immune system.

Managed security with expertise 

If your organization does not have significant security expertise, explore providers that can manage security for you. Some providers can monitor your security incidents, apply threat intelligence from a variety of industries and correlate this information to take action. Ask if they can also deliver a single pane of glass that integrates in-house and managed security services.

Security that promotes business success

With cloud technology becoming a larger and more important part of running a digital business, it literally pays to look for a cloud provider that offers the right set of capabilities and controls to protect your data, applications and the cloud infrastructure on which customer-facing applications depend. Expect the platform security solution to cover the five key cloud security focus areas: identity and access; network security; data protection; application security; and visibility and intelligence. The goal is to worry less about technology and focus more on your core business. A well-secured cloud provides significant business and IT advantages, including:

Reduced time to value: Since security is already installed and configured, teams can easily provision resources and rapidly prototype user experiences, evaluate results and iterate as needed. 

Reduced capital expenditure: Using security services in the cloud can eliminate many up-front costs, including servers, software licenses and appliances. 

Reduced administrative burden: By successfully establishing and maintaining trust in the cloud platform, the provider with the right security offerings assumes the greatest burden of administration, reducing your costs in reporting and resource maintenance.

Encryption: Protect your most critical data

Encryption is all around us. Our emails can be encrypted. Our video conferences can be encrypted. Even our phone calls can be encrypted. It's only natural, then, to assume our most sensitive business data should also be encrypted. Yet according to Ponemon Institute's 2019 Global Encryption Trends Study, the average rate of adoption of an enterprise encryption strategy is only 45 percent among those surveyed.

How can you be sure that all your sensitive data is encrypted? First, you need to know where it is located. With siloed databases, cloud storage and personal devices in the mix, there’s a good chance that at least some of your sensitive data is exposed. A data breach could lead to the worst kind of exposure — the kind where you notify millions of customers that you failed to protect their privacy and their personal information.

But that doesn’t have to be your reality. The right encryption strategy will not only help protect your data, it can help strengthen your compliance posture. IBM Security Guardium helps identify your sensitive data — on premises and across hybrid multicloud — and helps to protect it with robust encryption and key management solutions. Plus, IBM Security’s strategic consulting can work with you to align your encryption strategy with business goals.

Encryption for a world in motion

The most successful businesses are driven by data and analytics. A recent study from Forrester found that such businesses, on average, grow at least seven times faster than global GDP.2 And driving implies movement. Your data can move between clients and servers. It can move over secure and non-secure networks. It can move between databases in your network. It can move between clouds. Safeguarding your sensitive data on these journeys is critical. Customers expect it and many regulatory agencies require it. So why doesn't every business do it?

Many organizations simply don't have the skills and the resources needed to effectively protect all the critical data in their business. Maybe they have a general security strategy but have not dedicated the time and effort to creating a data encryption strategy. It's a common problem, and one that cybercriminals prey upon by extracting unencrypted data and gaining unauthorized access to under-protected encryption keys. 

What can you do to help protect your business? You can start by encrypting your sensitive data, implementing strong access controls, managing your encryption keys securely and aligning your encryption efforts with the latest compliance requirements. Without these safeguards in place, your data might not be as protected as it could be.

Is your critical data protected?

Security and IT service professionals in San Diego, typically tasked with preventing data breaches, stolen passwords and internal espionage, should be concerned about the level of protection of their data, since data is the lifeblood of their businesses. Encryption can help make data unusable in the event it is hacked or stolen. Think of it as the first and last line of defense that can help protect your data from full exposure.

There are steps you can take to protect your organization’s data. A good place to start is identifying what data needs to be protected and where it is located. (The answer: more data than you realize and in more places than you expect.) Customer and financial data are obvious choices for encryption, but many companies fail to realize that even older, seemingly non-critical data can contain sensitive information, partly because the definition of what constitutes personally identifiable information (PII) has broadened considerably in the last decade.

Controlling and monitoring data access represents an important part of any data encryption strategy. It’s something that organizations need to balance with frictionless access to data. You want to make sure the right people have quick access to the data they need, while blocking the access privileges of unauthorized users. This is where security best practices can be invaluable:

  • Keep your encryption keys stored in a safe and separate location from your data 
  • Rotate your encryption keys frequently and align your key rotation strategy with your industry’s best practices for key rotation 
  • Always use self-encrypting media to help protect data on your devices 
  • Layer file and database encryption on top of media encryption to provide granular control over access and cryptographic erasure 
  • Use techniques such as data masking and tokenization to anonymize PII data that you share with outside parties
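To illustrate the key-rotation practice above, here is a minimal sketch using the Python cryptography package's MultiFernet, which decrypts with any key in the ring while re-encrypting under the newest one:

```python
from cryptography.fernet import Fernet, MultiFernet

# Two generations of keys; in production these would come from a KMS.
old_key, new_key = Fernet(Fernet.generate_key()), Fernet(Fernet.generate_key())

ciphertext = old_key.encrypt(b"customer record")

keyring = MultiFernet([new_key, old_key])  # newest key listed first
rotated = keyring.rotate(ciphertext)       # re-encrypted under new_key

print(keyring.decrypt(rotated))            # b'customer record'
```

Once all stored ciphertext has been rotated, the old key can be retired entirely, which is the point of a regular rotation schedule.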

Use encryption to defend against threats

Most security professionals, even those who include firewall protection services in their IT service package, are aware of the threats of data breaches and ransomware. They're on the news, they're on their minds and stopping them is at the top of most companies' strategic imperatives. So why do data breaches still occur? Because, for cybercriminals, data breaches and ransomware attacks still work.

Ransomware attacks and data breaches are on the rise, so businesses should be prepared for these types of threats.2 It’s important to note that preparation is different from protection. You can try to protect against network attacks and insider threats 100 percent of the time, but you won’t always be successful. There are simply too many variables, too many chances for human error and too many cybercriminals looking to exploit those vulnerabilities to stop everything. This is why preparation is important — because you actually can encrypt your most sensitive data and render it useless in the event of a breach.

Encryption should be your first and last line of defense against attacks. It protects your data and your organization against internal and external threats and helps safeguard sensitive customer data. But encryption isn't your only line of defense. Secure and consistent access controls across all your environments — on premises and in the cloud — as well as secure key management are important for keeping sensitive information out of the wrong hands.

Use encryption to help address compliance

TeraPixels Systems and our security and IT services professionals in Orange County aren't the only ones concerned with data protection. Countries, states and industry consortiums are entering the privacy picture with increasing frequency. For example, Europe's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) introduced new security requirements that can levy heavy fines for non-compliance.

Keeping up with regulations can be difficult work. Understanding what data is impacted by specific regulations in each jurisdiction, the reporting requirements and even the penalties for non-compliance can be a full-time job. And in a world where full-time compliance experts are in scarce supply, many organizations have much to do before achieving compliance readiness.

Encryption, to borrow an expression, can cover a multitude of security sins. It can help to make your critical and sensitive data — what cybercriminals desire — worthless to would-be thieves. In many cases, compliance regulations mandate data encryption on some level. But beyond basic encryption, there are additional measures that every organization can take to protect their data. For example, using pseudonymization strategies such as data masking and tokenization to selectively hide sensitive data as it's being shared with partners can help keep your data both productive and protected. Using self-encrypting media on any device that stores data is another important safeguard that can help prevent unauthorized parties from gaining access to data on stolen or salvaged devices.
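The sketch below illustrates the difference between those two techniques, with a hypothetical secret and formats: masking irreversibly hides most of a value, while tokenization substitutes a repeatable surrogate that can still be joined on or analyzed.

```python
import hmac, hashlib

# Illustrative secret only; a real deployment would keep this in a KMS.
TOKEN_SECRET = b"replace-with-a-managed-secret"

def mask_card(pan: str) -> str:
    """Masking: irreversibly hide all but the last four digits."""
    return "*" * (len(pan) - 4) + pan[-4:]

def tokenize(value: str) -> str:
    """Tokenization: replace the value with a keyed, repeatable surrogate.
    A production token vault would also support controlled de-tokenization."""
    return hmac.new(TOKEN_SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]

print(mask_card("4111111111111111"))  # ************1111
print(tokenize("4111111111111111"))   # stable surrogate for joins/analytics
```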

How IBM Security Guardium can help protect your data

IBM Security Guardium can provide you with advanced and integrated solutions that help your organization identify, encrypt and securely access your most sensitive data. In addition, IBM Security offers security services and expertise to help your organization develop effective, efficient data protection strategies. At the heart of our encryption solutions are the IBM Security Guardium Data Encryption family of products and IBM Security Guardium Key Lifecycle Manager (GKLM).

IBM Security Guardium Data Encryption (GDE) helps protect critical data across all your data environments, helping to address compliance with industry and government regulations. The integrated family of products that make up GDE feature encryption for files, databases, applications and containers, as well as centralized key and policy management. GDE also provides data masking and tokenization, in addition to integration with third-party hardware security modules.

IBM Security Guardium Key Lifecycle Manager (GKLM)* helps deliver a secured, centrally managed encryption key management solution that supports the Key Management Interoperability Protocol (KMIP) — the standard for encryption key management — and features multi-master clustering for high availability and resiliency. GKLM can help organizations follow industry best practices for encryption key storage, access, security and reliability. GKLM simplifies encryption key management, synchronizes encryption keys between on-premises and cloud environments and automates many encryption functions, including self-encryption for storage media.