Batter Up: DevOps as a Team Sport

For DevOps to work well, developers, operations staff, security teams, finance, and other business leaders must work together

A DevOps team is not much different from a baseball team. There are individuals with unique skills – developers, security teams, system admins, and business owners – who need to work together for the team to succeed. Unfortunately, in many workplaces these job functions are siloed, creating obstacles to achieving a high-functioning DevOps culture.

The solution is building strong communication across all functions of your organization so your team knows what to expect and how to respond. If you were a baseball coach, you’d want your catcher to know in advance if the pitcher is about to throw a 95 MPH fastball or an 82 MPH curveball, and the same holds true across IT.

When a professional baseball player goes into a slump at the plate, he may change out the bat or adjust swing mechanics with the help of a hitting instructor. For a software development organization, a slump may mean being slow to market with new features, or even worse, pushing out buggy software.

Regardless, that team needs to change things up and reach for new strategies, such as deploying a new automated testing tool or replacing a flawed software delivery process with an agile development methodology. Too often, however, these changes produce only temporary improvements.

Many organizations fail to expand DevOps beyond the initial Proof of Concept stage. While technology choices can play a part in improvement, it is more often a lack of organizational buy-in and an inability to break free from legacy development methodologies that prevent a DevOps team from winning.

Put Me in Coach: Creating a Winning DevOps Team

In baseball, every hitter has a unique approach. Similarly, DevOps is not a one-size-fits-all solution. Instead, it requires aligning your company mission with industry best practices and integrating CI/CD tools to accelerate your organization’s time to value. If only it were as simple as switching out your bat!

Understanding how to access the full benefit of DevOps is a thoughtful process that includes requirements gathering, executive sponsorship, vendor selection, tool configuration, testing, organization design changes, staff training, and more. Bringing in an expert partner as a DevOps “coach” can further improve your team’s odds of success.

If you do engage a partner, they should be able to respond to the specific needs of your team and offer flexible solutions that fit your objectives. Maybe you prefer a particular git repository, project management tool, or code deployment solution over another. Or perhaps you are a federal government agency with FedRAMP requirements to address. Every scenario is unique.

Getting support from the right provider allows your development team to focus on what they do best rather than being slowed down by vendor management issues, pipeline software patching, and tool configuration.

Meeting Security & Compliance

As PCI, SOX, FISMA, HIPAA, HITRUST, GDPR, and the many regional and state-level security and data privacy laws continue to evolve, compliance issues can introduce risk to a data-driven business.

A partner with DevOps expertise can be a significant help when it comes to security controls, industry regulations, and government mandates. This means working with experts who know how to make sense of regulations and tune the code pipeline to perform static and dynamic code analysis.

Statistics for Measuring DevOps Success  

It is easy to compare baseball players based on their statistics. For example, Hall of Fame hitters have high on-base percentages, score runs, and hit for average. In the case of DevOps, there are key performance indicators that determine team performance.

For example, a decrease in support tickets may be a simple way of noting progress, but DevOps requires a more comprehensive set of measurements. High-achieving DevOps teams view consistent or even accelerating deployment frequency as an indicator of success – and expect the volume of changes to trend upward. At the same time, they watch the lead time from development to deployment, the rate of unsuccessful deployments, and the time to recover from failures as stats that should trend downward.

Precise planning is equally important; accuracy of development estimates is another stat to measure over time. In each of these instances, you need a partner with experience tracking your stats. This kind of coaching can help your organization fix the flaws that are preventing these metrics from trending in the right direction. 
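
As an illustration of tracking these stats, here is a minimal sketch that computes them from raw deployment records. The record fields and values are hypothetical; in practice they would come from your CI/CD tool’s pipeline history.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records; real ones would come from your CI/CD tool.
deployments = [
    {"committed": datetime(2021, 6, 1, 9, 0), "deployed": datetime(2021, 6, 2, 14, 0),
     "failed": False, "recovered": None},
    {"committed": datetime(2021, 6, 3, 11, 0), "deployed": datetime(2021, 6, 3, 16, 0),
     "failed": True, "recovered": datetime(2021, 6, 3, 18, 0)},
]
window_days = 30

# Deployment frequency: deploys per day over the window (should trend up).
frequency = len(deployments) / window_days

# Lead time from development to deployment (should trend down).
avg_lead = sum((d["deployed"] - d["committed"] for d in deployments),
               timedelta()) / len(deployments)

# Ratio of unsuccessful deployments (should trend down).
failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Recovery time for failed deployments (should trend down).
recoveries = [d["recovered"] - d["deployed"] for d in deployments if d["failed"]]
mttr = sum(recoveries, timedelta()) / len(recoveries) if recoveries else None

print(f"deploys/day: {frequency:.2f}  lead time: {avg_lead}  "
      f"failure rate: {failure_rate:.0%}  recovery: {mttr}")
```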

Post-Game Wrap Up

A DevOps methodology can produce higher-quality work faster, but it is hard to maintain momentum. Organizational silos, lack of coaching, and other roadblocks can all cause significant inefficiencies. 

For DevOps to work well, developers, operations staff, security teams, finance, and other business leaders must work together. A well-maintained CI/CD toolset that allows for consistent, secure, and compliant software is also important.

An experienced partner like Effectual can provide the necessary platform for strategic management and coaching. Combining DevOps experience, expertise, and execution with the flexibility to deliver support and follow through, we help you manage the expectations of your business leaders so software engineers can focus on what they do best.

Contact us to learn how we can help your team achieve DevOps success.

Al Sadowski is Senior VP of Product Management at Effectual, Inc. 

VMware Cloud on AWS: Solving the Challenges of Load Balancers, Active Directory, and Disaster Recovery

VMware Cloud on AWS is an enterprise-grade platform. Most customers on VMware Cloud on AWS have load balancing, Active Directory, or authentication built deeply into their application stacks. And at the enterprise level, an attack or failure can compromise customer data and prevent users from accessing their information for hours or days at a time.

There are unique requirements for each of these individual services depending on the specific use case of the environment. At Effectual, we gather the necessary information for each service to ensure requirements are met when we migrate our customer workloads. For example, what’s the most efficient way to build load balancing to handle spikes in traffic? How can we transition Active Directory to be more cloud-centric? What areas within disaster recovery is a company overlooking? 

These are important considerations every organization must make on its cloud journey, and if they’re overlooked during migration, they can pose significant challenges. Let’s explore each component in depth.

Load Balancers

Load balancing ensures that none of your servers bears the brunt of network traffic alone. Modern applications can’t run without load balancers, and advances in security have improved those applications — though they still require attention.
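
For intuition, here is a conceptual sketch of that core behavior: rotating requests across a pool and skipping unhealthy nodes. It is an illustration only, not any vendor’s implementation, and the addresses are made up.

```python
import itertools

# Hypothetical backend pool sitting behind the load balancer.
backends = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]
healthy = {b: True for b in backends}
rotation = itertools.cycle(backends)

def next_backend():
    """Round-robin across the pool, skipping nodes that failed health checks."""
    for _ in range(len(backends)):
        candidate = next(rotation)
        if healthy[candidate]:
            return candidate
    raise RuntimeError("no healthy backends available")

# If one node fails its health check, traffic shifts to the remaining nodes.
healthy["10.0.1.11"] = False
for _ in range(4):
    print("routing request to", next_backend())
```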

What could go wrong:

  • Surprise hidden costs
    Security on-premises and security on VMware Cloud on AWS are different – a load balancer that sits locally today would run on AWS after migrating. When that traffic leaves the boundaries of VMware Cloud on AWS and goes to AWS native services, it introduces a split-cost model. If you’re not keeping track of spend from all sources, you could be surprised by hidden costs.
  • Overspending on licensing fees
    You could also be overspending on licensing fees. In some cases, load balancer and security mechanism licenses can be transferred over so make sure you understand the agreements on each license before moving forward with any migration – then monitor ongoing costs for upgrades.
  • Troubleshooting that costs you time and money
    If your physical hardware, load balancers, or termination points fail, or if your software-based load balancers scale beyond initial capacity, it can cause significant delays and require your team to troubleshoot on the spot. When that troubleshooting leads to hours of manual labor, it impacts your focus, increases costs, and opens the door to potential vulnerabilities. Therefore, if you’ve moved over to a new environment and the functionality isn’t working as desired, it may require a complete reworking.

Benefits of Load Balancers on VMware Cloud on AWS

When we work with customers, we migrate their workloads to VMware Cloud on AWS in a way that minimizes the impact to the underlying workload and their business operations. We can also ensure security with proper firewalling.

In addition, VMware Cloud on AWS forces updates, which mitigates potential vulnerabilities that could impact underlying workloads. While DDoS attacks are common in the world of cybercrime, having modern virtual load balancers, firewalls, and logging can complement a secure, efficient, and cost-effective solution.

Software load balancers with VMware Cloud on AWS are also more flexible and easier to scale. They’re compatible with more environments and can add or drop virtual servers in response to demand, offering an automatic response to network traffic changes.

The advanced load balancing of VMware Cloud on AWS has tangible business results, too:

  • 41% less time spent troubleshooting
  • 43% more efficient application delivery controller management, and
  • Zero specialized hardware required

Active Directory Requirements

Active Directory (AD) is typically available for on-premises Microsoft environments, but you can integrate AWS Directory Services with Virtual Machines (VMs) running on VMware Cloud on AWS. Your AD will likely contain users, computers, applications, shared folders, and other objects – all with their unique attributes. 

What could go wrong:

  • The directory can’t read the AD
    Sometimes, a company will replicate an AD from one place and expect it to function in another environment. However, that doesn’t always work — the IP addresses or networking may have changed, so the internals of the AD would also change, depending on where it’s being migrated to. 

    If the directory service can’t read the AD, it will prevent logging on, authentication, and any services dependent on the directory. This can also happen due to software glitches or unwanted changes in the AD schema, made either by accident or by a malicious internal actor. A quick reachability check before cutover, like the sketch below, can catch some of these problems early.
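
A minimal version of that pre-cutover check, using only the Python standard library, might look like this. The domain controller names are hypothetical; substitute the DCs your migrated workloads will actually authenticate against.

```python
import socket

# Hypothetical domain controllers for the migrated environment.
domain_controllers = ["dc1.corp.example.com", "dc2.corp.example.com"]
ports = [389, 636]  # LDAP and LDAPS

for host in domain_controllers:
    try:
        ip = socket.gethostbyname(host)  # fails if DNS wasn't updated for the new network
    except socket.gaierror as err:
        print(f"{host}: DNS resolution failed ({err})")
        continue
    for port in ports:
        try:
            with socket.create_connection((ip, port), timeout=3):
                print(f"{host} ({ip}) port {port}: reachable")
        except OSError as err:
            print(f"{host} ({ip}) port {port}: unreachable ({err})")
```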

Benefits of Active Directory on VMware Cloud on AWS

VMware Cloud on AWS helps you avoid these issues by transitioning to a different kind of cloud-based authentication mechanism. You can also extend the AD into the migration location prior to migration, so the VMs or workloads have something to authenticate to when they are migrated.

Using AD on VMware also allows you to synchronize server clocks in all environments. For networks that rely on time-sensitive updates, this creates consistency across your environments.
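
One quick way to spot-check that consistency is to compare a host’s clock against an NTP server. This minimal SNTP sketch assumes outbound UDP port 123 is allowed; time.windows.com, Windows’ default time server, is used purely as an example.

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and Unix epoch (1970)

def ntp_time(server="time.windows.com", timeout=3):
    """Query an NTP server (SNTP client mode) and return its time as a Unix timestamp."""
    packet = b"\x1b" + 47 * b"\0"  # LI=0, version=3, mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    seconds = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, seconds field
    return seconds - NTP_EPOCH_OFFSET

offset = ntp_time() - time.time()
print(f"clock offset vs. NTP: {offset:+.2f} seconds")
```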

Disaster Recovery

As much as we’d like to expect perfection, we must be prepared for risks. Even with an operational disaster recovery solution in place, there are still circumstances where it can fail.

What could go wrong:

  • Vulnerable internet-facing assets
    Per the Verizon 2021 Data Breach Investigations Report, the median random organization with an internet presence has 17 internet-facing assets. All of those assets are open to disruption, whether human-induced or caused by a natural disaster.
  • Ransomware or other attacks
    Often, the government or a B2B partner will issue a mandate that says an enterprise must be recoverable in a certain number of hours or else they won’t do business together. However, even without a mandate, an enterprise can be hit by ransomware or another attack.

  • Troubleshooting that takes focus away from other tasks
    An on-premises solution has an isolated environment for each component. If something goes awry with that workload, it typically requires the brainpower of several people to fix it. If your team is not able to focus on their other tasks, each minute of troubleshooting is another minute where data is vulnerable.
  • Servers that have not been rebooted
    We have seen enterprise customers that haven’t rebooted their servers in three to five years, which represents a serious security risk. The Verizon 2021 Data Breach Investigations Report states that 20% of companies that experienced a breach had vulnerabilities dating back to 2010.

Benefits of Disaster Recovery on VMware Cloud on AWS

In the cloud, as with many things, time equals cost. The more automation you can do, the quicker the time to operation. 

The VMware Cloud on AWS platform provides a seamless disaster recovery service. It’s easy to configure and replicate within the AWS realm to test failure and prove, time and again, that should something happen to the primary workload, it’s recoverable in a timely manner.

To maximize your benefits, you need proper tuning, best practices, and a thorough understanding of which resources your workload consumes most. All these elements are addressed by VMware Cloud on AWS — a hyperconverged platform where storage, networking, and compute are bundled together. Instead of waiting for a disaster to hit, you can proactively predict failure. If needed, VMware Cloud on AWS simply replaces the node and it’s back to business as usual.

Finally, the platform maintains a 99.9% uptime SLA for its infrastructure and ensures stability and security with forced upgrades that reduce the possibility of an attack.

The value of developing a single source of truth

Think about a previous technology role you’ve had. You learned things along the way that were unique to you. Maybe it was a process for running tests, or a method for tagging and categorizing data. Before you left your company, you may have shared some of your experience with your teammates during calls or written some of it down, but chances are you did not transfer much of your knowledge before departing.

This scenario happens regularly. People leave organizations for new opportunities and take their technical knowledge with them. And with how quickly technology changes, even documentation that does exist may become antiquated after a few years.

Our goal is to understand what a company has, how it’s configured, and what actions can be taken against it. We capture all the anomalies and differences from what customers have done manually and replicate them in a test environment. As things change, we update the documentation.

When you have a single source of truth in place, it not only helps you stay calm if a disaster does occur, it also provides clear guidance across all teams so you can coordinate an immediate and effective response. Overall operations move more smoothly and efficiently, and your team has more time to focus on improvements within your business.
 

Summary

VMware Cloud on AWS is a powerful platform for addressing challenges with load balancing, Active Directory, and disaster recovery. Working with a partner that understands how to utilize and deploy its solutions will make your next cloud project even more successful.

Learn how we can help you Cloud Confidently® with VMware Cloud on AWS.

Hetal Patel is the Senior VMware Technical Lead at Effectual, Inc. 

How A Cloud Partner Helps You Maximize AWS Cloud Services

Working with an experienced cloud partner can fill in knowledge gaps so you get the most out of the latest cloud services

Cloud services have been widely adopted for years, which is long enough for users to gain confidence in their understanding of the technology. Yet as cloud continues to grow and develop, customer knowledge does not always grow proportionately. When users become overconfident, they can unknowingly overlook new Amazon Web Services (AWS) technologies and services that could have a significant impact on positive business outcomes.

What cloud customers are overlooking

  • Control is not necessarily security
    In the early days of the cloud, many companies were reluctant to offload everything to cloud services due to security concerns. Today, many CTOs and IT Directors are still unsure how security translates to the cloud and remain hesitant about giving up control.

    AWS provides a proven, secure cloud platform for migrating, building, and managing workloads. However, it takes knowledge and expertise to take individual services and architect them into a solution that maintains, or even heightens, security. A partner well-versed in AWS services and advanced cloud technologies can identify and deploy tools and services that strengthen your security posture and add value to your business.
  • Keeping up with cloud innovation is an investment in continual learning
    This can be a tall order for organizations with limited internal resources or cloud knowledge. Partnering with cloud experts who stay constantly informed about new AWS services – and know how to implement them – gives you immediate access to cloud innovation. It also frees up your developers and engineers to focus on core initiatives.
  • Aligning business and IT delivers better results
    Internal teams that pitch a cloud-forward strategy often face hesitancy from business leaders. This is because executives have historically made decisions about how to allocate and manage IT resources, leaving developers to work within the parameters they are presented. However, involving solutions architects and cloud engineers in decision-making brings a crucial technical perspective that uncovers additional options with better results.

    Bridging this gap is a matter of translation, as what makes sense to an in-house developer might seem like jargon to executives in other business units. Because our engineers understand both business and technology, we can bring clarity to modernization initiatives by acting as a translator between business and IT – preventing major communication and technical headaches down the line.  

The benefits of pairing managed and professional services

If your cloud partner is capable of handling larger professional services projects such as migrations, app development, and modernization as well as the ongoing maintenance of managed services, you will be far more successful at optimizing resources, improving security, reducing stress, and realizing cost savings.

There are several advantages of pairing professional and managed services:

  • Reduce operational overhead and optimize workloads
    Allowing a partner to directly manage more systems reduces your operational overhead and optimizes workloads. This guarantees your business will not get bogged down with redundant operations or pay for more computing power than is truly needed.

    For instance, you may be paying high colocation costs to house everything in your data center. By engaging a partner that offers both professional and managed services, you can move a workload safely from on-premises to the cloud with the same functionality, make it more secure, maintain compliance, and have confidence it is being optimized for performance and cost.
  • Upgrade and modernize more efficiently
    Having professional services and managed services under one roof makes it easier and more efficient to upgrade or modernize. Changes to infrastructure go much more smoothly with a trusted cloud partner at the wheel who has access to customer systems. Without access, the partner has to navigate the back and forth between client-controlled systems and new professional services before any real progress can take place.

    The goal is not to scrap an entire in-house system, but to develop a smooth transition where managed and professional services work in harmony. With the right context, and the right cloud partner, you can translate the ROI of pairing professional services and managed services so your executives are on board with cost-saving proposals and your developers have a seat at the table.

In summary, you can maximize the benefits of cloud services by engaging a partner with the technical expertise, business experience, and deep knowledge of AWS services to support your modernization efforts.

Connect with our Modernization Engineers™ to find out how we can help you unlock the full power of the cloud.

Jeff Finley is a Senior Cloud Architect at Effectual, Inc. 

TIC 3.0 Update

Last summer, the Cybersecurity and Infrastructure Security Agency (CISA) released core guidance documentation for the Trusted Internet Connections (TIC) program. The TIC 3.0 updates aim to better assist agencies with protecting modern information technology architectures.

While the updates are significant, there’s a consistent theme throughout TIC 3.0: the federal government is more in tune with the cloud, opening the door to more innovation, flexibility, and opportunity than ever before.

Major takeaways from TIC 3.0

#1 Descriptive, not prescriptive guidelines

Previously, TIC 2.0 featured a 75-page reference architecture document that outlined a hard line between the federal government’s boundary and everything outside of it. The guidance required each individual agency to figure out how to interpret that line and limit the entry and exit points.

The spirit of TIC 2.0 sought to limit internet access points, causing TIC-specific infrastructure to be deployed in federal data centers. As agencies started moving to the cloud, TIC 2.0 still mandated that traffic travel through these TIC architectures, which remained largely on-premises. This resulted in some very complex networking constructs that, at best, diminished the benefits of the cloud, such as elasticity and the ability to scale.

Since that time, the federal government’s need for innovation facilitated by cloud adoption has driven the need for TIC 3.0.

In contrast, TIC 3.0 offers considerations around security deployments but also allows for innovation as it relates to a trusted internet connection, without outlining the specifics of the implementation. In other words, TIC 3.0 is a descriptive approach, not a prescriptive one. 

While this update enables innovation, it also leaves a lot of judgment calls about how to implement the guidance up to individual agencies. A partner with experience across multiple public sector implementations at a variety of agencies can bring additional insights to bolster an agency as it navigates these decisions.

#2 Distributed and elastic security capabilities

The TIC 3.0 update is the closest thing yet to a zero-trust model the federal government has produced for a Trusted Internet Connection. The new guidance introduces “security capabilities” in two broad categories:

  • Universal capabilities: enterprise-level capabilities that outline guiding principles for TIC use cases
  • Policy Enforcement Point (PEP) capabilities: network-level security capabilities that inform technical implementation for a use case

Instead of offering a strict prescription for what must be done, the guidance provides policies for deploying in the cloud closer to the data — a far better security model. Rather than all information flowing through a centralized architecture, PEP security capabilities can now reside very close to the end user. You can also create PEPs across a distributed network, allowing you to scale and distribute resources as you see fit.

In addition, TIC 3.0 guidance specifically addresses pilot activities, emerging technologies, and threat insights as they’re discovered, and will update the security capabilities catalog along the way. The catalog is purposefully built to be elastic and flex with the federal computing landscape as it grows.

#3 Flexible trust zones that facilitate innovation

PEPs must be considered when protecting systems or data within the target architecture, but there’s also the concept of trust zones. Historically, trust zones fell into one of two categories: trusted or untrusted. But the way these zones are applied in TIC 3.0 architecture is more evolved.

Agencies can now choose among highly trusted, moderately trusted, and low-trust security stances. That distinction creates a building-block approach for agencies. If they highly trust one system in the cloud, they can deploy a customized set of security capabilities for that trust level and risk matrix at PEPs. Alternatively, a low-trust system would require more security capabilities. That level of flexibility drives innovation.

You can even have two systems hosted in the same place conceptually with two different levels of trust. For instance, if the threat matrix against a workload is higher — certain workloads may be more susceptible to DDoS or other types of malicious attacks — or user access needs to be different, you might classify your trust zones differently.
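
To make the building-block idea concrete, here is a conceptual sketch that maps trust zones to stacked security capabilities. The capability names are illustrative only and are not drawn from the CISA capabilities catalog.

```python
# Lower trust means more PEP security capabilities stacked on the baseline.
BASELINE = {"traffic_logging", "encryption_in_transit"}

ADDITIONAL = {
    "high": set(),                # trusted: baseline only
    "moderate": {"intrusion_detection"},
    "low": {"intrusion_detection", "ddos_protection", "content_filtering"},
}

def pep_capabilities(trust_level: str) -> set:
    """Return the set of PEP capabilities required for a given trust zone."""
    return BASELINE | ADDITIONAL[trust_level]

# Two workloads hosted in the same place can sit in different trust zones.
print(sorted(pep_capabilities("high")))   # baseline only
print(sorted(pep_capabilities("low")))    # baseline plus three more capabilities
```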

Tool misconfiguration, not the capability of the tool, is one of the top security lapses in cloud technologies. Gartner estimates that “through 2025, 99% of cloud security failures will be the customer’s fault.” Most of the cloud security headlines that have caused panic were due to someone who didn’t understand how to configure a tool properly for a particular workload.

As you’re defining these trust zones and the tools that make up your PEPs, a partner who has previously helped agencies define responsibilities and risk acceptance, and who has implemented the technology stacks to complement that strategy, can protect against this kind of misconfiguration with much more confidence.

#4 Continuous validation instead of a point in time

With TIC 2.0, you had to pass a TIC compliance validation as a point-in-time exercise. Federal Network Security would look at your system through a lens of validation and determine compliance. While compliance was important to achieve, that validation only applied to a specific moment in time.

The TIC 3.0 updates provide a higher level of accountability through the idea of continuous validation, particularly with cloud systems. CISA is deploying cloud capabilities through the Cloud Log Aggregation Warehouse (CLAW) under the Continuous Diagnostics and Mitigation (CDM) and National Cybersecurity Protection System (NCPS) programs.

Rather than a specific point-in-time validation, these programs request that logs be sent continuously for ongoing validation. They facilitate innovation on one side while stepping up compliance and enforcement on the other through a continuous validation footing. Anyone who’s been working with big data technologies can figure out how to get logs into CLAW. Doing that in a cost-effective way with the tools that will be most impactful is less obvious, which is another area where a partner can contribute.
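
As a rough illustration of that pattern, the sketch below batches log records into compressed objects in an agency-managed Amazon S3 staging bucket for downstream ingestion. The bucket name, prefix, and record format are hypothetical; the actual delivery requirements for CLAW come from CISA.

```python
import gzip
import json
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# Hypothetical destination; the real bucket, prefix, and format are dictated
# by your agency's arrangement with CISA for CLAW ingestion.
BUCKET = "example-agency-claw-staging"

def ship_log_batch(records):
    """Compress a batch of log records and upload them for aggregation."""
    now = datetime.now(timezone.utc)
    key = f"netflow/{now:%Y/%m/%d}/batch-{now:%H%M%S}.json.gz"
    body = gzip.compress("\n".join(json.dumps(r) for r in records).encode("utf-8"))
    s3.put_object(Bucket=BUCKET, Key=key, Body=body, ContentType="application/gzip")
    return key

print(ship_log_batch([{"src": "10.0.0.5", "dst": "203.0.113.7", "action": "ALLOW"}]))
```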

Where a partner comes in

The biggest takeaway from these TIC 3.0 updates is that you don’t have to navigate them alone. An experienced partner can help you:

  • Digest the descriptive nature of the TIC 3.0 guidelines and align them with your agency’s mission and internal security requirements
  • Define trust zones and configure the tools that will make up your PEPs while significantly reducing the risk of misconfiguration
  • Accelerate innovation while embracing the spirit of the TIC 3.0 building block model

There are a lot of exciting changes within these core guidance documents. With the right partner on your side, you can continue to cloud confidently while reaching new heights of innovation.

How Private NAT Gateways Make for Easier Designs and Faster Deployments

NAT Gateways have historically been used to protect resources deployed into private subnets in virtual private clouds (VPCs). If you have resources on a private subnet in a VPC that need to access information outside the VPC (on the internet or on-premises) and you want to block incoming connections to those resources, you’d use a NAT Gateway. The NAT Gateway provides access to get what you need and allows that traffic to return, but still won’t let a connection that originated from the outside get in.

The core functionality of a NAT Gateway is allowing that one-way request origination flow.

Earlier this month, AWS announced that you can now launch NAT Gateways in your Amazon VPC without associating an Internet Gateway with your VPC. The private NAT Gateway allows you to route directly to Virtual Private Gateways or Transit Gateways, without an Internet Gateway in the path, for resources that need to reach internal tools such as a data center, another VPC, or something else on-premises.

This might seem like a modest bit of news, but it will lead to improved performance on both the business and engineering levels and demonstrates the constant innovation AWS provides its customers. 

Innovation continues at the fundamental level 

Continuous innovation has always been at the core of how AWS approaches problem solving. This is true for the bleeding edge of technology and for the fundamental building blocks of well-established disciplines, such as networking. The idea of Network Address Translation (NAT) isn’t anything new; it’s been a core building block for years. 

In the past, though, you would have done it yourself, deploying, configuring, and maintaining a NAT instance on your own server. AWS brought the NAT Gateway into the fold; the differentiator being that this was a managed service offering that lets you use infrastructure as code, or simply make a few clicks in your console, to attach a NAT Gateway to a private subnet in a VPC, so you don’t have to worry about the underlying infrastructure. There are no third-party tools or complex configuration to worry about.
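
As a concrete example of “a few lines of infrastructure as code,” here is a minimal boto3 sketch that creates a private NAT Gateway and points a route at it. The subnet ID, route table ID, and CIDR range are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs; substitute your own subnet, route table, and on-premises CIDR.
NAT_SUBNET = "subnet-0123456789abcdef0"
APP_ROUTE_TABLE = "rtb-0123456789abcdef0"
ON_PREM_CIDR = "192.168.0.0/16"

# ConnectivityType="private" launches a NAT Gateway with no Internet Gateway
# or Elastic IP required.
nat = ec2.create_nat_gateway(SubnetId=NAT_SUBNET, ConnectivityType="private")
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Send traffic bound for on-premises resources through the NAT Gateway; the
# NAT Gateway's own subnet then routes onward via a Transit Gateway or
# Virtual Private Gateway.
ec2.create_route(
    RouteTableId=APP_ROUTE_TABLE,
    DestinationCidrBlock=ON_PREM_CIDR,
    NatGatewayId=nat_id,
)
print(f"private NAT Gateway {nat_id} now routes {ON_PREM_CIDR}")
```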

With the private NAT Gateway, AWS is instilling a new feature into something that’s fundamental, making it easier for companies and individuals to be more efficient and productive. 

We see the same type of excitement and attention to detail both with major launches and when introducing new ways to use a product or service that already exists. It’s that combination of innovation for fundamental core offerings to make them easier to use plus bleeding edge innovation that really highlights the depth of expertise across AWS and its partners. 

Learn more about AWS Private NAT Gateways here

A boost in efficiency

Before the private NAT Gateway, every NAT Gateway had a hard dependency on an Internet Gateway being attached to the VPC for egress. Communication to on-premises resources via a NAT Gateway required routing through an Internet Gateway or some other networking construct, which added a layer of complexity to protecting those private resources in a private VPC. The real benefit of this feature is the ease of protecting resources that need to reach out.

At its core, the private NAT Gateway is simplifying that outgoing request pattern — and it leads to a boost in efficiency. 

For businesses, the NAT Gateway allows a consistent managed service offering from AWS to protect those private resources that need outbound connectivity from a private subnet in a VPC. Prior to the private NAT Gateway, you would have needed to solve for that idea using a third-party tool or a more complex networking architecture. 

Security departments now have a more secure pattern that is less prone to misconfiguration, since an Internet Gateway is no longer a dependency. This makes it easier to understand how the organization is applying NAT Gateways and how operations teams are managing them. Teams can standardize on a private NAT Gateway approach, reproducing the networking path for these outgoing requests consistently.

For individual engineers, a private NAT Gateway simplifies design and deployment because of its inherent ease — a few clicks in your console or lines in infrastructure as code, rather than relying on a more cumbersome third-party tool or a more complex configuration. AWS is extending the functionality of the managed service part of a NAT Gateway to the specific use case of handling private subnet outgoing traffic. This addition makes design easier, it makes deployment faster, and it makes the entire subject more repeatable, because you’re just consuming that managed NAT Gateway service from AWS.

Why this is worth a look

As an engineer, I certainly understand the mindset of wanting to minimize complexity. Enterprise users have NAT Gateways deployed with a dependency on an Internet Gateway and likely have more complex routing solutions in place to protect against unintended incoming requests via that Internet Gateway. Those solutions might be working just fine, and that’s great.  

But from my vantage point, I strongly encourage you to take another look at your egress-only Internet Gateway and NAT Gateway architecture for private subnets. You could be missing an opportunity to greatly streamline how you work.

At worst, you can simplify how you use your “egress-only” communications. At best, you’ll eliminate a third-party tool and save money while freeing up more of your individual time.

That makes it worth taking a second look at the way you’re operating. We should be regularly evaluating our deployments anyway, but this especially applies to networking complexity and simplification.

I look forward to the improved ease of use for my clients with private NAT Gateways, and am confident you’ll find a similar model of success with your deployments.   

It’s Time for Pharma to Embrace the Cloud

Pharmaceutical companies have long been familiar with the pros and cons of cloud vs. non-cloud environments. The same discussions took place when companies in other industries began transitioning from on-premises to outsourced providers.

However, the pharmaceutical industry and the data it manages fall within the scope of the Food and Drug Administration (FDA). With the FDA’s purview increasing over the years and the globalization of its compliance oversight (more than 150 countries now export FDA‑regulated products to the United States), the agency has put more of the onus of following regulations on the pharmaceutical companies themselves.

Enhancing Security Through Strategy and Architecture

In response to the FDA asking questions regarding the Code of Federal Regulations Title 21, Part 11 (21 CFR Part 11), companies complying with GxP regulations have to ask themselves: “What risk level is acceptable for my business?” Often, this becomes a paralyzing exercise fraught with an unwillingness to change or, at best, test runs of cloud technology in safe areas like dev/test that are ultimately never implemented. This inaction leaves them behind more agile competitors who have clear, well-documented policies around adopting cloud technologies without adding significant risk. Lacking a defined cloud initiative does something that many companies may find surprising – it increases their risk and vulnerability as bad actors, security attacks, and attempts at gaining access to sensitive data become more sophisticated.

“What risk level is acceptable for my business?”
This often becomes a paralyzing exercise fraught with unwillingness to change.

Well-architected cloud environments are the best solution to keep up with those security challenges. According to Gartner, “…through 2020, public cloud infrastructure as a service (IaaS) workloads will suffer at least 60 percent fewer security incidents than those in traditional data centers.” This additional security is the result of the major cloud platform providers (AWS, Azure, Google, and Alibaba) having a virtually unlimited budget and tight controls of the underlying platform. While they do provide a secure platform, it is still up to the users to architect secure environments. Gartner also states: “through 2022, at least 95 percent of cloud security failures will be the customer’s fault.”

The Way Forward

So, what can you do to ensure that your FDA-regulated business remains competitive and secure in a world where change is constant, and breaches happen daily? The first step is also one of the most important: Secure the understanding and sponsorship of the entire executive team.

There should be unanimous and clear support from the executive team, and a realistic understanding of the benefits of a cloud solution. Without their support, any adoption challenges may cause the project to stall, create doubt, or even lead to abandoning your cloud initiatives altogether.

Once you have the executive team’s support, a company-wide policy for cloud initiatives needs to be developed. This policy should be created by those with a deep knowledge of cloud computing to take full advantage of the appropriate cloud services for your business requirements. At this point, engaging with a managed service provider or consultant can be highly beneficial and ensure that your cloud initiatives are realistic and follow best practices for cost, security, and compliance requirements.

Developing Effective Adoption Policies

At minimum, a cloud adoption policy should address security and compliance requirements, workload elasticity and scaling demands, departmental ownership and responsibilities, risk assessment and remediation methodologies, and critical dependencies. In addition, you should also consider addressing storage retention, disaster recovery, or business continuity. The process of developing these comprehensive adoption policies allows your organization to gain a better understanding of how the cloud fits into each aspect of your business, while providing clear goals for your teams to pursue.

Having a clearly defined objective is a best practice for implementing a cloud solution, but being too focused on the minutiae can lead to tunnel vision and increase the likelihood of creating an inflexible adoption plan. Designing a plan that functions more as a framework or a set of guidelines than a codified set of instructions, in a sense mirroring the flexible nature of the cloud, will help prevent your teams from losing sight of the advantages of cloud services or hindering innovation.

Another common pitfall to cloud adoption is the tendency to apply current, non-cloud policy to your cloud adoption initiatives. Adherence to legacy IT policies will prove challenging to cloud adoption and could make it impossible to fully realize the advantages of moving to a cloud solution. And outdated approaches could even result in greater costs, poor performance, and poorly secured environments. These risks can all be addressed with appropriate cloud-based policies that foster cloud-first approaches to new initiatives.

Becoming a secure, cloud-enabled organization requires consistent diligence from your internal teams and continuous adherence to the company cloud policy. In the end, the most significant risks to the security of your infrastructure are tied to your own policies and oversight, and the continued security of your cloud and data will require the involvement and cooperation of your entire organization. Clear communication and targeted training will help your teams understand their role in organizational security.

An Outsider’s Expertise

If you’re not sure about the effectiveness of your approach to cloud adoption, bringing in a third party to assist with policy creation or implementation can help save time and money while ensuring that best practice security is built into your approach. Outside organizations can also provide valuable assistance if you’ve already implemented cloud solutions, so it’s never too late to get guidance and insight from experts who can point out where processes or solutions can be improved, corrected, or optimized to meet your specific business requirements.

These third-party engagements have proven to be so useful that AWS has created the Well-Architected Framework and an associated Well-Architected Review program that gives their clients an incentive to have a certified third party review and then optimize their AWS solution (learn more about Effectual’s Well-Architected Review offering). Organizations such as the Society of Quality Assurance and the Computer Validation & Information Technology Compliance (CVIC) group (disclosure: I am a member of the CVIC) are also discussing these issues to provide guidance and best practices for Quality Assurance professionals.

Outside professional and managed services can provide an immense level of assistance through an objective assessment of your organization’s needs. Their focused expertise on all things cloud will lighten the load on your internal IT teams, help ease any fears you may have about cloud adoption, discover potential savings, and provide guidance to fortify the security of your cloud solution.

Mark Kallback is a Senior Account Executive at Effectual, Inc.

Considerations for AWS Control Tower Implementation

AWS Control Tower is a recently announced, console-based service that allows you to govern, secure, and maintain multiple AWS accounts based on best practices established by AWS.

What resources do I need?

The first thing to understand about Control Tower is that all the resources you need will be allocated to you by AWS. You will need AWS Organizations established, an account factory to create accounts per line of business (LOB), and Single Sign-On (SSO), to name a few. Based on the size of your entity or organization, those costs may vary. With the Control Tower precursor, AWS Landing Zones, we found that costs for this collection of services could range from $500 to $700 monthly for large customers (50+ accounts), as deployed. Control Tower will probably cost a similar amount, possibly more depending on the size of your organization. Later in this post, I will address how to adopt Control Tower once you already have accounts set up, a brownfield situation. In a perfect world, it would be nice to set up Control Tower in a greenfield scenario, but sadly, 99% of the time that’s not the case.

If you’re a part of an organization that has multiple accounts in different lines of business, this service is for you.

What choices do I need to make?

In order to establish a Cloud Enablement Team to manage Control Tower, you need to incorporate multiple stakeholders. In a large organization, that might entail different people for roles such as:

  1. Platform Owner
  2. Product Owner
  3. AWS Solution Architect
  4. Cloud Engineer (Automation)
  5. Developer
  6. DevOps
  7. Cloud Security

You want to be as inclusive as possible in order to get the greatest breadth of knowledge. These are the people who will make the decisions your organization needs to migrate to the cloud and, most importantly, to thrive and stay engaged once there. We have the team, so now what can we do to make Control Tower work best for us?

Decisions for the Team

1. Develop a RACI

This is one of the most crucial aspects of operations. If you do not have accountability or responsibility, then you don’t have management. Everyone must be able to delineate their tasks from the rest of the team’s. Finalizing everyone’s role in the workflow will solve a lot of issues before they happen.

2. Shared Services

In the shared services model, we need to understand what resources are going to the cloud and what will stay. Everything from Active Directory to DNS to one-off internal applications will have to be accounted for in a way that accommodates functionality and keeps the chargeback model healthy. One of Control Tower’s most valuable qualities is visibility into what each LOB costs and how it contributes to the organization overall.

3. Charge Backs

Once the account factory (previously called the Account Vending Machine) is established, each LOB will have its own account. In order to see what an LOB costs, it must have its own account; AWS does not price by VPC, but by account. Leveraging Control Tower, tagging, and third-party cost management resources can combine to give an accurate depiction of the costs incurred by a specific line of business.
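
As a simple illustration, the AWS Cost Explorer API can group monthly spend by linked account, which maps directly to per-LOB chargebacks once the account factory gives each LOB its own account. A minimal boto3 sketch, with placeholder dates:

```python
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

# Monthly unblended cost grouped by linked account. With one account per
# LOB, each line of this report is effectively a chargeback entry.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-06-01", "End": "2021-07-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "LINKED_ACCOUNT"}],
)

for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        account = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"account {account}: ${amount:.2f}")
```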

4. Security

Security teams will have all logs from each account dumped into a centralized log bucket that can be pointed to the tool of choice for analysis. Other parties may perform audits of your logs using read-only roles in an account that contains nothing else, another feature of Control Tower. The multi-account strategy not only allows for better governance but also helps in case of compromise: if one account is compromised, the blast radius for the other accounts is minimal. Person X may have accessed a bucket in a specific account, but they did not access anything anywhere else. The most important thing to remember is that you cannot treat cloud security like data center security.
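
As a sketch of that read-only audit pattern, the boto3 snippet below creates an IAM policy that grants auditors list and read access to a centralized log bucket and nothing else. The bucket and policy names are hypothetical.

```python
import json

import boto3

iam = boto3.client("iam")

LOG_BUCKET = "example-org-central-logs"  # hypothetical centralized log bucket

# List and read only; no write or delete actions are granted.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:ListBucket"],
         "Resource": f"arn:aws:s3:::{LOG_BUCKET}"},
        {"Effect": "Allow", "Action": ["s3:GetObject"],
         "Resource": f"arn:aws:s3:::{LOG_BUCKET}/*"},
    ],
}

iam.create_policy(
    PolicyName="AuditorLogReadOnly",
    PolicyDocument=json.dumps(policy),
    Description="Read-only access to the centralized log bucket for auditors",
)
```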

There are plenty of choices for an organization to make as it moves forward with Control Tower, but if you plan correctly and make wise decisions, you can secure your environment and keep your billing department happy. Hopefully, this has helped you see what it takes in the real world to prepare. Good luck out there!

A Cloud Security Strategy for the Modern World

In the borderless and elastic world of the cloud, achieving your security and compliance objectives requires a modern, agile, and autonomous security strategy that fosters a culture of security ownership across the entire organization.

Traditional on-premises approaches to cybersecurity can put organizations at risk when applied to the cloud. An updated, modern strategy for cloud security should mitigate risk and help achieve your business objectives.

Cloud service providers such as Amazon Web Services (AWS) place a high priority on the security of their infrastructure and services and are subject to regular, stringent third-party compliance audits. These CSPs provide a secure foundation, but clients are still responsible for securing their data in the cloud and complying with their data protection requirements. This theory is substantiated by Gartner, which estimates that, through 2020, workloads hosted on public cloud will have at least 60% fewer security incidents than workloads hosted in traditional data centers, and 95% of cloud security failures through 2022 will be the fault of customers.

Traditional approaches to cybersecurity weren’t designed for the cloud – it’s time for an update

Updating how you think about cybersecurity and the cloud

Despite the significant security advances made by CSPs since the birth of cloud, users still need a deep understanding of the cloud’s shared responsibilities, services, and technologies to align information security management systems (ISMS) to the cloud. Today, the majority of data breaches in the cloud are the result of customers not fully understanding their data protection responsibilities and adopting poor cloud security practices. As the public cloud services market and enterprise-scale cloud adoption continues to grow, organizations must have a comprehensive understanding of not just cloud, but cloud security specifically.

Through 2022, at least 95% of cloud security failures will be the customer’s fault

Source: Gartner

The cloud can be secure – but are your policies?

A poor grasp of the core differences between on-premises and cloud technology solutions resulted in a number of misconceptions during the early days of cloud adoption. This lack of understanding helped fuel one of the most notable and pervasive cloud myths in the past: that it lacked adequate security. By now, most people have come to realize that cloud adoption and digital transformation do not require a security tradeoff. In fact, the cloud can provide significant governance, risk, and compliance (GRC) advantages over traditional on-premises environments. A cloud-enabled business can leverage the secure foundation of the cloud to increase security posture, reduce regulatory compliance scope, and mitigate organizational responsibilities and risk.

It is common to see enterprise organizations lacking the necessary expertise to become cloud resilient. Companies can address this skills gap through prescriptive reference architectures. AWS, for example, has created compliance programs for dozens of regulatory standards, including ISO 27001, PCI DSS, SOC 1/2/3, and government regulations like FedRAMP, FISMA, and HIPAA in the United States and several European and APAC standards. Beyond these frameworks, consultants and managed service providers can work with organizations to provide guidance or architect environments to meet their compliance needs.

Regardless of the services leveraged, the cloud’s shared responsibility model ensures that the customer will always be responsible for protecting their data in the cloud.

Making the change

Similar to the challenges and benefits of implementing DevOps (discussed here by our CEO, Robb Allen), effective cloud security requires a culture change with the adoption of DevSecOps, shifting the thinking in how teams work together to support the business. By eliminating the barriers between development, operations, and security teams, organizations can foster a unified approach to security. When everyone plays a role in maintaining security, you foster collaboration and a shared goal of securely and reliably meeting objectives at the speed of the modern business.

Additionally, cloud-specific services and technologies can provide autonomous governance of your ISMS in the cloud. They can become a security blanket capable of automatically mitigating potential security problems, discovering issues faster, and addressing threats more quickly. These types of services can be crucial to the success of security programs, especially for large, dynamic enterprises or organizations in heavily regulated industries.
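
As one example of what that automation can look like on AWS, the sketch below uses an Amazon EventBridge rule to route high-severity Amazon GuardDuty findings to an SNS topic so threats surface immediately. The rule name and topic ARN are placeholders, and the topic’s resource policy must separately allow EventBridge to publish to it.

```python
import json

import boto3

events = boto3.client("events")

RULE_NAME = "guardduty-high-severity-findings"  # placeholder name
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:security-alerts"  # placeholder ARN

# Match GuardDuty findings with severity 7.0 or higher.
events.put_rule(
    Name=RULE_NAME,
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"severity": [{"numeric": [">=", 7]}]},
    }),
    State="ENABLED",
)

# Deliver matching findings to the notification topic.
events.put_targets(Rule=RULE_NAME, Targets=[{"Id": "notify-security", "Arn": TOPIC_ARN}])
```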

Implementing the right cloud tools can lead to significant reductions in security incidents and failures, giving your teams greater freedom and autonomy to explore how they use the cloud.

The way to the promised land

Security teams and organizations as a whole need to have a deep understanding of cloud security best practices, services, and responsibilities to create a strategic security plan governed by policies that align with business requirements and risk appetite. Ultimately, however, a proper cloud security strategy needs buy-in and support from key decision-makers and it needs to be governed through strategic planning and sound organizational policies. Your cloud security strategy should enable your business to scale and innovate quickly while effectively managing enterprise risk.

Darren Cook is the Chief Security Officer of Effectual, Inc.