Solving Problems with the Cloud

Scaling Your Business & Improving Remote Work with Cloud Innovation

Throughout the life cycle of any business, there are obstacles to overcome. Even if you’re close to perfection in one area, there will always be another challenge looming in front of you like an overwhelming math equation on a chalkboard. And unless you’ve got Will Hunting on speed-dial, you may not know where to begin. 

Customers come to Effectual with a variety of business challenges, but two have stood out in the era of COVID:

  1. How to ensure smooth remote work experiences; and
  2. How to scale quickly to meet growing demand

Challenges of Remote Work

The shift to remote work is accelerating digital transformation as companies adapt and try to deliver work environments that support employee productivity and engagement. Though many of them responded to the remote work reality of the pandemic by offering at-home perks and collaborative online tools, the majority were behind the 8-ball with their remote work options.

Inefficient remote desktops

Remote desktops are one solution companies adopted, yet they can be slow and inefficient, and they simply aren’t that innovative when it comes to fostering a positive remote work experience. Using remote desktops can make sense for employees who are unable to come to a local office or data center, but latency and performance concerns grow the farther users sit from the data center serving the solution. The question then becomes, what is their experience like when it comes to latency and collaboration with other remote team members?

Security vulnerabilities

There are also security concerns with remote employees. About half of workers in the U.S. were hit by phishing emails, phone calls, or texts in the first six months of working remotely. As personal and professional lives blend together, employees may also become a bit lax about using their social media or personal email accounts on a work device. These scenarios leave companies vulnerable to security threats.

The truth is that we’re likely never going to return to pre-pandemic levels of office work. In fact, only one in three U.S. companies indicate plans for a return to the “in-person first” employment model this year, with nearly half of businesses embracing a hybrid workforce. This means concerns about the remote work experience will remain for the foreseeable future.

Tools like Amazon WorkSpaces allow for distributed remote desktops across regions and availability zones, putting the desktop as close as possible to the end user to maximize their experience. We have helped many customers deploy Amazon WorkSpaces securely and performantly in response to the remote work landscape.
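
For illustration, here is a minimal boto3 sketch of provisioning a single WorkSpace. The directory ID, bundle ID, and username are hypothetical placeholders; a real deployment would also account for networking, authentication, and image management.

```python
# Minimal sketch: provisioning a single Amazon WorkSpace with boto3.
# DirectoryId, BundleId, and UserName are hypothetical placeholders.
import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")

response = workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-9067xxxxxx",         # hypothetical directory registration
            "UserName": "jane.doe",                # hypothetical directory user
            "BundleId": "wsb-xxxxxxxxx",           # hypothetical WorkSpaces bundle
            "WorkspaceProperties": {
                "RunningMode": "AUTO_STOP",        # stop idle desktops to control cost
                "RunningModeAutoStopTimeoutInMinutes": 60,
            },
        }
    ]
)

# create_workspaces is a batch API: failures come back in the response
# rather than being raised as exceptions.
for failed in response.get("FailedRequests", []):
    print(failed["ErrorCode"], failed["ErrorMessage"])
```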

Roadblocks to Rapid Scaling

Though companies are beginning to recognize that the cloud can help them scale a product worldwide or open new markets, there are still many misconceptions when it comes to how to implement effective modern strategies.

Lack of internal expertise

For example, an executive may get inspired by an article about how a move to the cloud can save money and increase business agility. If they task their internal team with spinning up infrastructure without the guidance of an experienced cloud partner or solution architect, what seemed like a bargain can turn into a project that costs far more than expected.

Not architecting for failure

In times of growth, you can’t simply move everything that’s on-premises over to the cloud as is and expect the exact same results without proper planning and execution. Werner Vogels, Amazon’s Chief Technology Officer, has reminded us for years that “everything fails all the time.”

Failures are rare, but if your cloud presence hasn’t been architected for this reality, the availability of your application could be more at risk than it was in the data center. In other words, you must architect for failure. If you prepare properly, you can achieve everything the cloud has to offer in reliability, availability, elasticity, and cost optimization.

When you launch an application, you also do not know what the response will be like without proper testing; you may have ten or ten million people hitting it all at once. If you haven’t built your app to scale dynamically with demand, it will either crash or suffer severely degraded performance. Either way, the end-user experience suffers.
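
As a rough sketch of what scaling dynamically with demand can look like, the snippet below attaches a target-tracking scaling policy to a hypothetical EC2 Auto Scaling group so capacity follows load automatically; the group name and target value are illustrative only.

```python
# Sketch: a target-tracking policy that lets a hypothetical EC2 Auto Scaling
# group ("web-app-asg") grow and shrink with demand, holding average CPU near 50%.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",            # hypothetical Auto Scaling group
    PolicyName="keep-average-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,                       # add capacity above, remove below
    },
)
```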

Forgetting to evaluate tradeoffs

Last, companies often fail to evaluate tradeoffs when making decisions. For example, every new technical pattern your team deploys represents a potential increase in cost. It is important to decide how performant you need to be versus how much cost you’re willing to tolerate. 

The gaming industry is an example of using the cloud to make informed decisions around scaling. A company has two to four weeks to make money on a product launch that it’s been building for three to five years. In that first month, infrastructure cost almost doesn’t matter. The product must work — and work well — because latency is its biggest enemy. Those infrastructures are frequently over-provisioned on purpose so they stay performant, and then can scale down when demand stabilizes. 

Working with an experienced cloud partner can help you identify those tradeoffs and be ready to implement tradeoff decisions at the technical level.

Solving Problems with the Cloud

With a clear strategy and the right expertise, you can use the cloud to address these challenges and deliver high-performing, scalable solutions. Here are some primary considerations:

Build performant architecture

Using the global network of the cloud for distributed performance can dramatically improve the internal experience of your remote employees. When you spin up remote desktops in multiple regions around the world using AWS or another cloud provider, you are putting that infrastructure closer to end users so they can execute more effectively. 
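
As one illustration of that placement strategy, the sketch below loops over a hypothetical mapping of users to their nearest regions and provisions WorkSpaces in each; every region mapping, directory ID, bundle ID, and username is a placeholder.

```python
# Sketch: provisioning WorkSpaces in the region nearest each group of users.
# The region-to-directory/bundle mapping and usernames are illustrative only.
import boto3

REGIONAL_CONFIG = {
    "us-east-1":      {"DirectoryId": "d-aaaaaaaaaa", "BundleId": "wsb-aaaaaaaaa"},
    "eu-west-1":      {"DirectoryId": "d-bbbbbbbbbb", "BundleId": "wsb-bbbbbbbbb"},
    "ap-southeast-2": {"DirectoryId": "d-cccccccccc", "BundleId": "wsb-ccccccccc"},
}

USERS_BY_NEAREST_REGION = {
    "us-east-1": ["user.one"],
    "eu-west-1": ["user.two", "user.three"],
    "ap-southeast-2": ["user.four"],
}

for region, users in USERS_BY_NEAREST_REGION.items():
    # A regional client places each desktop close to its user.
    client = boto3.client("workspaces", region_name=region)
    config = REGIONAL_CONFIG[region]
    client.create_workspaces(
        Workspaces=[
            {
                "DirectoryId": config["DirectoryId"],
                "UserName": user,
                "BundleId": config["BundleId"],
            }
            for user in users
        ]
    )
```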

Put security tools at the edge

Beyond performant architecture, the cloud offers the ability to put security tools out at the edge. Moving data and compute closer to the end user makes things more performant for them, and the security tools move right alongside the data and compute. Because that protection lives where the infrastructure lives, it covers the whole architecture far more broadly; you’re no longer centralizing security in a single place for all vulnerability identification.
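
One hedged example of a security tool living at the edge is an AWS WAF web ACL scoped to CloudFront. The sketch below attaches an AWS managed rule group; the ACL name is chosen purely for illustration.

```python
# Sketch: a CloudFront-scoped AWS WAF web ACL that applies an AWS managed
# rule set at the edge. CLOUDFRONT-scoped ACLs must be created in us-east-1.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="edge-protection",                        # hypothetical ACL name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "aws-common-rules",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},        # use the rule group's own actions
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "aws-common-rules",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "edge-protection",
    },
)
```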

In my role, I’m regularly working with federal civilian and Department of Defense agencies at all Impact Levels, including secret and top-secret workloads — and they’re all using the cloud. These organizations cloud confidently because they’re pushing security tools out in the same regions as compute and storage resources. Those tools protect the point of entry and keep that critical information safe.

Again, that security isn’t as effective without us architecting for each organization’s specific requirements and for the benefits that the cloud provides. 

Develop a migration strategy that fits your objectives

In times of growth, moving your on-premises workloads to the cloud requires a well-defined migration strategy in order to mitigate risk and ensure your operations continue to run efficiently. This is not to say that it can’t happen quickly, but it must include proper preparation and architecting for failure so that your company can truly leverage the benefits of cloud computing.

A recent customer decided to migrate immediately to AWS as a lift-and-shift move in order to keep up with rapidly growing demand. They plan to pursue application and data modernization efforts in the coming months, but because they needed to address urgent issues first, the move to AWS improved both scalability and reliability right away. We were able to help them take advantage of the immediate benefits of AWS, such as moving databases to Amazon Relational Database Service (RDS) with little impact to the overall application. Once you have successfully migrated your workloads, there are many opportunities for continued modernization.
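
As a simplified sketch of that kind of database move, the snippet below provisions a managed Amazon RDS instance with boto3. The identifier, instance class, and settings are placeholders, and an actual migration would also involve schema and data transfer (for example with AWS DMS).

```python
# Sketch: standing up a managed Amazon RDS instance as a lift-and-shift
# database target. Identifiers, sizes, and settings are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="app-db",            # hypothetical instance name
    Engine="postgres",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,                     # GiB
    MasterUsername="appadmin",
    ManageMasterUserPassword=True,            # let RDS manage the credential as a secret
    MultiAZ=True,                             # standby replica for availability
    BackupRetentionPeriod=7,                  # keep automated backups for 7 days
)
```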

Last, if you are considering a move to the cloud, remember that you don’t necessarily need to change everything all at once. One of our customers recently experienced a massive spike in traffic to their on-premises hosted web application. They called us concerned their infrastructure couldn’t handle the traffic. In less than 24 hours, we were able to stand up Amazon CloudFront in front of their servers to ensure all that traffic received a cached version from the content delivery network. By effectively offloading cached requests to CloudFront, their application remained reliable and highly available to their end users, with nothing migrated to AWS.
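
A rough sketch of that pattern, assuming a hypothetical on-premises origin domain and using CloudFront’s legacy cache settings for brevity, might look like this:

```python
# Sketch: putting a CloudFront distribution in front of an existing
# on-premises web origin. The origin domain is a hypothetical placeholder,
# and legacy (ForwardedValues) cache settings keep the example short.
import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),   # any unique string
        "Comment": "Cache layer in front of on-prem web servers",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "onprem-origin",
                    "DomainName": "www.example.com",   # hypothetical on-prem endpoint
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "onprem-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "MinTTL": 0,
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
        },
    }
)
```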

The cloud can help you solve even your toughest business problems — if you have the expertise to take advantage of its benefits. Not sure where to start? Learn how we can help. 

Jeff Carson is VP of Public Sector Technology at Effectual, Inc. 

TIC 3.0 Update

Last summer, the Cybersecurity and Infrastructure Security Agency (CISA) released core guidance documentation for the Trusted Internet Connections (TIC) program. The TIC 3.0 updates aim to better assist agencies in protecting modern information technology architectures.

While the updates are significant, there’s a consistent theme throughout TIC 3.0: the federal government is more in tune with the cloud, opening the door to more innovation, flexibility, and opportunity than ever before.

Major takeaways from TIC 3.0

#1 Descriptive, not prescriptive guidelines

Previously, TIC 2.0 featured a 75-page reference architecture document that outlined a hard line between the federal government’s boundary and everything outside of it. The guidance required each individual agency to figure out how to interpret that line and limit the entry and exit points.

The spirit of TIC 2.0 sought to limit internet access points, causing TIC-specific infrastructure to be deployed in federal data centers. As agencies started moving to the cloud, TIC 2.0 still mandated that traffic travel through these TIC architectures, which remained largely on premises. This resulted in some very complex networking constructs that, at best, minimized the benefits of the cloud, such as elasticity and the ability to scale.

Since that time, the federal government’s appetite for the innovation that cloud adoption facilitates has driven the development of TIC 3.0.

In contrast, TIC 3.0 offers considerations around security deployments but also allows for innovation as it relates to a trusted internet connection, without outlining the specifics of the implementation. In other words, TIC 3.0 is a descriptive approach, not a prescriptive one. 

While this update enables innovation, it also leaves agencies with many judgment calls about how to implement the guidance. For example, a partner with experience across multiple public sector implementations and a variety of agencies can bring additional insight as an agency figures out how to navigate these decisions.

#2 Distributed and elastic security capabilities

The TIC 3.0 update is the closest thing yet to a zero-trust model the federal government has produced for a Trusted Internet Connection. The new guidance introduces “security capabilities” in two broad categories:

  • Universal capabilities: enterprise-level capabilities that outline guiding principles for TIC use cases
  • Policy Enforcement Point (PEP) capabilities: network-level security capabilities that inform technical implementation for a use case

Instead of offering a strict prescription for what must be done, the guidance provides policies for deploying in the cloud closer to the data — a far better security model. Rather than all information flowing through a centralized architecture, PEP security capabilities can now reside very close to the end user. You can also create PEPs across a distributed network, allowing you to scale and distribute resources as you see fit.

In addition, TIC 3.0 guidance specifically addresses pilot activities, emerging technologies, and threat insights as they’re discovered, and will update the security capabilities catalog along the way. The catalog is purposefully built to be elastic and flex with the federal computing landscape as it grows.

#3 Flexible trust zones that facilitate innovation

PEPs must be considered when protecting systems or data within the target architecture, but there’s also the concept of trust zones. Historically, trust zones fell into one of two categories: trusted or untrusted. But the way these zones are applied in TIC 3.0 architecture is more evolved.

Agencies can now choose among high-trust, moderate-trust, and low-trust security postures. That distinction creates a building block approach for agencies. If they highly trust one system in the cloud, they can deploy a customized set of security capabilities for that trust level and risk matrix at PEPs. Alternatively, a low-trust system would require more security capabilities. That level of flexibility drives innovation.

You can even have two systems hosted in the same place conceptually with two different levels of trust. For instance, if the threat matrix against a workload is higher — certain workloads may be more susceptible to DDoS or other types of malicious attacks — or user access needs to be different, you might classify your trust zones differently.
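
Purely as a conceptual sketch, and not language from the TIC 3.0 capability catalog, the building block idea can be pictured as a mapping from trust level to the set of PEP capabilities deployed, with lower trust demanding more capabilities:

```python
# Conceptual sketch only: pairing trust-zone levels with illustrative sets of
# PEP security capabilities. The capability names are examples, not an
# official TIC 3.0 catalog listing.
TRUST_ZONE_CAPABILITIES = {
    "high": [
        "access_control",
        "traffic_encryption",
    ],
    "moderate": [
        "access_control",
        "traffic_encryption",
        "intrusion_detection",
    ],
    "low": [
        "access_control",
        "traffic_encryption",
        "intrusion_detection",
        "intrusion_prevention",
        "content_filtering",
        "ddos_mitigation",
    ],
}

def capabilities_for(trust_level: str) -> list[str]:
    """Return the illustrative PEP capability set for a given trust zone."""
    return TRUST_ZONE_CAPABILITIES[trust_level]
```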

Tool misconfiguration, not the capability of the tool, is one of the top causes of security lapses in cloud technologies. Gartner estimates that “through 2025, 99% of cloud security failures will be the customer’s fault.” Most of the panic-inducing headlines about cloud security issues were caused by someone who didn’t understand how to configure properly for a particular workload.

As you’re defining these trust zones and the tools that make up your PEPs, a partner who has previously helped agencies define responsibilities and risk acceptance, and who has implemented the technology stacks to complement that strategy, can protect against this kind of misconfiguration with much more confidence.

#4 Continuous validation instead of a point in time

With TIC 2.0, you had to pass a TIC compliance validation as a point-in-time exercise. Federal Network Security would look at your system through a lens of validation and determine compliance. While compliance was important to achieve, that validation only applied to a specific moment in time.

The TIC 3.0 updates provide a higher level of accountability through the idea of continuous validation, particularly with cloud systems. CISA is deploying cloud capabilities through the Cloud Log Aggregation Warehouse (CLAW) under the Continuous Diagnostics and Mitigation (CDM) and National Cybersecurity Protection System (NCPS) programs.

Rather than a specific point-in-time validation, these programs request logs be sent continuously for ongoing validation. They’re facilitating innovation on one side while stepping up compliance and enforcement on the other side through a continuous validation footing. Anyone who’s been working with big data technologies can figure out how to get logs into CLAW. Doing that in a cost-effective way with the tools that will be most impactful is less obvious, which is another area where a partner can contribute.
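
As a loose illustration only, and assuming for the sake of the example that an agency delivers its telemetry to an S3 bucket designated for CLAW ingestion, a log-shipping step might look something like the sketch below. The bucket name, key prefix, and format are hypothetical; the real delivery mechanism is defined by CISA and your agency’s agreement.

```python
# Illustrative sketch: batching log records, compressing them, and delivering
# them to an S3 bucket assumed (for this example) to feed CLAW ingestion.
# Bucket name, key prefix, and record format are hypothetical placeholders.
import gzip
import json
import datetime
import boto3

s3 = boto3.client("s3")

def ship_log_batch(records: list[dict], bucket: str = "agency-claw-ingest") -> None:
    """Compress a batch of JSON log records and upload them for ingestion."""
    timestamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y/%m/%d/%H%M%S")
    body = gzip.compress(
        "\n".join(json.dumps(record) for record in records).encode("utf-8")
    )
    s3.put_object(
        Bucket=bucket,
        Key=f"netflow/{timestamp}.json.gz",
        Body=body,
        ContentEncoding="gzip",
    )
```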

Where a partner comes in

The biggest takeaway from these TIC 3.0 updates is that you don’t have to navigate them alone. An experienced partner can help you:

  • Digest the descriptive nature of the TIC 3.0 guidelines and align them with your agency’s mission and internal security requirements
  • Define trust zones and configure the tools that will make up your PEPs while significantly reducing the risk of misconfiguration
  • Accelerate innovation while embracing the spirit of the TIC 3.0 building block model

There are a lot of exciting changes within these core guidance documents. With the right partner on your side, you can continue to cloud confidently while reaching new heights of innovation.

How Private NAT Gateways Make for Easier Designs and Faster Deployments

NAT Gateways have historically been used to protect resources deployed into private subnets in virtual private clouds (VPCs). If resources deployed on a private subnet in a VPC need to access information outside the VPC (on the internet or on premises) and you want to block incoming connections to those resources, you use a NAT Gateway. The NAT Gateway provides access to get what you need and allows that traffic to return, but still won’t let anything that originated from the outside get in.

The core functionality of a NAT Gateway is allowing that one-way request origination flow.

Earlier this month, AWS announced that you can now launch NAT Gateways in your Amazon VPC without associating an Internet Gateway with your VPC. The private NAT Gateway lets resources that need to reach internal destinations, such as a data center, another VPC, or something else on premises, route directly through Virtual Private Gateways or Transit Gateways without an Internet Gateway in the path.
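
In practice, launching one is a small amount of code. The boto3 sketch below creates a private NAT Gateway using the ConnectivityType parameter and waits for it to become available; the subnet ID is a hypothetical placeholder.

```python
# Sketch: launching a private NAT Gateway (no Internet Gateway or Elastic IP
# required). The subnet ID is a hypothetical placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

nat_gateway = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",   # hypothetical subnet for the gateway
    ConnectivityType="private",            # the new private NAT Gateway mode
)
nat_gateway_id = nat_gateway["NatGateway"]["NatGatewayId"]

# Wait until the gateway is available before adding routes to it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gateway_id])
```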

This might seem like a modest bit of news, but it will lead to improved performance on both the business and engineering levels and demonstrates the constant innovation AWS provides its customers. 

Innovation continues at the fundamental level 

Continuous innovation has always been at the core of how AWS approaches problem solving. This is true for the bleeding edge of technology and for the fundamental building blocks of well-established disciplines, such as networking. The idea of Network Address Translation (NAT) isn’t anything new; it’s been a core building block for years. 

In the past, though, you would have done it on your own server, deploying, configuring, and maintaining a NAT instance. AWS brought the NAT Gateway into the fold as a managed service: with infrastructure as code or a few clicks in your console, you can attach a NAT Gateway to a private subnet in a VPC without worrying about the underlying infrastructure. There are no third-party tools or complex configurations to manage.

With the private NAT Gateway, AWS is adding a new capability to something fundamental, making it easier for companies and individuals to be more efficient and productive.

We see the same type of excitement and attention to detail both with major launches and when introducing new ways to use a product or service that already exists. It’s that combination of innovation for fundamental core offerings to make them easier to use plus bleeding edge innovation that really highlights the depth of expertise across AWS and its partners. 

Learn more about AWS Private NAT Gateways here

A boost in efficiency

Before the private NAT Gateway, every NAT Gateway had a hard dependency on an Internet Gateway being attached to the VPC for egress. Communicating with on-premises resources via a NAT Gateway required routing through an Internet Gateway or some other networking construct, which added a layer of complexity to protecting those private resources in a private VPC. The real benefit of this feature addition is the ease of protecting resources that need to reach out.

At its core, the private NAT Gateway is simplifying that outgoing request pattern — and it leads to a boost in efficiency. 
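
To make that pattern concrete, the sketch below adds the routes for a hypothetical setup: the application subnet sends on-premises-bound traffic to the private NAT Gateway, and the NAT Gateway’s subnet forwards that range to a Transit Gateway. All IDs and the CIDR are placeholders.

```python
# Sketch: wiring the outgoing request pattern for a private NAT Gateway.
# All resource IDs and the on-premises CIDR below are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ON_PREM_CIDR = "10.10.0.0/16"

# Application subnet's route table: send on-prem traffic to the private NAT Gateway.
ec2.create_route(
    RouteTableId="rtb-0aaaaaaaaaaaaaaaa",       # hypothetical app-subnet route table
    DestinationCidrBlock=ON_PREM_CIDR,
    NatGatewayId="nat-0bbbbbbbbbbbbbbbb",       # the private NAT Gateway created earlier
)

# NAT Gateway subnet's route table: forward on-prem traffic to the Transit Gateway.
ec2.create_route(
    RouteTableId="rtb-0cccccccccccccccc",       # hypothetical NAT-subnet route table
    DestinationCidrBlock=ON_PREM_CIDR,
    TransitGatewayId="tgw-0dddddddddddddddd",   # hypothetical Transit Gateway
)
```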

For businesses, the private NAT Gateway provides a consistent managed service offering from AWS to protect private resources that need outbound connectivity from a private subnet in a VPC. Prior to the private NAT Gateway, you would have needed to solve that problem with a third-party tool or a more complex networking architecture.

Security departments now have a more secure pattern that is less prone to misconfiguration, since an Internet Gateway is no longer a dependency. This makes it easier to understand how the organization is applying NAT Gateways and how operations teams are managing them. Teams can standardize on a private NAT Gateway approach and reproduce the networking pattern for these outgoing requests consistently.

For individual engineers, a private NAT Gateway simplifies design and deployment because of its inherent ease: a few clicks in the console or a few lines of infrastructure as code, rather than a more cumbersome third-party tool or a more complex configuration. AWS is extending the managed service functionality of the NAT Gateway to the specific use case of handling a private subnet’s outgoing traffic. This addition makes design easier, deployment faster, and the entire pattern more repeatable, because you’re simply consuming the managed NAT Gateway service from AWS.

Why this is worth a look

As an engineer, I certainly understand the mindset of wanting to minimize complexity. Enterprise users have NAT Gateways deployed with a dependency on an Internet Gateway and likely have more complex routing solutions in place to protect against unintended incoming requests via that Internet Gateway. Those solutions might be working just fine, and that’s great.  

But from my vantage point, I strongly encourage you to take another look at your egress-only Internet Gateway and NAT Gateway architecture for private subnets. You could be missing an opportunity to greatly streamline how you work.

At worst, you can simplify how you use your “egress-only” communications. At best, you’ll eliminate a third-party tool and save money while freeing up more of your individual time.

That alone makes it worth taking a second look at the way you’re operating. We should be regularly evaluating our deployments anyway, but that especially applies to networking complexity and simplification.

I look forward to the improved ease of use for my clients with private NAT Gateways, and am confident you’ll find a similar model of success with your deployments.