TIC 3.0 (UPDATE)

Last summer, the Cybersecurity and Infrastructure Security Agency (CISA) released core guidance documentation for the Trusted Internet Connections (TIC) program. The TIC 3.0 updates are aimed at helping agencies better protect modern information technology architectures.

While the updates are significant, there’s a consistent theme throughout TIC 3.0: the federal government is more in tune with the cloud, opening the door to more innovation, flexibility, and opportunity than ever before.

Here are five major takeaways from TIC 3.0.

Descriptive, not prescriptive guidelines

Previously, TIC 2.0 featured a 75-page reference architecture document that outlined a hard line between the federal government’s boundary and everything outside of it. The guidance required each individual agency to figure out how to interpret that line and limit the entry and exit points.

The spirit of TIC 2.0 sought to limit internet access points, which led to TIC-specific infrastructure being deployed in federal data centers. As agencies started moving to the cloud, TIC 2.0 still mandated that traffic travel through these TIC architectures, which remained largely on premises. The result was some very complex networking constructs that, at best, diminished the benefits of the cloud, such as elasticity and the ability to scale.

Since that time, the federal government’s need for innovation facilitated by cloud adoption has driven the need for TIC 3.0.

In contrast, TIC 3.0 offers considerations around security deployments but also allows for innovation as it relates to a trusted internet connection, without outlining the specifics of the implementation. In other words, TIC 3.0 is a descriptive approach, not a prescriptive one. 

While this update enables innovation, it also leaves a lot of judgment calls about how to implement the guidance up to agencies. A partner with experience across multiple public sector implementations and a variety of agencies can bring additional insight to bolster an agency as it figures out how to navigate these decisions.

Distributed and elastic security capabilities

The TIC 3.0 update is the closest thing yet to a zero-trust model the federal government has produced for a Trusted Internet Connection. The new guidance introduces “security capabilities” in two broad categories:

  • Universal capabilities: enterprise-level capabilities that outline guiding principles for TIC use cases
  • Policy Enforcement Point (PEP) capabilities: network-level security capabilities that inform technical implementation for a use case

Instead of offering a strict prescription for what must be done, the guidance provides policies for deploying in the cloud closer to the data — a far better security model. Rather than all information flowing through a centralized architecture, PEP security capabilities can now reside very close to the end user. You can also create PEPs across a distributed network, allowing you to scale and distribute resources as you see fit.

In addition, TIC 3.0 guidance specifically addresses pilot activities, emerging technologies, and threat insights as they’re discovered, and CISA will update the security capabilities catalog along the way. The catalog is purposefully built to be elastic and flex with the federal computing landscape as it grows.

Flexible trust zones that facilitate innovation

PEPs must be considered when protecting systems or data within the target architecture, but there’s also the concept of trust zones. Historically, trust zones fell into one of two categories: trusted or untrusted. But the way these zones are applied in TIC 3.0 architecture is more evolved.

Agencies can now choose among high, moderate, and low trust security postures. That distinction creates a building-block approach for agencies. If they highly trust one system in the cloud, they can deploy a customized set of security capabilities at their PEPs for that trust level and risk matrix. A low-trust system, by contrast, would require more security capabilities. That level of flexibility drives innovation.

You can even have two systems hosted in the same place conceptually with two different levels of trust. For instance, if the threat matrix against a workload is higher — certain workloads may be more susceptible to DDoS or other types of malicious attacks — or user access needs to be different, you might classify your trust zones differently.
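To make the building-block idea concrete, here is a minimal, purely illustrative sketch of how an agency might encode trust zones and the PEP security capabilities applied to each. The zone names and capability labels below are hypothetical placeholders, not entries from CISA’s security capabilities catalog.

```python
# Illustrative only: hypothetical trust zones mapped to hypothetical PEP
# capability labels. A real deployment would draw its capability names from
# CISA's TIC 3.0 Security Capabilities Catalog and its own risk matrix.
BASELINE = {"traffic_encryption", "access_logging"}

TRUST_ZONES = {
    "high": BASELINE,
    "moderate": BASELINE | {"intrusion_detection", "mfa_enforcement"},
    "low": BASELINE | {"intrusion_detection", "mfa_enforcement",
                       "ddos_protection", "content_filtering"},
}

def pep_capabilities(trust_level: str) -> set:
    """Return the set of PEP capabilities to deploy for a given trust zone."""
    return TRUST_ZONES[trust_level]

# Two workloads hosted in the same environment can carry different trust
# levels, and therefore different capability sets at their PEPs.
print(sorted(pep_capabilities("high")))
print(sorted(pep_capabilities("low")))
```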

Tool misconfiguration, not the capability of the tools themselves, is one of the top security lapses in cloud technologies. Gartner estimates that “through 2025, 99% of cloud security failures will be the customer’s fault.” Most of the panic-inducing headlines about cloud security issues trace back to someone who didn’t understand how to configure a tool properly for a particular workload.

As you define these trust zones and the tools that make up your PEPs, a partner who has previously helped agencies define responsibilities and risk acceptance, and who has implemented the technology stacks to complement that strategy, can guard against this kind of misconfiguration with much more confidence.

Continuous validation instead of a point in time

With TIC 2.0, you had to pass a TIC compliance validation as a point-in-time exercise. Federal Network Security would look at your system through a lens of validation and determine compliance. While compliance was important to achieve, that validation only applied to a specific moment in time.

The TIC 3.0 updates provide a higher level of accountability through the idea of continuous validation, particularly with cloud systems. CISA is deploying cloud capabilities through the Cloud Log Aggregation Warehouse (CLAW) under the Continuous Diagnostics and Mitigation (CDM) and National Cybersecurity Protection System (NCPS) programs.

Rather than a specific point-in-time validation, these programs request that logs be sent continuously for ongoing validation. They facilitate innovation on one side while stepping up compliance and enforcement on the other through a continuous validation footing. Anyone who has worked with big data technologies can figure out how to get logs into CLAW; doing so cost-effectively, with the tools that will be most impactful, is less obvious, and that’s another area where a partner can contribute.
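As a rough sketch of the plumbing involved, and assuming the common pattern of delivering compressed log batches to an agency-designated Amazon S3 location for CLAW ingestion, the loop can be as simple as the following. The bucket name, key prefix, and schedule are placeholders; your agency’s actual delivery mechanism and log schema may differ.

```python
import datetime
import gzip

import boto3

# Placeholders: substitute your agency's designated CLAW ingestion details.
CLAW_BUCKET = "example-agency-claw-ingest"
LOG_PREFIX = "netflow/"

def ship_log_batch(local_path: str) -> str:
    """Compress a local log file and upload it for continuous ingestion."""
    with open(local_path, "rb") as f:
        compressed = gzip.compress(f.read())

    timestamp = datetime.datetime.utcnow().strftime("%Y/%m/%d/%H%M%S")
    key = f"{LOG_PREFIX}{timestamp}.log.gz"

    boto3.client("s3").put_object(Bucket=CLAW_BUCKET, Key=key, Body=compressed)
    return key

# Run on a schedule (cron, AWS Lambda, and so on) so that validation stays
# continuous rather than point-in-time.
```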

Where a partner comes in

The biggest takeaway from these TIC 3.0 updates is that you don’t have to navigate them alone. An experienced partner can help you:

  • Digest the descriptive nature of the TIC 3.0 guidelines and align them with your agency’s mission and internal security requirements
  • Define trust zones and configure the tools that will make up your PEPs while significantly reducing the risk of misconfiguration
  • Accelerate innovation while embracing the spirit of the TIC 3.0 building block model

There are a lot of exciting changes within these core guidance documents. With the right partner on your side, you can continue to cloud confidently while reaching new heights of innovation.

How Private NAT Gateways Make for Easier Designs and Faster Deployments

NAT Gateways have historically been used to protect resources deployed into private subnets in virtual private clouds (VPCs). If resources on a private subnet in a VPC need to access information outside the VPC (on the internet or on premises) and you want to block incoming connections to those resources, you use a NAT Gateway. The NAT Gateway provides access to get what you need and allows that traffic to return, but it still won’t allow a connection that originates from the outside to get in.

The core functionality of a NAT Gateway is allowing that one-way request origination flow.

Earlier this month, AWS announced that you can now launch NAT Gateways in your Amazon VPC without associating an Internet Gateway with your VPC. The private NAT Gateway allows you to route directly to Virtual Private Gateways or Transit Gateways, without an Internet Gateway in the path, for resources that need to reach out to internal destinations, like a data center, another VPC, or something else on premises.
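If you want to try the feature, a minimal boto3 sketch looks something like the following. The subnet ID is a placeholder, and the only difference from a traditional NAT Gateway is the ConnectivityType setting; no Elastic IP and no Internet Gateway are required.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder subnet ID; use the subnet the private NAT Gateway should live in.
response = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",
    ConnectivityType="private",  # no Elastic IP, no Internet Gateway needed
    TagSpecifications=[{
        "ResourceType": "natgateway",
        "Tags": [{"Key": "Name", "Value": "private-nat-example"}],
    }],
)
nat_gateway_id = response["NatGateway"]["NatGatewayId"]

# Wait until the gateway is available before wiring up routes to it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gateway_id])
print(nat_gateway_id)
```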

This might seem like a modest bit of news, but it will lead to improved performance on both the business and engineering levels and demonstrates the constant innovation AWS provides its customers. 

Innovation continues at the fundamental level 

Continuous innovation has always been at the core of how AWS approaches problem solving. This is true for the bleeding edge of technology and for the fundamental building blocks of well-established disciplines, such as networking. The idea of Network Address Translation (NAT) isn’t anything new; it’s been a core building block for years. 

In the past, though, you would have done it on your own server, deploying, configuring, and maintaining a NAT instance. AWS brought the NAT Gateway into the fold, the differentiator being that it’s a managed service offering: a few lines of infrastructure as code, or a few clicks in your console, stand up a NAT Gateway for a private subnet in a VPC, so you don’t have to worry about the underlying infrastructure. There are no third-party tools or complex configurations to manage.
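For comparison, the traditional public NAT Gateway pattern looks roughly like this in boto3: allocate an Elastic IP, create the gateway in a public subnet (which requires an Internet Gateway on the VPC), and point the private subnet’s route table at it. The resource IDs below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholders for your public subnet and the private subnet's route table.
PUBLIC_SUBNET_ID = "subnet-0aaa1111bbbb2222c"
PRIVATE_ROUTE_TABLE_ID = "rtb-0ddd3333eeee4444f"

# 1. Allocate an Elastic IP for the public NAT Gateway.
eip = ec2.allocate_address(Domain="vpc")

# 2. Create the NAT Gateway in a public subnet.
nat = ec2.create_nat_gateway(
    SubnetId=PUBLIC_SUBNET_ID,
    AllocationId=eip["AllocationId"],
)
nat_gateway_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gateway_id])

# 3. Send the private subnet's outbound traffic through the NAT Gateway.
ec2.create_route(
    RouteTableId=PRIVATE_ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gateway_id,
)
```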

With the private NAT Gateway, AWS is instilling a new feature into something that’s fundamental, making it easier for companies and individuals to be more efficient and productive. 

We see the same type of excitement and attention to detail both with major launches and when introducing new ways to use a product or service that already exists. It’s that combination of innovation for fundamental core offerings to make them easier to use plus bleeding edge innovation that really highlights the depth of expertise across AWS and its partners. 

A boost in efficiency

Before the private NAT Gateway, every NAT Gateway had a hard dependency on an Internet Gateway being attached to the VPC for egress. Communicating with on-premises resources via a NAT Gateway required routing through an Internet Gateway or some other networking construct, which added a layer of complexity to protecting those private resources in the VPC. The real benefit of this new feature is how easy it becomes to protect resources that need to reach out.

At its core, the private NAT Gateway is simplifying that outgoing request pattern — and it leads to a boost in efficiency. 

For businesses, the private NAT Gateway provides a consistent managed service offering from AWS for protecting private resources that need outbound connectivity from a private subnet in a VPC. Prior to the private NAT Gateway, you would have needed to solve that problem with a third-party tool or a more complex networking architecture.

Security departments now have a more secure pattern that is less prone to misconfiguration, since an Internet Gateway is no longer a dependency. That makes it easier to understand how the organization is applying NAT Gateways and how operations teams are managing them. Teams can standardize on a private NAT Gateway approach and reproduce the networking path for these outgoing requests consistently.

For individual engineers, a private NAT Gateway simplifies design and deployment because of its inherent ease: a few clicks in your console or a few lines of infrastructure as code, rather than a more cumbersome third-party tool or a more complex configuration. AWS is extending the managed service functionality of the NAT Gateway to the specific use case of handling private subnet outgoing traffic. This addition makes design easier, deployment faster, and the whole pattern more repeatable, because you’re simply consuming the managed NAT Gateway service from AWS.
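To illustrate just how few lines of infrastructure as code that can be, here is a hedged sketch of the routing for the on-premises use case, assuming a private NAT Gateway already exists and on-premises connectivity runs through a Transit Gateway. Every ID and the CIDR range below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholders; substitute your own resource IDs and on-premises CIDR range.
WORKLOAD_ROUTE_TABLE_ID = "rtb-0aaa1111bbbb2222c"  # private workload subnet
NAT_GW_ROUTE_TABLE_ID = "rtb-0ddd3333eeee4444f"    # subnet hosting the private NAT Gateway
PRIVATE_NAT_GATEWAY_ID = "nat-0123456789abcdef0"
TRANSIT_GATEWAY_ID = "tgw-0fedcba9876543210"
ON_PREM_CIDR = "10.100.0.0/16"

# 1. Workload subnet: send on-premises-bound traffic to the private NAT Gateway.
ec2.create_route(
    RouteTableId=WORKLOAD_ROUTE_TABLE_ID,
    DestinationCidrBlock=ON_PREM_CIDR,
    NatGatewayId=PRIVATE_NAT_GATEWAY_ID,
)

# 2. NAT Gateway subnet: forward the same destination on to the Transit Gateway,
#    which carries it to the data center. No Internet Gateway is in the path.
ec2.create_route(
    RouteTableId=NAT_GW_ROUTE_TABLE_ID,
    DestinationCidrBlock=ON_PREM_CIDR,
    TransitGatewayId=TRANSIT_GATEWAY_ID,
)
```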

Why this is worth a look

As an engineer, I certainly understand the mindset of wanting to minimize complexity. Enterprise users have NAT Gateways deployed with a dependency on an Internet Gateway and likely have more complex routing solutions in place to protect against unintended incoming requests via that Internet Gateway. Those solutions might be working just fine, and that’s great.  

But from my vantage point, I strongly encourage you to take another look at your egress-only internet gateway and NAT Gateway architecture for private subnets. You could be missing an opportunity to greatly streamline how you work.

At worst, you can simplify how you use your “egress-only” communications. At best, you’ll eliminate a third-party tool and save money while freeing up more of your individual time.

That alone is worth a second look at the way you’re operating. We should be regularly evaluating our deployments anyway, but that especially applies to networking complexity and simplification.

I look forward to the improved ease of use that private NAT Gateways bring my clients, and I’m confident you’ll find a similar model of success with your own deployments.