FedRAMP High Agency Authority to Operate on VMware Cloud on AWS GovCloud (US): What it Means for the Public Sector

VMware recently announced its VMware Cloud on AWS GovCloud (US) achieved FedRAMP Agency Authority to Operate (ATO) at the High Impact Level. High Impact is the most secure authorization level a cloud service provider can achieve through FedRAMP, the Federal Risk and Authorization Management Program.

VMware Cloud on AWS gives public sector IT teams an on-demand, scalable hybrid cloud service, enabling those teams to seamlessly extend, migrate and protect their infrastructure in the cloud.

This announcement has big implications for the public sector, especially for the organizations already using VMware in some capacity — which is a majority of agencies. 

What the FedRAMP High Agency ATO means for government agencies

Within the industry, FedRAMP and FISMA (Federal Information Security Management Act) are often spoken about interchangeably. While both are based on NIST 800-53 and have an end goal of ensuring government data is protected, here’s a quick overview of the distinction: 

  • FISMA offers guidelines to government agencies through a series of controls on how to protect systems and data, in transit or at rest, providing the baseline controls an agency must achieve for a workload when that workload is not already accredited through FedRAMP
  • FedRAMP is more stringent and provides an accreditation, controls, documentation, and instructions which can be inherited in an agency ATO

FedRAMP approval means a third party has reviewed a software offering and confirmed it meets FISMA control specifications if deployed per the FedRAMP approved process, which can save agencies a tremendous amount of time and reduce the strain on agency engineering teams. 

NIST 800-53 prescribes controls for systems that have been categorized using the guidance found in FIPS-199 concerning confidentiality, integrity, and availability of data. Workloads that have not been categorized and had the proper controls deployed for the appropriate FISMA classification are not ready for production data.
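The FIPS-199 categorization step follows a simple "high-water mark" rule: each of confidentiality, integrity, and availability is assigned an impact level (low, moderate, or high), and the system's overall category is the highest of the three. The sketch below is an illustrative helper, not an official tool.

```python
# FIPS-199 "high-water mark": a system's overall security category is the
# highest impact level assigned across confidentiality, integrity, and
# availability.
IMPACT_ORDER = {"low": 0, "moderate": 1, "high": 2}

def fips199_category(confidentiality: str, integrity: str, availability: str) -> str:
    """Return the overall FIPS-199 categorization for a system."""
    levels = (confidentiality, integrity, availability)
    for level in levels:
        if level not in IMPACT_ORDER:
            raise ValueError(f"unknown impact level: {level!r}")
    # The overall category is the maximum of the three impact levels.
    return max(levels, key=IMPACT_ORDER.__getitem__)

# Example: moderate confidentiality, high integrity, and low availability
# requirements categorize the workload High overall.
print(fips199_category("moderate", "high", "low"))  # high
```

A workload categorized this way then maps to the corresponding NIST 800-53 control baseline before it is ready for production data.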

Absent an ATO, the agency is often limited to testing workloads using sample data in development or test environments. FedRAMP inheritance provides an agency the fastest path to deploying a workload into production and achieving an agency ATO. 

With the achievement of FedRAMP ATO, government agencies within the public sector can now experience the benefits of VMware Cloud on AWS more rapidly. For example, an agency can deploy VMware Cloud on AWS GovCloud (US) with the FedRAMP package and inherit all the security controls available within the FedRAMP assessment. 

Data center migration to VMware Cloud on AWS GovCloud (US)

Many organizations have time-limited data center leases. When the next data center lease renewal is on the horizon, the decision to stay in a physical data center or vacate to the cloud is likely part of the overall financial analysis.

Planning to vacate a physical data center can quickly become stressful. Do you need new contracts in place? More engineers? What kind of resources are required? What technical debt is incurred by the decision to vacate?

Agencies are rapidly consolidating and moving away from the physical data center model. Renewing data center leases because “our agency couldn’t get out in time” becomes a less than desirable option. However, the alternative agencies frequently turn to is trying to accelerate modernization, often while misjudging their true technical debt. This often leads to missed timelines, last-minute data center lease extensions, and a re-baselining of the overall project with new, unplanned funding.

Most agencies are not running physical data center operations on bare metal; many already have VMware in place today. An agency with VMware wants to migrate its applications, workloads, and data to the cloud quickly, without taking the time to refactor everything for cloud-native infrastructure.

By moving to VMware Cloud on AWS GovCloud (US), agencies can implement a more expedient option: Inherit the FedRAMP ATO and then rapidly and safely move each workload to the cloud while assured the workloads and data remain secure and compliant. In doing so, they can also continue to use standard tools, training, skills, and capabilities on which their staff is already trained.

With this approach, agencies can treat cloud modernization as a marathon rather than a sprint, avoiding hasty decisions that could lead to greater problems down the road.

Benefits of VMware Cloud on AWS GovCloud (US)

FedRAMP provides a “do it once, use it many times” framework for government agencies. The benefits of migrating to VMware Cloud on AWS GovCloud (US) can be significant. Consider the following key advantages:

  • Minimal disruption to operations
    The public expects our government to protect data and maintain continuity of operations, especially during times of national emergency. Moreover, the public expects the government to modernize information technology investments. VMware Cloud on AWS empowers agencies to continue normal operations during a migration and allows for a “sandbox” of sorts, empowering development teams to run tests in virtualized environments without risking the integrity of production workloads. 
  • Substantial time savings during migration
    VMware Cloud on AWS is the fastest way for agencies to move currently virtualized workloads to the cloud. Many government agencies shy away from services that haven’t achieved FedRAMP accreditation because of the additional time and money required to meet FISMA requirements using non-FedRAMP-authorized tools. A FedRAMP ATO helps streamline the entire process. 
  • Access to AWS innovation 
    Once agencies have made the migration from on-premises to VMware Cloud on AWS, they have a far broader set of options for modernization, including powerful AWS cloud native services and features.  
  • Smaller learning curves
    The FedRAMP ATO provides government agencies with the accreditation, controls, documentation, and instructions they need to protect their data. Agencies can move virtual machines (VMs), workloads, and data to AWS from within vCenter without significant investment in learning AWS native tools and services. 
  • Reduced cost for VMware users
    For organizations currently using VMware and vacating an on-premises data center, migration costs will be reduced. Workloads can be migrated seamlessly via vCenter, moving the VMs from the on-premises data center onto AWS.

This FedRAMP ATO achievement for VMware Cloud on AWS GovCloud (US) highlights the value government agencies can realize from migrating to the cloud. We’re already seeing a mindset shift in government agencies, as more organizations start realizing what the cloud can do for them. The FedRAMP ATO at the High Impact Level will only accelerate the capabilities of these agencies.  

Want to see additional ways the cloud can help innovation within the public sector? Click here for more.

Michael Bryant is Vice President, Public Sector Strategy at Effectual, Inc. 

Solving Problems with the Cloud

Scaling Your Business & Improving Remote Work with Cloud Innovation

Throughout the life cycle of any business, there are obstacles to overcome. Even if you’re close to perfection in one area, there will always be another challenge looming in front of you like an overwhelming math equation on a chalkboard. And unless you’ve got Will Hunting on speed-dial, you may not know where to begin. 

Customers come to Effectual with a variety of business challenges, but two have stood out in the era of COVID:

  1. How to ensure smooth remote work experiences; and
  2. How to scale quickly to meet growing demand

Challenges of Remote Work

The acceleration of remote work is pushing digital transformation faster as companies adapt and try to deliver work environments that support employee productivity and engagement. Though many of them responded to the remote work reality of the pandemic by offering at-home perks and collaborative online tools, the majority were behind the 8-ball with their remote work options.

Inefficient remote desktops

Remote desktops are one solution companies adopted, yet they can be slow and inefficient, and simply aren’t that innovative when it comes to fostering a positive remote work experience. While using remote desktops for employees who are unable to come to a local office or data center can make sense, latency and performance concerns increase the farther users sit from the data center serving the solution. The question then becomes: what is their experience like when it comes to latency and collaboration with other remote team members? 

Security vulnerabilities

There are also security concerns with remote employees. About half of workers in the U.S. were hit by phishing emails, phone calls, or texts in the first six months of working remotely. As personal and professional lives blend together, employees may also become a bit lax about using their social media or personal email accounts on a work device. These scenarios leave companies vulnerable to security threats.

The truth is that we’re likely never going to return to pre-pandemic levels of office work. In fact, only one in three U.S. companies indicate plans for a return to the “in-person first” employment model this year, with nearly half of businesses embracing a hybrid workforce. This means concerns about the remote work experience will remain for the foreseeable future.

Tools like Amazon WorkSpaces allow for distributed remote desktops across regions and availability zones, putting the tool as close as possible to the end user to maximize their experience. We have helped many customers deploy Amazon WorkSpaces securely and performantly in response to the remote work landscape.
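As a rough sketch of the multi-region idea, the snippet below groups per-user WorkSpaces provisioning requests by each user’s nearest region. The user-to-region mapping, directory IDs, and bundle IDs are all placeholders for illustration, and the actual API call is left commented out; treat this as a sketch, not a production deployment.

```python
# Hypothetical mapping of users to their nearest AWS region; in practice this
# might come from an HR system or latency measurements.
USER_HOME_REGIONS = {
    "alice": "us-east-1",
    "bob": "eu-west-1",
    "carol": "ap-southeast-2",
}

# Placeholder per-region directory and bundle IDs (assumed, not real).
REGION_CONFIG = {
    "us-east-1": {"DirectoryId": "d-example1", "BundleId": "wsb-example1"},
    "eu-west-1": {"DirectoryId": "d-example2", "BundleId": "wsb-example2"},
    "ap-southeast-2": {"DirectoryId": "d-example3", "BundleId": "wsb-example3"},
}

def build_workspace_requests(users: dict) -> dict:
    """Group WorkSpaces provisioning payloads by region, nearest-first."""
    requests: dict = {}
    for user, region in users.items():
        cfg = REGION_CONFIG[region]
        requests.setdefault(region, []).append({
            "DirectoryId": cfg["DirectoryId"],
            "UserName": user,
            "BundleId": cfg["BundleId"],
            # AUTO_STOP desktops pause when idle, which helps control cost.
            "WorkspaceProperties": {"RunningMode": "AUTO_STOP"},
        })
    return requests

requests_by_region = build_workspace_requests(USER_HOME_REGIONS)
# For each region you would then call, for example:
#   boto3.client("workspaces", region_name=region).create_workspaces(Workspaces=batch)
```

Because each batch is created in the region closest to its users, latency to the remote desktop stays low regardless of where employees sit.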

Roadblocks to Rapid Scaling

Though companies are beginning to recognize that the cloud can help them scale a product worldwide or open new markets, there are still many misconceptions when it comes to how to implement effective modern strategies.

Lack of internal expertise

For example, an executive may get inspired by an article about how a move to the cloud can save money and increase business agility. If they task their internal team with spinning up infrastructure without the guidance of an experienced cloud partner or solution architect, what seemed like a bargain can turn into an expensive project that costs far more. 

Not architecting for failure

In times of growth, you can’t simply move everything that’s on-premises over to the cloud as is and expect the exact same results without proper planning and execution. Werner Vogels, Amazon’s Chief Technology Officer, has said for years that “everything fails all the time.”

It’s a rare occurrence, but the availability of your application could be more at risk than it was in the data center if your cloud presence hasn’t been architected for this reality. In other words, you must architect for failure. If you prepare properly, you can achieve all that the cloud has to offer for reliability, availability, elasticity, and cost optimization.

When you launch an application, you also do not know what the response will be like without proper testing — you may have ten or ten million people hitting an application all at once. If you haven’t built your app to scale dynamically with demand, it will either crash or its performance will be severely impacted. In any case, end-user experience will suffer.
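On AWS, one common way to build an application that scales dynamically with demand is a target-tracking scaling policy on an Auto Scaling group: you pick a target metric value, such as 60% average CPU, and the service adds or removes instances to hold it. The sketch below only constructs the request payload; the group name and target value are assumptions for illustration, and the live API call is commented out.

```python
def target_tracking_policy(asg_name: str, target_cpu_percent: float) -> dict:
    """Build a PutScalingPolicy request for CPU-based target tracking."""
    return {
        "AutoScalingGroupName": asg_name,
        "PolicyName": f"{asg_name}-cpu-target-tracking",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            # Track the Auto Scaling group's average CPU utilization.
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            # Scale out or in to hold average CPU near this value.
            "TargetValue": target_cpu_percent,
        },
    }

policy = target_tracking_policy("web-tier-asg", 60.0)
# In a real deployment:
#   boto3.client("autoscaling").put_scaling_policy(**policy)
```

Whether ten or ten million users arrive at once, the group grows and shrinks to match the load instead of crashing or degrading.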

Forgetting to evaluate tradeoffs

Last, companies often fail to evaluate tradeoffs when making decisions. For example, every new technical pattern your team deploys represents a potential increase in cost. It is important to decide how performant you need to be versus how much cost you’re willing to tolerate. 

The gaming industry is an example of using the cloud to make informed decisions around scaling. A company has two to four weeks to make money on a product launch that it’s been building for three to five years. In that first month, infrastructure cost almost doesn’t matter. The product must work — and work well — because latency is its biggest enemy. Those infrastructures are frequently over-provisioned on purpose so they stay performant, and then can scale down when demand stabilizes. 

Working with an experienced cloud partner can help you identify those tradeoffs and be ready to implement tradeoff decisions at the technical level.

Solving Problems with the Cloud

With a clear strategy and the right expertise, you can use the cloud to address these challenges and deliver high-performing, scalable solutions. Here are some primary considerations:

Build performant architecture

Using the global network of the cloud for distributed performance can dramatically improve the internal experience of your remote employees. When you spin up remote desktops in multiple regions around the world using AWS or another cloud provider, you are putting that infrastructure closer to end users so they can execute more effectively. 

Put security tools at the edge

Beyond performant architecture, the cloud offers the ability to put security tools out at the edge. Moving data and compute closer to the end user makes the experience more performant, and the security tools move alongside the data and compute. Because protection happens where the infrastructure lives, it covers the whole architecture far more broadly than centralizing security in a single place for all vulnerability identification.

In my role, I’m regularly working with federal civilian and Department of Defense agencies at all Impact Levels, including secret and top-secret workloads — and they’re all using the cloud. These organizations cloud confidently because they’re pushing security tools out in the same regions as compute and storage resources. Those tools protect the point of entry and keep that critical information safe.

Again, that security isn’t as effective without us architecting for each organization’s specific requirements and for the benefits that the cloud provides. 

Develop a migration strategy that fits your objectives

In times of growth, moving your on-premises workloads to the cloud requires a well-defined migration strategy in order to mitigate risk and ensure your operations continue to run efficiently. This is not to say that it can’t happen quickly, but it must include proper preparation and architecting for failure so that your company can truly leverage the benefits of cloud computing.

A recent customer decided to migrate immediately to AWS as a lift-and-shift move in order to keep up with rapidly growing demand. They plan to pursue application and data modernization efforts in the coming months, but because they needed to address urgent issues first, the move to AWS improved both scalability and reliability right away. We were able to help them take advantage of the immediate benefits of AWS, such as moving databases to Amazon Relational Database Service (RDS) with little impact on the overall application. Once you have successfully migrated your workloads, there are many opportunities for continued modernization.

Last, if you are considering a move to the cloud, remember that you don’t necessarily need to change everything all at once. One of our customers recently experienced a massive spike in traffic to their on-premises hosted web application. They called us concerned their infrastructure couldn’t handle the traffic. In less than 24 hours, we were able to stand up AWS CloudFront in front of their servers to ensure all that traffic received a cached version out of the content distribution network. By effectively offloading cached requests to CloudFront, their application remained reliable and highly available to their end users, with nothing migrated to AWS.
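Putting a cache in front of an existing origin, as in the example above, amounts to creating a CloudFront distribution whose origin is the on-premises web server. The sketch below builds a minimal distribution config for a custom origin; the domain name is a placeholder, several settings a real config requires are omitted for brevity, and the live API call is commented out.

```python
import time

def cloudfront_distribution_config(origin_domain: str) -> dict:
    """Build a minimal CloudFront DistributionConfig for a custom origin.

    origin_domain is the public hostname of the existing (on-premises)
    server. Only the fields needed to illustrate the idea are included; a
    real config requires more settings (cache policy, TLS options, etc.).
    """
    return {
        "CallerReference": f"offload-{int(time.time())}",  # must be unique
        "Comment": "Cache layer in front of an on-premises origin",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "onprem-origin",
                "DomainName": origin_domain,
                # Custom origin: CloudFront fetches cache misses over HTTPS.
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "onprem-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
        },
    }

config = cloudfront_distribution_config("app.example.com")
# In a real deployment:
#   boto3.client("cloudfront").create_distribution(DistributionConfig=config)
```

Once DNS points viewers at the distribution, cached requests never touch the on-premises servers, which is exactly how the traffic spike above was absorbed without migrating anything.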

The cloud can help you solve even your toughest business problems — if you have the expertise to take advantage of its benefits. Not sure where to start? Learn how we can help. 

Jeff Carson is VP of Public Sector Technology at Effectual, Inc. 

TIC 3.0 Update

Last summer, the Cybersecurity and Infrastructure Security Agency (CISA) released core guidance documentation for the Trusted Internet Connections (TIC) program. The TIC 3.0 updates aim to better assist agencies with protecting modern information technology architectures. 

While the updates are significant, there’s a consistent theme throughout TIC 3.0: the federal government is more in tune with the cloud, opening the door to more innovation, flexibility, and opportunity than ever before.

Major takeaways from TIC 3.0

#1 Descriptive, not prescriptive guidelines

Previously, TIC 2.0 featured a 75-page reference architecture document that outlined a hard line between the federal government’s boundary and everything outside of it. The guidance required each individual agency to figure out how to interpret that line and limit the entry and exit points.

The spirit of TIC 2.0 sought to limit internet access points, causing TIC-specific infrastructure to be deployed in federal data centers. As agencies started moving to the cloud, TIC 2.0 still mandated that traffic travel through these TIC architectures, which remained largely on premises. This resulted in some very complex networking constructs that at best minimized the benefits of the cloud, such as elasticity and the ability to scale. 

Since that time, the federal government’s need for innovation facilitated by cloud adoption has driven the need for TIC 3.0.

In contrast, TIC 3.0 offers considerations around security deployments but also allows for innovation as it relates to a trusted internet connection, without outlining the specifics of the implementation. In other words, TIC 3.0 is a descriptive approach, not a prescriptive one. 

While this update enables innovation, it also leaves a lot of judgment calls around how to implement the guidance up to agencies. For example, a partner that has had experience with multiple public sector implementations across a variety of agencies can bring additional insights to bolster an agency as they figure out how to navigate these decisions. 

#2 Distributed and elastic security capabilities

The TIC 3.0 update is the closest thing yet to a zero-trust model the federal government has produced for a Trusted Internet Connection. The new guidance introduces “security capabilities” in two broad categories:

  • Universal capabilities: enterprise-level capabilities that outline guiding principles for TIC use cases
  • Policy Enforcement Point (PEP) capabilities: network-level security capabilities that inform technical implementation for a use case

Instead of offering a strict prescription for what must be done, the guidance provides policies for deploying in the cloud closer to the data — a far better security model. Rather than all information flowing through a centralized architecture, PEP security capabilities can now reside very close to the end user. You can also create PEPs across a distributed network, allowing you to scale and distribute resources as you see fit.

In addition, TIC 3.0 guidance specifically addresses pilot activities, emerging technologies, and threat insights as they’re discovered, and will update the security capabilities catalog along the way. The catalog is purposefully built to be elastic and flex with the federal computing landscape as it grows.

#3 Flexible trust zones that facilitate innovation

PEPs must be considered when protecting systems or data within the target architecture, but there’s also the concept of trust zones. Historically, trust zones fell into one of two categories: trusted or untrusted. But the way these zones are applied in TIC 3.0 architecture is more evolved.

Agencies can now make the consideration between highly trusted, moderately trusted, or low-trusted security stances. That distinction creates a building block approach for agencies. If they highly trust one system in the cloud, they can deploy a customized set of security capabilities for that trust level and risk matrix at PEPs. Alternatively, a low-trusted system would require more security capabilities. That level of flexibility drives innovation.
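The building block idea can be pictured as a mapping from trust level to the set of PEP security capabilities a workload inherits, where lower trust means more capabilities. The capability names below are hypothetical placeholders for illustration, not entries from the official TIC 3.0 security capabilities catalog.

```python
# Hypothetical, cumulative capability sets per trust level: a lower-trust
# zone inherits everything a higher-trust zone requires, plus more.
BASELINE = {"encryption-in-transit", "access-logging"}
MODERATE_EXTRA = {"intrusion-detection", "web-application-firewall"}
LOW_TRUST_EXTRA = {"ddos-protection", "full-packet-capture"}

def pep_capabilities(trust_level: str) -> set:
    """Return the PEP capability set deployed for a given trust zone."""
    if trust_level == "high":
        return set(BASELINE)
    if trust_level == "moderate":
        return BASELINE | MODERATE_EXTRA
    if trust_level == "low":
        return BASELINE | MODERATE_EXTRA | LOW_TRUST_EXTRA
    raise ValueError(f"unknown trust level: {trust_level!r}")

# A low-trust zone carries strictly more capabilities than a high-trust one,
# so two systems hosted side by side can still get different protections.
assert pep_capabilities("high") < pep_capabilities("low")
```

The same pattern extends naturally: a workload with an elevated threat matrix simply gets assigned a lower trust level, and the additional capabilities follow automatically.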

You can even have two systems hosted in the same place conceptually with two different levels of trust. For instance, if the threat matrix against a workload is higher — certain workloads may be more susceptible to DDoS or other types of malicious attacks — or user access needs to be different, you might classify your trust zones differently.

Tool misconfiguration, not the capability of the tool, is one of the top security lapses in cloud technologies. Gartner estimates that “through 2025, 99% of cloud security failures will be the customer’s fault.” Most of the cloud security headlines that have caused panic were due to someone who didn’t understand how to configure the tools properly for a particular workload.

As you define these trust zones and the tools that make up your PEPs, a partner who has previously helped agencies define responsibilities and risk acceptance, and who has implemented the technology stacks that complement the strategy, can protect against this kind of misconfiguration with much more confidence. 

#4 Continuous validation instead of a point in time

With TIC 2.0, you had to pass a TIC compliance validation as a point-in-time exercise. Federal Network Security would look at your system through a lens of validation and determine compliance. While compliance was important to achieve, that validation only applied to a specific moment in time.

The TIC 3.0 updates provide a higher level of accountability through the idea of continuous validation, particularly with cloud systems. CISA is deploying cloud capabilities through the Cloud Log Aggregation Warehouse (CLAW) under the Continuous Diagnostics and Mitigation (CDM) and National Cybersecurity Protection System (NCPS) programs.

Rather than a specific point-in-time validation, these programs request logs be sent continuously for ongoing validation. They’re facilitating innovation on one side while stepping up compliance and enforcement on the other side through a continuous validation footing. Anyone who’s been working with big data technologies can figure out how to get logs into CLAW. Doing that in a cost-effective way with the tools that will be most impactful is less obvious, which is another area where a partner can contribute.
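Cost-effective log shipping usually comes down to batching and compressing events before sending them, rather than streaming each one individually. The sketch below is a generic batching helper under those assumptions, not a CLAW-specific client; the actual ingestion interface varies by program and agency.

```python
import gzip
import json

def batch_and_compress(events: list, max_batch: int = 500) -> list:
    """Group log events into batches and gzip each batch for shipping.

    Batching amortizes per-request overhead and compression cuts transfer
    volume, both of which reduce the cost of continuous log delivery.
    """
    batches = []
    for start in range(0, len(events), max_batch):
        chunk = events[start:start + max_batch]
        # Newline-delimited JSON is a common wire format for log pipelines.
        payload = "\n".join(json.dumps(e) for e in chunk).encode("utf-8")
        batches.append(gzip.compress(payload))
    return batches

# 1,200 events at a batch size of 500 yield 3 compressed payloads.
compressed = batch_and_compress([{"i": i} for i in range(1200)])
```

Each compressed payload can then be delivered on a schedule to whatever endpoint the validation program specifies, keeping the continuous feed flowing without paying for one request per event.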

Where a partner comes in

The biggest takeaway from these TIC 3.0 updates is that you don’t have to navigate them alone. An experienced partner can help you:

  • Digest the descriptive nature of TIC 3.0 guidelines and align it with your agency’s mission and internal security requirements
  • Define trust zones and configure the tools that will make up your PEPs while significantly reducing the risk of misconfiguration
  • Accelerate innovation while embracing the spirit of the TIC 3.0 building block model

There are a lot of exciting changes within these core guidance documents. With the right partner on your side, you can continue to cloud confidently while reaching new heights of innovation.

Leveraging Amazon EC2 F1 Instances for Development and Red Teaming in DARPA’s First-Ever Bug Bounty Program

This past year, Effectual’s Modernization Engineers partnered with specialized R&D firm Galois to support the launch of DARPA’s first public bug bounty program, Finding Exploits to Thwart Tampering (FETT). The project represents a unique use case showcasing Effectual’s application expertise, and was approved this week to be featured on the AWS Partner Network (APN) Blog.

Authored by Effectual Cloud Architect Kurt Hopfer, the blog will reach both AWS customers and technologists interested in learning how to solve complex technical challenges and accelerate innovation using AWS services.

Read the full post on the AWS APN Blog

In 2017, the Defense Advanced Research Projects Agency (DARPA) engaged research and development firm Galois to lead the BESSPIN project (Balancing Evaluation of System Security Properties with Industrial Needs) as part of its System Security Integrated through Hardware and Firmware (SSITH) program.

The objective was to develop tools and techniques to measure the effectiveness of SSITH hardware security architectures, as well as to establish a set of “baseline” Government Furnished Equipment (GFE) systems-on-chip (SoCs) without hardware security enhancements.

While Galois’s initial work on BESSPIN was carried out entirely using on-premises FPGA resources, the pain points of scaling out to a secure, widely-available bug bounty program soon emerged.

It was clear that researchers needed to be able to stress test SSITH hardware platforms without having to acquire their own dedicated hardware and infrastructure. Galois leveraged Amazon EC2 F1 instances to scale infrastructure, increase efficiencies, and accelerate FPGA development.

The company then engaged AWS Premier Consulting Partner Effectual to ensure a secure and reliable AWS environment, as well as to develop a serverless web application that allowed click-button FPGA SoC provisioning to red team researchers for the different processor variants.

The result was DARPA’s first public bug bounty program—Finding Exploits to Thwart Tampering (FETT).

Education and the Cloud

As cloud computing continues to grow within state and local government, it has also become increasingly popular in the education industry.

AWS started an initiative called AWS Educate to provide students and educators with the training and resources needed for cloud-related learning. Cloud computing skills are in high demand throughout the state of Texas, especially as an increasing number of state and local government agencies embark on cloud migrations. Government migration to the cloud has been a slow process, but the education sector is ahead of it, driven by students, teachers, faculty, staff, and parents who need access to critical information from any device, anywhere. Educators can benefit from migrating to the cloud in many ways: it is cost efficient, offers stable data storage, provides development and test environments, enables easier collaboration, enhances security without add-on applications, simplifies application hosting, minimizes resource costs, and delivers fast implementation and time-to-value.

With all the capabilities of cloud environments, the education industry still has a long way to go. Certain school districts, and even higher education institutions, do not have the same level of access as some of their counterparts. Cloud vendors could make a difference and solidify cloud adoption by offering cloud education to urban neighborhood schools, with laptops, computers, and access to training and certifications. As a start, the three major cloud providers offer cloud education assistance to students.

When it comes to the rapid advancement of the IT industry, I encourage other young minorities, including my daughter, to pursue a career in technology. Children are the future, and cloud platforms will be a leading solution across all markets.

We offer a bundled package for new users that includes an assessment of their current infrastructure, which can be beneficial to any higher education institution or K-12 organization. We can build the future together and keep rising to greater heights!

Reach out to Thy Williams, twilliams@effectual.com, to learn more about our capabilities and discuss our starter package.