FedRAMP High Agency Authority to Operate on VMware Cloud on AWS GovCloud (US): What it Means for the Public Sector

VMware recently announced its VMware Cloud on AWS GovCloud (US) achieved FedRAMP Agency Authority to Operate (ATO) at the High Impact Level. High Impact is the most secure authorization level a cloud service provider can achieve through FedRAMP, the Federal Risk and Authorization Management Program.

VMware Cloud on AWS gives public sector IT teams an on-demand, scalable hybrid cloud service, enabling those teams to seamlessly extend, migrate and protect their infrastructure in the cloud.

This announcement has big implications for the public sector, especially for the organizations already using VMware in some capacity — which is a majority of agencies. 

What the FedRAMP High Agency ATO means for government agencies

Within the industry, FedRAMP and FISMA (Federal Information Security Management Act) are often spoken about interchangeably. While both are based on NIST 800-53 and have an end goal of ensuring government data is protected, here’s a quick overview of the distinction: 

  • FISMA offers guidelines to government agencies through a series of controls on how to protect systems and data, in transit or at rest – providing the baseline controls an agency must achieve for a workload when that workload is not already accredited through FedRAMP
  • FedRAMP is more stringent and provides an accreditation, controls, documentation, and instructions which can be inherited in an agency ATO

FedRAMP approval means a third party has reviewed a software offering and confirmed it meets FISMA control specifications if deployed per the FedRAMP approved process, which can save agencies a tremendous amount of time and reduce the strain on agency engineering teams. 

NIST 800-53 prescribes controls for systems that have been categorized using the guidance found in FIPS-199 concerning confidentiality, integrity, and availability of data. Workloads that have not been categorized and had the proper controls deployed for the appropriate FISMA classification are not ready for production data.

Absent an ATO, the agency is often limited to testing workloads using sample data in development or test environments. FedRAMP inheritance provides an agency the fastest path to deploying a workload into production and achieving an agency ATO. 

With the achievement of FedRAMP ATO, government agencies within the public sector can now experience the benefits of VMware Cloud on AWS more rapidly. For example, an agency can deploy VMware Cloud on AWS GovCloud (US) with the FedRAMP package and inherit all the security controls available within the FedRAMP assessment. 

Data center migration to VMware Cloud on AWS GovCloud (US)

Many organizations have time-limited data center leases. When the next data center lease renewal is on the horizon, the decision to stay in a physical data center or vacate to the cloud is likely part of the overall financial analysis.

Planning to vacate a physical data center can quickly become stressful. Do you need new contracts in place? More engineers? What kind of resources are required? What technical debt is incurred by the decision to vacate?

Agencies are rapidly consolidating and moving away from the physical data center model. Renewing data center leases because “our agency couldn’t get out in time” becomes a less-than-desirable option. However, the alternative agencies frequently turn to is accelerating modernization — often while misjudging their true technical debt. This often leads to missed timelines, last-minute data center lease extensions, and a re-baselining of the overall project with new, unplanned funding.

Most agencies are not running physical data center operations on bare metal. Many already have VMware in place today. An agency with VMware wants to migrate its applications, workloads, and data to the cloud quickly — it doesn’t want to take the time to refactor everything to cloud-native infrastructure.

By moving to VMware Cloud on AWS GovCloud (US), agencies can implement a more expedient option: Inherit the FedRAMP ATO and then rapidly and safely move each workload to the cloud while assured the workloads and data remain secure and compliant. In doing so, they can also continue to use standard tools, training, skills, and capabilities on which their staff is already trained.

With this approach, agencies can treat cloud modernization as a marathon rather than a sprint, avoiding hasty decisions that could lead to greater problems down the road.

Benefits of VMware Cloud on AWS GovCloud (US)

FedRAMP provides a “do it once, use it many times” framework for government agencies. The benefits of migrating to VMware Cloud on AWS GovCloud (US) can be significant. Consider the following key advantages:

  • Minimal disruption to operations
    The public expects our government to protect data and maintain continuity of operations, especially during times of national emergency. Moreover, the public expects the government to modernize Information Technology investments. VMware Cloud on AWS empowers agencies to continue normal operations during a migration and allows for a “sandbox” of sorts — empowering development teams to run tests in virtualized environments without risking the foundational integrity of production workloads. 
  • Substantial time savings during migration
    VMware Cloud on AWS is the fastest way for agencies to move workloads that are currently virtualized to the cloud. Many government agencies tend to shy away from services that haven’t achieved FedRAMP accreditation because of the additional investment in time and money required to meet FISMA requirements using non-FedRAMP’ed tools. A FedRAMP ATO helps streamline the entire process. 
  • Access to AWS innovation 
    Once agencies have made the migration from on-premises to VMware Cloud on AWS, they have a far broader set of options for modernization, including powerful AWS cloud native services and features.  
  • Smaller learning curves
    The FedRAMP ATO provides government agencies with the accreditation, controls, documentation, and instructions they need to protect their data. Agencies can move virtual machines (VMs), workloads, and data to AWS from within vCenter without significant investment in learning AWS native tools and services. 
  • Reduced cost for VMware users
    For organizations currently using VMware and vacating an on-premises data center, migration costs will be reduced. Workloads can be migrated seamlessly via vCenter, moving the VMs from the on-premises data center onto AWS.

This FedRAMP ATO achievement for VMware Cloud on AWS GovCloud (US) highlights the value government agencies can realize from migrating to the cloud. We’re already seeing a mindset shift in government agencies, as more organizations start realizing what the cloud can do for them. The FedRAMP ATO at the High Impact Level will only accelerate the capabilities of these agencies.  

Want to see additional ways the cloud can help innovation within the public sector? Click here for more.

Michael Bryant is Vice President, Public Sector Strategy at Effectual, Inc. 

Data Center Modernization with VMware Cloud on AWS

Businesses are facing many challenges as they adapt and respond to the rapid evolution of modern business technologies. Among the most critical is deciding how to use the cloud. While many companies know they want to abandon old-fashioned data centers, they are often unsure of what will take their place. Fortunately, cloud solutions and clear paths for modernization are opening up new flexible, cost-effective options for getting to the cloud.

When considering how to transition traditional data centers to total cloud usage, there are steps that should be taken to avoid migration issues and unforeseen roadblocks, even if it extends the timeline. Instead of trying to rush to the finish line while ignoring the road, we help our customers closely evaluate their options to unlock the promise of cloud. 

One of the best ways to leave data centers behind while minimizing potential business impact to the greatest extent possible is VMware Cloud on AWS – a solution that allows you to start migrating non-cloud applications without moving everything at once or taking systems offline. 

Why Fast Tracking the Cloud is Not Worth the Challenges

Intrigued by the benefits of the cloud, companies may try to make a move before they realize the true undertaking of cloud migration. Though it unlocks additional business value, the process of cloud migration can be more complicated than expected.

For example, taking applications and operations from a traditional setting and moving them to the cloud often requires taking systems offline to complete the detailed transition. For some businesses, any downtime is unacceptable, which means internal teams cannot do the needed work for a cloud migration. 

Companies with Disaster Recovery (DR) requirements have a particularly hard time taking apps down for migration, as these systems must be ready to respond 24/7/365. There is no acceptable time for a nationwide bank or emergency service to be offline, so teams are under incredible pressure to make changes quickly and without error. These tricky, time-limited situations create roadblocks, and when internal IT teams don’t have the time they need, tension and conflict follow — even though there are options that can ease the situation. 

Common problems with rushing a cloud migration:

  • Significant application development work: Attempting to quickly transition one system to another puts additional strain on your engineers and developers.
  • Out of support operating systems: Legacy systems do not always play nice with new initiatives, and you risk system crashes or dealing with unnecessary security vulnerabilities.
  • Overall time to value: Moving to the cloud can take 12+ months, and a business may leap from data center to cloud too hastily – or decide it isn’t ready and re-up its data center contract, which leaves it potentially stuck in a less-than-ideal position.

The Solution: VMware Cloud on AWS

Businesses too often look past modernization options that might better suit what they want to accomplish and are easier to set up. VMware Cloud on AWS provides flexibility and a path forward without limiting future decisions or increasing risk. 

Every business wants to modernize and stay strong as new technology becomes the norm, but some paths are more challenging to complete than others. VMware Cloud on AWS offers benefits that address the common problems above. 

  • Flexibility allows for better progress: Instead of the significant app development work to even begin the transition process, developers can flex their workload up or down and look at each workload individually. 
  • Explore before going live: VMware Cloud on AWS lets teams duplicate existing applications without affecting the live versions, allowing businesses to decide the best path for migration and have everything ready to go before any changes to live services. This is akin to running applications in a “Highly Available” architecture state – offering the choice of where production traffic is directed. 
  • Decrease downtime, increase performance and reliability: When ready, a company can make a well-informed, data-backed decision to run production workloads out of VMware Cloud on AWS, knowing that it is more performant, more stable, and able to meet business demands and outcomes. As a result, VMware Cloud on AWS users see 83% less unplanned downtime and 27% improved app performance.
  • Ensure a risk-free transition to production: Running your Disaster Recovery environment on VMware Cloud on AWS prepares a risk-free path to future modernization and access to AWS cloud native services. This step provides confidence as you turn down the data center and convert to production. Read more about how you can address DR challenges with VMware Cloud on AWS here.

VMware Cloud on AWS trades short-term changes for long-term results that open up future options. If the cloud is the way forward, VMware Cloud on AWS is a stable path that lays the groundwork for decisions down the road while limiting immediate risk. There is no need to rush into all-cloud operations when VMware Cloud on AWS lets you take your time and determine the right approach.

As an alternative to sprinting towards the cloud unprepared, VMware Cloud on AWS is an option that businesses should explore – with potential to yield enormous long-term advantages.

Interested in learning if it’s right for you? Get in touch with us today.

Tom Spalding is Chief Growth Officer at Effectual, Inc. 

Solving Problems with the Cloud

Scaling Your Business & Improving Remote Work with Cloud Innovation

Throughout the life cycle of any business, there are obstacles to overcome. Even if you’re close to perfection in one area, there will always be another challenge looming in front of you like an overwhelming math equation on a chalkboard. And unless you’ve got Will Hunting on speed-dial, you may not know where to begin. 

Customers come to Effectual with a variety of business challenges, but two have stood out in the era of COVID:

  1. How to ensure smooth remote work experiences; and
  2. How to scale quickly to meet growing demand

Challenges of Remote Work

The acceleration of remote work is pushing digital transformation faster as companies adapt and try to deliver work environments that support employee productivity and engagement. Though many of them responded to the remote work reality of the pandemic by offering at-home perks and collaborative online tools, the majority were behind the 8-ball with their remote work options.

Inefficient remote desktops

Remote desktops are one solution companies adopted, yet they can be slow and inefficient – and simply aren’t that innovative when it comes to fostering a positive remote work experience. While using remote desktops for employees who are unable to come to a local office or data center can make sense, latency and performance concerns increase the farther away users sit from the data center serving the solution. The question then becomes, what is their experience like when it comes to latency and collaboration with other remote team members? 

Security vulnerabilities

There are also security concerns with remote employees. About half of workers in the U.S. were hit by phishing emails, phone calls, or texts in the first six months of working remotely. As personal and professional lives blend together, employees may also become a bit lax about using their social media or personal email accounts on a work device. These scenarios leave companies vulnerable to security threats.

The truth is that we’re likely never going to return to pre-pandemic levels of office work. In fact, only one in three U.S. companies indicate plans for a return to the “in-person first” employment model this year, with nearly half of businesses embracing a hybrid workforce. This means concerns about the remote work experience will remain for the foreseeable future.

Tools like Amazon WorkSpaces allow for distributed remote desktops across regions and Availability Zones, placing the desktop as close as possible to the end user to maximize their experience. We have helped many customers deploy Amazon WorkSpaces securely and performantly in response to the remote work landscape.
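
As a rough illustration of this pattern (a sketch, not a production configuration), a WorkSpace can be provisioned in a region close to the end user with a single boto3 call; the directory, bundle, and user values below are hypothetical placeholders.

```python
import boto3

# Hypothetical sketch: provision a WorkSpace in a region near the end user.
# DirectoryId, BundleId, and UserName are placeholders, not real values.
workspaces = boto3.client("workspaces", region_name="eu-west-1")  # region chosen close to the user

response = workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-example1234",          # placeholder directory
            "UserName": "jane.doe",                  # placeholder user
            "BundleId": "wsb-examplebundle",         # placeholder bundle
            "WorkspaceProperties": {
                "RunningMode": "AUTO_STOP",          # stop idle desktops to control cost
                "RunningModeAutoStopTimeoutInMinutes": 60,
            },
        }
    ]
)
print(response["FailedRequests"])  # any requests that could not be provisioned
```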

Roadblocks to Rapid Scaling

Though companies are beginning to recognize that the cloud can help them scale a product worldwide or open new markets, there are still many misconceptions when it comes to how to implement effective modern strategies.

Lack of internal expertise

For example, an executive may get inspired by an article about how a move to the cloud can save money and increase business agility. If they task their internal team with spinning up infrastructure without the guidance of an experienced cloud partner or solution architect, what seemed like a bargain can turn into an expensive project that costs far more. 

Not architecting for failure

In times of growth, you can’t simply move everything that’s on-premises over to the cloud as is and expect the exact same results without proper planning and execution. Werner Vogels, Amazon’s Chief Technology Officer, has reminded us for years that “everything fails, all the time.”

It’s a rare occurrence, but your application’s availability could be more at risk than it was in the data center if your cloud presence hasn’t been architected for this reality. In other words, you must architect for failure. If you prepare properly, you can achieve all that the cloud has to offer in reliability, availability, elasticity, and cost optimization.

When you launch an application, you also do not know what the response will be like without proper testing — you may have ten or ten million people hitting the application all at once. If you haven’t built your app to scale dynamically with demand, it will either crash or its performance will be severely impacted. Either way, end-user experience will suffer.
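
Building to scale dynamically usually comes down to configuring scaling policies before launch. Here is a minimal sketch, assuming a hypothetical EC2 Auto Scaling group named web-app-asg, that uses a target-tracking policy so capacity follows demand:

```python
import boto3

# Minimal sketch: a target-tracking scaling policy keyed on average CPU utilization.
# The Auto Scaling group name is a placeholder.
autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",      # hypothetical group
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,                 # add or remove instances to hold roughly 50% CPU
    },
)
```

With a policy like this in place, ten users or ten million are handled by adding or removing instances rather than by a fixed capacity guess.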

Forgetting to evaluate tradeoffs

Last, companies often fail to evaluate tradeoffs when making decisions. For example, every new technical pattern your team deploys represents a potential increase in cost. It is important to decide how performant you need to be versus how much cost you’re willing to tolerate. 

The gaming industry is an example of using the cloud to make informed decisions around scaling. A company has two to four weeks to make money on a product launch that it’s been building for three to five years. In that first month, infrastructure cost almost doesn’t matter. The product must work — and work well — because latency is its biggest enemy. Those infrastructures are frequently over-provisioned on purpose so they stay performant, and then can scale down when demand stabilizes. 

Working with an experienced cloud partner can help you identify those tradeoffs and be ready to implement tradeoff decisions at the technical level.

Solving Problems with the Cloud

With a clear strategy and the right expertise, you can use the cloud to address these challenges and deliver high-performing, scalable solutions. Here are some primary considerations:

Build performant architecture

Using the global network of the cloud for distributed performance can dramatically improve the internal experience of your remote employees. When you spin up remote desktops in multiple regions around the world using AWS or another cloud provider, you are putting that infrastructure closer to end users so they can execute more effectively. 

Put security tools at the edge

Beyond performant architecture, the cloud offers the ability to put security tools out at the edge. Moving data and compute closer to the end user improves performance for them, and the security tools move right alongside the data and compute. Because protection happens where the infrastructure lives, it covers the whole architecture far more broadly — security is no longer centralized in a single place for all vulnerability identification.

In my role, I’m regularly working with federal civilian and Department of Defense agencies at all Impact Levels, including secret and top-secret workloads — and they’re all using the cloud. These organizations cloud confidently because they’re pushing security tools out in the same regions as compute and storage resources. Those tools protect the point of entry and keep that critical information safe.

Again, that security isn’t as effective without us architecting for each organization’s specific requirements and for the benefits that the cloud provides. 

Develop a migration strategy that fits your objectives

In times of growth, moving your on-premises workloads to the cloud requires a well-defined migration strategy in order to mitigate risk and ensure your operations continue to run efficiently. This is not to say that it can’t happen quickly, but it must include proper preparation and architecting for failure so that your company can truly leverage the benefits of cloud computing.

A recent customer decided to migrate immediately to AWS as a lift-and-shift move in order to keep up with rapidly growing demand. They plan to pursue application and data modernization efforts in the coming months, but because they needed to address urgent issues first, the move to AWS improved both scalability and reliability. We were able to help them take advantage of the immediate benefits of AWS, such as moving databases to the Amazon Relational Database Service (RDS) with little impact to the overall application. Once you have successfully migrated your workloads, there are many opportunities for continued modernization.

Last, if you are considering a move to the cloud, remember that you don’t necessarily need to change everything all at once. One of our customers recently experienced a massive spike in traffic to their on-premises hosted web application. They called us concerned their infrastructure couldn’t handle the traffic. In less than 24 hours, we were able to stand up Amazon CloudFront in front of their servers to ensure all that traffic received a cached version out of the content delivery network. By effectively offloading cached requests to CloudFront, their application remained reliable and highly available to their end users, with nothing migrated to AWS.
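
For a sense of how lightweight that change can be, here is a hedged boto3 sketch of creating a CloudFront distribution in front of an existing on-premises web origin; the hostname and cache settings are illustrative, not the customer’s actual configuration.

```python
import time
import boto3

# Hedged sketch: put CloudFront in front of an on-premises origin so cached
# requests never reach the origin servers. The domain name is a placeholder.
cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),          # unique token for idempotency
        "Comment": "Cache layer in front of on-prem web app",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "onprem-origin",
                    "DomainName": "app.example.com",  # placeholder on-prem hostname
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "onprem-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "MinTTL": 0,
            "ForwardedValues": {                      # legacy cache settings, kept short for the sketch
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
        },
    }
)
```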

The cloud can help you solve even your toughest business problems — if you have the expertise to take advantage of its benefits. Not sure where to start? Learn how we can help. 

Jeff Carson is VP of Public Sector Technology at Effectual, Inc. 

Data Integration: The Key to Business Intelligence

No matter what industry your business is in, advanced cloud technologies allow you to collect a vast amount of data. By 2025, IDC estimates the amount of digital data generated worldwide will grow to 175 zettabytes, with 49% of stored data residing in public cloud environments. Your success at translating this data into valuable business intelligence depends on a well-executed strategy focused on data integration.

While this is not an easy task, collecting data without a comprehensive data strategy is almost the same as collecting no data at all.

Common Data Management Challenges

  • Unclear business objectives and KPIs
    Your data can help you achieve your goals, but without a clear purpose of what you’re working toward, it will only take you so far. Communicating objectives and KPIs across your business and technical teams helps you align on priorities. Once you agree on what you’re trying to solve for, you can then build an integrated data strategy that works toward the solution.

    Miscommunication about the goals for data collection occurs frequently between business departments and IT. For example, imagine your company has just developed a new mobile app and Marketing is interested in tracking the most popular screen to determine user engagement. When that request lands in the IT department, that team may interpret the request from a tactical viewpoint and capture data on which screen received the most clicks during a given period. However, digging deeper into this question from a strategic business lens, it would be more helpful to evaluate engagement by capturing data on which screen users spent the most time on.

    Recommendation: Start with a mutual understanding of business objectives prior to gathering data to save time and frustration.
  • Data silos
    Though organizations are gathering increasing volumes of data, it is often separated into silos by business units (Finance, IT, Marketing, Operations, Legal) that have different lenses for determining what data should be stored and analyzed. These data silos can lead to operational inefficiencies, redundancies, critical errors, and unnecessary cost, turning data management into a cumbersome process – with increased risk to your business.

    Recommendation: Establish a single source of truth (such as a Data Lake) and clearly define proper data governance in the early stages of developing your integrated data strategy.

  • Disorganized and uncategorized data
    Another common issue is gathering data without utilizing proper automated data transformation processes or ETL (Extract, Transform and Load) scripts aligned with your business drivers and objectives. This results in disorganized, uncategorized data and unnecessary costs, leaving different departments unable to track and monitor data – or simply in the dark as to what data is available to them.

    For example, companies often spend hundreds of thousands of extra dollars for unused on-demand instances simply because they aren’t monitoring them. This might occur when your development team builds a proof of concept for a small process, spins up several servers, and simply fails to spin them down after testing. Due to a lack of visibility into cloud spend data and poor communication between teams, you will still be spending money on those servers even though you are not using them.

    Recommendation: Leverage data automation and governance services from Amazon Web Services (AWS) such as AWS Config to enforce tagging standards and improve resource monitoring (see the sketch below). This is also critical for categorizing data that has a regulatory compliance impact (GDPR or PCI, for example).
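
A minimal sketch of that recommendation, assuming illustrative tag keys (CostCenter and Environment) and EC2 instances as the scoped resource type, enables the AWS Config managed required-tags rule with boto3:

```python
import json
import boto3

# Hedged sketch: enable the AWS Config managed "required-tags" rule so resources
# missing the required tags surface as non-compliant. Tag keys are illustrative.
config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "required-tags",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "REQUIRED_TAGS",   # AWS managed rule identifier
        },
        "InputParameters": json.dumps(
            {"tag1Key": "CostCenter", "tag2Key": "Environment"}
        ),
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)
```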

Advantages of an integrated data strategy

Overcoming these challenges and ensuring data integration requires a thoughtful approach to data management. As the MIT Center for Information Systems Research notes, a successful data strategy builds a foundation — “a central, integrated concept that articulates how data will enable and inspire business strategy.”

As a partner, we can help you develop a plan that ensures you’re collecting the right data in the right manner to access the most valuable insights for powering your business.

This alignment has several key advantages:

  • Make better business decisions: If you’re not tracking data and gaining real insights — not just what people say is happening but analyzing the actual numbers driving outcomes — you cannot make truly informed business decisions. Going from merely having data to having a plan for using that data as business intelligence provides you visibility into aspects of your organization that you may not have had before.
  • Reduce your risk: Integrating business logic, tagging, and automating data transformation processes improves accountability across teams, reducing the potential for inadequate security or inconsistent processes. A lack of data organization could lead to incorrect or duplicate data with potentially serious ramifications.

  • Meet regulatory governance and compliance: A data strategy reinforces your governance, risk management, and compliance efforts by introducing accountability requirements within your teams. As the Data Governance Institute asserts: “establishing appropriate checks-and-balances that can guide management efforts is probably the single most important role of Data Governance.”
  • Increase your ability to predict, respond, and adapt to unforeseen changes: Though AI and machine learning are rapidly transforming our ability to respond to economic disruption and other factors, you still need to inform learning models and provide proper business logic to leverage those technologies properly. This is particularly true with predictive analytics using a machine learning service like Amazon Forecast for more accurate forecasts.

  • Mitigate seasonality and economic factors: The COVID-19 pandemic is a stark example of an event that shifted spend and focus across business units, creating an enormous change to operations. AI/ML based data science can help you navigate unpredictability and analyze complex variables to determine seasonal trends. These highly accurate tools allow you to plan and adapt accordingly.

  • Gain competitive advantage: The ability to leverage data for business intelligence and predictive analytics provides organizations in every industry one of today’s biggest competitive advantages – especially in a dynamic, quickly changing marketplace. Beyond mitigating risk and unforeseen factors that could adversely affect your business, you can also identify new growth opportunities for your products or services.

An integrated data strategy delivers valuable business intelligence for your organization, offering a roadmap for success both in the short- and long-term. Though you may think about tackling it on your own, it’s a lot easier to navigate with someone who’s been there before and can access advanced AWS services to unlock your valuable data — the right way.

Learn how you can get started.

Zach Shapiro is a Solutions Architect at Effectual, Inc. 

How A Cloud Partner Helps You Maximize AWS Cloud Services

Working with an experienced cloud partner can fill in knowledge gaps so you get the most out of the latest cloud services

Cloud services have been widely adopted for years, which is long enough for users to gain confidence in their understanding of the technology. Yet as cloud continues to grow and develop, customer knowledge does not always grow proportionately. When users become overconfident, they can unknowingly overlook new Amazon Web Services (AWS) technologies and services that could have a significant impact on positive business outcomes.

What cloud customers are overlooking

  • Control is not necessarily security
    In the early days of the cloud, many companies were reluctant to offload everything to cloud services due to security concerns. Today, many CTOs and IT Directors are still unsure how security translates to the cloud and remain hesitant about giving up control.

    AWS provides a proven, secure cloud platform for migrating, building, and managing workloads. However, it takes knowledge and expertise to take individual services and architect them into a solution that maintains, or even heightens security. A partner well-versed in AWS services and advanced cloud technologies can identify and deploy tools and services that strengthen your security posture and add value to your business.
  • Keeping up with cloud innovation is an investment in continual learning
    This can be a tall order for organizations with limited internal resources or cloud knowledge. Partnering with cloud experts who stay constantly informed about new AWS services – and know how to implement them – gives you immediate access to cloud innovation. It also frees up your developers and engineers to focus on core initiatives.
  • Aligning business and IT delivers better results
    Internal teams that pitch a cloud-forward strategy often face hesitancy from business leaders. This is because executives have historically made decisions about how to allocate and manage IT resources, leaving developers to work within the parameters they are presented. However, involving solutions architects and cloud engineers in decision-making brings a crucial technical perspective that uncovers additional options with better results.

    Bridging this gap is a matter of translation, as what makes sense to an in-house developer might seem like jargon to executives in other business units. Because our engineers understand both business and technology, we can bring clarity to modernization initiatives by acting as a translator between business and IT – preventing major communication and technical headaches down the line.  

The benefits of pairing managed and professional services

If your cloud partner is capable of handling larger professional services projects such as migrations, app development, and modernization as well as the ongoing maintenance of managed services, you will be far more successful at optimizing resources, improving security, reducing stress, and realizing cost savings.

There are several advantages of pairing professional and managed services:

  • Reduce operational overhead and optimize workloads
    Allowing a partner to directly manage more systems reduces your operational overhead and optimizes workloads. This guarantees your business will not get bogged down with redundant operations or pay for more computing power than is truly needed.

    For instance, you may be paying high colocation costs to house everything in your data center. By engaging a partner that offers both professional and managed services, you can move a workload safely from on-premises to the cloud with the same functionality, make it more secure, maintain compliance, and have confidence it is being optimized for performance and cost.
  • Upgrade and modernize more efficiently
    Having professional services and managed services under one roof makes it easier and more efficient to upgrade or modernize. Changes to infrastructure go much smoother with a trusted cloud partner at the wheel who has access to customer systems. Without access, the partner has to navigate the back and forth between client-controlled systems and new professional services before any real progress can take place.

    The goal is not to scrap an entire in-house system, but to develop a smooth transition where managed and professional services work in harmony. With the right context, and the right cloud partner, you can translate the ROI of pairing professional services and managed services so your executives are onboard with cost-saving proposals and your developers have a seat at the table.

In summary, you can maximize the benefits of cloud services by engaging a partner with the technical expertise, business experience, and deep knowledge of AWS services to support your modernization efforts.

Connect with our Modernization Engineers™ to find out how we can help you unlock the full power of the cloud.

Jeff Finley is a Senior Cloud Architect at Effectual, Inc. 

How Private NAT Gateways Make for Easier Designs and Faster Deployments

NAT Gateways have historically been used to protect resources deployed into private subnets in virtual private clouds (VPCs). If resources on a private subnet in a VPC need to access information outside the VPC (on the internet or on premises) and you want to block incoming connections to those resources, you use a NAT Gateway. The NAT Gateway lets those resources reach out and allows the response traffic to return, but still doesn’t allow a connection that originated from the outside to get in.

The core functionality of a NAT Gateway is allowing that one-way request origination flow.

Earlier this month, AWS announced that you can now launch NAT Gateways in your Amazon VPC without associating an Internet Gateway with your VPC. The private NAT Gateway allows you to route directly to Virtual Private Gateways or Transit Gateways, without an Internet Gateway in the path, for resources that need to reach out to internal tools such as a data center, another VPC, or something else on premises.
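
As a rough sketch of what this looks like in practice (the subnet, route table, and on-premises CIDR below are placeholders), a private NAT Gateway takes only a couple of boto3 calls:

```python
import boto3

# Minimal sketch: create a private NAT gateway and route on-premises-bound
# traffic through it. IDs and the CIDR are placeholders.
ec2 = boto3.client("ec2")

nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",     # placeholder private subnet
    ConnectivityType="private",              # no Internet Gateway or Elastic IP required
)
nat_gateway_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gateway_id])

# Send traffic destined for the on-premises range through the private NAT gateway;
# the NAT gateway's own subnet then routes onward to a Transit or Virtual Private Gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",    # placeholder route table
    DestinationCidrBlock="10.0.0.0/8",       # placeholder on-premises CIDR
    NatGatewayId=nat_gateway_id,
)
```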

This might seem like a modest bit of news, but it will lead to improved performance on both the business and engineering levels and demonstrates the constant innovation AWS provides its customers. 

Innovation continues at the fundamental level 

Continuous innovation has always been at the core of how AWS approaches problem solving. This is true for the bleeding edge of technology and for the fundamental building blocks of well-established disciplines, such as networking. The idea of Network Address Translation (NAT) isn’t anything new; it’s been a core building block for years. 

In the past, though, you would have done it on your own server, deploying, configuring, and maintaining a NAT instance. AWS brought the NAT Gateway into the fold; the differentiator being that this was a managed service offering that lets you use infrastructure as code or simply make a few clicks in your console to attach a NAT Gateway to a private subnet in a VPC, so you don’t have to worry about the underlying infrastructure. There are no third-party tools or complex configuration to worry about. 

With the private NAT Gateway, AWS is instilling a new feature into something that’s fundamental, making it easier for companies and individuals to be more efficient and productive. 

We see the same type of excitement and attention to detail both with major launches and when introducing new ways to use a product or service that already exists. It’s that combination of innovation for fundamental core offerings to make them easier to use plus bleeding edge innovation that really highlights the depth of expertise across AWS and its partners. 

Learn more about AWS Private NAT Gateways here

A boost in efficiency

Before the private NAT Gateway, every NAT Gateway had a hard dependency on an Internet Gateway being attached to the VPC for egress. Communication to on-premises resources via a NAT Gateway required routing through an Internet Gateway or some other networking construct, which added a layer of complexity to protecting those private resources in a private VPC. The real benefit of this feature is the ease of protecting those resources that need to reach out. 

At its core, the private NAT Gateway is simplifying that outgoing request pattern — and it leads to a boost in efficiency. 

For businesses, the NAT Gateway allows a consistent managed service offering from AWS to protect those private resources that need outbound connectivity from a private subnet in a VPC. Prior to the private NAT Gateway, you would have needed to solve for that idea using a third-party tool or a more complex networking architecture. 

Security departments now have a more secure pattern that is less prone to misconfiguration since an Internet Gateway is no longer a dependency. This makes understanding how the organization is applying NAT Gateways and how operations teams are managing them easier. They are able to standardize on a private NAT Gateway approach, reproducing the pattern for the networking path for these outgoing requests consistently.

For individual engineers, a private NAT Gateway simplifies design and deployment because of its inherent ease — a few clicks in your console or lines in infrastructure as code, rather than relying on a more cumbersome third-party tool or a more complex configuration. AWS is extending the functionality of the managed service part of a NAT Gateway to the specific use case of handling private subnet outgoing traffic. This addition makes design easier, it makes deployment faster, and it makes the entire subject more repeatable, because you’re just consuming that managed NAT Gateway service from AWS.

Why this is worth a look

As an engineer, I certainly understand the mindset of wanting to minimize complexity. Enterprise users have NAT Gateways deployed with a dependency on an Internet Gateway and likely have more complex routing solutions in place to protect against unintended incoming requests via that Internet Gateway. Those solutions might be working just fine, and that’s great.  

But from my vantage point, I strongly encourage you to take another look at your egress-only internet and NAT Gateways architecture for private subnets. You could be missing an opportunity to greatly streamline how you work.

At worst, you can simplify how you use your “egress-only” communications. At best, you’ll eliminate a third-party tool and save money while freeing up more of your individual time.

That’s worth taking a second look at the way you’re operating. We should be regularly evaluating our deployments anyway, but it especially applies in networking complexity and simplification. 

I look forward to the improved ease of use for my clients with private NAT Gateways, and am confident you’ll find a similar model of success with your deployments.   

Leveraging Amazon EC2 F1 Instances for Development and Red Teaming in DARPA’s First-Ever Bug Bounty Program

This past year, Effectual’s Modernization Engineers partnered with specialized R&D firm Galois to support the launch of DARPA’s first public bug bounty program – Finding Exploits to Thwart Tampering (FETT). The project represents a unique use case showcasing Effectual’s application expertise, and was approved this week to be featured on the AWS Partner Network (APN) Blog.

Authored by Effectual Cloud Architect Kurt Hopfer, the blog will reach both AWS customers and technologists interested in learning how to solve complex technical challenges and accelerate innovation using AWS services.

Read the full post on the AWS APN Blog

In 2017, the Defense Advanced Research Projects Agency (DARPA) engaged research and development firm Galois to lead the BESSPIN project (Balancing Evaluation of System Security Properties with Industrial Needs) as part of its System Security Integrated through Hardware and Firmware (SSITH) program.

The objective was to develop tools and techniques to measure the effectiveness of SSITH hardware security architectures, as well as to establish a set of “baseline” Government Furnished Equipment (GFE) systems-on-chip (SoCs) without hardware security enhancements.

While Galois’s initial work on BESSPIN was carried out entirely using on-premises FPGA resources, the pain points of scaling out to a secure, widely-available bug bounty program soon emerged.

It was clear that researchers needed to be able to stress test SSITH hardware platforms without having to acquire their own dedicated hardware and infrastructure. Galois leveraged Amazon EC2 F1 instances to scale infrastructure, increase efficiencies, and accelerate FPGA development.

The company then engaged AWS Premier Consulting Partner Effectual to ensure a secure and reliable AWS environment, as well as to develop a serverless web application that allowed click-button FPGA SoC provisioning to red team researchers for the different processor variants.

The result was DARPA’s first public bug bounty program—Finding Exploits to Thwart Tampering (FETT).

Learn more →

Empowering Marketers and Driving Customer Engagement with Amazon Pinpoint

In an increasingly virtual world of remote work, online learning, and digital interfacing, successful customer engagement can differentiate you from competitors and provide deeply valuable insights into the features and innovations important to your users. A well-designed, well-managed user experience not only helps you gain market share, but also uncovers new revenue opportunities to grow your business.

At Effectual, we begin projects with an in-depth discovery process that includes persona development, customer journey mapping, user stories, and UX research to design solutions with engaging, meaningful user experiences. Post-launch, your ability to capture, iterate, and respond to user feedback is just as essential to your success.

In our experience, many SaaS-based companies simply miss this opportunity to stay engaged with their customers. Reasons for this include the complexity and cost of designing, deploying, and managing customized marketing campaigns across multiple channels as well as the lack of real time data analytics to inform them. The result is a tidal wave of generic emails, poorly-timed push notifications, and failed initiatives that impact customer retention and engagement.

Amazon Pinpoint is a scalable outbound and inbound marketing communications service that addresses these challenges and empowers marketers to engage with customers throughout their lifecycle. The service provides data insights and a marketing dashboard inside the Amazon Web Services (AWS) admin console for creating and managing customized communications, leveraging automation, data analytics, filters, and integrations with other AWS products and third-party solutions.

Easy to use and scale

  • Manage campaigns from a user friendly marketing dashboard
  • Scale reliably in a secure AWS environment

Targeted customer groups

  • Segment audiences from mobile and web application data or existing customer list

Customized messaging across email, SMS, push notifications

  • Personalize content to engage customers using static and dynamic attributes
  • Create customer journeys that automate multi-step campaigns, bringing in endpoints from your app, API or directly from a CSV
  • Engage customers with targeted emails and push notifications from the AWS admin portal using rich text editor and customizable templates

Built-in analytics

  • Set up customer endpoints by user email, phone number, or user ID to track user behavior within your app (see the sketch after this list)
  • Use real time web analytics and live data streams to capture immediate feedback
  • Measure campaign data and delivery results against business goals
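
As a hedged illustration of the endpoint setup above (the project ID, endpoint ID, address, and attributes are placeholders), a single boto3 call registers or updates an endpoint tied to a user:

```python
import boto3

# Hedged sketch: register a customer endpoint in Amazon Pinpoint so campaign
# engagement can be tied back to a user. All IDs and attribute values are placeholders.
pinpoint = boto3.client("pinpoint")

pinpoint.update_endpoint(
    ApplicationId="exampleappid1234567890",   # placeholder Pinpoint project ID
    EndpointId="user-42-email",               # placeholder endpoint ID
    EndpointRequest={
        "ChannelType": "EMAIL",
        "Address": "customer@example.com",    # placeholder address
        "User": {
            "UserId": "user-42",
            "UserAttributes": {"PlanTier": ["premium"]},
        },
        "Attributes": {"FavoriteScreen": ["checkout"]},
    },
)
```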

Integrations with other AWS services and third-party solutions

For marketers, Amazon Pinpoint is a powerful tool for improving digital engagement – particularly when integrated with other AWS services that utilize machine learning and live stream analytics. Organizations that invest in designing engaging user experiences for their solutions will only benefit from continually improving and innovating them.

Have an idea or project to discuss? Contact us to learn more about using Amazon Pinpoint to improve your customer engagement.

App Modernization: Strategic Leverage for Managing Rapid Change

The last few months of the COVID crisis have made the case for modernization even more evident, dramatically exposing security faults and the limitations of outdated monolithic applications and costly on-premises infrastructure. This lack of modernization is preventing many businesses and agencies from adapting to new economic realities and finding a clear path forward.

Whether improving efficiencies with a backend process or creating business value with a new customer-facing app, modernizing your IT solutions helps you respond quickly to changing conditions, reduce your compliance risk, and optimize costs to match your needs. Applications that are already architected to take advantage of the cloud also provide flexibility to address scalability and performance challenges as well as to explore new opportunities without disrupting budgets and requiring heavy investment.

First, what defines technologies that are NOT modern?

  • Inflexible monolithic architectures
  • Inability to scale up or down with changes in demand
  • Security only implemented on the outside layer, not at the component layer
  • Costly on-premises infrastructure
  • Legacy hardware burdens
  • Waterfall development approaches

Maintaining legacy technologies is more expensive than modernizing them

Some of the most striking examples of the complexity, costs, and failures associated with legacy technologies have recently been seen in the public sector. In fact, some state unemployment systems have failed to handle the overwhelming increase in traffic and demand, impacting those in greatest need of assistance. Some within the public sector are already taking measures. Beth Cappello, acting CIO of the US Department of Homeland Security, recently stated that had her predecessors not taken steps to modernize their infrastructure and adopt cloud technologies, the ability for DHS personnel to remain connected during the pandemic would have been severely impacted.

Many government applications run on 30+ year-old mainframe computers using an antiquated programming language, creating a desperate need for COBOL developers to fix the crippled technologies. What the situation reveals is the dire need to replatform, refactor, and rearchitect these environments to take advantage of the scalability, reliability, and performance of the cloud.

Benefits of modernization:

  • Security by design
  • Resilient microservices architecture
  • Automated CI/CD pipeline
  • Infrastructure as code
  • Rapid development, increased pace of innovation
  • Better response to customer feedback and market demands
  • Flexible, pay-as-you-go pricing models
  • Automated DevOps processes
  • Scalable managed services (e.g., serverless)
  • In-depth analytics and data insights

The realities of preparing for the unknown

As a result of shelter-in-place orders since early March, we have seen both the success of customers who have modernized as well as the struggles of those still in the process of migrating to the cloud.

Food for All is a customer with a farm-to-table grocery app that experienced a 400x increase in revenue as people rushed to sign up for their service during the first few weeks of the pandemic. Because we had already built their architecture for the Amazon Web Services (AWS) cloud, the company’s technology environment was able to scale easily to meet demand. In addition, they have a reliable DevOps environment that allowed them to immediately onboard more developers to begin building and publishing new features based on user feedback.

Unfortunately, other customers have not been able to adapt as quickly.

When one of our retail clients lost a large number of customers in the wake of COVID, they needed help scaling down their environment as rapidly as possible to cut their costs on AWS. However, the inherited architecture had been written almost 10 years ago, making it expensive and painfully time-consuming to implement adjustments or changes. As a result, the company is currently weighing whether to turn off their app and lose revenue or invest in modernizing it to recover their customers.

For another large enterprise customer, the need to reduce technology costs meant laying off a third of their payroll. Though our team is helping them make progress on refactoring their AWS workloads, they were still unable to scale down 90% of their applications in time to avoid such a difficult decision. The situation has significantly increased their urgency to modernize.

The need for a cloud-first modernization service provider

With AWS now 14 years old, it is important to realize that modernization is just as important to early adopters as it is for the public sector’s legacy workloads. In fact, many early cloud adopters have not revisited their initial architectures to ensure they are taking advantage of the hundreds of new features and services released by AWS each year (during Andy Jassy’s 2019 re:Invent keynote alone, he announced 30 new capabilities in 3 hours). For these reasons, and many more, our Modernization Engineers help customers make regular assessments of their cloud infrastructure and workloads to maintain a forward-looking, modern IT estate.

Whether migrating out of an on-premises data center or colo, rearchitecting an existing cloud workload, or developing with new cloud-native features, it has never been more important to implement a modern cloud strategy. This is particularly true for optimizing services across your organization and embracing security as a core pillar.

According to Gartner, 99% of cloud security failures through 2025 will be the customer’s fault. Clearly, no organization wants to be a part of this statistic. Ongoing management of your critical workloads is a worthy investment that ensures your mission-critical assets are secure. The truth is that if security isn’t done right, it simply doesn’t matter.

We work frequently with customers looking to completely exit their data center infrastructure and migrate to an OPEX model in the cloud. In these engagements, we identify risks and dependencies using a staged approach to ensure the integrity of data and functionality of applications. However, this migration or “evacuation” is not an end state. In fact, it is often the first major milestone on a client’s journey toward continuous improvement and optimization. It is also nearly impossible to do efficiently without modern technology and the cloud.

Modern cloud management mitigates risk and enables modernization

While some workloads and applications may be considered cloud-ready for a relatively straightforward lift and shift migration, they can usually benefit from refactoring, rearchitecting, or replatforming based on a thorough assessment of usage patterns. Cloud adoption on its own will only go so far to improve performance and organizational flexibility.

A modern digital strategy allows you to unlock the true capabilities of the cloud, increasing scalability, agility, efficiency, and one of the most critical benefits of any modernization initiative – improved security. Modernized technologies can also utilize cutting edge security protocols and continuous compliance tools that are simply not available with physical infrastructure.

Unlike traditional MSPs (Managed Service Providers) who manage on-premises servers in physical data centers, Effectual is a cloud-first Modernization Service Provider that understands how to modernize applications, their metrics, operational costs, security implications, and compliance requirements. When our development team finishes a project, our customers can Cloud Confidently™ knowing that their environment is in experienced hands for ongoing management.

Most importantly, the path to modernization is not necessarily linear, whether you are developing an application specifically for the cloud, refactoring or rearchitecting as part of a data center migration, or updating and securing an existing cloud environment. New ideas, priorities, and changes to the world we live in require that we adapt, innovate, and rethink our approach to solving business challenges in even the most uncertain times.

When your organization or team needs the power to pivot, we have the Modernization Engineers, systems, tools, and processes to support that change.

Ready to begin your modernization journey?
Contact us to get started.

Ryan Comingdeer is the Chief Cloud Architect at Effectual.

Using Proofs of Concept to Increase Your ROI

Not so long ago, R&D departments had to fight for internal resources and justify capital expenditures in order to explore new technologies. Developing on-premises solutions was expensive and time-consuming, and decisions were focused on ensuring success and avoiding failure.

In the past 5 years, cloud platforms have radically sped up the pace of innovation, offering companies of all sizes the ability to build, test, and scale solutions at minimal cost. Technology is now a tool to differentiate yourself from your competitors, increase your margins, and open up new markets.

Small investments, big payoffs

By committing only a small portion of your budget to R&D, you can now leverage plug and play cloud services to experiment and test Proofs of Concept (POCs) with potentially huge bottom line payoffs. For large companies, utilizing POCs requires a shift away from risk-averse waterfall development to an agile approach that embraces failure as a path to innovation.

Enterprise organizations can learn from entrepreneurs, who’ve been natural early adopters when it comes to cloud solutions. Startups aren’t afraid of using pay-as-you-go services to build quick POCs for validating markets, testing technical features, and collecting customer feedback. Far more comfortable with agile development, successful early-stage companies like Effectual customer Warm Welcome are adept at taking calculated risks and viewing failure as an invitation for learning.

In contrast, enterprise customers may struggle at first to embrace an agile approach and accept failure as an opportunity for insight. As established businesses, they also make the mistake of assuming reputation alone will ensure successful outcomes and often downplay the importance of customer feedback. However, this changes quickly after companies gain experience with POCs and understand the value of testing their assumptions before committing to building out final solutions.

POC vs MVP: What’s the difference?

A Proof of Concept is the first phase of designing a software application. A POC allows you to quickly solve a business challenge for a specific use case in order to:

  • Evaluate tradeoffs
  • Measure costs
  • Test technical functionality
  • Collect user feedback 
  • Determine market acceptance

POCs are time-boxed (defined by a set number of hours), with clear KPIs (key performance indicators) for measuring your results. This keeps costs low and provides rapid insights into what changes need to be made before you invest significant resources to scale the solution.

POCs are rolled out to a controlled, focused group of users (“friends and family”) with the goal of quickly figuring out cost and technical issues. It’s not uncommon to go through 3-4 POCs before finding the one you’re ready to advance. Failure is an accepted and necessary part of this process.

For example, one of our large retail customers has dedicated $4k/month to its R&D backlog pipeline. At the beginning of the year, we sat down with their team to identify 4-5 business problems the company wanted to tackle. For one particular business problem, we developed and tested two different POCs (one cloud-based, one on-premises) before finding a hybrid solution that was the right compromise between cost and functionality.

To minimize risk, they rolled out their hybrid POC to a single store location in order to collect user feedback. Only after making recommended changes did the company commit to moving forward with an MVP at multiple locations across several states. Within 18 months, they saw a significant return on their investment in both higher sales and increased customer retention.

A Minimum Viable Product (MVP) is a feature-boxed solution that turns your proven concept into a functional basic product you can test with a wider user base. While it resides outside of a critical business path, an MVP usually requires greater investment and takes longer to evaluate. The goal of an MVP is to:

  • Increase speed to market
  • Establish loyal users
  • Prove market demand
  • Collect broader customer feedback

Organizations of any size can use Proofs of Concept to ensure the fast delivery of a final product that meets the needs of customers and provides a measurable return on investment. Learn more about how a POC can drive your business forward.

Have an idea or project to discuss? Contact us to learn more.

AWS IoT Solutions Accelerate the Potential of Edge Computing

AWS IoT Solutions Accelerate the Potential of Edge Computing

IoT is revolutionizing consumer markets, large-scale manufacturing and industrial applications at an incredible pace. In virtually every industry, these technologies are becoming synonymous with a competitive advantage and winning corporate strategies.

We’re witnessing the same trend with our own customers, as companies integrate IoT solutions with their offerings and deploy edge devices to improve customer experience, reduce costs, and expand opportunities.

Installing these smart devices and collecting data is relatively easy. However, processing, storing, analyzing, and protecting massive volumes of that data is where (Internet of) Things gets complicated.

As an AWS Premier Consulting Partner, Effectual guides customers on how to leverage Amazon Web Services (AWS) innovation for their IoT solutions. This includes building performant, cost-effective cloud architectures based on the 5 Pillars of the Well-Architected Framework that scale quickly and securely process real-time streaming data. Most importantly, we apply AI and machine learning (ML) to provide clients with meaningful analytics that drive informed business decisions.

Two of the most common AWS services we deploy for IoT projects are AWS Lambda and Amazon DynamoDB.

AWS Lambda: Serverless computing for continuous scaling
A fully managed platform, AWS Lambda runs code for your applications or backend services without requiring any server management or administration. It also scales automatically with your workload, using a flexible consumption model where you pay only for the computing resources you consume. While Lambda is an excellent environment for any kind of rapid, scalable development, it’s ideal for startups and growing companies that need to conserve resources while scaling to meet demand.
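
As a minimal sketch of what that looks like in practice, the Python handler below accepts a single event and returns a response. The event shape, field names, and trigger are hypothetical and would depend on how your function is actually invoked (API Gateway, IoT Core, Kinesis, etc.).

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda handler: parse one incoming IoT reading and echo a summary.

    The 'body' field with 'device_id' and 'temperature' keys is a hypothetical
    event shape -- adapt it to whatever service triggers the function.
    """
    body = json.loads(event.get("body", "{}"))
    device_id = body.get("device_id", "unknown")
    temperature = body.get("temperature")

    # In a real workload you would validate the reading and persist it to a data store.
    return {
        "statusCode": 200,
        "body": json.dumps({"device_id": device_id, "temperature": temperature}),
    }
```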

We successfully deployed AWS Lambda for a project with Oregon-based startup Wingo IoT. With an expanding pipeline of industrial customers and limited resources, Wingo needed a cost-efficient, flexible architecture for instant monitoring and powerful analytics. We used Lambda to build a custom NOC dashboard with a comprehensive view of real-time operations.

DynamoDB: Fast access with built-in security
We use DynamoDB with AWS Services such as Amazon Kinesis and AWS Lambda to build key-value and document databases with unlimited capacity. Offering low latency and high performance at scale, DynamoDB can support over 10 trillion requests a day with secure backup and restore capabilities.
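
For illustration, here is a minimal boto3 sketch that writes and then queries a key-value record. The table name, key schema, and attributes are hypothetical placeholders, not the schema of any specific customer solution.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SensorReadings")  # hypothetical table name

# Write one item (with the resource API, numeric attributes go in as int or Decimal).
table.put_item(Item={
    "device_id": "sensor-042",            # partition key
    "timestamp": "2019-06-01T12:00:00Z",  # sort key
    "temperature": 72,                     # example attribute
})

# Query all readings for one device, newest first.
response = table.query(
    KeyConditionExpression=Key("device_id").eq("sensor-042"),
    ScanIndexForward=False,
)
for item in response["Items"]:
    print(item["timestamp"], item["temperature"])
```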

When Effectual client Dialsmith needed a transactional database to handle the thousands of records per second created by its video survey tool, we used DynamoDB to solve its capacity constraints. The service also provided critical backup and restore capabilities for protecting its sensitive data.

In our experience, AWS IoT solutions both anchor and accelerate the potential of edge computing. Services such as AWS Lambda and Amazon DynamoDB can have a lasting, positive impact on your ability to scale profitably. Before you deploy IoT technologies or expand your fleet of devices, we recommend a thorough evaluation of the cloud-based tools and services available to support your growth.

Have an idea or project to discuss? Contact us to learn more.

5 Reasons Your Development Team Should be Using the Well-Architected Framework

5 Reasons Your Development Team Should be Using the Well-Architected Framework

Amazon Web Services (AWS) offers the most powerful platforms and innovative cloud technologies in the industry, helping you scale your business on demand, maximize efficiencies, minimize costs, and secure data. But in order to take full advantage of what AWS has to offer, you need to start with a clear understanding of your workload in the cloud.

How to Build Better Workloads
Whether you’re working with an internal team or an outsourced consulting partner, the AWS Well-Architected Framework is an educational tool that builds awareness of steps and best practices for architecting for the AWS Cloud. We begin all of our development projects with a Well-Architected Review to give clients full visibility into their workload. This precise, comprehensive process provides them essential insights for comparing strategies, evaluating options, and making informed decisions that add business value. Based on our experience, using well-architected best practices and design principles helps you:

1 – Plan for failure
One of the primary Well-Architected design principles is to architect for failure. This means knowing how to mitigate risks, eliminate downtime, prevent data loss, and protect against security threats. The Well-Architected process uncovers potential security and reliability vulnerabilities long before they happen so you can either avoid them or build a plan proactively for how you’ll respond if they do. This upfront effort can save you considerable time and resources. For example, having a disaster recovery plan in place can make it far easier for you to spin up another environment if something crashes.

Clients who plan for failure can improve their Recovery Time Objective (downtime) and Recovery Point Objective (data loss) by as much as 2000%.

2 – Minimize surprises
Mitigating your risks also means minimizing surprises. The Well-Architected Framework offers an in-depth and comprehensive process for analyzing your choices and options as well as for evaluating how a given decision can impact your business. In our Well-Architected Reviews, we walk you through in-depth questions about your workload to create an accurate and holistic view of what lies ahead. When the review answers and recommendations are shared with all departments and stakeholders of a workload, they’re often surprised by the impacts of decisions on costs, performance, reliability and security.

3 – Understand the trade-offs of your decisions
Building well-architected workloads ensures you have options for responding to changing business requirements or external issues, with a structure for evaluating the trade-offs of every one of those choices. If you feel your application isn’t performant, you may have 10 different possible solutions for improving performance. Each one has a tradeoff, whether it be cost, maintainability, or more. The Well-Architected Framework can help your team decide the best option.

Identifying and executing refactoring options based on modern technologies and services can save up to 60% of architecture costs.

As an organization, you should never feel boxed in when it comes to options for improving your workload. The process and questions presented in the Well-Architected Framework can help both your technical and business departments look at all options and identify which ones will have the most positive business impact.

In 75% of the workloads we encounter, the technology department is making the decisions, which means there is no input from business stakeholders as to impacts.

4 – Develop KPIs to monitor the overall health of your application
Choosing KPIs that integrate both technical and business indicators gives you valuable insights into your application’s health and performance. With a Well-Architected approach, you can automate monitoring and set up alarms to notify you of any deviance from expected performance. Once you’ve established this baseline, you can start exploring ways to improve your workload.
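
As one hedged example of what that automation can look like on AWS, the sketch below creates a CloudWatch alarm against a custom application metric. The namespace, metric name, threshold, and SNS topic ARN are hypothetical placeholders; in practice they should map to KPIs your business and technical stakeholders have agreed on.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm if average checkout latency exceeds 500 ms for three consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="checkout-api-latency-high",
    Namespace="MyApp/Checkout",          # hypothetical custom namespace
    MetricName="LatencyMs",              # hypothetical custom metric
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=500,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical topic
)
```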

KPIs should be driven by the business and should include all areas of your organization, including Security, Finance, Operations, IT, and Sales. The Well-Architected Framework provides a well-rounded perspective of workload health.

After a Well-Architected Review, it’s common to have 90% of KPIs defining the health of your application come from other business departments – not just from the IT team.

5 – Align your business requirements with engineering goals
Following well-architected best practices also facilitates a DevOps approach that fosters close collaboration between business managers and engineers. When the two are communicating effectively, they understand both the engineering concepts and the business impacts of their decisions. This saves time and resources and leads to more holistic solutions.

To fully leverage the AWS Cloud, make sure your development team has a strong foundation in the Well-Architected Framework. You’ll be able to build better workloads for the cloud that can take your business further, faster.

Have an idea or project to discuss? Contact us to learn more.

Enabling IT Modernization with VMware Cloud on AWS

Enabling IT Modernization with VMware Cloud on AWS

Cloud and virtualization technologies offer a broad range of platform and infrastructure options to help organizations address their operational needs, no matter how complex or unique, and reduce their dependence on traditional data centers.

As the demand for cloud and cloud-compatible services continues to grow across departments within organizations, cloud adoption rates are steadily rising and IT decision makers are realizing that they no longer need to be solely reliant on physical data centers. This has led countless organizations to shrink their data center footprints.

The benefits unlocked by VMC on AWS can have significant impacts on your organization…including the impressive performance of a VMware environment sitting on top of the AWS backbone.

VMware Cloud on AWS is unique in bridging this gap, as it utilizes the same skill sets many organizations have in-house to manage their existing VMware environments. Sure, there are considerations when migrating, but ultimately the biggest change in moving to VMware Cloud (VMC) on AWS is the underlying location of the software defined data center (SDDC) within vCenter. The benefits unlocked by VMC on AWS can have significant impacts on your organization – eliminating the need to worry about the security and maintenance of physical infrastructure (and the associated hands on hardware to address device failure) as well as the impressive performance of a VMware environment sitting on top of the AWS backbone.

Technology That Suits Your Needs

Full and partial data center evacuations are becoming increasingly common and, while there are instances of repatriation (organizations moving workloads from the cloud back to the data center), the majority of organizations are sticking with “cloud-first” policies to gain and maintain business agility. Sometimes, however, even a company that’s begun their IT modernization efforts may still have systems and applications hosted on-premises or in a data center.

This may seem to indicate some hesitance to fully adopt the cloud, but it’s usually due to long-term strategy, technical barriers to native cloud adoption, or misconceptions about cloud security and compliance requirements. It’s rare to find an organization that isn’t loaded with technical debt, fully committed to specific software, tied to lengthy data center commitments – or all of the above.

Mission-critical legacy applications may not be compatible with the cloud, and organizations may lack the resources or expertise to refactor those applications so that they can function properly in a native cloud environment. Or perhaps there’s a long-term digital strategy to eventually move all systems and applications to the cloud but, in the meantime, they’re still leashed to the data center. Scenarios like these, and many more, are ideal for VMware Cloud on AWS, which allows organizations to easily migrate legacy VMware workloads with minimal refactoring or rearchitecting, or to extend their existing data center systems to the cloud.

New, But Familiar

VMware Cloud on AWS was developed in collaboration between VMware, a pioneer and global leader in server virtualization, and AWS, the leading public cloud provider, to seamlessly extend on-premises vSphere environments to SDDCs built on AWS. VMC on AWS makes it easier for organizations to begin or expand their public cloud adoption by enabling lift and shift migration capabilities for applications running in the data center or on-premises VMware environments.

VMC on AWS also has a relatively minimal learning curve for in-house operations staff because, despite being hosted on AWS, it’s still VMware vSphere at its core and the environments are managed using the vCenter management console. This familiar toolset allows IT teams to begin utilizing the cloud without any major workforce retraining and upskilling initiatives because they can still use VMware’s suite of server virtualization and management tools.

The Right Tools for the Job

The vSphere suite of server virtualization products and vCenter management console may be familiar, but they’re far from outdated or limited. VMware continues to invest in the future, strengthening its cloud and virtualization portfolio by enhancing their existing offerings and developing additional services and tools to further enable IT modernization and data center evacuations.

These efforts mean we can expect VMware to continue playing a major role in helping organizations achieve and maintain agility by ensuring secure workload mobility across platforms, from public cloud to private cloud to hardware.

Cloud adoption doesn’t happen overnight, and organizations have to ensure disparate technologies mesh well.

HCX essentially consists of a series of integrations that establish connectivity across systems and platforms, allowing workloads to be migrated without any code or configuration changes, and it is regularly updated to enhance its functionality. VMware HCX can be used to perform live migrations using vMotion and bulk migrations of up to 100 VMs at a time. VMware HCX can also provide a secure, accelerated network extension which, beyond providing a seamless migration experience and minimizing the operational impacts usually associated with migrating workloads, helps improve the environment’s resiliency through workload rebalancing. This same functionality plays a critical role in disaster recovery and business continuity by replicating data across multiple locations.

A Thoughtful Approach to Modernization

Whether an organization is prioritizing the optimization of spend, revenue growth, streamlining operations, or revitalizing and engaging their workforce, a mature and robust digital strategy should be at the heart of the “how.” Cloud adoption will not solve these business challenges on its own – that requires forethought, planning, and expertise.

It can be challenging to make the right determinations about what’s best for your own unique business needs without a clear understanding of those needs. And for organizations still relying on old school hardware-based systems, the decision to remain with on-premises deployments, move to the cloud, or lift and shift to a platform like VMC on AWS requires a comprehensive assessment of their applications, hardware, and any existing data center/real estate commitments.

Internal teams may not have the specific technical expertise, experience, or availability to develop suitable digital strategies or perform effective assessments, especially as they focus on their primary day to day responsibilities. As an AWS Premier Consulting Partner with the VMware Master Services Competency in VMware Cloud on AWS, Effectual has established its expertise in VMware Cloud on AWS, making us an ideal partner to help ease that burden.

Cloud adoption doesn’t happen overnight, and organizations have to ensure disparate technologies, which may be at very different stages of their respective lifecycles, mesh well. They need to develop an appropriate modernization strategy and determine the best fit for each application and workload. The right partner can play a critical role in successfully overcoming these challenges.

Hetal Patel is a Senior VMware Technical Lead and co-founder at Effectual, Inc.

Network Virtualization – The Missing Piece of Digital Transformation

Network Virtualization – The Missing Piece of Digital Transformation

The cloud revolution continues to impact IT, changing the way digital content is accessed and delivered. It should come as no surprise that this revolution has affected the way we approach modern networking.

When it comes down to it, the goal of digital transformation is the same for all organizations, regardless of industry: increase the speed at which you’re able to respond to market changes and evolving business requirements, improve your ability to adopt and adapt to new technology, and enhance overall security. Digital strategies are maturing, becoming more thoughtful and effective in the process, as organizations understand that the true value of cloud adoption and increased virtualization isn’t just about cost savings.

Technology is more fluid than ever, and dedicated hardware is limiting progress and development more every day. Luckily, cloud and virtualized infrastructure have helped lay the groundwork for change, giving companies the opportunity to more readily follow the flow of technological progress. But in the same way that a chain is only as strong as its weakest link, these same companies are only as agile as their most rigid component. And that rigid chokepoint, more often than not, is hardware-based network infrastructure.

A lack of network agility was even noted by Gartner as being one of the Top 10 Trends Impacting Infrastructure and Operations for 2019.

A Bit of History
We likely wouldn’t have the internet as we know it today if not for the Department of Defense needing a way to connect large, costly research computers across long distances to enable the sharing of information and software. Early computers had no way to connect and transmit data to each other. The birth of ARPANET, the world’s first packet-switched network, in 1969 and its ensuing expansion were monumental in creating the foundation for the Information Age.

The Case for Virtualization

While some arguments can still be made about whether a business might benefit more from traditional, hardware-based solutions or cloud-based options, there’s an inarguable fact right in front of us: software moves faster than hardware. This is what drove industries toward server and storage virtualization. However, network infrastructure still tends to be relegated to hardware, with the same manual provisioning and configuration processes that have been around for decades. The challenge of legacy, hardware-based network infrastructure is a clear obstacle that limits an organization’s ability to keep up with changing technologies and business requirements.

The negative effect of hardware-based networking goes beyond the limitation of speed and agility. Along with lengthy lead times, the process of scaling, modifying, or refreshing network infrastructure can require a significant amount of CapEx since you have to procure the hardware, and a significant amount of OpEx since you have to manually configure the newly acquired network devices. In addition, manual configuration is well-known to be error-prone, which can lead to connectivity issues (further increasing deployment lead time) and security compromises.

Networking at the Speed of Business and Innovation

As organizations move away from silos in favor of streamlined and automated orchestration, approaches to network implementation need to be refreshed. Typical data center network requests can take days, even weeks to fulfill since the hardware needs to be procured, configured (with engineers sometimes forced to individually and painstakingly configure each device), and then deployed.

Software-defined networking (SDN), however, changes all of that. With properly designed automation, right-sized virtual network devices can be programmatically created, provisioned, and configured within seconds. And due to the reduced (or even fully eliminated) need for manual intervention, it’s easier to ensure that that newly deployed devices are consistently and securely configured to meet business and compliance requirements.
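
As a simple illustration of programmatic provisioning (shown here with AWS APIs via boto3 rather than any particular SDN product), the sketch below creates a small network and applies a standardized security baseline in a handful of API calls. The CIDR ranges, names, and rules are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a small, isolated network programmatically -- no hardware, no manual console work.
vpc = ec2.create_vpc(CidrBlock="10.20.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.20.1.0/24")

# Apply a standardized security baseline from a pre-defined set of rules.
sg = ec2.create_security_group(
    GroupName="web-baseline",
    Description="Standard ingress rules for the web tier",
    VpcId=vpc_id,
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```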

Automation allows networking to match the pace of business by relying on standardized, pre‑defined templates to provide fast and consistent networking and security configurations. This lessens the strain and burden on your network engineers.

Network teams have focused hard on increasing availability, with great improvements. However for future success, the focus for 2019 and beyond must incorporate how network operations can be performed at a faster pace.

Source: Top 10 Trends Impacting Infrastructure & Operations for 2019, Gartner

Embracing Mobility

Modern IT is focused on applications, and the terminology and methods for implementing network appliances reflect that – but those applications are no longer tied to the physical data center. Sticking to a hardware-focused networking approach severely restricts the mobility of applications, which is a limitation that can kill innovation and progress.

Applications are no longer confined to a single, defined location, and maturing digital and cloud strategies have led organizations to adopt multiple public and private clouds to meet their business requirements. This has led to an increase in applications being designed to be “multi-cloud ready.” Creating an agile network infrastructure that extends beyond on-premises locations, matching the mobility of those applications, is especially critical.

Network capabilities have to bridge that gap, functioning consistently across all locations, whether they’re hardware-based legacy platforms, virtual private cloud environments, or pure public cloud environments.

This level of agility is beneficial for all organizations, even if they’re still heavily invested in hardware and data center space, because it allows them to begin exploring, adopting, and benefiting from public cloud use. Certain technologies, like VMware Cloud on AWS, already enable organizations to bridge that gap and begin reaping the benefits of Amazon’s public cloud, AWS.

According to the RightScale 2019 State of the Cloud Report from Flexera, 84% of enterprise organizations have adopted a multi-cloud strategy, and 58% have adopted a hybrid cloud strategy, utilizing both public and private clouds. On average, respondents reported using nearly five clouds.

A Modern Approach to Security

Digital transformation creates fertile ground for new opportunities – both business opportunities and opportunities for bad actors. Since traditional approaches to cybersecurity weren’t designed for the cloud, cloud adoption and virtualization have contributed to a growing need to overhaul information security practices.

Traditional, classical network security models focused on the perimeter – traffic entering or leaving the data center – but, as a result of virtualization, the “perimeter” doesn’t exist anymore. Applications and data are distributed, so network security approaches have to focus on the applications themselves. With network virtualization, security services are elevated to the virtual layer, allowing security policies to “follow” applications, maintaining a consistent security configuration to protect the elastic attack surface.

But whether your network remains rooted in hardware or becomes virtualized, the core of your security should still be based on this: Security must be an integral part of your business requirements and infrastructure. It simply cannot be bolted on anymore.

Picking the Right Tools and Technology for the Job

Choosing the right tools and technology to facilitate hybrid deployments and enable multi‑platform solutions can help bridge the gap between legacy systems and 21st century IT. This level of interoperability and agility helps make cloud adoption just a little less challenging.

Addressing the networking challenges discussed in this post, VMware Cloud on AWS has an impressive set of tools that enable and simplify connectivity between traditionally hosted on-premises environments and the public cloud. This interconnectivity makes VMware Cloud on AWS an optimal choice for a number of different deployment use cases, including data center evacuations, extending on-premises environments to the public cloud, and improving disaster recovery capabilities.

Developed in partnership with Amazon, VMware Cloud on AWS allows customers to run VMware workloads in the cloud, and their Hybrid Cloud Extension (HCX) enables large-scale, bi-directional connections between on-premises environments and the VMware Cloud on AWS environment. In addition, VMware’s Site Recovery Manager provides simplified one-click disaster recovery operations with policy-based replication, ensuring operational consistency.

If you’re interested in learning more about VMware Cloud on AWS or how we can help you use the platform to meet your business goals, check out our migration and security services for VMware Cloud on AWS.

Ryan Boyce is the Director of Network Engineering at Effectual, Inc.

The Cloud-First Mindset

The Cloud-First Mindset

Across every industry, cloud-native businesses are disrupting legacy institutions that have yet to transform traditional IT platforms.

To remain competitive, industry giants need to change the way they think about technology and adopt a cloud-first mindset. Prioritizing cloud-based solutions and viewing them as the natural, default option is vital to the success of new projects and initiatives.

Migrating legacy systems to cloud has the added benefit of eliminating technical debt from older policies and processes. However, it is important to be mindful in order to avoid creating new technical debt when developing and deploying cloud systems. While adopting a cloud-first mindset may seem like an expected result of digital transformation, it requires significant changes to an organization’s culture and behavior, similar to those required for the effective adoption and implementation of DevOps methodologies.

We have to rethink the old way of doing things – cloud is the new normal

Evolving needs and capabilities

When “cloud” first entered the lexicon of modern business, it was incorrectly thought of as a cost‑cutting measure. Organizations were eager to adopt the cloud with the promise of savings – despite not fully understanding what it was or its ever-growing capabilities. These types of implementations were generally short-sighted: lacking a well-defined digital strategy and focused on immediate needs rather than long-term goals.

As adoption increased, it became apparent that adjusting the approach and redefining the digital strategy were necessary for success. Optimizing applications for the cloud and developing comprehensive governance policies to rein in cloud sprawl, shadow IT, and uncontrolled (and unmonitored) spend are just part of the equation.

“…spending on data center systems is forecast to be $195 billion in 2019, but down to $190 billion through 2022. In contrast, spending on cloud system infrastructure services (IaaS) will grow from $39.5 billion in 2019 to $63 billion through 2021.”

Source: Cloud Shift Impacts All IT Markets, Gartner

A cloud-first approach reshapes the way an organization thinks about technology and helps avoid recreating the very technical debt that digital transformation initiatives eliminated.

The human element of digital transformation

Digital transformation should extend beyond technology. It’s a long-term endeavor to modernize your business, empower your people, and foster collaboration across teams. Transforming your systems and processes will have a limited impact if you don’t also consider the way your teams think, interact, and behave. This is especially important because the significant operational changes introduced by modernizing infrastructure and applications can present challenges to employees who feel comfortable maintaining the status quo. Before you can disrupt your industry, you have to be willing to disrupt the status quo within your own organization.

The fact is that change can be difficult for a lot of people, but you can ease the transition and defuse tension by actively engaging your teams. You cannot overstate the importance of clear, two-way communication. Letting your people know what you’re planning to do, and why you’re doing it, can help them understand the value of such a potentially massive undertaking. It’s also important to have a solid understanding of what your teams need; creating open lines of communication will enhance requirements-gathering efforts. This level of communication ensures that whatever you implement will adequately address their needs and ultimately improve their workflow and productivity.

The introduction of new tools and technologies, even if they’re updated versions of the ones currently in use, will generally require some level of upskilling. Helping your teams bridge the technical gap is a necessary step.

Competition at its finest

Few sectors have seen the level of disruption faced by the finance industry. FinTech disruptors, born in the cloud and free from the chains of technical debt and bureaucratic overhead, have been able to carve out their place in the market. They’ve attracted customers by creating innovative offerings and customer-focused business models, competing with legacy institutions that seemed to have an unassailable dominance that barred any new entrants.

Legacy retail banking institutions, known for being risk averse, had a tendency to implement new technology very slowly. They were plagued by long development cycles, dedicated hardware solutions, and strict compliance requirements to safeguard highly sensitive data.

When Capital One turned its attention to the cloud, they created a holistic digital strategy that wasn’t limited to tools and systems. They understood that technology was not a line item on a budget but an investment in the company’s future, and that successfully executing their strategy would require a culture shift. They focused on attracting technologists who could enhance the company’s digital capabilities to increase employee engagement, strengthen cybersecurity, and improve customer experience using the latest technologies, including artificial intelligence and machine learning. They also created a cloud training program so their employees would understand the technology, regardless of whether they were in technical roles, reinforcing the company’s cloud-first mindset.

FinTech disruptors, born in the cloud and free from the chains of technical debt and bureaucratic overhead, have been able to carve out their place in the market.

Understanding your options

Developing a proper cloud-first mindset is not about limiting your options by using the cloud exclusively. A digitally transformed business doesn’t adopt the latest technology simply for the sake of adoption. In fact, the latest and greatest SaaS or cloud-based offerings may not always be the best option, but you have to know how to make that determination based on the unique needs and circumstances of your business. By objectively assessing business goals, considering all options (including traditionally hosted), and prioritizing agile, flexible solutions, you can redefine your approach to problem-solving and decision-making. This mindset means that cloud is no longer the “alternative.”

We have to rethink the old way of doing things – cloud is the new normal, and hardware-based options should only be implemented if they are truly the best way to meet business goals and overcome challenges. We don’t need to abandon on-premises or traditional IT to maintain or regain competitive edge. We just need to understand that it’s not always the right choice.

This approach will help you develop a macro view of your organization’s needs and prompt you to identify and treat the underlying cause of business challenges, not just the symptoms.

Building a foundation for disruption

Becoming a disruptor in your industry is not the goal of digital transformation – and it takes more than just adopting the cloud. The goal is to free your organization from the restraints of costly, outdated legacy infrastructure and monolithic applications, and to enable your teams to scale and innovate. The flexibility of cloud and SaaS-based options reduces the risks associated with developing new products and services for your customers, and instilling a culture of cloud-first thinking gives your people the freedom to explore and experiment. That’s how you drive innovation and compete against new, lean, born-in-the-cloud competitors. That’s how you disrupt.

Building Strength Through Partnership

Building Strength Through Partnership

The cloud partner ecosystem is changing. The days when organizations could act as a “Jack of all trades, master of none” are over.

Legacy IT resellers are going the way of the dinosaur in favor of partners who can deliver clear value-add with a track record of transformative success. Specialization is the order of the day. This cuts to the heart of what we mean by a partnership — and how it differs from simply being a “vendor.”

IT partnerships should allow your in-house team to remain focused on generating revenue and building the organization.

Why Specialized Partnerships Matter

Choosing the right partner is absolutely critical to executing a successful cloud transformation. We addressed this in a previous post. Every organization is necessarily limited by its own technical and human resources. The right partner brings expertise, experience, and proven processes to ensure that internal limitations don’t equal a failed transformation process.

A Successful Cloud Partnership

Let’s take a look at one of the most recent and important cloud partnerships: AWS and VMware. AWS brought their cloud platform services together with VMware’s virtualization expertise. The result was a specialized certification program, a robust migration service, a cost insight tool providing greater transparency into cloud spending, and a joint hybrid cloud product to incentivize customer adoption. Each partner brought highly specific value-add services and together they created a game-changing cloud solution for the enterprise.

Partners Versus Vendors

It’s worth exploring what we mean when we talk about being a partner as opposed to being a vendor. A vendor is easy enough to explain: it is a company providing a service. The point is, even the best vendors are not as invested in your success as a partner. They certainly wish their customers success and hope for continued business, but there is no strategic, long-term involvement or commitment to understanding their clients’ unique business goals.

In some cases, vendors may even push templated or cookie-cutter solutions that simply don’t fit. This isn’t to say that every vendor is out to take advantage of their customers; it’s simply a recognition that a generalized vendor offering tends to be limited, in contrast to a specialized partnership.

By comparison, a successful partnership is a more intimate relationship. In these engagements you’re not just purchasing IT services – you’re working hand-in-hand to grow the efficiency and effectiveness of your IT resources.

Cloud security, migration, and cost optimization are exactly the types of endeavors that call for partners.

The key difference is a subtle but important one — collaboration. It’s often thought that a good partner will “take care of everything” for you, but this is not true, nor should it be. A true partner requires your input to understand how your business defines success, and relies on this data to make informed decisions on the technologies they deploy. It is essential for your teams to be involved in this process, as they will adopt and learn new methodologies and processes throughout the engagement.

It’s not about choosing between vendors or partners. It’s about recognizing where more generalized vendors will fulfill your needs and where specialized partners are a better fit. Simple, straightforward tasks are fine for vendors. More involved and strategic endeavors, however, require a partner. Cloud security, migration, and cost optimization are exactly the types of endeavors that call for partners.

Extending Your In-House Capabilities

IT partnerships should allow your in-house team to remain focused on generating revenue and building the organization. Strong partners can in effect become an extension of your IT team, expanding your resources and solving problems that might otherwise have required training or experience beyond the expertise and skill sets of your internal teams.

Keeping your teams focused on their core responsibilities has a highly desirable secondary effect – boosting in-house morale. Not only does this improve the workplace, it makes it easier for you to attract and retain top talent.

Cloud Confidently™

At effectual, we engage you as a partner, not a vendor, which is why we specialize in cloud, and not cloud-adjacent services like data center operations. Our deep experience in cloud enablement facilitates your digital transformation. This includes helping you to determine the best implementation strategy as well as establishing metrics to quantify and measure your success. But our specialization is in security and financial optimization.

The important thing is not to be just technologists, but to be able to understand the business goals [clients are] trying to achieve through the technology.

Cloud is a rapidly evolving ecosystem. AWS rolled out roughly 1,400 new services and features in 2017, another 800 through the first half of 2018, and an impressive number of new product and service announcements during re:Invent 2018. We understand that it can be difficult to wade through these waters to find the right solutions for your business challenges, including your specific security requirements. What’s more, your team is likely already fully committed to running core applications and tools. You need a partner who can keep your in-house team free to do what it does best.

RightScale’s 2018 State of the Cloud report found that most organizations believed they were wasting about 30 percent of their cloud spend; in fact, the study measured that waste at 35 percent. We look for ways to help our partners not only get their invoices under control but also understand what is driving their cloud costs. Finally, we help organizations properly allocate their spend, ensuring that the right applications, business units, regions, or any other grouping of your business is spending exactly what it should and no more.

We strive to understand your long- and short-term goals by working closely with your organization and provide you with strategic solutions for sustained growth. Interested in learning more? Reach out and let us know what you are looking to solve – we love the hard questions.

Robb Allen is the CEO of Effectual, Inc.

Amazon Web Service as a Data Lake

Amazon Web Service as a Data Lake

“Cloud,” “Machine Learning,” “Serverless,” “DevOps” – technical terms utilized as buzzwords by marketing to get people excited, interested, and invested in the world of cloud architecture.

And now we have a new one – “Data Lake.” So, what is it? Why do we care? And how are lakes better than rivers and oceans? For one, it might be harder to get swept away by the current in a lake (literally, not metaphorically).

A Data Lake is a place where data is stored regardless of type – structured or unstructured. That data can then have analytics or queries run against it. An analogy for a data lake is the internet itself. The internet, by design, is a collection of servers labeled by IP addresses so they can communicate with each other. Search engine web crawlers visit websites hosted on these servers, accumulating data that can then be analyzed with complex algorithms. The results allow a person to type a few words into a search engine and receive the most relevant information. This type of indiscriminate data accumulation and the presentation of context-relevant results is the goal of data lake utilization.

However, anyone who wants to manage and present data in this manner first needs a data store on which to build their data lake. A prime example of such a store is Amazon S3 (Simple Storage Service), where documents, images, files, and other objects are stored indiscriminately. Have logs from servers and services in your cloud environments? Dump them here. Do you have documentation that is related to one subject but is in different formats? Place it in S3. The file type does not really matter for a data lake.
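
A minimal sketch of that ingest step with boto3 might look like the following; the bucket, keys, and local file names are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Drop raw objects into the lake as-is -- format does not matter at ingest time.
s3.upload_file("app-server.log", "my-data-lake", "raw/logs/app-server/2019-06-01.log")
s3.upload_file("churn-report.pdf", "my-data-lake", "raw/docs/churn-report.pdf")

# Structured files land right alongside unstructured ones.
with open("orders.csv", "rb") as f:
    s3.put_object(Bucket="my-data-lake", Key="raw/csv/orders/2019/06/orders.csv", Body=f)
```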

Elasticsearch can load data from S3, indexing your data according to the rules you define and providing ways to read and access that data with your own queries. It is a service designed to give customers search capability without having to build their own search algorithms.
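
A rough sketch of that flow, assuming the elasticsearch-py 7.x client and a domain whose access policy permits the caller (an Amazon Elasticsearch Service domain using IAM authentication would additionally need request signing), might look like this. The endpoint, bucket, key, and index names are hypothetical.

```python
import json
import boto3
from elasticsearch import Elasticsearch  # elasticsearch-py 7.x client (assumed)

s3 = boto3.client("s3")
es = Elasticsearch("https://search-my-domain.us-east-1.es.amazonaws.com")  # hypothetical endpoint

# Pull a newline-delimited JSON log file out of the lake and index each record.
obj = s3.get_object(Bucket="my-data-lake", Key="raw/logs/app-server/2019-06-01.json")
for line in obj["Body"].read().decode("utf-8").splitlines():
    es.index(index="app-logs", body=json.loads(line))

# Read the data back with your own query.
hits = es.search(index="app-logs", body={"query": {"match": {"level": "ERROR"}}})
print(hits["hits"]["total"])
```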

Athena is a “serverless interactive query service.” What does this mean? It means I can load countless CSVs into S3 buckets and have Athena return queried data as a tabular result. Think database queries without the database server. Practically, you will need to implement cost management techniques (such as data partitioning) to limit the amount of data scanned per query, as you are charged based on the amount of data each query reads.
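
A hedged example of running such a query with boto3, assuming an external table has already been defined over the CSVs (the database, table, and output location names are hypothetical):

```python
import boto3

athena = boto3.client("athena")

# Partition pruning in the WHERE clause limits how much data Athena scans -- and what you pay.
response = athena.start_query_execution(
    QueryString="""
        SELECT store_id, SUM(total) AS revenue
        FROM sales.orders
        WHERE year = '2019' AND month = '06'
        GROUP BY store_id
    """,
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://my-data-lake/athena-results/"},
)
print(response["QueryExecutionId"])  # poll get_query_execution() for completion, then fetch results
```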

Macie is an AWS service that ingests logs and content from across AWS and analyzes that data for security risks. From personally identifiable information (PII) in S3 buckets to high-risk IAM users, Macie is an example of the types of analysis and visualization you can do when you have a data lake.

These are just some examples of how to augment your data in the cloud. S3, by itself, is already a data lake – ‘infinite’, unorganized, and unstructured data storage. And the service is already integrated with numerous other AWS services. The data lake is here to stay, and it is a stepping stone toward utilizing the full suite of technologies available now and in the future. Start with S3, add your data files, and use Lambda, Elasticsearch, Athena, and traditional web pages to display the results of those services. No servers, no OS configuration or patching to worry about; just development of queries, Lambda functions, API calls, and data presentation – serverless.

Our team is building and managing data lakes and the associated capabilities for multiple organizations and can help yours as well. Reach out to our team at sales@effectual.com for some initial discovery.

The Right Partner is Better than a Crystal Ball

The Right Partner is Better than a Crystal Ball

Mistakes can create amazing learning opportunities and have even led to some of the most beneficial discoveries in human history, but they can also have far-reaching, time-consuming, and costly implications.

Luckily, someone else has probably already made the mistakes you’re bound to make, so why not reap the benefits of their errors and subsequent experience?

Know Your Strengths and Limitations

One of my passions in life is building things. Working with my hands to create something new and unique fills me with a sense of accomplishment beyond description. Over the years, I’ve taken on a variety of projects to scratch this itch; from baking to renovating my kitchen (I’m really quite proud of the custom-built cabinets I made from scratch), to customizing various vehicles I’ve owned over the years. Along the way, I’ve built several Jeeps into serious off-road machines.

In the years before YouTube, when I was first developing the skills required to lift and modify a Jeep, I often encountered situations where I wasn’t confident in my knowledge or abilities. I knew that the vehicle I was working on would need to be just as safe and reliable on the freeway as it would be out in the middle of nowhere – places where the results of my efforts would be stress-tested and the consequences of poor workmanship could be catastrophic. Each time I encountered an area where I had limited knowledge or experience, I would do all the research I could, then find someone with the right experience who could coach me through the process (and critique my work along the way).

Fortunately, I had a ready supply of trusted friends to advise me. In my time spent driving these heavily modified vehicles, I did encounter the occasional failure, but thanks to the skills I developed under the direction of these watchful eyes, none of them ever put me at significant risk.

“The wise man learns from the mistakes of others.”

Otto von Bismarck

As enterprises modernize their infrastructure, they should look at IT as a contributor to business innovation rather than a means to an end. Alongside this shift in view, the expertise required to define strategy, architect solutions, and deliver successful outcomes is increasingly difficult to acquire. In this dearth of available talent, many of the enterprises I’ve been dealing with are struggling with the decision between:

  1. Delaying modernization efforts
  2. Plowing ahead and relying on internal resources to get trained up and tackle the challenges
  3. Bringing in a third party to perform the work of modernization.

Unfortunately, none of these options is ideal.

Choosing the Best Path

The first option, delaying modernization, limits the enterprise’s ability to deliver products and services to their stakeholders and clients in innovative ways – opening the door for disruptive competitors to supplant them. For dramatic evidence of this, look at the contrasting stories of Sears, the first company to offer ‘shop at home’ functionality, and Amazon.com, the disruptor who has supplanted them as the go-to home shopping solution. The option of delaying presents a significant risk, but the risk of assigning internal resources to address issues they’re not fully prepared to handle should not be underestimated.

Plowing ahead with a team that’s facing unique challenges for the first time means you’ll be lacking the benefits of experience and hindsight. In my previous posts, I’ve discussed some of the hidden traps encountered along the modernization journey, many of which can only be seen in hindsight. These traps can only be avoided once you’ve had the inevitable misfortune and experience of having fallen into them before. “Fool me once…”

It’s similar to the Isla de Muerta from the Pirates of the Caribbean movies; “…an island that cannot be found – except by those who already know where it is.” Unlike the movie, there’s nothing magical about the pitfalls that litter the path to modernization. Many of the concepts that we have accepted as IT facts for decades are invalidated by modern approaches. So, the logical decision seems simple: outsource the effort to someone who has been there before.

Finding the Right Experience and Approach

Bringing in a partner that is experienced in the many areas of IT modernization is the safest of the three options, but your mode of engagement with this third party directly relates to the benefits your enterprise will enjoy. The best advice I can offer is to look for a provider who views modernization from a business perspective, not merely as a technical effort. Most providers will tout their technical expertise (we at effectual do too), but the reality is that nearly all competent providers have people with the same certifications. Technical certifications are no longer the differentiator they used to be. When you are interviewing these teams, ask how they plan to interact across your various enterprise teams outside of IT. If they look puzzled or don’t have an answer, you know that they are not a business solution provider.

Once you have an idea of who is able to help with your modernization efforts, you need to make a decision regarding the methodology that will best suit your enterprise. One possible route is to completely turn the effort over to the outsourced team. While this is a fairly risk-free approach that leaves you with transformed IT when the project is over, you don’t gain any of the expertise required to manage your environment moving forward. I’ve found that the greatest benefits are realized when an enterprise and their provider partner together on the solution.

Providers as Partners
Partner resources collaborate with enterprise resources to deliver solutions, while also providing training, insight, oversight, and guidance.

In this scenario, the partner team takes the lead on the migration project under the executive direction of the enterprise, just like my friends who would help with my vehicle builds. Partner resources collaborate with enterprise resources to deliver solutions, while also providing training, insight, oversight, and guidance. At the end of the day, the enterprise enjoys a better purpose-built solution and develops the expertise to enhance it as additional business requirements are identified, or existing requirements evolve.

What the Future Holds

As modernization efforts start to take hold and your teams gain confidence, you should not consider the journey complete. This is the point in the revolution where a modernized organization can truly view IT and engineering as the linchpin of your competitive advantage, whether it be through cloud adoption, big data, artificial intelligence, mobility, or other current technologies. Historically, the interaction between business and IT has been a two-step process. The business conceptualizes features that would benefit some constituency, whether it be internal or external, then directs IT to build it.

In the new world, where technological capabilities are rapidly evolving and growing, the competitive advantage comes primarily from changing that two-step process. The business first asks IT “What is possible?” and then business teams collaborate with IT to deliver forward‑thinking solutions. This is the behavior that enables innovation and disruption within industries. We’ll explore this topic in depth in a future post.

Learning from the Experts

What has made me a successful builder over the years has been my good fortune to have skilled artisans available to guide and coach me through the work as I was learning how to do it. As I learn tips and tricks from experts, I begin to behave like an expert and deliver high quality work. As you look to your modernization efforts, your enterprise can Cloud Confidently™ and see similar growth by bringing in the right partners to help leverage your team’s skills and understanding.

The Reality of the Cloud and Company Culture for Financial Services – Part 2

The Reality of the Cloud and Company Culture for Financial Services – Part 2

Cloud transformation impacts more than just tech. It also requires a significant shift in company culture.

This is the second post of two in this series. The first post can be found here: The Reality of the Cloud for Financial Services.

The introduction of new technologies impacts workflow, changes the way your teams go about doing their jobs, and how they communicate with each other and customers. Your teams need to understand the day-to-day value of transformation, and they need to feel like part of the process.

Preparing your teams for potentially radical culture changes is especially critical for financial services organizations, which have historically been hesitant to adopt new and innovative technologies due to the heavily regulated nature of the FinServ industry.

  • Technical Preparedness: It’s common for IT teams to feel a loss of control due to cloud transformation. John Dodge of CIO has said that in-house IT can start feeling like a service broker without a sense of ownership. IT will confront a learning curve with interfaces, APIs, and provider management. Technical training and growth must be prioritized.
  • Skills Building: IT isn’t the only department that will have to acquire new skills. Technical skills are a given, and will be increasingly vital, but your team will likely also need to learn new project management skills and develop a keen understanding of the new realities of security and compliance in the cloud.
  • Frequent Communication and Transparency: Some IT departments can be very resistant to change. During a cloud transformation, there can be tension when your IT team is used to getting under the hood and having total control. Frequent, transparent communication, however, can mitigate resistance in IT. A simple email won’t do the trick. Your IT team needs to know what’s happening – and, more importantly, why – far in advance of it happening. Transparent communication can and should be part of a positive feedback loop that informs the direction of ongoing training.

Considerations in Cloud Transformation

Proper planning enables a successful project.  Here are some key considerations to discuss internally and with your partners:

  • Compliance and Security: This is at the top of the list for a reason – third-party and government-mandated security requirements for financial services companies leave little to no room for error. Your organization, partners, and cloud service provider must all understand your regulatory and compliance requirements and address your overall cloud security posture so that you can maintain compliance. This is literally Job One.
  • Performance: Performance is of key importance to financial services firms, which frequently need high-performance compute to power their transactions and data analysis. You need easily deployable, easily scalable high-performance processing resources for simulations and modeling, machine learning, financial analysis, and data transformations. This requires a detailed ROI and financial analysis to understand costs and avoid sticker shock.
  • Intellectual Property: What is more important to financial services than their intellectual property? Putting that anywhere outside of your own systems is a huge risk, but a properly architected cloud solution can ensure that your data is safer in the cloud than it is in legacy solutions.

Transitioning to the cloud can give your business a competitive edge. For financial services lacking large, experienced in-house IT teams, it’s worth considering a partner and leveraging their expertise to make your transition a success.

Robb Allen is the CEO of Effectual, Inc.

The Reality of the Cloud for Financial Services – Part 1

The Reality of the Cloud for Financial Services – Part 1

According to a report from Markets and Markets, the financial services cloud market is set to reach $29.47 billion by 2021, growing at 24.4 percent.

The study further found that most of this growth will be right here in North America. At the forefront of this trend, Capital One has made public declarations around their all-in cloud, all AWS strategy. Their significant presence on the expo floor at AWS re:Invent was further evidence of this commitment. The growth of cloud adoption in this sector is driven by a simple truth: Financial services are being transformed by the cloud.

Still, some Financial Services companies struggle with the challenge presented by the maze of legacy services built up over decades of mergers and acquisitions. It’s all too common for financial services organizations to still be using IBM mainframes from the 1960s alongside newer technologies. This can make the prospect of cloud transformation seem daunting, to say the least.

However, the question is not if your organization will transition to the cloud, but when. Cloud transformation benefits greatly outweigh the risks — and your competitors are already moving. The longer you wait, the further behind you fall.

Benefits of Cloud Migrations from Legacy Solutions

There are specific benefits to financial services organizations migrating legacy environments to the cloud:

  • Efficiency: A cloud migration offers opportunity for increased efficiency at decreased operational costs. For financial services organizations looking to revolutionize their IT procurement model, a cloud migration from legacy environments is a quick and easy win, opening the door to more modern tools and services.
  • Decreased Storage Costs: The regulatory requirements for data retention can create enormous costs for financial services. Moving to AWS cloud storage solutions can significantly reduce costs, while still meeting stringent data security requirements. A well architected cloud storage solution will meet your needs for virtually unlimited scalability while improving cost transparency and predictability.
  • Increased Agility: To improve their competitive edge, Financial Services organizations are seeking ways to become more agile. A well-planned cloud transformation will result in applications and platforms that effortlessly scale up and down to meet both internal and customer-facing needs. What’s more, a successful cloud transformation means access to new and better tools and advanced resources.
  • Improved Security: Cloud solutions offer access to enterprise-level equipment and the security that comes along with that. This is normally only within the budget of very large organizations. Cloud services provide increased redundancy of data, even across wide geographical areas. They also offer built-in malware protection and best-in-class encryption capabilities.

In my next post, we’ll discuss the “culture transformation” that will enable Financial Services to maximize the return on their cloud transformation investment.

Robb Allen is the CEO of Effectual, Inc.

When Best Efforts Aren’t Good Enough

When Best Efforts Aren’t Good Enough

“Have you tried rebooting it?”

There was a time, not so long ago, when that was the first question a technician would ask when attempting to resolve an issue with a PC or a server that evolved from PCs. This was not limited to PCs and servers; IT appliances, network equipment, and other computing devices could all be expected to behave oddly if not regularly rebooted. As enterprise IT departments matured, reboot schedules were developed for equipment as a part of routine preventative maintenance. Initially, IT departments developed policies, procedures, and redundant architectures to minimize the impact of regular reboots on clients. Hardware and OS manufacturers did their part by addressing most of the issues that caused the need for these reboots, and the practice has gradually faded from memory. While the practice of routine reboots is mostly gone, the architectures, metrics, and SLAs remain.

Five Nines (or 99.999%) availability SLAs became the gold standard for infrastructure and are assumed in most environments today. As business applications have become more complex, integrated, and distributed, the availability of individual systems supporting them has become increasingly critical. Fault tolerance in application development is not trivial, and in application integration efforts it is orders of magnitude more difficult, particularly when the source code is not available to the team performing the integration. These complex systems are fragile and will behave in unpredictable ways if not shut down and restarted in an orderly fashion. If a single server supporting a piece of a large distributed application fails, it can cause system or data corruption that takes significant time to resolve, impacting client access to applications. The fragile nature of applications makes Five Nines architectures very important. Today, applications hosted in data centers rely on infrastructure and operating systems that are rock solid, never failing, and reliable to a Five Nines standard or better.
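To put those figures in concrete terms, here is a minimal back-of-the-envelope sketch in plain Python (nothing vendor-specific) that converts an availability percentage into an annual downtime budget:

  # Convert an availability SLA into an allowed-downtime budget per year.
  MINUTES_PER_YEAR = 365.25 * 24 * 60  # roughly 525,960 minutes

  def downtime_minutes_per_year(availability_pct: float) -> float:
      """Minutes per year a system may be unavailable and still meet the SLA."""
      return MINUTES_PER_YEAR * (1 - availability_pct / 100)

  for label, pct in [("Three Nines", 99.9), ("Four Nines", 99.99), ("Five Nines", 99.999)]:
      print(f"{label} ({pct}%): ~{downtime_minutes_per_year(pct):.1f} minutes/year")

  # Output (approximately):
  #   Three Nines (99.9%):  ~526 minutes/year (about 8.8 hours)
  #   Four Nines (99.99%):  ~53 minutes/year
  #   Five Nines (99.999%): ~5.3 minutes/year

Keep these budgets in mind as you read on; the gap between Four Nines and Five Nines is roughly a factor of ten.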

As we look at cloud, it’s easy to believe that there is an equivalency between a host in your data center and an instance in the cloud. While the specifications look similar, critical differences exist that often get overlooked. For example, instances in the cloud (as well as all other cloud services) carry a significantly lower SLA standard than we are used to; some are even provided on a Best Efforts basis. It’s easy to understand why this important difference is missed – the hardware and operating systems we currently place in data centers are designed to meet Five Nines standards, so it is assumed, and nobody asks about it anymore. Cloud-hosted services are designed to resemble the systems we deploy to our data centers, and although the various cloud providers are clear and honest about their SLAs, they don’t exactly trumpet from the rooftops the difference between traditionally accepted SLAs and those they offer.

A Best Efforts SLA essentially boils down to your vendor promising to do whatever they are willing to do to make your systems available to you. There is no guarantee of uptime, availability or durability of systems, and if a system goes down, you have little or no legal recourse. Of course, it is in the interest of the vendor and their reputation to restore systems as quickly as possible, but they (not you) determine how the outage will be addressed, and how resources will be applied to resolve issues. For example, if the vendor decides that their most senior technicians should not be redirected from other priorities to address the outage, you’ll have more junior technicians handling the issue, who may potentially take longer to resolve it – a situation which is in your vendor’s self-determined best interest, not yours.

There are several cases where a cloud provider will offer an SLA better than the default of Best Efforts. An example of this is AWS S3, where Amazon is proud of their Eleven Nines of data durability. Don’t be confused by this: it is a promise that your data stored there won’t be lost, not a promise that you’ll be able to access it whenever you want. You can find published SLAs for several AWS services, but none of them exceed Four Nines. This represents effectively 10x the potential outage time over Five Nines and applies only to the services provided by the cloud provider, not the infrastructure you use to connect to them or the applications that run on top of them.

The nature of a cloud service outage is also different than one that happens in a data center. In your data center, catastrophic all-encompassing outages are rare, and your technicians will typically still have access to systems and data while your users do not. They can work on both restoring services and “Plan B” approaches concurrently. When systems fail in the cloud, oftentimes there is no access for technicians, and the work of restoring services cannot begin until the cloud provider has restored access. This typically leads to more application downtime. Additionally, when systems go down in your data center, your teams can typically provide an ETA for restoration and status updates along the way. Cloud providers are notorious for not offering status updates while systems are down, and in some cases, the systems they use to report failures and provide status updates rely on the failed systems themselves – meaning you’ll get no information regarding the outage until it is resolved. Admittedly, these types of events are rare, but the possibility should still give you pause.

So, you’ve decided to move your systems to the cloud, and now you’re wondering how you are going to deal with the inevitable outages. There are really only a few options available to you:

  • Do nothing and hope for the best. For some business applications, this may be the optimal (although most risky) path.
  • Design your cloud infrastructure the way your data centers have been designed for years. My last two posts explored how expensive this path is, and depending on how you design, it may not offer you the availability that you desire anyway.
  • Implement cloud infrastructure automation and develop auto-scaling/healing designs that identify outages as they happen and often respond before your team is even aware of a problem. This option is more cost-effective than the second, but it requires significant upfront capital, and its effectiveness requires people well-versed in deploying this type of solution – people who are in high demand and hard to find right now.
  • Finally, the ideal way to handle this challenge is to rewrite application software to be cloud-native – modular, fault-tolerant applications that are infrastructure aware, able to self-deploy and self-re-deploy through CI/CD patterns and embedded infrastructure as code. For most enterprise applications this would be a herculean effort and a bridge too far.

Over the past several decades, as we’ve made progress in IT towards total availability of services, you’ve come to rely on, take comfort in, and expect your applications and business features to be available all the time. Without proper thought, planning, and an understanding of the revolutionary nature of cloud-hosted infrastructure, that availability is likely to take a step backward.
Don’t be like so many others and pay a premium for lower uptime. Be aware that there are hazards out there and bring in experienced people to help you identify the risks and mitigate them. You’re looking for people who view your moves toward the cloud as a business effort, not merely a technical one. Understand the challenges that lie ahead, make informed decisions regarding the future of your cloud estate, and above all, Cloud Confidently™!
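As an illustration of the third option above (infrastructure automation with auto-scaling and self-healing designs), the following is a minimal sketch using Python and the AWS boto3 SDK. The group name, launch template, subnet IDs, and thresholds are placeholders of my own choosing; treat this as the shape of the approach, not a recommended configuration for any particular workload.

  # Minimal sketch of a self-healing, auto-scaling web tier on AWS (boto3).
  # All names, subnet IDs, and thresholds below are illustrative placeholders.
  import boto3

  autoscaling = boto3.client("autoscaling", region_name="us-east-1")

  # An Auto Scaling group automatically replaces instances that fail health checks.
  autoscaling.create_auto_scaling_group(
      AutoScalingGroupName="web-tier",
      LaunchTemplate={"LaunchTemplateName": "web-tier-template", "Version": "$Latest"},
      MinSize=2,
      MaxSize=10,
      DesiredCapacity=2,
      VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # spread across two AZs
      HealthCheckType="EC2",  # or "ELB" when attached to a load balancer target group
      HealthCheckGracePeriod=300,
  )

  # A target-tracking policy scales capacity around a CPU target, so the group
  # responds to load spikes (or lost capacity) before anyone has to page an engineer.
  autoscaling.put_scaling_policy(
      AutoScalingGroupName="web-tier",
      PolicyName="cpu-target-tracking",
      PolicyType="TargetTrackingScaling",
      TargetTrackingConfiguration={
          "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
          "TargetValue": 50.0,
      },
  )

Even a sketch like this assumes a team comfortable with infrastructure automation, which is exactly the scarce skill set noted above.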

Don’t Take Availability for Granted
Over the past several decades, as we’ve made progress in IT towards total availability of services, you’ve come to rely on, take comfort in, and expect your applications/business features to be available all the time. Without proper thought, planning, and an understanding of the revolutionary nature of cloud hosted infrastructure, that availability is likely to take a step backward. Bring in experienced people to help you identify the risks and mitigate them.

Bridging the IT Skills Gap

Bridging the IT Skills Gap

One of the most difficult challenges facing businesses today is bridging the “Skills Gap” in IT.

This is a phenomenon whereby a business’s desire to leverage new and expanding technologies is hindered by the lack of available talent and skillsets to architect, implement, and manage these technologies.

Businesses commonly navigate this gap through the use of outside consultants, relying on a team of high-performing individuals with extensive experience in a specific domain. Engaging a consultancy to assist in the architecture and implementation of desired technologies accelerates adoption and provides breathing room for a company’s own staff to learn and practice the new technologies. This time to practice and learn should not be overlooked and is essential to the success of any transformative process. The “building a car while driving” metaphor comes to mind.

This skills gap is not limited to businesses looking to implement new technologies. Ironically, this conundrum has not bypassed IT vendors, and it is fairly common to see the same gap between IT Sales Professionals and the technologies they’re selling.

It is fairly common to see a gap between IT Sales Professionals and the technologies they’re selling.

The Team Approach

IT vendors separate the skills necessary to govern a sales cycle (commercial skills) from the knowledge and expertise of the products they’re selling (technical skills) by partnering a commercially skilled employee with a technically skilled employee. This approach has permeated the market to such an extent that prospective clients now expect their vendors to show up Noah’s Ark style, two by two.

There is still a struggle to find technically skilled people to accompany commercially skilled sales staff. Mitigation strategies have a certain effectiveness: teaming four or five commercially skilled individuals with a single technically skilled employee allows IT vendors to maximize the reach of their expertise. Technically skilled people are not superhuman, although some of the most brilliant deserve to be classified as such, and they can only support so many client conversations, differing sales cycles, and solutions. Eventually, “Something’s Gotta Give.” This mitigation is merely a stop-gap solution. The underlying issue remains: all businesses need a certain number of technically skilled individuals to remain competitive. The fight for talent continues.

IT vendors are also trying to address this gap by putting a ‘paywall’ between prospective clients and technically skilled individuals, charging clients for their employees’ time and experience to assess their current needs. This solution lines up neatly with the consultancy approach: if you have something a business wants, don’t give it away for free. It also places more responsibility and pressure on the commercially skilled individual, forcing them to ensure they qualify a true need with their clients before requesting a potentially limited resource. When a client opts to pay for the time and experience of a technically skilled individual, they are showing real intent to form a business relationship.

This approach fails when commercially skilled individuals don’t qualify the needs of a client in detail and overpromise on the capabilities of their technical counterparts. Again, these technologists are not superhuman and come with a caveat – ‘Magic Wand Not Included.’

It is now the responsibility of a commercially skilled individual to gain some measure of technical skill.

Without a technically skilled individual in the room when qualifying questions are being asked, commercially skilled individuals can be forgiven for misunderstanding the technical requirements of a client. In these scenarios, sales may inadvertently offer a solution to a client need that their product or service does not solve. This can lead to perpetuating the classic salesperson stereotype – just agree to anything in order to close a deal.

At effectual, our approach to sales, which we believe is shared by the majority of our peers, is to avoid this at all costs. At no point do we want to have a client conversation that starts with “But you said…”. This stems from a desire to do business openly, honestly, and with integrity. Recognizing that not every client is in need of what you are selling is vital.

The Importance of Professional Development

With this context, I’d like to address the crux of how I personally went about bridging the gap. In order to avoid overpromising, underdelivering, and starting a client relationship that is doomed from day one, it is my belief that it is now the responsibility of a commercially skilled individual to gain some measure of technical skill. In fact, I’ll phrase that more strongly: gain as much technical skill as necessary so you can speak with confidence or reply with “I don’t know but I’ll find out.”

Commercially skilled individuals may have raised their eyebrows, adopted an incredulous look, or stopped reading altogether, and that’s fine. Nowhere in a commercially skilled employee’s job description does it state they need to be technically qualified or expected to perform two job roles for a single salary. There is a logical line of thinking that emerges here: if they were hired for the skills they have already acquired, why should they be expected to develop skills in a different discipline for a role they have no intention of performing?

My argument, however, is simple: why not? Broadening your horizons is never a negative; learning new skills, gaining empathy for other people’s challenges, and respecting their achievements is never a bad thing. Neither is meeting your current and potential clients halfway, or even the whole way.

With technically skilled individuals being in such high demand, and without enough of them to go around, surely being able to operate without them creates a competitive advantage. I’m not suggesting that the commercially skilled pivot entirely and embark on a drastic career change, only that a little knowledge is empowering, and a lot of knowledge is powerful. Watch as the atmosphere of client meetings changes from “I’m being sold to” to “this person knows what they’re talking about.”

A little knowledge is empowering, and a lot of knowledge is powerful.

You can gain a better understanding of exactly how a product or service can assist a prospective client, identify incompatibility early on, and qualify out with confidence. It takes an expanded skill set to understand when the fit is not right, and it takes integrity to step away.

Earning the Respect of Your Clients and Your Team

Technically skilled individuals who play the yin to the commercially skilled individual’s yang will be Sales’ biggest supporters. I know this from experience and have made some wonderful friendships as a result (I hope they’re reading this). Operating with more autonomy doesn’t mean you are trying to replace them, just helping to ease their burden. When you finally do request help, they know it will be an interesting challenge, an opportunity for them to impart some knowledge to someone keen to absorb it, or a chance to get creative with a solution because the standard approaches aren’t working.

How do they know all this? Because they know that when you ask, you’ve already ruled out the most common technical approaches. You’ve met them halfway.

I don’t presume to speak on behalf of my better qualified and more experienced technical counterparts, but I hope they’ll agree that they get the biggest kicks out of solving tough challenges, not answering the same questions over and over.

Yes, I feel it’s important for commercially skilled individuals to bridge the gap between commercial and technical skills. Selfishly, it’s for their own benefit for now, but it’ll soon become a necessity as the market-wide skills gap continues to grow.

Tom Spalding is a Strategic Account Manager at Effectual, Inc.

Cloud, All In or All Out?

Cloud, All In or All Out?

I recently spoke with a good friend of mine who is a Finance SVP with a publicly traded North American manufacturer.

He was very excited to tell me that his executive team had been strategizing over the past couple of quarters and was getting ready to publicly announce they would be moving all IT services to the cloud and would be 100% complete by Q4 2019. As our conversation progressed, I asked him why they had made this decision and he offered several reasons, some of which were more valid than others. Ultimately, with some prior knowledge of how large, diverse, and (in several significant areas) outdated their technology estate was, I asked what their IT teams thought of this initiative. I’m pretty sure my jaw actually dropped at his reply, “Outside of the CIO’s office, nobody knows yet.”

It’s not just all or nothing

There are a whole host of issues with the direction this executive team was headed, but for this blog I want to focus on one particularly poor decision that I see play out over and over with potential clients: they are either All In, or All Out, on cloud adoption.

I would argue that the vast majority who take either of those two positions have a fundamental misunderstanding of what “Cloud” is. Since the term “Cloud” has been co-opted by nearly every vendor to mean almost anything, this misunderstanding is not surprising. For the purpose of my blog today, I’ll be referencing the key components of cloud computing: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). At their core, each of these offerings supports the delivery of business application features to users.

At the end of the day, delivering business application features to users as effectively and efficiently as possible is, and should be, a primary concern of every executive. Executives hire very smart and talented teams of Architects, Analysts, Product/Program/Project Managers, Engineers and Administrators to accomplish this delivery. These teams work with their respective vendors to understand the nature of the various applications they must operate, then design, build and configure systems to support them. To support diverse applications, these teams need diverse tools, and Cloud (IaaS, PaaS & SaaS) is only one of them. As powerful as the cloud may be, it is not always ideal, or in some cases even suitable, for every situation or application.

All In, or All Out on cloud adoption?

I would argue that the vast majority who take either of those two positions have a fundamental misunderstanding of what “Cloud” is.

Imagine for a moment that you are hiring the best contractor in your area to build you a home. You’ve worked with them to determine the ideal design, select the desired finishes, and come up with a budget and timeline. At what point would you think it was in your best interest to dictate to this contractor, the best in the area, what tools they may or may not use to deliver your finished home? Wouldn’t it be better to let them use the best tool for each individual job that needs to be done? If this is true, why do we as executives think it is in our best interest to sit in board rooms and determine what tools our IT teams may or may not use, without understanding the nature of the applications they need to operate? The simple answer: it is not.

Rather than limiting the tools their teams can make use of, executives, as the primary visionaries and strategists of the enterprise, should develop guidelines for their teams. These guidelines help teams identify appropriate tools and strategies, ultimately aligning them with the overriding executive vision. In our practice, we break these guidelines into two primary sections: Outcome and Bias statements. Outcome statements generally speak to requirements related to availability, reliability, durability and usability of applications, while Bias statements are a list of prioritized preferences for how the application is delivered. For example, an Outcome statement might require that a customer-facing application tolerate the loss of a single data center, while a Bias statement might express a preference for SaaS over self-managed infrastructure when both satisfy the requirements. This construct provides for executive oversight, while also empowering teams to ultimately understand what is right and do it.

“Outside of the CIO’s office, nobody knows yet.”

There are many enterprises that have gone all in on cloud adoption and many who have avoided it altogether. In all my experience, I have yet to encounter an enterprise that has gone All In on the cloud without making significant compromises or undergoing supernatural gymnastics to get everything in (except for businesses that were born in the cloud). Likewise, I have yet to work with a business that has completely opted out of the cloud that couldn’t benefit from having some of their systems residing there.

“Outside of the CIO’s office, nobody knows yet.” As you can imagine, this jaw-dropping statement was not the end of our conversation. We discussed the nature of Cloud Services, and he invited me to consult a bit with several of his peers and superiors within the organization. I was able to provide a little of my perspective and insight into the path they were preparing to undertake. The jury is still out on what their plan for cloud adoption will be, but I have not seen them make a public announcement regarding a plan for 100% cloud adoption by EOY 2019.

IT Revolution, not Evolution

IT Revolution, not Evolution

In my previous blog, I stated:

21st Century IT is a revolution – not an evolution of what we have been doing for decades. The skills required to transform through this revolution are different than those required to operate the existing state, which are different still from those required to operate the new state.

A fundamental misunderstanding of this concept underlies almost every troubled or failing IT transformation project. It took me a few years of assisting enterprises in their cloud migrations to fully understand the ramifications of the difference between evolution and revolution as it applies to IT initiatives and how they impact business.

First, let’s consider an earlier technological revolution

The combined impact of the Personal Computer, the GUI and desktop publishing was revolutionary and transformative for the enterprise. Prior to this revolution, enterprises had specially trained computer operators to handle I/O functions for the mainframes and steno pools with typewriters for document creation. As a result of this revolution, there was a fundamental change to the way business was done. Employees, managers and executives alike were able, and quickly required, to generate their own documents, manage their own calendars and perform their own data I/O. Within a short time, everything changed. Successfully navigating this change required different IT staff with completely different skills. It ushered in and set the tone for the next 3-4 decades of IT practices.

Within a short time, everything changed

Successfully navigating this change required different IT staff with completely different skills. It ushered in and set the tone for the next 3-4 decades of IT practices.

It’s interesting to contrast this with the evolution of virtualization that really took hold at the turn of the century. While virtualization had a significant impact on IT departments and how compute power was provisioned in the data center, it did not significantly change what IT staffs did, or how they did it. The skill sets required after a move to virtualization were mostly the same as those required prior to the move, and the virtualization of data centers was primarily performed by existing IT staff. The impacts of this transformation effort were barely felt, if recognized at all, by those outside of IT.

I’ve spoken with countless enterprise leaders who view the transformation to 21st Century IT as nothing more than a data center migration – something that is normal to the ongoing operations of an IT department. While this can technically work, it’s unlikely to deliver the ideal outcomes promised and sought after. Ultimately, an IT estate moved in this manner will most likely cost more over the long term while negatively impacting security, availability and performance. The great news is that it doesn’t have to be this way!

If you embrace this transformation as a revolution, impacting all aspects of how your enterprise does business, you’ll be taking the first important step.

A few years ago, my team and I were brought into a large financial services company. They were looking to contain IT costs and as a result, investigating the cloud as a way to accomplish that goal – but this is not the start of the story. Over the previous several decades, this enterprise had become one of a couple “800lb Gorillas” in their particular vertical. They had thousands of employees, all the major customers, massive amounts of data and an annual IT spend nearing 9 digits. A few years prior to our involvement, a couple of start-up companies with few staff, no customers, no data and extremely limited IT budgets entered their vertical and started disrupting it. Initially, these start-ups were ignored by the enterprise, then they were mocked, and ultimately, as they began to take market share away, they were feared. The enterprise started playing defense, leading to the cost cutting exercise we were brought in to assist with.

If you embrace the transformation to 21st Century IT as a revolution, impacting all aspects of how your enterprise does business, you’ll be taking the first important step.

As we worked with this client and helped them understand the revolutionary nature of the cloud and the wide-ranging impacts that it could have on the way they do business, they began to reevaluate their posture with regards to the insurgent companies. With the transformation to new technologies came a culture change. These changes positively impacted customer interactions and the speed with which our client was able to respond to feature requests. Eventually, our “800lb Gorilla” client became the disruptive innovator in their vertical. Today, aside from the name on the building and the vertical they serve, they don’t look much like they did when we first met them. The way that they do business has fundamentally changed; across their entire enterprise.

Your enterprise may or may not face similar challenges, and you may not need or want such sweeping change, but regardless, understanding that your transformation is revolutionary, not evolutionary, will position you well for success. Don’t be surprised if embracing the revolution also helps address some of the issues you are already facing.

Just be aware that it isn’t easy or free — revolution never is.

How to Tell Ahead of Time If Your IT Transformation Project is Going to Fail

How to Tell Ahead of Time If Your IT Transformation Project is Going to Fail

Yes, I know. It’s a mouthful. Someone smarter than me will probably coin something more concise. And catchier. The phrase “21st Century IT” refers to the adoption and combination of technologies such as Continuous Integration/Continuous Delivery (CI/CD), Infrastructure as Code and Artificial Intelligence (AI); methodologies such as Agile and DevOps; and service models such as cloud hosting. Combined, these areas help IT organizations meet business requirements and deliver business value.

Learning from Failures

I’ve been working in the IT Ops space for more than 30 years. The last 10 of those years have been spent helping clients of all sizes understand and ultimately make the transformation to 21st century IT models and approaches. While I have a great winning percentage overall, I am experienced enough that my scorecard also reveals a few failed projects over the decade; that is, efforts that produced neither the desired business nor technical outcomes.

Success requires the support of key leadership

There is a fundamental misunderstanding at the executive level of the potential impacts of new concepts such as DevOps and CI/CD, and principles such as “Fail fast, fix fast.”

During my journey, I’ve been able to learn from my own mistakes as well as from those of others – I’ve observed many projects without being involved. And, while every project is unique in its own way, I’ve recognized that there are a few hallmarks that almost always foretell failure.

In this week’s blog, I’ll highlight several of the most prevalent warning signs from a high level. Then, in subsequent posts over the coming months, I’ll go into progressively more depth. I’ll write primarily from a business perspective in this series because most transformation projects that will ultimately fail can be identified before the first engineer is assigned. This comes from a fundamental misunderstanding – at the executive level – of the potential impacts of new concepts such as Cloud Adoption, DevOps, and CI/CD, and principles such as “Fail fast, fix fast.”

A quick note before I go into this week’s list: your particular cloud initiative is not necessarily doomed to failure because one or two of the factors described below apply. That being said, the odds can quickly tip in favor of failure as more and more applicable issues appear.

Your particular cloud initiative is not necessarily doomed to failure because one or two of the factors described below apply.

The List of Dreaded Pitfalls

1. You did not start your transformation with an agnostic application or business feature-based assessment of your current IT estate.

Skipping this assessment – or performing it poorly – will, more than any other item on this list, lead to budget and timeline overruns, failed deployments and ultimately unhappy internal and external clients. A properly performed assessment should answer the following questions, at a minimum (a sketch of how the per-feature answers might be captured follows the list):

  • What are the business requirements of my IT estate?
  • Why should I transform my IT/What do I want out of it?
  • What is my current application/business feature inventory?
  • For each business feature, what is (or are) the:
    • Actual infrastructure requirements
    • Currently deployed infrastructure
    • Licensing requirements
    • Cost of operation per month
    • Business Continuity/Disaster Recovery posture
    • Actual cost (in lost productivity/revenue) of availability per hour and workday
    • Governance model
    • Security/compliance requirements
    • Scalability requirements
    • Associated development, Quality Assurance, Configuration and sandbox environments
    • Integrated applications, and
    • Ideal post-transformation destination (e.g., SaaS, Cloud, AWS/Azure/GCP, Physical/Virtual Infrastructure, or other).
  • Based on the inventory above, what functionality or performance-related issues need to be proven out through Proof of Concept efforts before final decisions are made?
  • What is the appropriate high-level budget/timeline required to complete this work?

It’s easy to get lulled into the complacent thought that you’ve been operating your infrastructure and applications for a long time, so you know them very well. In practice, however, the knowledge required to operate is not the same as the knowledge required to transform.
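As referenced above, here is one hypothetical way to record the per-feature answers so they can be compared and reported on consistently. The field names and example values below are illustrative placeholders only; your assessment may track more or different attributes.

  # One possible structure for recording a per-feature assessment (illustrative only).
  from dataclasses import dataclass, field

  @dataclass
  class BusinessFeatureAssessment:
      name: str
      infrastructure_required: str                 # what the feature actually needs
      infrastructure_deployed: str                 # what it runs on today
      licensing: list[str] = field(default_factory=list)
      monthly_operating_cost: float = 0.0
      bcdr_posture: str = ""                       # Business Continuity / Disaster Recovery
      downtime_cost_per_hour: float = 0.0          # lost productivity/revenue
      governance_model: str = ""
      security_compliance: list[str] = field(default_factory=list)
      scalability_requirements: str = ""
      associated_environments: list[str] = field(default_factory=list)  # dev, QA, config, sandbox
      integrated_applications: list[str] = field(default_factory=list)
      target_destination: str = ""                 # e.g., SaaS, AWS/Azure/GCP, physical/virtual

  # Hypothetical example record:
  payments = BusinessFeatureAssessment(
      name="Payment processing",
      infrastructure_required="2-node app tier plus a managed SQL database",
      infrastructure_deployed="3 physical hosts on SAN-backed storage",
      licensing=["RDBMS per-core"],
      monthly_operating_cost=12_000.0,
      downtime_cost_per_hour=50_000.0,
      target_destination="AWS",
  )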

That last point leads to the next red flag:

2. You consider the transformation effort to be an evolution of your previous IT models, practices and tools.

21st century IT is a revolution – not an evolution of what we have been doing for decades. The skills required to transform through this revolution are different than those required to operate the existing state, which are different still from those required to operate the new state. Many efforts have failed because the executives responsible did not understand that a cloud migration does not and should not resemble a data center migration – regardless of what the various migration tool and cloud hosting partners will tell you.

3. Without first performing an assessment to evaluate the skills or effort required, you and your senior IT staff dictated the timeline, budget and technology decisions.

This seems so absolutely ridiculous that it can’t possibly be true. Could you imagine going to a heart surgeon and demanding a transplant, without understanding the impacts or even if the transplant was needed? Of course not. But somehow enterprises do this every day with the beating heart of their businesses – a.k.a. IT – without giving it a second thought.

4. Prior to starting your transformation, you didn’t have a complete understanding of the financial and operational models behind 21st century IT in the enterprise.

Yes, you understand OpEx vs CapEx, and are maybe even able to make a solid “Net Present Value of Cash” argument regarding your future IT directions. But have you considered Provisioning vs. Capacity planning models? Do you understand the cost and value related to Infrastructure as Code? Can you articulate the risks inherent in Best Efforts Availability as opposed to Five Nines? How will you manage costs in a world where a single button click can result in thousands of dollars in Monthly Recurring Costs?
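To make that last question concrete, here is a rough sketch of the monthly-recurring-cost math. The hourly rate is a hypothetical placeholder, not a quote from any provider:

  # Back-of-the-envelope monthly recurring cost for an always-on cloud instance.
  HOURS_PER_MONTH = 730        # ~8,760 hours per year / 12 months
  hourly_rate = 4.00           # hypothetical rate for a large instance; not a real price

  always_on = hourly_rate * HOURS_PER_MONTH
  print(f"Always on: ${always_on:,.0f}/month")                 # ~$2,920/month

  # The same instance scheduled for business hours only (10 hours x 22 workdays):
  business_hours = hourly_rate * 10 * 22
  print(f"Business hours only: ${business_hours:,.0f}/month")  # ~$880/month

One click can commit you to the first number; a provisioning model that understands the workload can get you the second.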

5. You view the transformation as a strictly technical effort.

This is a big one. To make IT more efficient through transformation, you have to address two areas: The How and the What. Assigning the work to your IT organization addresses the How. Involving business resources drives the definition of the What. Without both groups working together, operational efficiencies won’t be realized, budgets will be blown, and opportunities lost.

If some of the issues listed above are factors in your current transformation effort, it’s never too late to try to resolve them. In the coming months, I’ll expand on the thoughts above and share some anonymous war stories while providing pointers on how to avoid pitfalls in the business of 21st century IT transformation.

21st century IT is a revolution – not an evolution

The skills required to transform through this revolution are different than those required to operate the existing state, which are different still from those required to operate the new state.