How A Cloud Partner Helps You Maximize AWS Cloud Services

Working with an experienced cloud partner can fill in knowledge gaps so you get the most out of the latest cloud services

Cloud services have been widely adopted for years, which is long enough for users to gain confidence in their understanding of the technology. Yet as the cloud continues to grow and develop, customer knowledge does not always grow proportionately. When users become overconfident, they can unknowingly overlook new Amazon Web Services (AWS) technologies and services that could significantly improve business outcomes.

What cloud customers are overlooking

  • Control is not necessarily security
    In the early days of the cloud, many companies were reluctant to offload everything to cloud services due to security concerns. Today, many CTOs and IT Directors are still unsure how security translates to the cloud and remain hesitant about giving up control.

    AWS provides a proven, secure cloud platform for migrating, building, and managing workloads. However, it takes knowledge and expertise to take individual services and architect them into a solution that maintains, or even heightens, security. A partner well-versed in AWS services and advanced cloud technologies can identify and deploy tools and services that strengthen your security posture and add value to your business.
  • Keeping up with cloud innovation is an investment in continual learning
    This can be a tall order for organizations with limited internal resources or cloud knowledge. Partnering with cloud experts who stay constantly informed about new AWS services – and know how to implement them – gives you immediate access to cloud innovation. It also frees up your developers and engineers to focus on core initiatives.
  • Aligning business and IT delivers better results
    Internal teams that pitch a cloud-forward strategy often face hesitancy from business leaders. This is because executives have historically made decisions about how to allocate and manage IT resources, leaving developers to work within the parameters they are given. However, involving solutions architects and cloud engineers in decision-making brings a crucial technical perspective that uncovers additional options with better results.

    Bridging this gap is a matter of translation, as what makes sense to an in-house developer might seem like jargon to executives in other business units. Because our engineers understand both business and technology, we can bring clarity to modernization initiatives by acting as a translator between business and IT – preventing major communication and technical headaches down the line.  

The benefits of pairing managed and professional services

If your cloud partner is capable of handling larger professional services projects such as migrations, app development, and modernization as well as the ongoing maintenance of managed services, you will be far more successful at optimizing resources, improving security, reducing stress, and realizing cost savings.

There are several advantages of pairing professional and managed services:

  • Reduce operational overhead and optimize workloads
    Allowing a partner to directly manage more systems reduces your operational overhead and optimizes workloads. This helps ensure your business does not get bogged down with redundant operations or pay for more computing power than it truly needs.

    For instance, you may be paying high colocation costs to house everything in your data center. By engaging a partner that offers both professional and managed services, you can move a workload safely from on-premises to the cloud with the same functionality, make it more secure, maintain compliance, and have confidence it is being optimized for performance and cost.
  • Upgrade and modernize more efficiently
    Having professional services and managed services under one roof makes it easier and more efficient to upgrade or modernize. Changes to infrastructure go much smoother with a trusted cloud partner at the wheel who has access to customer systems. Without access, the partner has to navigate the back and forth between client-controlled systems and new professional services before any real progress can take place.

    The goal is not to scrap an entire in-house system, but to develop a smooth transition where managed and professional services work in harmony. With the right context, and the right cloud partner, you can translate the ROI of pairing professional services and managed services so your executives are on board with cost-saving proposals and your developers have a seat at the table.

In summary, you can maximize the benefits of cloud services by engaging a partner with the technical expertise, business experience, and deep knowledge of AWS services to support your modernization efforts.

Connect with our Modernization Engineers™ to find out how we can help you unlock the full power of the cloud.

Jeff Finley is a Senior Cloud Architect at Effectual, Inc. 

Modernizing with VMware Cloud on AWS

VMware Cloud on AWS transforms modernization into a steady, consistent marathon you can win on your first attempt

Modernization is something most businesses want to do, but it often takes a backseat to other business priorities. When leaders weigh short-term quick wins through application migrations against longer-term modernization initiatives, the appeal of speed frequently wins the decision-making process.

However, much like training for and running a marathon, successful modernization requires gradual activity over a long period of time, as well as a high level of persistence – you can’t train for a marathon by running 400-meter sprints six times a day for three months before the start. If you do try to make IT transformation a sprint, you may fast-track the wrong priorities and end up with high costs, stressed staff, and poor results.

Expanding your options

Let’s say your organization has only a six-month window to modernize due to an upcoming data center contract expiration. You believe it will take you a full year to modernize 100% of your workloads – potentially faster with accelerators like an influx of capital or a new CIO joining the company. What do you do?

In this scenario, you usually have two choices:

  • Attempt to “sprint” the modernization and hope you do not misjudge priorities
  • Reluctantly sign the renewal and ask to extend the timeline to a year to give yourself the best chance to succeed

Fortunately, VMware Cloud (VMC) on AWS offers a far better third choice:

  • Do not sign the renewal, and instead remove cost from the business by rapidly and safely moving your workloads to a like-for-like technical home within the six-month window

A rapid migration with long-term results

Recently, one of our global enterprise customers had to evacuate a data center in three months. Their lease renewal was coming up and they did not want the long-term commitment. With the exit date set in stone, VMware Cloud on AWS was the best option for hitting their goal of cancelling the data center contract. 

With the flexibility needed to analyze each workload to the depth required, they successfully evacuated the data center in time and secured an additional 12 months to migrate over to native AWS. As a result, the company saved $1.8 million over a span of 18 months.

VMware Cloud on AWS allows you to migrate applications quickly with very low risk, with the opportunity to modernize over a longer period of time.

Analyzing the best execution venue for workloads

Ideally, you should be analyzing the best execution venue for your workloads on a regular basis. However, most IT professionals only consider the technologies currently available. While the venue may never change, it is important to be both aspirational and pragmatic about this analysis to ensure you are exploring all possibilities. 

Taking the time to evaluate your best execution venue can help you find the lowest total cost of ownership (TCO) with the maximum business value. For example, your analysis may reveal that VMware technology that is reliable, performant, and well-supported is a better choice than microservices with event-driven compute and orchestrated containers.

In the data center renewal scenario above, if 100% of the applications are running in VMware Cloud on AWS and it is possible to perfectly map the utilization requirement, you may find that 75% are eligible for a complete refactoring. This would dramatically increase performance and reduce cost, leaving the remaining 25% of applications running as a dependable baseline. 

Common roadblocks to modernizing applications

During a marathon, there’s a big difference between stubbing your toe and falling down. A small misstep won’t prevent you from continuing on. But if you hit the ground, picking yourself back up and staying in the race is a lot harder. The same is true with a modernization project.

Working with an experienced partner can keep you running smoothly along your path to modernization. Here are three common pitfalls to watch for:

1: Out of Scope Responsibilities

Have you ever heard someone in IT say, “That’s not my responsibility”? 

Companies that try to tackle modernization on their own end up managing numerous vendors across departments. When an issue arises, vendors can often only confirm whether the issue is their responsibility. If it falls out of their scope, it is then up to the business to figure out how to solve the bigger problem.

This comes up often on troubleshooting calls. Someone will inevitably claim they can’t help because it is outside of their scope (“That’s a networking issue, and I’m a database guy so you’ll have to talk with someone else”). The statement may be factual, but it also reveals whether a vendor is a true partner to your organization – or simply passing the buck.

Make sure you find a trusted partner that strives towards business outcomes, and will assume responsibility as broadly as needed to get your issues resolved.

2: Forgetting about the Use Case

Organizations often start a migration or transformation by drilling down to the lowest common denominator of CPU/RAM/Storage. However, beginning modernization with an understanding of what your users require will result in a solution that meets their needs. 

In addition, if you are only considering CPU/RAM/Storage and then try to pick up your application and put it on top of the cloud, you may miss critical business considerations such as licensing and networking. Remember, not everything maps one-to-one from on-premises to the cloud. 

Take time to evaluate what your workload truly needs before starting to modernize. This will ensure a smarter architecture and a better user experience.

3: Underestimating the Benefits of VMware Cloud on AWS

VMware Cloud on AWS is an essential strategic solution for any organization utilizing VMware and embracing modernization. You may want to rush to the latest cloud-native technologies, but gaining access to innovation requires thoughtful planning and execution. 

VMware Cloud on AWS can accelerate your digital transformation, making you more efficient, more secure, and more productive in the long run. 

Slow and steady wins the modernization race, and having the right team on your side can ensure you get across the finish line. Learn more about VMware Cloud on AWS.

Tom Spalding is Chief Growth Officer at Effectual, Inc. 

How Private NAT Gateways Make for Easier Designs and Faster Deployments

NAT Gateways have historically been used to protect resources deployed into private subnets in virtual private clouds (VPCs). If resources deployed on a private subnet in a VPC need to access information outside the VPC (on the internet or on premises) and you want to block incoming connections to those resources, you’d use a NAT Gateway. The NAT Gateway provides access to get what you need and allows that traffic to return, but still won’t let something that originated from the outside get in. 

The core functionality of a NAT Gateway is allowing that one-way request origination flow.

Earlier this month, AWS announced that you can now launch NAT Gateways in your Amazon VPC without associating an Internet Gateway with your VPC. The private NAT Gateway allows you to route directly to Virtual Private Gateways or Transit Gateways, without an Internet Gateway in the path, for resources that need to reach internal tools in a data center, another VPC, or somewhere else on-premises.

This might seem like a modest bit of news, but it will lead to improved performance on both the business and engineering levels and demonstrates the constant innovation AWS provides its customers. 

Innovation continues at the fundamental level 

Continuous innovation has always been at the core of how AWS approaches problem solving. This is true for the bleeding edge of technology and for the fundamental building blocks of well-established disciplines, such as networking. The idea of Network Address Translation (NAT) isn’t anything new; it’s been a core building block for years. 

In the past, though, you would have done it on your own server, deploying, configuring, and maintaining a NAT instance. AWS brought the NAT Gateway into the fold; the differentiator being that this was a managed service offering that lets you use infrastructure as code or simply make a few clicks in your console to attach a NAT Gateway to a private subnet in a VPC, so you don’t have to worry about the underlying infrastructure. There are no third-party tools or complex configuration to worry about. 

With the private NAT Gateway, AWS is instilling a new feature into something that’s fundamental, making it easier for companies and individuals to be more efficient and productive. 
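
To make that concrete, here is a minimal sketch of launching a private NAT Gateway with Python and boto3; the subnet ID and region are placeholders, not values from a real deployment:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a NAT Gateway with private connectivity: no Internet Gateway
    # or Elastic IP is required, unlike the classic public variant.
    response = ec2.create_nat_gateway(
        SubnetId="subnet-0123456789abcdef0",  # placeholder private subnet
        ConnectivityType="private",
        TagSpecifications=[{
            "ResourceType": "natgateway",
            "Tags": [{"Key": "Name", "Value": "private-nat-example"}],
        }],
    )
    nat_gateway_id = response["NatGateway"]["NatGatewayId"]

    # Wait until the gateway is available before routing traffic through it.
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gateway_id])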

We see the same type of excitement and attention to detail both with major launches and when introducing new ways to use a product or service that already exists. It’s that combination of innovation for fundamental core offerings to make them easier to use plus bleeding edge innovation that really highlights the depth of expertise across AWS and its partners. 

Learn more about AWS Private NAT Gateways here

A boost in efficiency

Before the private NAT Gateway, every NAT Gateway had a hard dependency on an Internet Gateway being attached to the VPC for egress. Communication to on-premises resources via NAT Gateway required routing through an Internet Gateway or some other networking construct, which added a layer of complexity to protecting those private resources in a private VPC. The real benefit of this feature is the ease of protecting resources that need to reach out. 

At its core, the private NAT Gateway is simplifying that outgoing request pattern — and it leads to a boost in efficiency. 

For businesses, the NAT Gateway allows a consistent managed service offering from AWS to protect those private resources that need outbound connectivity from a private subnet in a VPC. Prior to the private NAT Gateway, you would have needed to solve for that idea using a third-party tool or a more complex networking architecture. 

Security departments now have a more secure pattern that is less prone to misconfiguration, since an Internet Gateway is no longer a dependency. This makes it easier to understand how the organization applies NAT Gateways and how operations teams manage them. Teams can standardize on a private NAT Gateway approach, reproducing the networking path for these outgoing requests consistently (see the sketch below).
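
As a rough illustration of that standardized path, the route entries below send on-premises-bound traffic from a private subnet through the private NAT Gateway and then on to a Transit Gateway; all IDs and the 10.0.0.0/8 range are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # In the workload subnet's route table, send on-premises-bound
    # traffic to the private NAT Gateway.
    ec2.create_route(
        RouteTableId="rtb-0aaa1111bbbb2222c",
        DestinationCidrBlock="10.0.0.0/8",
        NatGatewayId="nat-0123456789abcdef0",
    )

    # In the NAT Gateway subnet's route table, forward the same range
    # to the Transit Gateway that connects back to the data center.
    ec2.create_route(
        RouteTableId="rtb-0ddd3333eeee4444f",
        DestinationCidrBlock="10.0.0.0/8",
        TransitGatewayId="tgw-0123456789abcdef0",
    )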

For individual engineers, a private NAT Gateway simplifies design and deployment because of its inherent ease — a few clicks in your console or lines in infrastructure as code, rather than relying on a more cumbersome third-party tool or a more complex configuration. AWS is extending the functionality of the managed service part of a NAT Gateway to the specific use case of handling private subnet outgoing traffic. This addition makes design easier, it makes deployment faster, and it makes the entire subject more repeatable, because you’re just consuming that managed NAT Gateway service from AWS.

Why this is worth a look

As an engineer, I certainly understand the mindset of wanting to minimize complexity. Enterprise users have NAT Gateways deployed with a dependency on an Internet Gateway and likely have more complex routing solutions in place to protect against unintended incoming requests via that Internet Gateway. Those solutions might be working just fine, and that’s great.  

But from my vantage point, I strongly encourage you to take another look at your egress-only internet gateway and NAT Gateway architecture for private subnets. You could be missing an opportunity to greatly streamline how you work.

At worst, you can simplify how you use your “egress-only” communications. At best, you’ll eliminate a third-party tool and save money while freeing up more of your individual time.

That alone makes it worth taking a second look at the way you’re operating. We should be regularly evaluating our deployments anyway, but it especially applies to networking complexity and simplification. 

I look forward to the improved ease of use for my clients with private NAT Gateways, and am confident you’ll find a similar model of success with your deployments.   

The Many Use Cases of App Modernization

At some point, every company will face the need to adapt and modernize their existing applications. Some may struggle with security issues or meeting compliance requirements. Others may need to resolve challenges with reliability and performance. Or perhaps a company has to scale rapidly to respond to market demand.

While each use case requires a different approach, application modernization services can provide the expertise to overcome those business challenges and build a path for future innovation.

Here’s a look at four unique company experiences with app modernization, and how adopting modern cloud solutions had a positive impact on each business.

Scaling SaaS-based Solutions 

If you’ve tried to buy a house in the past decade, you know it’s a fast-moving process. You’re looking at a home at the same time as several other people, and effective communication with your realtor or real estate agent is a must.

About Time Tours, a Pacific Northwest startup, seeks to redefine the way the real estate industry plans, organizes, and coordinates those home tours. The company saw ways they could streamline scheduling and communication while effectively capturing feedback from homebuyers and agents. With home prices at an all-time high, the company realized it was about time to get to market as quickly as possible to capitalize on strong demand.

Though they had solid market expertise, About Time Tours only had a general idea for their business and a basic app concept. They turned to us for guidance in developing a SaaS-based solution that could scale to a production-ready launch. After analyzing customer pain points to enhance the user experience, we defined a minimum viable solution (MVS) using the AWS SaaS Enablement Framework, which evaluates both security compliance and cost models.

The app was also built entirely on serverless services, which scale automatically with increases in web traffic. Now, as About Time Tours scales, they know they’ll meet their business objectives and deliver a high-quality customer experience – and with their new pay-as-you-go model, the business is set for sustainability and growth.

See more on About Time Tours here

Refactoring for increased security and reliability

After spending nine months gathering user feedback, Warm Welcome, a SaaS-based startup delivering highly personalized email video messages to support customer onboarding and retention, knew the features it wanted. It had developed a concise pricing model and fleshed out its go-to-market strategy. However, the Proof of Concept (POC) phase introduced a few challenges, particularly around reliability and security. For the solution to be production-ready, the company needed to address those challenges.

AWS Well-Architected allows cloud architects to build secure, high-performing, and efficient infrastructure for their applications and workloads. For companies in the startup phase, like Warm Welcome, strong knowledge of the Well-Architected Framework can be a major differentiator when implementing customer feedback.

We worked with Warm Welcome to conduct a Well-Architected Framework Review and better refactor the company’s POC. The end result was two-fold. First, we set up a continuous integration/continuous delivery (CI/CD) pipeline with parallel environments to increase agility and lower risks. Second, we built a more secure environment that can easily scale using EC2, Elastic Beanstalk, and Auto Scaling groups. 
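
As a hypothetical sketch of the scaling piece, an Auto Scaling group with a target-tracking policy like the one below keeps capacity matched to load; the names, subnets, and thresholds are illustrative, not Warm Welcome’s actual configuration:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Keep at least two instances across subnets, growing to six under load.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="webapp-asg-example",
        LaunchTemplate={"LaunchTemplateName": "webapp-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-0aaa1111bbbb2222c,subnet-0ddd3333eeee4444f",
    )

    # Add or remove instances automatically to hold average CPU near 50%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="webapp-asg-example",
        PolicyName="target-cpu-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        },
    )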

By becoming more comfortable with the Well-Architected Framework, Warm Welcome can better implement feedback and provide a boost for companies onboarding and retaining new hires—now in more secure and reliable environments.

Learn more about Warm Welcome here

Streamlining DevOps for rapid code deployment

Verdant Web Technologies provides management software solutions for facility administrators and staff to track, access, and update critical Environmental Health & Safety compliance and sampling information. As their company and product both matured, Verdant was running into an issue many companies experience: inefficiencies in product changes and upgrades.

The company had six different code bases unique to each client and was pushing them out manually across more than 10 web servers, with its SQL scripts running on multiple databases. That process quickly overwhelmed Verdant’s team and limited its ability to write new features.

Tightening up these inefficiencies was an important step. Using a build server and rapid deployment tools, we automated the development process and streamlined DevOps. Further, migrating to the AWS Cloud created a faster, more reliable solution that also improved security and lowered costs, both in hardware and in customer acquisition.

Once on AWS, Verdant could implement additional solutions to improve their customer experience. For example, Elastic Beanstalk supports continuous development and innovation by managing multiple application environments for the company’s development/testing/release cycle. And because of this new streamlined DevOps, Verdant can iterate more quickly, keeping customers at their happiest. 
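
For a sense of how that works, Elastic Beanstalk can stand up a parallel environment per stage of the release cycle with one API call; this sketch uses a hypothetical application name and version, and the solution stack must match one returned by list_available_solution_stacks:

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # Launch a separate test environment alongside production, pointing
    # at an already-uploaded application version.
    eb.create_environment(
        ApplicationName="verdant-portal",       # hypothetical application
        EnvironmentName="verdant-portal-test",
        VersionLabel="v1.4.2",                  # hypothetical version label
        SolutionStackName="64bit Amazon Linux 2 v3.5.0 running Python 3.8",
    )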

See more on Verdant here

Automatically scale your production environment

MAK Grills manufactures precision BBQ grills and smokers and offers owners full operational control of their grills through remote devices. No more standing by the grill during a gathering at your home – you can get into a back-and-forth discussion about the latest show you’ve binge watched or how crummy (or great) the nearest pro sports team is while still ensuring delicious meals for everyone.

The initial launch of the app ran successfully but, due to an outsourced firmware provider, the app began experiencing performance issues as MAK Grills grew and started serving more customers.

These types of growing pains are common for companies on legacy systems, so the first step was to stabilize the production environment. We analyzed MAK Grills’ Microsoft server to identify what was causing the performance issues. From there, we refactored the architecture based on new performance requirements using Amazon RDS for SQL Server, a Jenkins build server, and Amazon EC2 Auto Scaling groups.

Additionally, we moved logs from their IIS server to Amazon CloudWatch Logs, which enables MAK to review logs for issues quickly and effectively at minimal added cost.
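
Once those logs are centralized, a quick search like the sketch below surfaces problem requests; the log group name and filter pattern are hypothetical:

    import boto3

    logs = boto3.client("logs")

    # Pull recent IIS entries containing HTTP 500s from the log group.
    response = logs.filter_log_events(
        logGroupName="/mak-grills/iis/access",  # hypothetical log group
        filterPattern='"500"',
        limit=50,
    )
    for event in response["events"]:
        print(event["timestamp"], event["message"])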

With these changes in place, MAK Grills can now more easily handle its expanding customer base. The production environment is built for streamlining operations, so the company can focus on delivering tasty results for customers.

Learn more about MAK Grills here

App modernization looks different for every company. Find out how we can help you design and develop new solutions using today’s most advanced cloud services.

Leveraging Amazon EC2 F1 Instances for Development and Red Teaming in DARPA’s First-Ever Bug Bounty Program

This past year, Effectual’s Modernization Engineers partnered with specialized R&D firm Galois to support the launch of DARPA’s first public bug bounty program – Finding Exploits to Thwart Tampering (FETT). The project represents a unique use case showcasing Effectual’s application expertise, and was approved this week to be featured on the AWS Partner Network (APN) Blog.

Authored by Effectual Cloud Architect Kurt Hopfer, the blog will reach both AWS customers and technologists interested in learning how to solve complex technical challenges and accelerate innovation using AWS services.

Read the full post on the AWS APN Blog

In 2017, the Defense Advanced Research Projects Agency (DARPA) engaged research and development firm Galois to lead the BESSPIN project (Balancing Evaluation of System Security Properties with Industrial Needs) as part of its System Security Integrated through Hardware and Firmware (SSITH) program.

The objective was to develop tools and techniques to measure the effectiveness of SSITH hardware security architectures, as well as to establish a set of “baseline” Government Furnished Equipment (GFE) systems-on-chip (SoCs) without hardware security enhancements.

While Galois’s initial work on BESSPIN was carried out entirely using on-premises FPGA resources, the pain points of scaling out to a secure, widely-available bug bounty program soon emerged.

It was clear that researchers needed to be able to stress test SSITH hardware platforms without having to acquire their own dedicated hardware and infrastructure. Galois leveraged Amazon EC2 F1 instances to scale infrastructure, increase efficiencies, and accelerate FPGA development.

The company then engaged AWS Premier Consulting Partner Effectual to ensure a secure and reliable AWS environment, as well as to develop a serverless web application that allowed click-button FPGA SoC provisioning to red team researchers for the different processor variants.

The result was DARPA’s first public bug bounty program—Finding Exploits to Thwart Tampering (FETT).

Learn more →

Empowering Marketers and Driving Customer Engagement with Amazon Pinpoint

In an increasingly virtual world of remote work, online learning, and digital interfacing, successful customer engagement can differentiate you from competitors and provide deeply valuable insights into the features and innovations important to your users. A well-designed, well-managed user experience not only helps you gain market share, but also uncovers new revenue opportunities to grow your business.

At Effectual, we begin projects with an in-depth discovery process that includes persona development, customer journey mapping, user stories, and UX research to design solutions with engaging, meaningful user experiences. Post-launch, your ability to capture, iterate, and respond to user feedback is just as essential to your success.

In our experience, many SaaS-based companies simply miss this opportunity to stay engaged with their customers. Reasons for this include the complexity and cost of designing, deploying, and managing customized marketing campaigns across multiple channels, as well as the lack of real-time data analytics to inform them. The result is a tidal wave of generic emails, poorly-timed push notifications, and failed initiatives that impact customer retention and engagement.

Amazon Pinpoint is a scalable outbound and inbound marketing communications service that addresses these challenges and empowers marketers to engage with customers throughout their lifecycle. The service provides data insights and a marketing dashboard inside the Amazon Web Services (AWS) admin console for creating and managing customized communications, leveraging automation, data analytics, filters, and integrations with other AWS products and third-party solutions.
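
As a small, hypothetical illustration of what that looks like in practice, this boto3 sketch sends a personalized transactional email through an existing Pinpoint project (the project ID and addresses are placeholders):

    import boto3

    pinpoint = boto3.client("pinpoint")

    # Send a one-off transactional email through a Pinpoint project.
    pinpoint.send_messages(
        ApplicationId="EXAMPLEPROJECTID",  # placeholder Pinpoint project ID
        MessageRequest={
            "Addresses": {"customer@example.com": {"ChannelType": "EMAIL"}},
            "MessageConfiguration": {
                "EmailMessage": {
                    "FromAddress": "hello@example.com",
                    "SimpleEmail": {
                        "Subject": {"Data": "Welcome aboard"},
                        "HtmlPart": {"Data": "<p>Thanks for signing up!</p>"},
                    },
                }
            },
        },
    )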

Easy to use and scale

  • Manage campaigns from a user-friendly marketing dashboard
  • Scale reliably in a secure AWS environment

Targeted customer groups

  • Segment audiences from mobile and web application data or an existing customer list

Customized messaging across email, SMS, push notifications

  • Personalize content to engage customers using static and dynamic attributes
  • Create customer journeys that automate multi-step campaigns, bringing in endpoints from your app, API or directly from a CSV
  • Engage customers with targeted emails and push notifications from the AWS admin portal using rich text editor and customizable templates

Built-in analytics

  • Set up customer endpoints by user email, phone number, or user ID to track user behavior within your app (see the sketch after this list)
  • Use real-time web analytics and live data streams to capture immediate feedback
  • Measure campaign data and delivery results against business goals
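
A minimal sketch of registering such an endpoint, again with placeholder IDs and a hypothetical attribute:

    import boto3

    pinpoint = boto3.client("pinpoint")

    # Create or update an endpoint so campaigns and analytics can
    # target and track this user across channels.
    pinpoint.update_endpoint(
        ApplicationId="EXAMPLEPROJECTID",   # placeholder Pinpoint project ID
        EndpointId="user-42-email",         # hypothetical endpoint ID
        EndpointRequest={
            "ChannelType": "EMAIL",
            "Address": "customer@example.com",
            "User": {"UserId": "user-42"},
            "Attributes": {"Plan": ["trial"]},  # static attribute for segments
        },
    )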

Integrations with other AWS services and third-party solutions

For marketers, Amazon Pinpoint is a powerful tool for improving digital engagement – particularly when integrated with other AWS services that utilize machine learning and live stream analytics. Organizations that invest in designing engaging user experiences for their solutions will only benefit from continually improving and innovating them.

Have an idea or project to discuss? Contact us to learn more about using Amazon Pinpoint to improve your customer engagement.

App Modernization: Strategic Leverage for Managing Rapid Change

The last few months of the COVID crisis have made the cost of delaying modernization even more evident, dramatically exposing security faults and the limitations of outdated monolithic applications and costly on-premises infrastructure. This lack of modernization is preventing many businesses and agencies from adapting to new economic realities and finding a clear path forward.

Applications architected for the cloud provide flexibility to address scalability and performance challenges and to explore new opportunities without requiring heavy investment.

Whether improving efficiencies with a backend process or creating business value with a new customer-facing app, modernizing your IT solutions helps you respond quickly to changing conditions, reduce your compliance risk, and optimize costs to match your needs. Applications that are already architected to take advantage of the cloud also provide flexibility to address scalability and performance challenges as well as to explore new opportunities without disrupting budgets and requiring heavy investment.

First, what defines technologies that are NOT modern?

  • Inflexible monolithic architectures
  • Inability to scale up or down with changes in demand
  • Security only implemented on the outside layer, not at the component layer
  • Costly on-premises infrastructure
  • Legacy hardware burdens
  • Waterfall development approaches

Maintaining legacy technologies is more expensive than modernizing them

Some of the most striking examples of the complexity, costs, and failures associated with legacy technologies have recently been seen in the public sector. In fact, some state unemployment systems have failed to handle the overwhelming increase in traffic and demand, impacting those in greatest need of assistance. Some in the public sector are already taking measures. Beth Cappello, acting CIO of the US Department of Homeland Security, recently stated that had her predecessors not taken steps to modernize their infrastructure and adopt cloud technologies, the ability for DHS personnel to remain connected during the pandemic would have been severely impacted.

Many government applications run on 30+ year-old mainframe computers using an antiquated programming language, creating a desperate need for COBOL developers to fix the crippled technologies. What the situation reveals is the dire need to replatform, refactor, and rearchitect these environments to take advantage of the scalability, reliability, and performance of the cloud.

Benefits of modernization:

  • Security by design
  • Resilient microservices architecture
  • Automated CI/CD pipeline
  • Infrastructure as code
  • Rapid development, increased pace of innovation
  • Better response to customer feedback and market demands
  • Flexible, pay-as-you-go pricing models
  • Automated DevOps processes
  • Scalable managed services (e.g., serverless)
  • In-depth analytics and data insights

The realities of preparing for the unknown

As a result of shelter-in-place orders since early March, we have seen both the success of customers who have modernized as well as the struggles of those still in the process of migrating to the cloud.

Food for All is a customer with a farm-to-table grocery app that experienced a 400x increase in revenue as people rushed to sign up for their service during the first few weeks of the pandemic. Because we had already built their architecture for the Amazon Web Services (AWS) cloud, the company’s technology environment was able to scale easily to meet demand. In addition, they have a reliable DevOps environment that allowed them to immediately onboard more developers to begin building and publishing new features based on user feedback.

Unfortunately, other customers have not been able to adapt as quickly.

When one of our retail clients lost a large number of customers in the wake of COVID, they needed help scaling down their environment as rapidly as possible to cut their costs on AWS. However, the inherited architecture had been written almost 10 years ago, making it expensive and painfully time-consuming to implement adjustments or changes. As a result, the company is currently weighing whether to turn off their app and lose revenue or invest in modernizing it to recover their customers.

In fact, many early cloud adopters have not revisited their initial architectures to ensure they are taking advantage of the hundreds of new features and services released by AWS each year

For another large enterprise customer, the need to reduce technology costs meant laying off a third of their payroll. Though our team is helping them make progress on refactoring their AWS workloads, they were still unable to scale down 90% of their applications in time to avoid such a difficult decision. The situation has significantly increased their urgency to modernize.

The need for a cloud-first modernization service provider

With AWS now 14 years old, it is important to realize that modernization is just as important to early adopters as it is for the public sector’s legacy workloads. In fact, many early cloud adopters have not revisited their initial architectures to ensure they are taking advantage of the hundreds of new features and services released by AWS each year (during Andy Jassy’s 2019 re:Invent keynote alone, he announced 30 new capabilities in 3 hours). For these reasons, and many more, our Modernization Engineers help customers make regular assessments of their cloud infrastructure and workloads to maintain a forward-looking, modern IT estate.

Whether migrating out of an on-premises data center or colo, rearchitecting an existing cloud workload, or developing with new cloud-native features, it has never been more important to implement a modern cloud strategy. This is particularly true for optimizing services across your organization and embracing security as a core pillar.

According to Gartner, 99% of cloud security failures through 2025 will be the customer’s fault. Clearly, no organization wants to be a part of this statistic. Ongoing management of your critical workloads is a worthy investment that ensures your mission-critical assets are secure. The truth is that if security isn’t done right, it simply doesn’t matter.

We work frequently with customers looking to completely exit their data center infrastructure and migrate to an OPEX model in the cloud. In these engagements, we identify risks and dependencies using a staged approach to ensure the integrity of data and functionality of applications. However, this migration or “evacuation” is not an end state. In fact, it is often the first major milestone on a client’s journey toward continuous improvement and optimization. It is also nearly impossible to do efficiently without modern technology and the cloud.

Modern cloud management mitigates risk and enables modernization

While some workloads and applications may be considered cloud-ready for a relatively straightforward lift and shift migration, they can usually benefit from refactoring, rearchitecting, or replatforming based on a thorough assessment of usage patterns. Cloud adoption on its own will only go so far to improve performance and organizational flexibility.

Effectual is a Modernization Service Provider that understands how to modernize applications, their metrics, operational costs, security implications, and compliance requirements

A modern digital strategy allows you to unlock the true capabilities of the cloud, increasing scalability, agility, efficiency, and one of the most critical benefits of any modernization initiative – improved security. Modernized technologies can also utilize cutting edge security protocols and continuous compliance tools that are simply not available with physical infrastructure.

Unlike traditional MSPs (Managed Service Providers) who manage on-premises servers in physical data centers, Effectual is a cloud-first Modernization Service Provider that understands how to modernize applications, their metrics, operational costs, security implications, and compliance requirements. When our development team finishes a project, our customers can Cloud Confidently™ knowing that their environment is in experienced hands for ongoing management.

Most importantly, the path to modernization is not necessarily linear, whether you are developing an application specifically for the cloud, refactoring or rearchitecting as part of a data center migration, or updating and securing an existing cloud environment. New ideas, priorities, and changes to the world we live in require that we adapt, innovate, and rethink our approach to solving business challenges in even the most uncertain times.

When your organization or team needs the power to pivot, we have the Modernization Engineers, systems, tools, and processes to support that change.

Ready to begin your modernization journey?
Contact us to get started.

Ryan Comingdeer is the Chief Cloud Architect at Effectual.

Using Proofs of Concept to Increase Your ROI

Not so long ago, R&D departments had to fight for internal resources and justify capital expenditures in order to explore new technologies. Developing on-premises solutions was expensive and time-consuming, and decisions were focused on ensuring success and avoiding failure.

In the past 5 years, cloud platforms have radically sped up the pace of innovation, offering companies of all sizes the ability to build, test, and scale solutions at minimal cost. Technology is now a tool to differentiate yourself from your competitors, increase your margins, and open up new markets.

Small investments, big payoffs

By committing only a small portion of your budget to R&D, you can now leverage plug and play cloud services to experiment and test Proofs of Concept (POCs) with potentially huge bottom line payoffs. For large companies, utilizing POCs requires a shift away from risk-averse waterfall development to an agile approach that embraces failure as a path to innovation.

Enterprise organizations can learn from entrepreneurs, who’ve been natural early adopters when it comes to cloud solutions. Startups aren’t afraid of using pay-as-you-go services to build quick POCs for validating markets, testing technical features, and collecting customer feedback. Far more comfortable with agile development, successful early stage companies like Effectual customer Warm Welcome are adept at taking calculated risks and viewing failure as an invitation for learning.

In contrast, enterprise customers may struggle at first to embrace an agile approach and accept failure as an opportunity for insight. As established businesses, they also make the mistake of assuming reputation alone will ensure successful outcomes and often downplay the importance of customer feedback. However, this changes quickly after companies gain experience with POCs and understand the value of testing their assumptions before committing to building out final solutions.

POC vs MVP: What’s the difference?

A Proof of Concept is the first phase of designing a software application. A POC allows you to quickly solve a business challenge for a specific use case in order to:

  • Evaluate tradeoffs
  • Measure costs
  • Test technical functionality
  • Collect user feedback 
  • Determine market acceptance

POCs are time-boxed (defined by a set number of hours), with clear KPIs (key performance indicators) for measuring your results. This keeps costs low and provides rapid insights into what changes need to be made before you invest significant resources to scale the concept.

POCs are rolled out to a controlled, focused group of users (“friends and family”) with the goal of quickly figuring out cost and technical issues. It’s not uncommon to go through 3-4 POCs before finding the one you’re ready to advance. Failure is an accepted and necessary part of this process.

For example, one of our large retail customers has dedicated $4k/month to its backlog pipeline for R&D. At the beginning of the year, we sat down with their team to identify 4-5 business problems the company wanted to tackle. For one particular use case, we developed and tested two different POCs (one cloud-based, one on-premises) before finding a hybrid solution that was the right compromise between cost and functionality.

To minimize risk, they rolled out their hybrid POC to a single store location in order to collect user feedback. Only after making recommended changes did the company commit to moving forward with an MVP at multiple locations across several states. Within 18 months, they have seen a significant return on their investment in both higher sales and increased customer retention. 

A Minimum Viable Product (MVP) is a feature-boxed solution that turns your proven concept into a functional basic product you can test with a wider user base. While it resides outside of a critical business path, an MVP usually requires greater investment and takes longer to evaluate. The goal of an MVP is to:

  • Increase speed to market
  • Establish loyal users
  • Prove market demand
  • Collect broader customer feedback

Organizations of any size can use Proof of Concept to ensure the fast delivery of a final product that meets the needs of customers and provides a measurable return on investment. Learn more about how a POC can drive your business forward.

Have an idea or project to discuss? Contact us to learn more.

AWS IoT Solutions Accelerate the Potential of Edge Computing

IoT is revolutionizing consumer markets, large-scale manufacturing and industrial applications at an incredible pace. In virtually every industry, these technologies are becoming synonymous with a competitive advantage and winning corporate strategies.

We’re witnessing the same trend with our own customers, as companies integrate IoT solutions with their offerings and deploy edge devices to improve customer experience, reduce costs, and expand opportunities.

Installing these smart devices and collecting data is relatively easy. However, processing, storing, analyzing, and protecting massive volumes of that data is where (Internet of) Things gets complicated.

As an AWS Premier Consulting Partner, Effectual guides customers on how to leverage Amazon Web Services (AWS) innovation for their IoT solutions. This includes building performant, cost-effective cloud architectures, based on the five pillars of the Well-Architected Framework, that scale quickly and securely process real-time streaming data. Most importantly, we apply AI and machine learning (ML) to provide clients with meaningful analytics that drive informed business decisions.

Two of the most common AWS services we deploy for IoT projects are AWS Lambda and Amazon DynamoDB.

AWS Lambda: Serverless computing for continuous scaling
A fully-managed platform, AWS Lambda runs code for your applications or backend services without requiring any server management or administration. It also scales automatically to workloads with a flexible consumption model where you pay for only the computing resources you consume. While Lambda is an excellent environment for any kind of rapid, scalable development, it’s ideal for startups and growing companies who need to conserve resources while scaling to meet demand.
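
For IoT workloads, a Lambda function often sits behind a streaming source such as Amazon Kinesis; this hypothetical handler shows the shape of that pattern, decoding a batch of telemetry records:

    import base64
    import json

    def handler(event, context):
        """Process a batch of IoT telemetry records delivered by Kinesis."""
        readings = []
        for record in event.get("Records", []):
            # Kinesis delivers each payload base64-encoded.
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            readings.append(payload)
        # Aggregation, alerting, or persistence would happen here.
        return {"processed": len(readings)}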

We successfully deployed AWS Lambda for a project with Oregon-based startup Wingo IoT. With an expanding pipeline of industrial customers and limited resources, Wingo needed a cost-efficient, flexible architecture for instant monitoring and powerful analytics. We used Lambda to build a custom NOC dashboard with a comprehensive view of real-time operations.

DynamoDB: Fast access with built-in security
We use DynamoDB with AWS services such as Amazon Kinesis and AWS Lambda to build key-value and document databases with virtually unlimited capacity. Offering low latency and high performance at scale, DynamoDB can support over 10 trillion requests a day with secure backup and restore capabilities.
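
A typical IoT key design partitions by device and sorts by timestamp; this minimal sketch writes one reading to a hypothetical table (numeric values are kept as integers for simplicity, since the boto3 resource API expects Decimal rather than float):

    import time

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("device-telemetry")  # hypothetical table name

    # Partition key: device_id; sort key: ts (epoch milliseconds).
    table.put_item(
        Item={
            "device_id": "sensor-0042",
            "ts": int(time.time() * 1000),
            "temperature_c": 21,
            "humidity_pct": 40,
        }
    )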

When Effectual client Dialsmith needed a transactional database to handle the thousands of records per second created by its video survey tool, we used DynamoDB to solve its capacity constraints. The service also provided critical backup and restore capabilities for protecting its sensitive data.

In our experience, AWS IoT solutions both anchor and accelerate the potential of edge computing. Services such as AWS Lambda and Amazon DynamoDB can have a lasting, positive impact on your ability to scale profitability. Before you deploy IoT technologies or expand your fleet of devices, we recommend a thorough evaluation of the cloud-based tools and services available to support your growth.

Have an idea or project to discuss? Contact us to learn more.

5 Reasons Your Development Team Should be Using the Well-Architected Framework

Amazon Web Services (AWS) offers the most powerful platforms and innovative cloud technologies in the industry, helping you scale your business on demand, maximize efficiencies, minimize costs, and secure data. But in order to take full advantage of what AWS has to offer, you need to start with a clear understanding of your workload in the cloud.

How to Build Better Workloads
Whether you’re working with an internal team or an outsourced consulting partner, the AWS Well-Architected Framework is an educational tool that builds awareness of steps and best practices for architecting for the AWS Cloud. We begin all of our development projects with a Well-Architected Review to give clients full visibility into their workload. This precise, comprehensive process provides them essential insights for comparing strategies, evaluating options, and making informed decisions that add business value. Based on our experience, using well-architected best practices and design principles helps you:

1 – Plan for failure
One of the primary Well-Architected design principles is to architect for failure. This means knowing how to mitigate risks, eliminate downtime, prevent data loss, and protect against security threats. The Well-Architected process uncovers potential security and reliability vulnerabilities long before they happen so you can either avoid them or build a plan proactively for how you’ll respond if they do. This upfront effort can save you considerable time and resources. For example, having a disaster recovery plan in place can make it far easier for you to spin up another environment if something crashes.

Clients who plan for failure shrink their Recovery Time Objective (downtime) and their Recovery Point Objective (data loss) by as much as 20x.
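
Infrastructure as code is what makes “spin up another environment” practical; as a hedged sketch, a templated stack can be recreated in a recovery region with a single call (the template file and stack name are placeholders):

    import boto3

    # Recreate the environment in a recovery region from the same template
    # that defines production.
    cloudformation = boto3.client("cloudformation", region_name="us-west-2")

    with open("environment.yaml") as template:  # placeholder template file
        cloudformation.create_stack(
            StackName="app-recovery",
            TemplateBody=template.read(),
            Capabilities=["CAPABILITY_NAMED_IAM"],
        )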

2 –  Minimize surprises
Mitigating your risks also means minimizing surprises. The Well-Architected Framework offers an in-depth and comprehensive process for analyzing your choices and options as well as for evaluating how a given decision can impact your business. In our Well-Architected Reviews, we walk you through in-depth questions about your workload to create an accurate and holistic view of what lies ahead. When the review answers and recommendations are shared with all departments and stakeholders of a workload, they’re often surprised by the impacts of decisions on costs, performance, reliability and security.

3 – Understand the trade-offs of your decisions
Building well-architected workloads ensures you have options for responding to changing business requirements or external issues, with a structure for evaluating the trade-offs of every one of those choices. If you feel your application isn’t performant, you may have 10 different possible solutions for improving performance. Each one has a tradeoff, whether it be cost, maintainability, or more. The Well-Architected Framework can help your team decide the best option.

Identifying and executing refactoring options based on modern technologies and services can save up to 60% of architecture costs.

As an organization, you should never feel boxed in when it comes to options for improving your workload. The process and questions presented in the Well-Architected Framework can help both your technical and business departments look at all options and identify which ones will have the most positive business impact.

In 75% of the workloads we encounter, the technology department is making the decisions, which means there is no input from business stakeholders on the impacts.

4 – Develop KPIs to monitor the overall health of your application
Choosing KPIs that integrate both technical and business indicators gives you valuable insights into your application’s health and performance. With a Well-Architected approach, you can automate monitoring and set up alarms to notify you of any deviance from expected performance. Once you’ve established this baseline, you can start exploring ways to improve your workload.
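
As an example of that kind of automated monitoring, the hedged sketch below alarms on a technical KPI (p99 API latency behind an Application Load Balancer); the dimension value, threshold, and SNS topic are placeholders:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm when p99 latency stays above 1.5 s for three 5-minute periods.
    cloudwatch.put_metric_alarm(
        AlarmName="api-latency-p99-high",
        Namespace="AWS/ApplicationELB",
        MetricName="TargetResponseTime",
        Dimensions=[{
            "Name": "LoadBalancer",
            "Value": "app/example-alb/0123456789abcdef",  # placeholder
        }],
        ExtendedStatistic="p99",
        Period=300,
        EvaluationPeriods=3,
        Threshold=1.5,  # seconds; tune to your measured baseline
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )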

KPIs should be driven by the business and should include all areas of your organization, including Security, Finance, Operations, IT, and Sales. The Well-Architected Framework provides a well-rounded perspective of workload health.

After a Well-Architected Review, it’s common to have 90% of KPIs defining the health of your application come from other business departments – not just from the IT team.

5 – Align your business requirements with engineering goals
Following well-architected best practices also facilitates a DevOps approach that fosters close collaboration between business managers and engineers. When the two are communicating effectively, they understand both the engineering concepts and the business impacts of their decisions. This saves time and resources and leads to more holistic solutions.

To fully leverage the AWS Cloud, make sure your development team has a strong foundation in the Well-Architected Framework. You’ll be able to build better workloads for the cloud that can take your business further, faster.

Have an idea or project to discuss? Contact us to learn more.

Enabling IT Modernization with VMware Cloud on AWS

Cloud and virtualization technologies offer a broad range of platform and infrastructure options to help organizations address their operational needs, no matter how complex or unique, and reduce their dependence on traditional data centers.

As the demand for cloud and cloud-compatible services continues to grow across departments within organizations, cloud adoption rates are steadily rising and IT decision makers are realizing that they no longer need to be solely reliant on physical data centers. This has led countless organizations to shrink their data center footprints.

The benefits unlocked by VMC on AWS can have significant impacts on your organization…including the impressive performance of a VMware environment sitting on top of the AWS backbone.

VMware Cloud on AWS is unique in bridging this gap, as it utilizes the same skill sets many organizations have in-house to manage their existing VMware environments. Sure, there are considerations when migrating, but ultimately the biggest change in moving to VMware Cloud (VMC) on AWS is the underlying location of the software defined data center (SDDC) within vCenter. The benefits unlocked by VMC on AWS can have significant impacts on your organization – eliminating the need to worry about the security and maintenance of physical infrastructure (and the associated hands on hardware to address device failure) as well as the impressive performance of a VMware environment sitting on top of the AWS backbone.

Technology That Suits Your Needs

Full and partial data center evacuations are becoming increasingly common and, while there are instances of repatriation (organizations moving workloads from the cloud back to the data center), the majority of organizations are sticking with “cloud-first” policies to gain and maintain business agility. Sometimes, however, even a company that’s begun their IT modernization efforts may still have systems and applications hosted on-premises or in a data center.

This may seem to indicate some hesitance to fully adopt the cloud, but it’s usually due to long-term strategy, technical barriers to native cloud adoption, or misconceptions about cloud security and compliance requirements. It’s rare to find an organization that isn’t loaded with technical debt, fully committed to specific software, tied to lengthy data center commitments – or all of the above.

Mission-critical legacy applications may not be compatible with the cloud, and organizations may lack the resources or expertise to refactor those applications so that they can properly function in a native cloud environment. Or perhaps there’s a long-term digital strategy to eventually move all systems and applications to the cloud but, in the meantime, they’re still leashed to the data center. Scenarios like these, and many more, are ideal for VMware Cloud on AWS, which allows organizations to easily migrate legacy VMware workloads with minimal refactoring or rearchitecting, or extend their existing data center systems to the cloud.

New, But Familiar

VMware Cloud on AWS was developed in collaboration between VMware, a pioneer and global leader in server virtualization, and AWS, the leading public cloud provider, to seamlessly extend on-premises vSphere environments to SDDCs built on AWS. VMC on AWS makes it easier for organizations to begin or expand their public cloud adoption by enabling lift and shift migration capabilities for applications running in the data center or on-premises VMware environments.

VMC on AWS also has a relatively minimal learning curve for in-house operations staff because, despite being hosted on AWS, it’s still VMware vSphere at its core and the environments are managed using the vCenter management console. This familiar toolset allows IT teams to begin utilizing the cloud without any major workforce retraining and upskilling initiatives because they can still use VMware’s suite of server virtualization and management tools.

The Right Tools for the Job

The vSphere suite of server virtualization products and vCenter management console may be familiar, but they’re far from outdated or limited. VMware continues to invest in the future, strengthening its cloud and virtualization portfolio by enhancing their existing offerings and developing additional services and tools to further enable IT modernization and data center evacuations.

These efforts mean we can expect VMware to continue playing a major role in helping organizations achieve and maintain agility by ensuring secure workload mobility across platforms, from public cloud to private cloud to hardware.

VMware HCX, which essentially consists of a series of integrations that establish connectivity across systems and platforms and allow workloads to be migrated without any code or configuration changes, is regularly updated to enhance its functionality. HCX can perform live migrations using vMotion and bulk migrations of up to 100 VMs at a time. It can also provide a secure, accelerated network extension which, beyond delivering a seamless migration experience and minimizing the operational impacts usually associated with migrating workloads, helps improve the environment’s resiliency through workload rebalancing. This same functionality plays a critical role in disaster recovery and business continuity by replicating data across multiple locations.
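
HCX migrations are driven from its vCenter plugin or its own service APIs, but the vMotion primitive it builds on can also be scripted directly. Below is a minimal sketch using the open-source pyVmomi SDK to relocate a single VM to another host and datastore – broadly analogous to what HCX orchestrates at scale. The hostnames, credentials, and inventory names are all placeholders.

    # Minimal sketch: relocate (vMotion) a single VM with pyVmomi.
    # All hostnames, credentials, and inventory names are hypothetical.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def find_by_name(content, vimtype, name):
        """Return the first managed object of the given type with a matching name."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        try:
            return next((obj for obj in view.view if obj.name == name), None)
        finally:
            view.Destroy()

    ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
    si = SmartConnect(host="vcenter.example.com", user="admin",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        vm = find_by_name(content, vim.VirtualMachine, "legacy-app-01")
        host = find_by_name(content, vim.HostSystem, "esxi-target.example.com")
        datastore = find_by_name(content, vim.Datastore, "target-ds")

        spec = vim.vm.RelocateSpec(host=host, datastore=datastore)
        task = vm.RelocateVM_Task(spec=spec)  # a live vMotion if the VM is powered on
        print("Relocation task started:", task.info.key)
    finally:
        Disconnect(si)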

A Thoughtful Approach to Modernization

Whether an organization is prioritizing optimizing spend, growing revenue, streamlining operations, or revitalizing and engaging its workforce, a mature and robust digital strategy should be at the heart of the “how.” Cloud adoption will not solve these business challenges on its own – that requires forethought, planning, and expertise.

It can be challenging to make the right determinations about what’s best for your own unique business needs without a clear understanding of those needs. And for organizations still relying on old-school, hardware-based systems, the decision to remain with on-premises deployments, move to the cloud, or lift and shift to a platform like VMC on AWS requires a comprehensive assessment of their applications, hardware, and any existing data center and real estate commitments.

Internal teams may not have the specific technical expertise, experience, or availability to develop suitable digital strategies or perform effective assessments, especially as they focus on their primary day-to-day responsibilities. As an AWS Premier Consulting Partner holding the VMware Master Services Competency in VMware Cloud on AWS, Effectual has established deep expertise on the platform, making us an ideal partner to help ease that burden.

Cloud adoption doesn’t happen overnight, and organizations have to ensure disparate technologies, which may be at very different stages of their respective lifecycles, mesh well. They need to develop an appropriate modernization strategy and determine the best fit for each application and workload. The right partner can play a critical role in successfully overcoming these challenges.

Hetal Patel is a Senior VMware Technical Lead and co-founder at Effectual, Inc.

Considerations for AWS Control Tower Implementation

Considerations for AWS Control Tower Implementation

AWS Control Tower is a recently announced, console-based service that allows you to govern, secure, and maintain multiple AWS accounts based on best practices established by AWS.

What resources do I need?

The first thing to understand about Control Tower is that all the resources you need will be allocated by AWS. You will need AWS Organizations established, an account factory to create accounts for each line of business (LOB), and Single Sign-On (SSO), to name a few. Based on the size of your organization, those costs may vary. With the Control Tower precursor, AWS Landing Zones, we found that costs for this collection of services could range from roughly $500 to $700 per month for large customers (50+ accounts), as deployed. Control Tower will likely cost a similar amount, possibly more depending on the size of your organization. Later in this post, I will address how to use Control Tower once you already have accounts set up – a brownfield situation. In a perfect world, you would set up Control Tower in a greenfield scenario, but sadly, 99% of the time, that’s not the case.
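
Because Control Tower builds on AWS Organizations, the multi-account structure it governs can be inspected programmatically. Here’s a minimal sketch using boto3, assuming credentials for the management account; the printed fields are simply what Organizations returns for each account.

    # Minimal sketch: list the member accounts that Control Tower governs,
    # via the underlying AWS Organizations API.
    import boto3

    org = boto3.client("organizations")

    paginator = org.get_paginator("list_accounts")
    for page in paginator.paginate():
        for account in page["Accounts"]:
            # With the account factory, each LOB gets its own account,
            # so this listing maps directly to your lines of business.
            print(account["Id"], account["Name"], account["Status"])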

If you’re a part of an organization that has multiple accounts in different lines of business, this service is for you.

What choices do I need to make?

In order to establish a Cloud Enablement Team to manage Control Tower, you need to involve multiple stakeholders. In a large organization, that might entail different people for roles such as:

  1. Platform Owner
  2. Product Owner
  3. AWS Solution Architect
  4. Cloud Engineer (Automation)
  5. Developer
  6. DevOps
  7. Cloud Security

You want to be as inclusive as possible in order to get the greatest breadth of knowledge. These are the people who will make the decisions your organization needs to migrate to the cloud and, most importantly, to thrive and stay engaged once there. We have the team, so now what can we do to make Control Tower work best for us?

Decisions for the Team

1. Develop a RACI

A RACI matrix (Responsible, Accountable, Consulted, Informed) is one of the most crucial aspects of operations. If you do not have accountability and responsibility, you do not have management. Everyone must be able to delineate their tasks from the rest of the team’s. Finalizing everyone’s role in the workflow will solve a lot of issues before they happen.

2. Shared Services

In the shared services model, we need to understand which resources are going to the cloud and which will stay. Everything from Active Directory to DNS to one-off internal applications will have to be accounted for in a way that preserves functionality and keeps the chargeback model healthy. One of Control Tower’s most valuable qualities is showing what each LOB is costing and how it is helping the organization overall.

3. Chargebacks

Once the account factory (previously called the Account Vending Machine) is established, each LOB will have its own account. That separation is what makes cost visibility possible: AWS bills by account, not by VPC, so to see what an LOB costs, it must have its own account. Leveraging Control Tower, tagging, and third-party cost management tools together can give an accurate depiction of the costs incurred by a specific line of business, as the sketch below illustrates.
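
For example, the Cost Explorer API can break monthly spend down by linked account, which maps one-to-one to LOBs under the account factory model. A minimal boto3 sketch follows; the date range is illustrative.

    # Minimal sketch: monthly cost per linked account (i.e., per LOB)
    # using the Cost Explorer API.
    import boto3

    # Cost Explorer is served out of us-east-1.
    ce = boto3.client("ce", region_name="us-east-1")

    response = ce.get_cost_and_usage(
        TimePeriod={"Start": "2019-05-01", "End": "2019-06-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "LINKED_ACCOUNT"}],
    )

    for result in response["ResultsByTime"]:
        for group in result["Groups"]:
            account_id = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            print(f"{account_id}: ${amount:.2f}")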

4. Security

With Control Tower, logs from each account are aggregated into a centralized log bucket, which can then be pointed at your analysis tool of choice. Other parties can perform audits by reading those logs with read-only permissions from a dedicated account that contains nothing else – another feature of Control Tower. The multi-account strategy not only allows for better governance but also limits the damage in case of compromise: if one account is compromised, the blast radius for all the other accounts is minimal. An attacker may have accessed a bucket in one specific account, but they could not access anything in the others. The most important thing to remember is that you cannot treat cloud security like data center security.
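
As an illustration of that audit-account pattern, a cross-account role can grant auditors read access and nothing more. Here’s a minimal sketch using boto3 and the AWS-managed ReadOnlyAccess policy; the audit account ID and role name are placeholders.

    # Minimal sketch: create a read-only audit role that a trusted
    # audit account can assume. The account ID is a placeholder.
    import json
    import boto3

    iam = boto3.client("iam")

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # audit account
            "Action": "sts:AssumeRole",
        }],
    }

    iam.create_role(
        RoleName="SecurityAuditReadOnly",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    # Attach the AWS-managed read-only policy: auditors can read, never write.
    iam.attach_role_policy(
        RoleName="SecurityAuditReadOnly",
        PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
    )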

There are plenty of choices for an organization to make as it moves forward with Control Tower, but if you plan correctly and make wise decisions, you can secure your environment and keep your billing department happy. Hopefully, this has helped you see what it takes to prepare in the real world. Good luck out there!

Network Virtualization – The Missing Piece of Digital Transformation

Network Virtualization – The Missing Piece of Digital Transformation

The cloud revolution continues to impact IT, changing the way digital content is accessed and delivered. It should come as no surprise that this revolution has affected the way we approach modern networking.

When it comes down to it, the goal of digital transformation is the same for all organizations, regardless of industry: increase the speed at which you’re able to respond to market changes and evolving business requirements, improve your ability to adopt and adapt to new technology, and enhance overall security. Digital strategies are maturing, becoming more thoughtful and effective in the process, as organizations understand that the true value of cloud adoption and increased virtualization isn’t just about cost savings.

Technology is more fluid than ever, and dedicated hardware is limiting individual progress and development more and more every day. Luckily, cloud and virtualized infrastructure have helped lay the groundwork for change, giving companies the opportunity to more readily follow the flow of technological progress. But in the same way that a chain is only as strong as its weakest link, these same companies are only as agile as their most rigid component. And that rigid chokepoint, more often than not, is hardware-based network infrastructure.

A lack of network agility was even noted by Gartner as one of the Top 10 Trends Impacting Infrastructure and Operations for 2019.

A Bit of History
We likely wouldn’t have the internet as we know it today if not for the Department of Defense needing a way to connect large, costly research computers across long distances to enable the sharing of information and software. Early computers had no way to connect and transmit data to each other.
The birth in 1969 of ARPANET, the world’s first packet-based network, and its ensuing expansion were monumental in creating the foundation for the Information Age.

The Case for Virtualization

While some arguments can still be made about whether a business might benefit more from traditional, hardware-based solutions or cloud-based options, there’s an inarguable fact right in front of us: software moves faster than hardware. This is what drove industries toward server and storage virtualization. However, network infrastructure still tends to be relegated to hardware, with the same manual provisioning and configuration processes that have been around for decades. The challenge of legacy, hardware-based network infrastructure is a clear obstacle that limits an organization’s ability to keep up with changing technologies and business requirements.

The negative effect of hardware-based networking goes beyond the limitation of speed and agility. Along with lengthy lead times, the process of scaling, modifying, or refreshing network infrastructure can require a significant amount of CapEx since you have to procure the hardware, and a significant amount of OpEx since you have to manually configure the newly acquired network devices. In addition, manual configuration is well-known to be error-prone, which can lead to connectivity issues (further increasing deployment lead time) and security compromises.

Networking at the Speed of Business and Innovation

As organizations move away from silos in favor of streamlined and automated orchestration, approaches to network implementation need to be refreshed. Typical data center network requests can take days, even weeks to fulfill since the hardware needs to be procured, configured (with engineers sometimes forced to individually and painstakingly configure each device), and then deployed.

Software-defined networking (SDN), however, changes all of that. With properly designed automation, right-sized virtual network devices can be programmatically created, provisioned, and configured within seconds. And due to the reduced (or even fully eliminated) need for manual intervention, it’s easier to ensure that newly deployed devices are consistently and securely configured to meet business and compliance requirements.

Automation allows networking to match the pace of business by relying on standardized, pre-defined templates to provide fast and consistent networking and security configurations. This lessens the strain and burden on your network engineers.
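
To make this concrete, here’s a minimal sketch of template-driven provisioning with boto3: a VPC, a subnet, and a locked-down security group created programmatically in seconds. The CIDR ranges and names are illustrative, and a production pipeline would more likely drive this through infrastructure-as-code tooling than raw API calls.

    # Minimal sketch: provision a small virtual network from a
    # pre-defined "template" of settings. All values are illustrative.
    import boto3

    TEMPLATE = {
        "vpc_cidr": "10.20.0.0/16",
        "subnet_cidr": "10.20.1.0/24",
        "name": "app-network",
    }

    ec2 = boto3.client("ec2")

    vpc = ec2.create_vpc(CidrBlock=TEMPLATE["vpc_cidr"])
    vpc_id = vpc["Vpc"]["VpcId"]

    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock=TEMPLATE["subnet_cidr"])

    # A consistent, pre-approved security baseline: HTTPS in, nothing else.
    sg = ec2.create_security_group(
        GroupName=TEMPLATE["name"] + "-sg",
        Description="Standard HTTPS-only ingress",
        VpcId=vpc_id,
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )

    print("Provisioned", vpc_id, subnet["Subnet"]["SubnetId"], sg["GroupId"])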

Network teams have focused hard on increasing availability, with great improvements. However for future success, the focus for 2019 and beyond must incorporate how network operations can be performed at a faster pace.

Source: Top 10 Trends Impacting Infrastructure & Operations for 2019, Gartner

Embracing Mobility

Modern IT is focused on applications, and the terminology and methods for implementing network appliances reflect that – but those applications are no longer tied to the physical data center. Sticking to a hardware-focused networking approach severely restricts the mobility of applications, which is a limitation that can kill innovation and progress.

Applications are not confined to a single, defined location, and maturing digital and cloud strategies have led to organizations adopting multiple public and private clouds to achieve their business requirements. This has led to an increase in applications being designed to be “multi-cloud ready.” Creating an agile network infrastructure that extends beyond on-premises locations, matching the mobility of those applications, is especially critical.

Network capabilities have to bridge that gap by functioning consistently across all locations, whether they’re hardware-based legacy platforms, virtual private cloud environments, or pure public cloud environments.

This level of agility is beneficial for all organizations, even if they’re still heavily invested in hardware and data center space, because it allows them to begin exploring, adopting, and benefiting from public cloud use. Certain technologies, like VMware Cloud on AWS, already enable organizations to bridge that gap and begin reaping the benefits of Amazon’s public cloud, AWS.

According to the RightScale 2019 State of the Cloud Report from Flexera, 84% of enterprise organizations have adopted a multi-cloud strategy, and 58% have adopted a hybrid cloud strategy utilizing both public and private clouds. Respondents reported using nearly five clouds on average.

A Modern Approach to Security

Digital transformation creates fertile ground for new opportunities – both business opportunities and opportunities for bad actors. Since traditional approaches to cybersecurity weren’t designed for the cloud, cloud adoption and virtualization have contributed to a growing need to overhaul information security practices.

Traditional network security models focused on the perimeter – traffic entering or leaving the data center – but, as a result of virtualization, that “perimeter” no longer exists. Applications and data are distributed, so network security approaches have to focus on the applications themselves. With network virtualization, security services are elevated to the virtual layer, allowing security policies to “follow” applications and maintain a consistent security configuration that protects the elastic attack surface.

But whether your network remains rooted in hardware or becomes virtualized, the core of your security should still be based on this: Security must be an integral part of your business requirements and infrastructure. It simply cannot be bolted on anymore.

Picking the Right Tools and Technology for the Job

Choosing the right tools and technology to facilitate hybrid deployments and enable multi-platform solutions can help bridge the gap between legacy systems and 21st-century IT. This level of interoperability and agility helps make cloud adoption just a little less challenging.

Addressing the networking challenges discussed in this post, VMware Cloud on AWS has an impressive set of tools that enable and simplify connectivity between traditionally hosted on-premises environments and the public cloud. This interconnectivity makes VMware Cloud on AWS an optimal choice for a number of different deployment use cases, including data center evacuations, extending on-premises environments to the public cloud, and improving disaster recovery capabilities.

Developed in partnership with Amazon, VMware Cloud on AWS allows customers to run VMware workloads in the cloud, and their Hybrid Cloud Extension (HCX) enables large-scale, bi-directional connections between on-premises environments and the VMware Cloud on AWS environment. In addition, VMware’s Site Recovery Manager provides simplified one-click disaster recovery operations with policy-based replication, ensuring operational consistency.

If you’re interested in learning more about VMware Cloud on AWS or how we can help you use the platform to meet your business goals, check out our migration and security services for VMware Cloud on AWS.

Ryan Boyce is the Director of Network Engineering at Effectual, Inc.

Next Up: Machine Learning on AWS

Next Up: Machine Learning on AWS

If you have been to AWS’s re:Invent, then you know the tremendous amount of excitement that cloud evangelists experience during that time of the year.

The events that AWS hosts in Las Vegas provide a surreal experience for first-timers and are sure to excite even the most seasoned of veterans. Let’s talk about one of the exciting technologies poised to change the world as we know it, or at least the businesses we’re familiar with: Amazon Machine Learning.

Introduced on April 9, 2015, Amazon Machine Learning (ML) has received a surge of attention in recent years given its capability to provide highly reliable and accurate predictions from large datasets. From using Amazon ML to track next-generation stats in the NFL, to analyzing real-time race data in Formula 1, to enhancing fraud detection at Capital One, ML is changing the way we share experiences and interact with the world around us.

During re:Invent 2018, AWS made it clear that ML is here to stay, announcing many offerings that support the development of ML solutions and services. But you may be wondering: what exactly is Amazon ML?

According to AWS’s definition:

“Amazon Machine Learning is a machine learning service that allows you to easily build predictive applications, including fraud detection, demand forecasting, and click prediction. Amazon Machine Learning uses powerful algorithms that can help you create machine learning models by finding patterns in existing data and using these patterns to make predictions from new data as it becomes available.”

We, as a society, are at the point where machines are actively making decisions in many of our day-to-day interactions with the world. If you’ve ever shopped as a Prime member on Amazon.com, you have already experienced an ML algorithm that is in tune with your buying preferences.
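
Once a model has been trained and a real-time endpoint created for it, getting a prediction out of Amazon ML is a single API call. Here’s a minimal boto3 sketch; the model ID, endpoint URL, and record fields are all placeholders.

    # Minimal sketch: request a real-time prediction from a trained
    # Amazon ML model. Model ID, endpoint, and features are placeholders.
    import boto3

    ml = boto3.client("machinelearning")

    response = ml.predict(
        MLModelId="ml-EXAMPLEMODELID",
        Record={
            "customerAge": "34",
            "lastPurchaseAmount": "72.50",
        },
        PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
    )

    # For a binary classification model, the response includes a predicted
    # label and a raw score you can threshold yourself.
    print(response["Prediction"])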

In our Engineer’s Corner post, Amazon Web Service As A Data Lake, our very own Kris Brandt discusses the critical initial step toward implementing an ML project: data lake creation. In that post, Kris explores what a data lake is and provides some variations on its implementation. The development of a robust data lake is a prerequisite for an ML project that delivers the business value expected from the service’s capabilities. ML runs on data, and having plenty of it provides a foundation for an exceptional outcome.

Utilizing existing data repositories, we can work with business leaders to develop use cases for leveraging data and ML for strategic growth. You can connect with the Effectual team by emailing sales@effectual.com.

Because of ML’s proliferation throughout the market, AWS announced these ML solution opportunities during re:Invent 2018:

AWS Lake Formation
“This fully managed service will help you build, secure, and manage a data lake,” according to AWS. It allows you to point it at your data sources, crawl the sources, and pull the data into Amazon Simple Storage Service (S3). “Lake Formation uses Machine Learning to identify and de-duplicate data and performs format changes to accelerate analytical processing. You will also be able to define and centrally manage consistent security policies across your data lake and the services that you use to analyze and process the data,” says AWS.

Amazon Textract
“This Optical Character Recognition (OCR) service will help you to extract text and data from virtually any document. Powered by Machine Learning, it will identify bounding boxes, detect key-value pairs, and make sense of tables, while eliminating manual effort and lowering your document-processing costs,” according to AWS.
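
As with Amazon ML, Textract is exposed through a straightforward API. Here’s a minimal sketch using boto3 to pull text, form fields, and tables out of an image stored in S3; the bucket and file names are placeholders, and multi-page PDFs would use the asynchronous start_document_analysis operation instead.

    # Minimal sketch: extract text, key-value pairs, and tables from a
    # single-page image in S3 with Amazon Textract. Names are placeholders.
    import boto3

    textract = boto3.client("textract")

    response = textract.analyze_document(
        Document={"S3Object": {"Bucket": "my-documents", "Name": "invoice.png"}},
        FeatureTypes=["FORMS", "TABLES"],
    )

    # Blocks include detected pages, lines, words, key-value sets, and
    # table cells, each with the bounding-box geometry Textract identifies.
    for block in response["Blocks"]:
        if block["BlockType"] == "LINE":
            print(block["Text"])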