Batter Up: DevOps as a Team Sport

For DevOps to work well, developers, operations staff, security teams, finance, and other business leaders must work together

A DevOps team is not much different than a baseball team. There are individuals with unique skills – developers, security teams, system admins, and business owners – who need to work together to make it successful. Unfortunately, in many workplaces these job functions are siloed, creating obstacles to achieving a high-functioning DevOps culture.

The solution is building strong communication across all functions of your organization so your team knows what to expect and how to respond. If you were a baseball coach, you’d want your catcher to know in advance if the pitcher is about to throw a 95 MPH fastball or an 82 MPH curveball, and the same holds true across IT.

When a professional baseball player goes into a slump at the plate, he may change out the bat or adjust swing mechanics with the help of a hitting instructor. For a software development organization, a slump may mean being slow to market with new features, or even worse, pushing out buggy software.

Regardless, that team needs to change things up and reach for new strategies, such as deploying a new automated testing tool or replacing a flawed software delivery process with an agile development methodology. On their own, however, these changes often produce only temporary improvements.

Many organizations fail to expand DevOps beyond the initial Proof of Concept stage. While technology choices can play a part in improvement, it is more often a lack of organizational buy-in and an inability to break free from legacy development methodologies that prevent a DevOps team from winning.

Put Me in Coach: Creating a Winning DevOps Team

In baseball, every hitter has a unique approach. Similarly, DevOps is not a one-size-fits-all solution. Instead, it requires aligning your company mission with industry best practices and integrating CI/CD tools to shorten your organization’s time to value. If only it were as simple as switching out your bat!

Understanding how to access the full benefit of DevOps is a thoughtful process that includes requirements gathering, executive sponsorship, vendor selection, tool configuration, testing, organization design changes, staff training, and more. Bringing in an expert partner as a DevOps “coach” can further improve your team’s odds of success.

If you do engage a partner, they should be able to respond to the specific needs of your team and offer flexible solutions that fit your objectives. Maybe you prefer a particular git repository, project management tool, or code deployment solution over another. Or perhaps you are a federal government agency with FedRAMP requirements to address. Every scenario is unique.

Getting support from the right provider allows your development team to focus on what they do best rather than being slowed down by vendor management issues, pipeline software patching, and tool configuration.

Meeting Security & Compliance

As PCI, SOX, FISMA, HIPAA, HITRUST, GDPR, and the many regional and state-level security and data privacy laws continue to evolve, compliance issues can introduce risk to a data-driven business.

A partner with DevOps expertise can be a significant help when it comes to security controls, industry regulations, and government mandates. This means working with experts who know how to make sense of regulations and tune the code pipeline to perform static and dynamic code analysis.

Statistics for Measuring DevOps Success  

It is easy to compare baseball players based on their statistics. For example, Hall of Fame hitters have high on base percentages, score runs, and hit for average. In the case of DevOps, there are key performance indicators that determine team performance.

For example, a decrease in support tickets may be a simple way of noting progress, but DevOps requires a more comprehensive set of measurements. High-achieving DevOps teams view consistent or even accelerating deployment frequency as an indicator of success – and expect the volume of changes to trend upward. In contrast, the length of time from development to deployment, the ratio of unsuccessful deployments, and recovery time are stats that should be trending downward.

Precise planning is equally important; accuracy of development estimates is another stat to measure over time. In each of these instances, you need a partner with experience tracking your stats. This kind of coaching can help your organization fix the flaws that are preventing these metrics from trending in the right direction. 
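As a rough illustration of how these stats can be tracked, here is a minimal sketch in Python (the field names and sample records are invented) that computes deployment frequency, lead time, failure rate, and recovery time from a simple deployment log:

    from datetime import datetime, timedelta
    from statistics import mean

    # Hypothetical deployment log: when work started, when it shipped,
    # whether the deployment failed, and how long recovery took if it did.
    deployments = [
        {"dev_start": datetime(2024, 5, 1), "deployed": datetime(2024, 5, 3),
         "failed": False, "recovery": timedelta(0)},
        {"dev_start": datetime(2024, 5, 2), "deployed": datetime(2024, 5, 6),
         "failed": True, "recovery": timedelta(hours=4)},
        {"dev_start": datetime(2024, 5, 8), "deployed": datetime(2024, 5, 9),
         "failed": False, "recovery": timedelta(0)},
    ]

    window_days = 30
    frequency = len(deployments) / window_days                                    # should trend up
    lead_time = mean((d["deployed"] - d["dev_start"]).days for d in deployments)  # should trend down
    failure_rate = sum(d["failed"] for d in deployments) / len(deployments)       # should trend down
    recoveries = [d["recovery"] for d in deployments if d["failed"]]
    mttr_hours = mean(r.total_seconds() / 3600 for r in recoveries) if recoveries else 0.0

    print(f"Deploys/day: {frequency:.2f}, lead time: {lead_time:.1f} days, "
          f"failure rate: {failure_rate:.0%}, recovery: {mttr_hours:.1f} h")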

Post-Game Wrap Up

A DevOps methodology can produce higher-quality work faster, but it is hard to maintain momentum. Organizational silos, lack of coaching, and other roadblocks can all cause significant inefficiencies. 

For DevOps to work well, developers, operations staff, security teams, finance, and other business leaders must work together. A well-maintained CI/CD toolset that allows for consistent, secure, and compliant software is also important.

An experienced partner like Effectual can provide the necessary platform for strategic management and coaching. Combining DevOps experience, expertise, and execution with the flexibility to deliver support and follow through, we help you manage the expectations of your business leaders so software engineers can focus on what they do best.

Contact us to learn how we can help your team achieve DevOps success.

Al Sadowski is Senior VP of Product Management at Effectual, Inc. 

VMware Cloud on AWS: Solving the Challenges of Load Balancers, Active Directory, and Disaster Recovery

VMware Cloud on AWS is an enterprise-grade platform. Most customers on VMware Cloud on AWS have load balancing, Active Directory, or authentication built heavily into their application stacks. And at the enterprise level, an attack or failure can be harmful to customer data and prevent them from accessing their information for hours or days at a time. 

There are unique requirements for each of these individual services depending on the specific use case of the environment. At Effectual, we gather the necessary information for each service to ensure requirements are met when we migrate our customer workloads. For example, what’s the most efficient way to build load balancing to handle spikes in traffic? How can we transition Active Directory to be more cloud-centric? What areas within disaster recovery is a company overlooking? 

These are important considerations every organization must make on its cloud journey, and if they’re overlooked during migration, they can pose significant challenges. Let’s explore each component more in-depth.

Load Balancers

Load balancing ensures that none of your servers bear the brunt of network traffic alone. Today’s modern applications can’t run without load balancers and advances in security have improved those applications — though they still require attention.

What could go wrong:

  • Surprise hidden costs
    Security on-premises and security on VMware Cloud on AWS are different – a load balancer sitting locally today would be on AWS after migrating. When that traffic leaves the boundaries of VMware Cloud on AWS and goes to AWS native, it introduces a split-cost model. If you’re not keeping track of spend from all sources, you could be surprised by hidden costs.
  • Overspending on licensing fees
    You could also be overspending on licensing fees. In some cases, load balancer and security mechanism licenses can be transferred over so make sure you understand the agreements on each license before moving forward with any migration – then monitor ongoing costs for upgrades.
  • Troubleshooting that costs you time and money
    If your physical hardware, load balancers, or termination points fail, or if your software-based load balancers scale beyond initial capacity, it can cause significant delays and require your team to troubleshoot on the spot. When that troubleshooting leads to hours of manual labor, it impacts your focus, increases costs, and opens the door to potential vulnerabilities. Therefore, if you’ve moved over to a new environment and the functionality isn’t working as desired, it may require a complete reworking.

Benefits of Load Balancers on VMware Cloud on AWS

When we work with customers, we migrate their workloads to VMware Cloud on AWS in a way that minimizes the impact to the underlying workload and their business operations. We can also ensure security with proper firewalling.

In addition, VMware Cloud on AWS forces updates, which mitigates potential vulnerabilities that could impact underlying workloads. While DDoS attacks are common in the world of cybercrime, having modern virtual load balancers, firewalls, and logging can complement a secure, efficient, and cost-effective solution.

Software load balancers with VMware Cloud on AWS are also more flexible and easier to scale. They’re compatible with more environments and can add or drop virtual servers due to demand, offering an automatic response to network traffic changes.

The advanced load balancing of VMware Cloud on AWS has tangible business results, too:

  • 41% less time spent troubleshooting
  • 43% more efficient application delivery controller management, and
  • Zero specialized hardware required

Active Directory Requirements

Active Directory (AD) is typically available for on-premises Microsoft environments, but you can integrate AWS Directory Services with Virtual Machines (VMs) running on VMware Cloud on AWS. Your AD will likely contain users, computers, applications, shared folders, and other objects – all with their unique attributes. 

What could go wrong:

  • The directory can’t read the AD
    Sometimes, a company will replicate an AD from one place and expect it to function in another environment. However, that doesn’t always work — the IP addresses or networking may have changed, so the internals of the AD would also change, depending on where it’s being migrated to. 

    If the directory service can’t read the AD, it will prevent logging on, authentication, and any services dependent on the directory. This can also happen due to software glitches or unwanted changes in the AD schema, either by accident or a malicious internal actor. 

Benefits of Active Directory on VMware Cloud on AWS

VMware Cloud on AWS helps you avoid these issues by transitioning to a different kind of cloud-based authentication mechanism. You can also extend the AD into the migration location prior to migration, so the VMs or workloads have something to authenticate to when they are migrated.

Using AD on VMware also allows you to synchronize server clocks in all environments. For networks that rely on time-sensitive updates, you can create consistency across your environments.

Disaster Recovery

As much as we’d like to expect perfection, we must be prepared for risks. Even with an operational disaster recovery solution in place, there are still circumstances where it can fail.

What could go wrong:

  • Vulnerable internet-facing assets
    Per the Verizon 2021 Data Breach Investigations Report, the median random organization with an internet presence has 17 internet-facing assets. All of those assets are exposed to disruption, whether human-induced or caused by a natural disaster.
  • Ransomware or other attacks
    Often, the government or a B2B partner will impose a mandate requiring an enterprise to be recoverable within a certain number of hours or else they won’t do business together. However, even without a mandate, an enterprise can be hit by ransomware or another attack.

  • Troubleshooting that takes focus away from other tasks
    An on-premises solution has an isolated environment for each component. If something goes awry with that workload, it typically requires the brainpower of several people to fix it. If your team is not able to focus on their other tasks, each minute of troubleshooting is another minute where data is vulnerable.
  • Servers that have not been rebooted
    We have seen enterprise customers that haven’t rebooted their servers in three to five years. This represents a serious security risk. The Verizon 2021 Data Breach Investigations Report states that 20% of companies that experienced a breach had vulnerabilities dating back to 2010.

Benefits of Disaster Recovery on VMware Cloud on AWS

In the cloud, as with many things, time equals cost. The more automation you can do, the quicker the time to operation. 

The VMware Cloud on AWS platform provides a seamless disaster recovery service. It’s easy to configure and replicate within the AWS realm to test failure scenarios and prove, time and again, that should something happen to the primary workload, it’s recoverable in a timely manner.

To maximize your benefits, you need proper tuning, best practices, and a thorough understanding of what your workload consumes the most. All these elements are addressed by VMware Cloud on AWS — a hyperconverged platform where storage, networking, and compute are all bundled together. Instead of waiting for a disaster to hit, you can proactively predict failure. If needed, VMware Cloud on AWS simply replaces the node and it’s back to business as usual.

Finally, the platform maintains a 99.9% uptime SLA for its infrastructure and ensures stability and security with forced upgrades that reduce the possibility of an attack.

The value of developing a single source of truth

Think about a previous technology role you’ve had. You learned things along the way that were unique to you. Maybe it was a process for running tests, or a method for tagging and categorizing data. Before you left your company, you may have shared some of your experience with your teammates during calls or written some of it down, but chances are you did not transfer much of your knowledge before departing.

This scenario happens regularly. People leave organizations for new opportunities and take their technical knowledge with them. And with how quickly technology changes, even documentation that does exist may become antiquated after a few years.

Our goal is to understand what a company has, how it’s configured, and what actions can be taken against it. We capture all the anomalies and differences from what customers have done manually and replicate them in a test environment. As things change, we update the documentation.

When you have a single source of truth in place, it not only helps you stay calm if a disaster does occur, it also provides clear guidance across all teams so you can coordinate an immediate and effective response. Overall operations move more smoothly and efficiently, and your team has more time to focus on improvements within your business.
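As a loose illustration of the idea (not our actual tooling, and the fields shown are invented), a single source of truth can be as simple as a structured document that is re-generated whenever the environment changes:

    import json
    from datetime import date

    # Hypothetical inventory: what exists, how it is configured, and known anomalies.
    source_of_truth = {
        "last_reviewed": date.today().isoformat(),
        "workloads": [
            {
                "name": "billing-api",
                "environment": "VMware Cloud on AWS",
                "load_balancer": "virtual, auto-scaling enabled",
                "authentication": "Active Directory extended to the cloud",
                "disaster_recovery": {"replicated": True, "last_failover_test": "2024-04-12"},
                "anomalies": ["manual firewall rule added during migration"],
            },
        ],
    }

    # Rewritten whenever something changes, so every team reads the same document.
    with open("single_source_of_truth.json", "w") as fh:
        json.dump(source_of_truth, fh, indent=2)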
 

Summary

VMware Cloud on AWS is a powerful platform for addressing challenges with load balancing, Active Directory, and disaster recovery. Working with a partner that understands how to utilize and deploy its solutions will make your next cloud project even more successful.

Learn how we can help you Cloud Confidently® with VMware Cloud on AWS.

Hetal Patel is the Senior VMware Technical Lead at Effectual, Inc. 

How A Cloud Partner Helps You Maximize AWS Cloud Services

Working with an experienced cloud partner can fill in knowledge gaps so you get the most out of the latest cloud services

Cloud services have been widely adopted for years, which is long enough for users to gain confidence in their understanding of the technology. Yet as cloud continues to grow and develop, customer knowledge does not always grow proportionately. When users become overconfident, they can unknowingly overlook new Amazon Web Services (AWS) technologies and services that could have a significant impact on positive business outcomes.

What cloud customers are overlooking

  • Control is not necessarily security
    In the early days of the cloud, many companies were reluctant to offload everything to cloud services due to security concerns. Today, many CTOs and IT Directors are still unsure how security translates to the cloud and remain hesitant about giving up control.

    AWS provides a proven, secure cloud platform for migrating, building, and managing workloads. However, it takes knowledge and expertise to take individual services and architect them into a solution that maintains, or even heightens security. A partner well-versed in AWS services and advanced cloud technologies can identify and deploy tools and services that strengthen your security posture and add value to your business.
  • Keeping up with cloud innovation is an investment in continual learning
    This can be a tall order for organizations with limited internal resources or cloud knowledge. Partnering with cloud experts who stay constantly informed about new AWS services – and know how to implement them – gives you immediate access to cloud innovation. It also frees up your developers and engineers to focus on core initiatives.
  • Aligning business and IT delivers better results
    Internal teams that pitch a cloud-forward strategy often face hesitancy from business leaders. This is because executives have historically made decisions about how to allocate and manage IT resources, leaving developers to work within the parameters they are presented. However, involving solutions architects and cloud engineers in decision-making brings a crucial technical perspective that uncovers additional options with better results.

    Bridging this gap is a matter of translation, as what makes sense to an in-house developer might seem like jargon to executives in other business units. Because our engineers understand both business and technology, we can bring clarity to modernization initiatives by acting as a translator between business and IT – preventing major communication and technical headaches down the line.  

The benefits of pairing managed and professional services

If your cloud partner is capable of handling larger professional services projects such as migrations, app development, and modernization as well as the ongoing maintenance of managed services, you will be far more successful at optimizing resources, improving security, reducing stress, and realizing cost savings.

There are several advantages of pairing professional and managed services:

  • Reduce operational overhead and optimize workloads
    Allowing a partner to directly manage more systems reduces your operational overhead and optimizes workloads. This guarantees your business will not get bogged down with redundant operations or pay for more computing power than is truly needed.

    For instance, you may be paying high colocation costs to house everything in your data center. By engaging a partner that offers both professional and managed services, you can move a workload safely from on-premises to the cloud with the same functionality, make it more secure, maintain compliance, and have confidence it is being optimized for performance and cost.
  • Upgrade and modernize more efficiently
    Having professional services and managed services under one roof makes it easier and more efficient to upgrade or modernize. Changes to infrastructure go much smoother with a trusted cloud partner at the wheel who has access to customer systems. Without access, the partner has to navigate the back and forth between client-controlled systems and new professional services before any real progress can take place.

    The goal is not to scrap an entire in-house system, but to develop a smooth transition where managed and professional services work in harmony. With the right context, and the right cloud partner, you can translate the ROI of pairing professional services and managed services so your executives are onboard with cost-saving proposals and your developers have a seat at the table.

In summary, you can maximize the benefits of cloud services by engaging a partner with the technical expertise, business experience, and deep knowledge of AWS services to support your modernization efforts.

Connect with our Modernization Engineers™ to find out how we can help you unlock the full power of the cloud.

Jeff Finley is a Senior Cloud Architect at Effectual, Inc. 

Modernizing with VMware Cloud on AWS

VMware Cloud on AWS transforms modernization into a steady, consistent marathon you can win on your first attempt

Modernization is something most businesses want to do, but it often takes a backseat to other business priorities. When leaders weigh short-term quick wins from application migrations against longer-term modernization initiatives, the appeal of speed frequently wins the decision-making process.

However, much like training for and running a marathon, successful modernization requires gradual activity over a long period of time, as well as a high level of persistence – you can’t train for a marathon by running 400 meter sprints 6 times a day 3 months before the start. If you do try to make IT Transformation a sprint, you may fast-track the wrong priorities and end up with high costs, stressed staff, and poor results.

Expanding your options

Let’s say your organization has only a six month window to modernize due to an upcoming data center contract expiration. You believe it will take you a full year to modernize 100% of your workloads – potentially faster with accelerators like an influx of capital or a new CIO joining the company. What do you do?

In this scenario, you usually have two choices:

  • Attempt to accelerate the modernization and try to “sprint,” hoping you don’t misjudge priorities
  • Reluctantly sign the renewal and ask to extend the timeline to a year to give yourself the best chance to succeed

Fortunately, VMware Cloud (VMC) on AWS offers a far better third choice:

  • Do not sign the renewal, and instead remove cost from the business by rapidly and safely moving your workloads to a like-for-like technical home within the six-month window

A rapid migration with long term results

Recently, one of our global enterprise customers had to evacuate a data center in three months. Their lease renewal was coming up and they did not want the long-term commitment. With the exit date set in stone, VMware Cloud on AWS was the best option for hitting their goal of cancelling the data center contract. 

With the flexibility needed to analyze each workload to the depth required, they successfully evacuated the data center in time and secured an additional 12 months to migrate over to native AWS. As a result, the company saved $1.8 million over a span of 18 months.

VMware Cloud on AWS allows you to migrate applications quickly with very low risk, with the opportunity to modernize over a longer period of time.

Analyzing the best execution venue for workloads

Ideally, you should be analyzing the best execution venue for your workloads on a regular basis. However, most IT professionals only consider the technologies currently available. While the venue may never change, it is important to be both aspirational and pragmatic about this analysis to ensure you are exploring all possibilities. 

Taking the time to evaluate your best execution venue can help you find the lowest total cost of ownership (TCO) with the maximum business value. For example, your analysis may reveal that VMware technology that is reliable, performant, and well-supported is a better choice than microservices with event-driven compute and orchestrated containers.

In the data center renewal scenario above, if 100% of the applications are running in VMware Cloud on AWS and it is possible to perfectly map the utilization requirement, you may find that 75% are eligible for a complete refactoring. This would dramatically increase performance and reduce cost, leaving the remaining 25% of applications running as a dependable baseline. 

Common roadblocks to modernizing applications

During a marathon, there’s a big difference between stubbing your toe and falling down. A small misstep won’t prevent you from continuing on. But if you hit the ground, picking yourself back up and staying in the race is a lot harder. The same is true with a modernization project.

Working with an experienced partner can keep you running smoothly along your path to modernization. Here are three common pitfalls to watch for:

1: Out of Scope Responsibilities

Have you ever heard someone in IT say, “That’s not my responsibility?” 

Companies that try to tackle modernization on their own end up managing numerous vendors across departments. When an issue arises, vendors can often only confirm whether the issue is their responsibility. If it falls out of their scope, it is then up to the business to figure out how to solve the bigger problem.

This comes up often on troubleshooting calls. Someone will inevitably claim they can’t help because it is outside of their scope (“That’s a networking issue, and I’m a database guy so you’ll have to talk with someone else”). The statement may be factual, but it also reveals whether a vendor is a true partner to your organization – or simply passing the buck.

Make sure you find a trusted partner that strives towards business outcomes, and will assume responsibility as broadly as needed to get your issues resolved.

2: Forgetting about the Use Case

Organizations often start a migration or transformation by drilling down to the lowest common denominator of CPU/RAM/Storage. However, beginning modernization with an understanding of what your users require will result in a solution that meets their needs.

In addition, if you are only considering CPU/RAM/Storage and then try to pick up your application and put it on top of the cloud, you may miss critical business considerations such as licensing and networking. Remember, not everything maps one-to-one from a migration point of view compared to on-premises. 

Take time to evaluate what your workload truly needs before starting to modernize. This will ensure a smarter architecture and a better user experience.

3: Underestimating the Benefits of VMware Cloud on AWS

VMware Cloud on AWS is an essential strategic solution for any organization utilizing VMware and embracing modernization. You may want to rush to the latest cloud native technologies, but gaining access to innovation requires thoughtful planning and execution. 

VMware Cloud on AWS can accelerate your digital transformation, making you more efficient, more secure, and more productive in the long run. 

Slow and steady wins the modernization race, and having the right team on your side can ensure you get across the finish line. Learn more about VMware Cloud on AWS.

Tom Spalding is Chief Growth Officer at Effectual, Inc. 

Enabling IT Modernization with VMware Cloud on AWS

Cloud and virtualization technologies offer a broad range of platform and infrastructure options to help organizations address their operational needs, no matter how complex or unique, and reduce their dependence on traditional data centers.

As the demand for cloud and cloud-compatible services continues to grow across departments within organizations, cloud adoption rates are steadily rising and IT decision makers are realizing that they no longer need to be solely reliant on physical data centers. This has led countless organizations to shrink their data center footprints.

VMware Cloud on AWS is unique in bridging this gap, as it utilizes the same skill sets many organizations have in-house to manage their existing VMware environments. Sure, there are considerations when migrating, but ultimately the biggest change in moving to VMware Cloud (VMC) on AWS is the underlying location of the software defined data center (SDDC) within vCenter. The benefits unlocked by VMC on AWS can have significant impacts on your organization – eliminating the need to worry about the security and maintenance of physical infrastructure (and the associated hands-on hardware work to address device failures) while delivering the impressive performance of a VMware environment sitting on top of the AWS backbone.

Technology That Suits Your Needs

Full and partial data center evacuations are becoming increasingly common and, while there are instances of repatriation (organizations moving workloads from the cloud back to the data center), the majority of organizations are sticking with “cloud-first” policies to gain and maintain business agility. Sometimes, however, even a company that’s begun their IT modernization efforts may still have systems and applications hosted on-premises or in a data center.

This may seem to indicate some hesitance to fully adopt the cloud, but it’s usually due to long-term strategy, technical barriers to native cloud adoption, or misconceptions about cloud security and compliance requirements. It’s rare to find an organization that isn’t loaded with technical debt, fully committed to specific software, tied to lengthy data center commitments – or all of the above.

Mission-critical legacy applications may not be compatible with the cloud, and organizations may lack the resources or expertise to refactor those applications so that they can properly function in a native cloud environment. Or perhaps there’s a long-term digital strategy to eventually move all systems and applications to the cloud but, in the meantime, they’re still leashed to the data center. Scenarios like these, and many more, are ideal for VMware Cloud on AWS, which allows organizations to easily migrate legacy VMware workloads with minimal refactoring or rearchitecting, or extend their existing data center systems to the cloud.

New, But Familiar

VMware Cloud on AWS was developed in collaboration between VMware, a pioneer and global leader in server virtualization, and AWS, the leading public cloud provider, to seamlessly extend on-premises vSphere environments to SDDCs built on AWS. VMC on AWS makes it easier for organizations to begin or expand their public cloud adoption by enabling lift and shift migration capabilities for applications running in the data center or on-premises VMware environments.

VMC on AWS also has a relatively minimal learning curve for in-house operations staff because, despite being hosted on AWS, it’s still VMware vSphere at its core and the environments are managed using the vCenter management console. This familiar toolset allows IT teams to begin utilizing the cloud without any major workforce retraining and upskilling initiatives because they can still use VMware’s suite of server virtualization and management tools.

The Right Tools for the Job

The vSphere suite of server virtualization products and vCenter management console may be familiar, but they’re far from outdated or limited. VMware continues to invest in the future, strengthening its cloud and virtualization portfolio by enhancing their existing offerings and developing additional services and tools to further enable IT modernization and data center evacuations.

These efforts mean we can expect VMware to continue playing a major role in helping organizations achieve and maintain agility by ensuring secure workload mobility across platforms, from public cloud to private cloud to hardware.

HCX, which essentially consists of a series of integrations that establish connectivity across systems and platforms and allows workloads to be migrated without any code or configuration changes, is regularly updated to enhance its functionality. VMware HCX can be used to perform live migrations using vMotion and bulk migration for up to 100 VMs at a time. VMware HCX can also provide a secure, accelerated network extension which, beyond providing a seamless migration experience and minimizing operational impacts usually associated with migrating workloads, helps improve the environment’s resiliency through workload rebalancing. This same functionality plays a critical role in disaster recovery and business continuity by replicating data across multiple locations.

A Thoughtful Approach to Modernization

Whether an organization is prioritizing the optimization of spend, revenue growth, streamlining operations, or revitalizing and engaging their workforce, a mature and robust digital strategy should be at the heart of the “how.” Cloud adoption will not solve these business challenges on its own – that requires forethought, planning, and expertise.

It can be challenging to make the right determinations about what’s best for your own unique business needs without a clear understanding of those needs. And for organizations still relying on old school hardware-based systems, the decision to remain with on-premises deployments, move to the cloud, or lift and shift to a platform like VMC on AWS requires a comprehensive assessment of their applications, hardware, and any existing data center/real estate commitments.

Internal teams may not have the specific technical expertise, experience, or availability to develop suitable digital strategies or perform effective assessments, especially as they focus on their primary day to day responsibilities. As an AWS Premier Consulting Partner with the VMware Master Services Competency in VMware Cloud on AWS, Effectual has established its expertise in VMware Cloud on AWS, making us an ideal partner to help ease that burden.

Cloud adoption doesn’t happen overnight, and organizations have to ensure disparate technologies, which may be at very different stages of their respective lifecycles, mesh well. They need to develop an appropriate modernization strategy and determine the best fit for each application and workload. The right partner can play a critical role in successfully overcoming these challenges.

Hetal Patel is a Senior VMware Technical Lead and co-founder at Effectual, Inc.

11 AWS Snowball Planning Considerations

Data transfer/migration is a key consideration in any organization’s decision to move into the cloud.

If a sound strategy is applied, migration of on-premises data to the cloud is usually a seamless process. When an organization fails to do so, however, it risks running into challenges stemming from deficiencies in technical resources, inadequate planning, and/or incompatibility with legacy systems, to name a few.

Data transfer via AWS Snowball is no exception. If performed incorrectly or out of order, some of the seemingly insignificant tasks related to the data migration process can become substantial obstacles that adversely affect a timeline.  The AWS Snowball device can be simple to use if one is familiar with other AWS data transfer services and/or follows all of the steps provided in the AWS Snowball User Guide. However, neglecting a single step can greatly encumber an otherwise ordinary data transfer process.

According to AWS on its service:

“AWS Snowball is used by customers who want to transport terabytes or petabytes of data to and from AWS, or who want to access the storage and compute power of the AWS Cloud locally and cost effectively in places where connecting to the internet might not be an option.”

AWS

When preparing to migrate data from on-premises storage into AWS via a Snowball device, an organization should be aware of the importance of 11 easily overlooked tasks and considerations associated with planning for the data move. They are as follows:

1. Understanding the specifics of the data being moved to the cloud.

Ensure that it is compatible and can transfer seamlessly to the cloud via AWS Snowball. Follow a cloud migration model to help lay out specific details and avoid surprises during the data transfer process.

2. Verifying and validating the amount of data being transferred.

Snowball is intended for large data transfers (over 10 terabytes). Using it for smaller data transfers is not a cost-effective option.

3. Verifying that the workstation meets the minimum requirement for the data transfer.

It should have a 16-core processor, 16 GB of RAM, and an RJ45 or SFP+ network connection.

4. Performing a data transfer test on the workstation an organization plans to use to complete the task.

This will not only equip the organization with an understanding of the amount of time needed to perform the transfer but will provide an opportunity to try various methods of transferring data. Additionally, it will assist with estimating the time the Snowball device will need to be in the organization’s possession, as well as its associated cost.

NOTE: The Snowball Client must be downloaded and installed before this step is performed.
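To complement the hands-on test, a back-of-the-envelope estimate like the sketch below can set expectations; the throughput figure is an assumption, so substitute the rate you actually measure on your workstation:

    # Rough transfer-time estimate; replace the assumed throughput with the rate
    # measured during your own workstation test.
    data_tb = 40                # data to copy onto the Snowball
    throughput_mb_s = 250       # assumed sustained copy rate in MB/s

    seconds = data_tb * 1_000_000 / throughput_mb_s     # using decimal units: 1 TB = 1,000,000 MB
    print(f"~{seconds / 3600:.0f} hours ({seconds / 86_400:.1f} days) of copy time "
          f"before packing and return shipping")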

5. Creating a specific administrative IAM user account for the data transfer process via the management console.

This account will be used to order, track, create and manage Snowball Import/Export jobs and return the device to AWS.

NOTE: It is important to avoid using personal IAM user accounts if individuals will be responsible for ordering the device and performing the data transfer.
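As an illustration only (the user name, policy, and bucket are placeholders, and the permissions should be scoped to your own resources per the AWS Snowball documentation), creating such an account with boto3 might look like this:

    import json
    import boto3

    iam = boto3.client("iam")

    # Dedicated administrative identity for the Snowball job, separate from personal accounts.
    iam.create_user(UserName="snowball-transfer-admin")

    # Inline policy sketch: Snowball job management plus access to the destination bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "snowball:*", "Resource": "*"},
            {"Effect": "Allow", "Action": "s3:*",
             "Resource": ["arn:aws:s3:::example-import-bucket",
                          "arn:aws:s3:::example-import-bucket/*"]},
        ],
    }
    iam.put_user_policy(
        UserName="snowball-transfer-admin",
        PolicyName="snowball-import-job",
        PolicyDocument=json.dumps(policy),
    )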

6. Following the “Object Key Naming convention” when creating S3 buckets.

It is also important to confirm that the selected S3 bucket name aligns with the expectations of the stakeholders.

7. Confirming the point of contact/s and shipping address for the Snowball device.

This is especially important if the individual ordering the device is different from the one performing the data transfer.

8. Setting up SNS notifications to help track the stages of the Snowball job.

This will keep the stakeholders informed of the shipping status and the importing of data to the S3 bucket.
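A sketch of wiring this up with boto3 follows; the bucket, address ID, and role ARN are placeholders from your own job setup, and the exact create_job parameters should be confirmed against the current Snowball API:

    import boto3

    sns = boto3.client("sns")
    snowball = boto3.client("snowball")

    # Topic that will receive a message at each stage of the Snowball job.
    topic_arn = sns.create_topic(Name="snowball-job-status")["TopicArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="stakeholders@example.com")

    # Placeholder job parameters; AddressId, RoleARN, and the bucket come from your own setup.
    snowball.create_job(
        JobType="IMPORT",
        Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::example-import-bucket"}]},
        AddressId="ADID-00000000-0000-0000-0000-000000000000",
        RoleARN="arn:aws:iam::123456789012:role/snowball-import-role",
        ShippingOption="SECOND_DAY",
        Notification={"SnsTopicARN": topic_arn, "NotifyAll": True},
    )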

9. Being aware of how holidays could affect the progress or process of the data-transfer timeline.

This is important because additional costs are accrued 10 days after the Snowball is delivered.

10. Considering the organization’s administrative processes that might hinder or delay the data transfer process.

Factoring internal processes (e.g., Change Request management, stakeholder buy-in, technical change moratoriums, etc.) into the timeframe it will take to receive the device, start the job, and ship it back to AWS can help prevent unnecessary fees.

NOTE: The Snowball device has no additional cost if it is returned within 10 days from the date it is received. Following that time, however, a daily late fee of $15 is applied until the date AWS receives it.
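The fee arithmetic itself is simple, as this small sketch shows (dates are hypothetical; the 10-day grace period and $15 daily fee are the figures noted above):

    from datetime import date

    FREE_DAYS = 10       # on-site days included with the job
    DAILY_LATE_FEE = 15  # dollars per additional day until AWS receives the device

    received = date(2024, 6, 1)       # hypothetical delivery date
    shipped_back = date(2024, 6, 18)  # hypothetical date the device reaches AWS

    extra_days = max((shipped_back - received).days - FREE_DAYS, 0)
    print(f"Late fees accrued: ${extra_days * DAILY_LATE_FEE}")   # 7 extra days -> $105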

11. Keeping the original source data intact until the data import is confirmed.

It is very important that source data remain intact until the Snowball device has been returned to AWS, the data import has been completed, and the customer has validated the data in the S3 bucket(s).
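One way to support that validation, sketched below with placeholder names, is to compare object counts and total bytes in the destination bucket against the untouched local copy before anything is deleted:

    import os
    import boto3

    BUCKET = "example-import-bucket"   # placeholder destination bucket
    LOCAL_ROOT = "/data/export"        # placeholder path to the original source data

    # Tally what landed in S3 after the import job completes.
    s3 = boto3.client("s3")
    s3_count = s3_bytes = 0
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            s3_count += 1
            s3_bytes += obj["Size"]

    # Tally the untouched source data for comparison.
    local_count = local_bytes = 0
    for root, _, files in os.walk(LOCAL_ROOT):
        for name in files:
            local_count += 1
            local_bytes += os.path.getsize(os.path.join(root, name))

    print(f"S3: {s3_count} objects, {s3_bytes} bytes | local: {local_count} files, {local_bytes} bytes")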

Transferring data from on-premises to an AWS Snowball can be an uneventful endeavor when thorough planning is done in advance of ordering the device. Taking these 11 planning tasks and considerations into account is essential to eliminating some of the potential headaches and stress occasionally associated with this type of activity.

Refer to AWS Snowball Documentation for additional information and specific instructions not covered in this article.

If you or your organization has more questions, reach out to us at sales@effectual.com.

The Reality of the Cloud for Financial Services – Part 1

According to a report from Markets and Markets, the financial services cloud market is set to reach $29.47 billion by 2021, growing at a compound annual rate of 24.4 percent.

The study further found that most of this growth will be right here in North America. At the forefront of this trend, Capital One has made public declarations around their all-in cloud, all AWS strategy. Their significant presence on the expo floor at AWS re:Invent was further evidence of this commitment. The growth of cloud adoption in this sector is driven by a simple truth: Financial services are being transformed by the cloud.

Still, some Financial Services companies struggle with the challenge presented by the maze of legacy services built up over decades of mergers and acquisitions. It’s all too common for financial services organizations to still be using IBM mainframes from the 1960s alongside newer technologies. This can make the prospect of cloud transformation seem daunting, to say the least.

However, the question is not if your organization will transition to the cloud, but when. Cloud transformation benefits greatly outweigh the risks — and your competitors are already moving. The longer you wait, the further behind you fall.

Benefits of Cloud Migrations from Legacy Solutions

There are specific benefits to financial services organizations migrating legacy environments to the cloud:

  • Efficiency: A cloud migration offers opportunity for increased efficiency at decreased operational costs. For financial services organizations looking to revolutionize their IT procurement model, a cloud migration from legacy environments is a quick and easy win, opening the door to more modern tools and services.
  • Decreased Storage Costs: The regulatory requirements for data retention can create enormous costs for financial services. Moving to AWS cloud storage solutions can significantly reduce costs, while still meeting stringent data security requirements. A well architected cloud storage solution will meet your needs for virtually unlimited scalability while improving cost transparency and predictability.
  • Increased Agility: To improve their competitive edge, Financial Services organizations are seeking ways to become more agile. A well-planned cloud transformation will result in applications and platforms that effortlessly scale up and down to meet both internal and customer-facing needs. What’s more, a successful cloud transformation means access to new and better tools and advanced resources.
  • Improved Security: Cloud solutions offer access to enterprise-level equipment and the security that comes along with that. This is normally only within the budget of very large organizations. Cloud services provide increased redundancy of data, even across wide geographical areas. They also offer built-in malware protection and best-in-class encryption capabilities.

In my next post, we’ll discuss the “culture transformation” that will enable Financial Services to maximize the return on their cloud transformation investment.

Robb Allen is the CEO of Effectual, Inc.

A Tale of Two Models: Provisioning vs. Capacity

A couple of weeks ago, I wrote about current IT trends being ‘revolutionary’ as opposed to ‘evolutionary’ in nature.

Today, I want to expand on that concept and share one of the planning models that makes cloud systems in particular, and automated infrastructure in general, more cost effective and efficient. When talking to clients, I refer to this as “The Provisioning vs. Capacity Model”. First, let’s look at the Provisioning Model, which, with some adaptation, has underpinned infrastructure decisions for the last five decades of IT planning. The basic formula is fairly complex, but looks something like this:

((((CurrentPeakApplicationRequirements * GrowthFactor) * HardwareLifespan) + FudgeFactor) * HighAvailability) * DisasterRecovery

Let’s look at a practical example of what this means. As an IT leader asked to host a new application, I would work with the app vendor and/or developers to understand the compute, storage, and networking configurations they recommend per process/user. Let’s say that we determine that a current processor core can support 10 concurrent users and a single user creates roughly 800K of data per day.

I would then work with the business to identify the number of users we expect to begin with, their estimate for peak concurrent users and what expected annual growth will be. Ultimately, we project that we will start with 20 users who may all be using the system at the same time. Within the first year, they anticipate scaling to 250 users, but only 25% of them will be expected to be using the system concurrently. By year five (our projected hardware lifespan) they are projecting to have 800 users, 300 of whom may be using the system at any given time. I can now calculate the hardware requirements of this application:  

Year | Users | Storage (GB) | Concurrent Users | Cores
1    | 250   | 49.59        | 63               | 6
2    | 450   | 138.85       | 135              | 14
3    | 600   | 257.87       | 228              | 23
4    | 700   | 396.73       | 259              | 26
5    | 800   | 555.42       | 300              | 30

Being an experienced IT leader, I ‘know’ that these numbers are wrong, so I’m going to pad them. Since the storage is inconsequential in size (I’ll likely use some of my heavily over-provisioned SAN), from here on out, I’ll focus on compute. The numbers tell me that I’ll need 2 servers, each with 4 quad-core processors, for a total of 32 cores. Out of caution I would probably increase that to 3 servers. Configuring memory would follow a similar pattern. Because the application is mission critical, it’ll be deployed in a Highly Available (HA) configuration, so I’ll need a total of six servers in case there is a failure with the first three. This application will also require infrastructure in our DR site, so we’ll replicate those six servers there, for a total order of twelve servers. In summary, on day one, this business would have a dozen servers in place to support 20 users.
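Here is a minimal sketch of the arithmetic behind the table and the final server count, using the assumptions stated above (10 concurrent users per core, roughly 800 KB per user per working day); the working-day count and rounding choices are my own assumptions, so the storage figures only approximate the table:

    USERS_PER_CORE = 10        # vendor guidance: one core supports ~10 concurrent users
    MB_PER_USER_DAY = 0.8      # ~800 KB of new data per user per working day
    WORK_DAYS_PER_YEAR = 248   # assumed working days per year

    # (total users, peak concurrent users) per year, from the business projections above
    projections = [(250, 63), (450, 135), (600, 228), (700, 259), (800, 300)]

    cumulative_gb = 0.0
    for year, (users, concurrent) in enumerate(projections, start=1):
        cores = round(concurrent / USERS_PER_CORE)
        cumulative_gb += users * MB_PER_USER_DAY * WORK_DAYS_PER_YEAR / 1000
        print(f"Year {year}: {users} users, {concurrent} concurrent -> {cores} cores, ~{cumulative_gb:.0f} GB stored")

    # Provisioning-model padding: size for the year-5 peak, pad to 3 servers,
    # double for HA, then double again for DR.
    servers_ordered = 3 * 2 * 2
    print(f"Servers ordered on day one: {servers_ordered}")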

The Provisioning Model can lead to overkill

Under the provisioning model a Highly Available solution with sufficient Disaster Recovery infrastructure could result in a large server deployment to support a very small number of users.

I know what you’re thinking, “This is insanity, if my IT people are doing this, they are robbing me blind!” No, they aren’t robbing you blind, they are following a “Provisioning” model of IT planning. The reason they plan this way is simple; it usually takes months from the time that an infrastructure need is identified to the time that it is deployed in production. It looks something like this in most enterprises:

  • 1-2 weeks – Identify a need and validate requirements
  • 1 week – Solicit quotes from 3 approved vendors (if the solution comes from a non-approved vendor, add 3 months to a year for vendor approval)
  • 2-3 weeks – Generate a Capital Request with documented Business Justification
  • 2 weeks – Submit Capital Request to Finance for approval
  • 2-3 weeks – Request a PO from purchasing & submit to vendor
  • 2-3 weeks – Wait for vendor to deliver hardware & Corporate receiving to move equipment to configuration lab
  • 3-4 weeks – Manually configure solution (Install O/S & Applications, request network ports, firewall configurations, etc)
  • 2 weeks – Install and Burn-In

The total turnaround time here is 15-20 weeks. Based on the cost, time, pain and labor it takes to provision new infrastructure, we want to do it right and be prepared for the future, and there is no quick fix if we aren’t. Using a provisioning model, the ultimate cost in deploying a solution is not in the hardware being deployed, but rather in the process of deploying it.

The upshot of all this is: most of your IT infrastructure is sitting idle or nearly idle most, if not all, of the time. As we assess infrastructure, it is not uncommon for us to see utilization numbers below 10%. Over the past 15 years, as configuration management, CI/CD, virtualization, and containerization technologies have been adopted by IT, the math above has changed, but because those technologies are evolutionary in nature, the planning process hasn’t. In the Provisioning Model, we are always planning for and paying for capacity that we will need in the future, not what we need today.

Enter Cloud Computing, Infrastructure Automation, Infrastructure as Code (IaC), and AI. Combined, these technologies have ushered in a revolutionary way to plan for IT needs. IaaS and PaaS platforms provide nearly limitless compute and storage capability with few geographic limitations. Infrastructure Automation and IaC allow us to securely and flawlessly deploy massive server farms in minutes. AI and Machine Learning can be leveraged to autonomously monitor utilization patterns, identify trends, and predictively trigger scaling activities to ensure sufficient compute power is delivered “Just in Time” to meet demand, then scaled back as demand wanes. In cases where IaaS and PaaS providers experience localized outages, the same combination of IaC and AI can deploy your infrastructure in an unaffected region, likely before most of your user base or IT is even aware that an outage has occurred. Software updates and patches can be deployed without requiring system outages. The possibilities and opportunities are truly mind-boggling.

Taking advantage of these capabilities requires a complete change in the way our IT teams think about planning and supporting the applications our users consume. As I mentioned above, the incremental hardware costs of over-provisioning in the data center are inconsequential when compared with the often unaccounted-for cost of deploying that hardware. In forward-looking IT, where IaaS and PaaS are billed monthly and provided on a cost-per-deployed-capacity model, and infrastructure can be nearly instantly deployed, we need to abandon the Provisioning Model and adopt the Capacity Model. Before I proceed, you need to understand that these three pillars – IaaS/PaaS, Infrastructure Automation, and AI – must all be in place to effectively take advantage of the cost savings and efficiency of the Capacity Model while still delivering secure, reliable services to your users. Merely moving (often referred to as “Lift and Shift”) your servers to the cloud and optimizing them for utilization may provide some initial cost savings, but at significant risk to the security, availability, and reliability of services.
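As a loose illustration of that “Just in Time” idea, the sketch below shows the kind of decision rule an automated scaler might apply; the 90% target, the headroom, and the guard against terminating an instance before its billed hour is nearly used up are assumptions for illustration, not a production autoscaler:

    def scaling_decision(utilization, instances, minutes_into_billed_hour,
                         target=0.90, headroom=0.05):
        """Pick a scale action aimed at keeping the fleet near the target utilization."""
        if utilization > target:
            return "launch one more instance"
        if (utilization < target - headroom and instances > 1
                and minutes_into_billed_hour >= 55):
            # Only release capacity once the hour already billed is nearly spent.
            return "terminate one idle instance"
        return "hold"

    print(scaling_decision(0.95, instances=4, minutes_into_billed_hour=20))  # launch one more instance
    print(scaling_decision(0.70, instances=4, minutes_into_billed_hour=58))  # terminate one idle instance
    print(scaling_decision(0.70, instances=4, minutes_into_billed_hour=10))  # hold (avoid paying for churn)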

3 Pillars of the Capacity Model

IaaS/PaaS, Infrastructure Automation, and AI must all be in place to effectively take advantage of the cost savings and efficiency of the Capacity Model.

Following the Capacity Planning model, we try to align deployed infrastructure to utilization requirements as closely as we can, hour by hour. You may have noticed in my Provisioning example above that I was primarily concerned with, and planning for, the required capacity at the end of the lifespan of the infrastructure supporting the application. I was also building to a standard that no system would ever exceed 35%-40% utilization.

In the new capacity planning model, I want every one of my services running as close to 90% utilization as possible, ideally with only enough headroom to support an increase in utilization for as long as it takes to spin up a new resource (typically only a few minutes). As demand wanes, I want to be able to intelligently terminate services as they become idle. I use the word “intelligently” here for a reason; it’s important to understand that many of these resources are billed by the hour, so if I automatically spin up and terminate a resource in 15 minutes, I am billed for a full hour – if I do it 3 times in a single hour, I’m billed for 3 hours.

Let’s look at a sample cost differential between Provisioning and Capacity modelling in the cloud. For this exercise, I’m just using the standard rack rates for AWS infrastructure. I am not applying any of the discounting mechanisms that are available, and I am using simple calculations to illustrate the point.

Provisioning Model – 5 Year Costs:

Year | Instance  | Cost/Hour | Qty | Hours/Month | Annual Cost
1    | c5.xlarge | $0.17     | 48  | 720         | $70,502.40
2    | c5.xlarge | $0.17     | 48  | 720         | $70,502.40
3    | c5.xlarge | $0.17     | 48  | 720         | $70,502.40
4    | c5.xlarge | $0.17     | 48  | 720         | $70,502.40
5    | c5.xlarge | $0.17     | 48  | 720         | $70,502.40

Total Cost: $352,512.00

Capacity Model – 5 Year Costs:

Year | Instance  | Cost/Hour | Qty | Hours/Month | Annual Cost
1    | c5.xlarge | $0.17     | 2   | 410         | $1,672.80
2    | c5.xlarge | $0.17     | 4   | 293         | $2,390.88
3    | c5.xlarge | $0.17     | 6   | 255         | $3,121.20
4    | c5.xlarge | $0.17     | 7   | 245         | $3,498.60
5    | c5.xlarge | $0.17     | 8   | 237         | $3,867.84

Total Cost: $14,551.32
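A quick sketch of the arithmetic behind both tables, using the rates and quantities shown above (standard rack rates, no discounting):

    RATE = 0.17            # c5.xlarge rate used in the tables, $/hour
    MONTHS_PER_YEAR = 12

    # (instance quantity, billed hours per month) for each of the five years
    provisioning = [(48, 720)] * 5    # always-on fleet sized for the year-5 peak
    capacity = [(2, 410), (4, 293), (6, 255), (7, 245), (8, 237)]

    def five_year_cost(plan):
        """Sum annual cost = rate * quantity * hours per month * 12 across all years."""
        return sum(RATE * qty * hours * MONTHS_PER_YEAR for qty, hours in plan)

    print(f"Provisioning model: ${five_year_cost(provisioning):,.2f}")   # $352,512.00
    print(f"Capacity model:     ${five_year_cost(capacity):,.2f}")       # $14,551.32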

In the model above, for simplicity of understanding, I only adjusted the compute requirements on a yearly basis. In reality, with the ability to dynamically adjust both instance size and quantity hourly based on demand, actual spend would likely be closer to $8k over 5 years. It’s also important to remember that Revolution is neither free nor easy; developing and refining the technologies to support this potential savings for this new application will cost $50k-$100k over the five years, depending on the application requirements.

At the end of the day, or at the end of five years, following the Capacity Model may result in spending well less than half the cost of the Provisioning Model, and you would have enjoyed much higher security, reliability, and availability of applications with a significantly lower support cost.

To wrap up this very long post: yes, it is true that massive cost savings can be realized through 21st Century IT Transformation, but it will require a Revolution in the way you think about supporting your business applications. Without people experienced in these very new technologies, you’re not likely to be happy with the outcome. Finally, if you encounter anyone who leads the charge to cloud with words like “Lift and Shift”, please don’t be hesitant to laugh in their face. If you don’t, you may end up spending $350,000+ for what could otherwise cost you $8,000.