Empowering Marketers and Driving Customer Engagement with Amazon Pinpoint

In an increasingly virtual world of remote work, online learning, and digital interfacing, successful customer engagement can differentiate you from competitors and provide deeply valuable insights into the features and innovations important to your users. A well-designed, well-managed user experience not only helps you gain market share, but also uncovers new revenue opportunities to grow your business.

At Effectual, we begin projects with an in-depth discovery process that includes persona development, customer journey mapping, user stories, and UX research to design solutions with engaging, meaningful user experiences. Post-launch, your ability to capture, iterate, and respond to user feedback is just as essential to your success.

In our experience, many SaaS companies simply miss this opportunity to stay engaged with their customers. Reasons include the complexity and cost of designing, deploying, and managing customized marketing campaigns across multiple channels, as well as the lack of real-time data analytics to inform them. The result is a tidal wave of generic emails, poorly timed push notifications, and failed initiatives that hurt customer retention and engagement.

Amazon Pinpoint is a scalable outbound and inbound marketing communications service that addresses these challenges and empowers marketers to engage with customers throughout their lifecycle. The service provides data insights and a marketing dashboard within the Amazon Web Services (AWS) Management Console for creating and managing customized communications, leveraging automation, data analytics, filters, and integrations with other AWS products and third-party solutions.

Easy to use and scale

  • Manage campaigns from a user-friendly marketing dashboard
  • Scale reliably in a secure AWS environment

Targeted customer groups

  • Segment audiences from mobile and web application data or an existing customer list

Customized messaging across email, SMS, and push notifications

  • Personalize content to engage customers using static and dynamic attributes
  • Create customer journeys that automate multi-step campaigns, bringing in endpoints from your app, your API, or directly from a CSV file
  • Engage customers with targeted emails and push notifications from the AWS admin portal using a rich text editor and customizable templates

Built-in analytics

  • Set up customer endpoints by email address, phone number, or user ID to track user behavior within your app (see the sketch after this list)
  • Use real-time web analytics and live data streams to capture immediate feedback
  • Measure campaign data and delivery results against business goals
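
As an illustration of how endpoints and messages come together, here is a minimal sketch using boto3 (Python). The project ID, endpoint ID, and addresses are placeholders rather than values from this article, and it assumes the project's email channel is already enabled.

```python
# A minimal sketch of registering a Pinpoint endpoint for a user and sending
# a targeted email with boto3. All IDs and addresses are placeholders.
import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

APP_ID = "YOUR_PINPOINT_PROJECT_ID"  # hypothetical Pinpoint project ID

# Register (or update) an endpoint keyed to a user so behavior can be tracked.
pinpoint.update_endpoint(
    ApplicationId=APP_ID,
    EndpointId="user-1234-email",          # hypothetical endpoint ID
    EndpointRequest={
        "ChannelType": "EMAIL",
        "Address": "customer@example.com",
        "User": {
            "UserId": "user-1234",
            "UserAttributes": {"Plan": ["trial"]},
        },
    },
)

# Send a personalized email to that endpoint (assumes the email channel is enabled).
pinpoint.send_messages(
    ApplicationId=APP_ID,
    MessageRequest={
        "Endpoints": {"user-1234-email": {}},
        "MessageConfiguration": {
            "EmailMessage": {
                "FromAddress": "marketing@example.com",
                "SimpleEmail": {
                    "Subject": {"Data": "Welcome aboard"},
                    "HtmlPart": {"Data": "<p>Thanks for signing up!</p>"},
                },
            }
        },
    },
)
```

Because the endpoint carries a user ID and attributes, Pinpoint can tie delivery and engagement metrics back to that user for segmentation and analytics.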

Integrations with other AWS services and third-party solutions

  • Integrate Amazon Pinpoint with intelligent services such as Amazon Connect, Amazon Personalize, and Amazon Forecast
  • Use with existing marketing solutions such as Salesforce and other third-party applications

For marketers, Amazon Pinpoint is a powerful tool for improving digital engagement – particularly when integrated with other AWS services that utilize machine learning and live stream analytics. Organizations that invest in designing engaging user experiences for their solutions will only benefit from continually improving and innovating them.

Have an idea or project to discuss? Contact us to learn more about using Amazon Pinpoint to improve your customer engagement.

App Modernization: Strategic Leverage for Managing Rapid Change

The last few months of the COVID crisis have made the case for modernization even more evident, dramatically exposing security faults and the limitations of outdated monolithic applications and costly on-premises infrastructure. This lack of modernization is preventing many businesses and agencies from adapting to new economic realities and finding a clear path forward.

Applications architected for the cloud provide flexibility to address scalability and performance challenges and to explore new opportunities without requiring heavy investment.

Whether improving efficiencies with a backend process or creating business value with a new customer-facing app, modernizing your IT solutions helps you respond quickly to changing conditions, reduce your compliance risk, and optimize costs to match your needs. Applications that are already architected to take advantage of the cloud also provide flexibility to address scalability and performance challenges as well as to explore new opportunities without disrupting budgets or requiring heavy investment.

First, what defines technologies that are NOT modern?

  • Inflexible monolithic architectures
  • Inability to scale up or down with changes in demand
  • Security only implemented on the outside layer, not at the component layer
  • Costly on-premises infrastructure
  • Legacy hardware burdens
  • Waterfall development approaches

Maintaining legacy technologies is more expensive than modernizing them

Some of the most striking examples of the complexity, costs, and failures associated with legacy technologies have recently been seen in the public sector. In fact, some state unemployment systems have failed to handle the overwhelming increase in traffic and demand, impacting those in greatest need of assistance. Some in the public sector are already taking measures. Beth Cappello, acting CIO of the US Department of Homeland Security, recently stated that had her predecessors not taken steps to modernize their infrastructure and adopt cloud technologies, the ability for DHS personnel to remain connected during the pandemic would have been severely impacted.

Many government applications run on 30+ year-old mainframe computers using an antiquated programming language, creating a desperate need for COBOL developers to fix the crippled technologies. What the situation reveals is the dire need to replatform, refactor, and rearchitect these environments to take advantage of the scalability, reliability, and performance of the cloud.

Benefits of modernization:

  • Security by design
  • Resilient microservices architecture
  • Automated CI/CD pipeline
  • Infrastructure as code (see the sketch after this list)
  • Rapid development, increased pace of innovation
  • Better response to customer feedback and market demands
  • Flexible, pay-as-you-go pricing models
  • Automated DevOps processes
  • Scalable managed services (e.g., serverless)
  • In-depth analytics and data insights
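
To illustrate the infrastructure-as-code item above, here is a minimal, hypothetical sketch using boto3 (Python) that deploys a small CloudFormation template defined inline. The stack name and resource are placeholders; a real modernization effort would keep templates in version control and deploy them through a CI/CD pipeline.

```python
# A minimal sketch of infrastructure as code: an inline CloudFormation template
# deployed with boto3. Stack and resource names are placeholders.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppDataBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"VersioningConfiguration": {"Status": "Enabled"}},
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="modernization-demo",      # hypothetical stack name
    TemplateBody=json.dumps(template),
)

# Because the environment is declared as code, the same template can be
# version-controlled, peer-reviewed, and redeployed or torn down on demand.
```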

The realities of preparing for the unknown

As a result of shelter-in-place orders since early March, we have seen both the success of customers who have modernized as well as the struggles of those still in the process of migrating to the cloud.

Food for All is a customer with a farm-to-table grocery app that experienced a 400x increase in revenue as people rushed to sign up for their service during the first few weeks of the pandemic. Because we had already built their architecture for the Amazon Web Services (AWS) cloud, the company’s technology environment was able to scale easily to meet demand. In addition, they have a reliable DevOps environment that allowed them to immediately onboard more developers to begin building and publishing new features based on user feedback.

Unfortunately, other customers have not been able to adapt as quickly.

When one of our retail clients lost a large number of customers in the wake of COVID, they needed help scaling down their environment as rapidly as possible to cut their costs on AWS. However, the inherited architecture had been written almost 10 years ago, making it expensive and painfully time-consuming to implement adjustments or changes. As a result, the company is currently weighing whether to turn off their app and lose revenue or invest in modernizing it to recover their customers.

In fact, many early cloud adopters have not revisited their initial architectures to ensure they are taking advantage of the hundreds of new features and services released by AWS each year

For another large enterprise customer, the need to reduce technology costs meant laying off a third of their payroll. Though our team is helping them make progress on refactoring their AWS workloads, they were still unable to scale down 90% of their applications in time to avoid such a difficult decision. The situation has significantly increased their urgency to modernize.

The need for a cloud-first modernization service provider

With AWS now 14 years old, it is important to realize that modernization is just as important to early adopters as it is for the public sector’s legacy workloads. In fact, many early cloud adopters have not revisited their initial architectures to ensure they are taking advantage of the hundreds of new features and services released by AWS each year (during Andy Jassy’s 2019 re:Invent keynote alone, he announced 30 new capabilities in 3 hours). For these reasons, and many more, our Modernization Engineers help customers make regular assessments of their cloud infrastructure and workloads to maintain a forward-looking, modern IT estate.

Whether migrating out of an on-premises data center or colo, rearchitecting an existing cloud workload, or developing with new cloud-native features, it has never been more important to implement a modern cloud strategy. This is particularly true for optimizing services across your organization and embracing security as a core pillar.

According to Gartner, 99% of cloud security failures through 2025 will be the customer’s fault. Clearly, no organization wants to be a part of this statistic. Ongoing management of your critical workloads is a worthy investment that ensures your mission-critical assets are secure. The truth is that if security isn’t done right, it simply doesn’t matter.

We work frequently with customers looking to completely exit their data center infrastructure and migrate to an OPEX model in the cloud. In these engagements, we identify risks and dependencies using a staged approach to ensure the integrity of data and functionality of applications. However, this migration or “evacuation” is not an end state. In fact, it is often the first major milestone on a client’s journey toward continuous improvement and optimization. It is also nearly impossible to do efficiently without modern technology and the cloud.

Modern cloud management mitigates risk and enables modernization

While some workloads and applications may be considered cloud-ready for a relatively straightforward lift and shift migration, they can usually benefit from refactoring, rearchitecting, or replatforming based on a thorough assessment of usage patterns. Cloud adoption on its own will only go so far to improve performance and organizational flexibility.

Effectual is a Modernization Service Provider that understands how to modernize applications, their metrics, operational costs, security implications, and compliance requirements

A modern digital strategy allows you to unlock the true capabilities of the cloud, increasing scalability, agility, efficiency, and one of the most critical benefits of any modernization initiative – improved security. Modernized technologies can also utilize cutting edge security protocols and continuous compliance tools that are simply not available with physical infrastructure.

Unlike traditional MSPs (Managed Service Providers) who manage on-premises servers in physical data centers, Effectual is a cloud-first Modernization Service Provider that understands how to modernize applications, their metrics, operational costs, security implications, and compliance requirements. When our development team finishes a project, our customers can Cloud Confidently™ knowing that their environment is in experienced hands for ongoing management.

Most importantly, the path to modernization is not necessarily linear, whether you are developing an application specifically for the cloud, refactoring or rearchitecting as part of a data center migration, or updating and securing an existing cloud environment. New ideas, priorities, and changes to the world we live in require that we adapt, innovate, and rethink our approach to solving business challenges in even the most uncertain times.

When your organization or team needs the power to pivot, we have the Modernization Engineers, systems, tools, and processes to support that change.

Ready to begin your modernization journey?
Contact us to get started.

Ryan Comingdeer is the Chief Cloud Architect at Effectual.

Enabling IT Modernization with VMware Cloud on AWS

Cloud and virtualization technologies offer a broad range of platform and infrastructure options to help organizations address their operational needs, no matter how complex or unique, and reduce their dependence on traditional data centers.

As the demand for cloud and cloud-compatible services continues to grow across departments within organizations, cloud adoption rates are steadily rising and IT decision makers are realizing that they no longer need to be solely reliant on physical data centers. This has led countless organizations to shrink their data center footprints.

The benefits unlocked by VMC on AWS can have significant impacts on your organization…including the impressive performance of a VMware environment sitting on top of the AWS backbone.

VMware Cloud on AWS is unique in bridging this gap, as it utilizes the same skill sets many organizations already have in-house to manage their existing VMware environments. There are certainly considerations when migrating, but ultimately the biggest change in moving to VMware Cloud (VMC) on AWS is the underlying location of the software-defined data center (SDDC) within vCenter. The benefits unlocked by VMC on AWS can have significant impacts on your organization, from eliminating the need to worry about the security and maintenance of physical infrastructure (and the associated hands-on hardware work to address device failures) to the impressive performance of a VMware environment sitting on top of the AWS backbone.

Technology That Suits Your Needs

Full and partial data center evacuations are becoming increasingly common and, while there are instances of repatriation (organizations moving workloads from the cloud back to the data center), the majority of organizations are sticking with “cloud-first” policies to gain and maintain business agility. Sometimes, however, even a company that’s begun their IT modernization efforts may still have systems and applications hosted on-premises or in a data center.

This may seem to indicate some hesitance to fully adopt the cloud, but it’s usually due to long-term strategy, technical barriers to native cloud adoption, or misconceptions about cloud security and compliance requirements. It’s rare to find an organization that isn’t loaded with technical debt, fully committed to specific software, tied to lengthy data center commitments – or all of the above.

Mission-critical legacy applications may not be compatible with the cloud, and organizations lack the resources or expertise to refactor those applications so that they can properly function in a native cloud environment. Or perhaps there’s a long-term digital strategy to eventually move all systems and applications to the cloud but, in the meantime, they’re still leashed to the data center. Scenarios like these, and many more, are ideal for VMware Cloud on AWS, which allows organizations to easily migrate legacy VMware workloads with minimal refactoring or rearchitecting, or extend their existing data center systems to the cloud.

New, But Familiar

VMware Cloud on AWS was developed in collaboration between VMware, a pioneer and global leader in server virtualization, and AWS, the leading public cloud provider, to seamlessly extend on-premises vSphere environments to SDDCs built on AWS. VMC on AWS makes it easier for organizations to begin or expand their public cloud adoption by enabling lift and shift migration capabilities for applications running in the data center or on-premises VMware environments.

VMC on AWS also has a relatively minimal learning curve for in-house operations staff because, despite being hosted on AWS, it’s still VMware vSphere at its core and the environments are managed using the vCenter management console. This familiar toolset allows IT teams to begin utilizing the cloud without any major workforce retraining and upskilling initiatives because they can still use VMware’s suite of server virtualization and management tools.

The Right Tools for the Job

The vSphere suite of server virtualization products and vCenter management console may be familiar, but they’re far from outdated or limited. VMware continues to invest in the future, strengthening its cloud and virtualization portfolio by enhancing their existing offerings and developing additional services and tools to further enable IT modernization and data center evacuations.

These efforts mean we can expect VMware to continue playing a major role in helping organizations achieve and maintain agility by ensuring secure workload mobility across platforms, from public cloud to private cloud to hardware.

Cloud adoption doesn’t happen overnight, and organizations have to ensure disparate technologies mesh well.

VMware HCX, which essentially consists of a series of integrations that establish connectivity across systems and platforms and allow workloads to be migrated without any code or configuration changes, is regularly updated to enhance its functionality. HCX can be used to perform live migrations using vMotion and bulk migrations of up to 100 VMs at a time. It can also provide a secure, accelerated network extension which, beyond providing a seamless migration experience and minimizing the operational impacts usually associated with migrating workloads, helps improve the environment’s resiliency through workload rebalancing. This same functionality plays a critical role in disaster recovery and business continuity by replicating data across multiple locations.

A Thoughtful Approach to Modernization

Whether an organization is prioritizing the optimization of spend, revenue growth, streamlining operations, or revitalizing and engaging their workforce, a mature and robust digital strategy should be at the heart of the “how.” Cloud adoption will not solve these business challenges on its own – that requires forethought, planning, and expertise.

It can be challenging to make the right determinations about what’s best for your own unique business needs without a clear understanding of those needs. And for organizations still relying on old school hardware-based systems, the decision to remain with on-premises deployments, move to the cloud, or lift and shift to a platform like VMC on AWS requires a comprehensive assessment of their applications, hardware, and any existing data center/real estate commitments.

Internal teams may not have the specific technical expertise, experience, or availability to develop suitable digital strategies or perform effective assessments, especially as they focus on their primary day to day responsibilities. As an AWS Premier Consulting Partner with the VMware Master Services Competency in VMware Cloud on AWS, Effectual has established its expertise in VMware Cloud on AWS, making us an ideal partner to help ease that burden.

Cloud adoption doesn’t happen overnight, and organizations have to ensure disparate technologies, which may be at very different stages of their respective lifecycles, mesh well. They need to develop an appropriate modernization strategy and determine the best fit for each application and workload. The right partner can play a critical role in successfully overcoming these challenges.

Effectual provides Strategy and Ideation, Optimization, Migration and Modernization, and Modern Cloud Management services for enterprises and public sector organizations using AWS and VMware Cloud on AWS.

Hetal Patel is a Senior VMware Technical Lead and co-founder at Effectual, Inc.

A Rundown on re:Invent 2019 Pt 2

Members of the engineering team had the opportunity to attend Amazon Web Services’ annual re:Invent conference in Las Vegas.

Every year, AWS announces dozens of customer-sought features at the event (and some leading up to the event in what the community has dubbed “pre:Invent”). In this blog – the second in a series of two on re:Invent – we’ll touch on new announcements from this year’s conference:

  1. Amazon excited data scientists with the announcement of Amazon SageMaker Studio which provides an easier experience for building, training, debugging, deploying and monitoring machine learning models with an integrated development environment (IDE).
  2. Amazon Athena federated queries turn almost any data source into a queryable data repository, opening opportunities to gather insights based on data from many different sources in different formats.
  3. Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities by using machine learning, statistical analysis, and graph theory.
  4. Automate code reviews with Amazon CodeGuru, a machine learning service which helps development teams identify the most expensive lines of code in their applications and receive intelligent recommendations on how to fix or improve their code.
  5. Amazon Simple Storage Service (Amazon S3) adds security measures and more flexibility for sharing data with others through the introduction of Amazon S3 Access Points (see the sketch after this list).
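
As a rough sketch of how S3 Access Points can be used, the boto3 (Python) example below creates a VPC-restricted access point on an existing bucket. The account ID, names, and VPC ID are placeholders.

```python
# A hedged sketch of S3 Access Points: creating a VPC-restricted access point
# for an existing bucket with boto3. All identifiers are placeholders.
import boto3

s3control = boto3.client("s3control")

s3control.create_access_point(
    AccountId="111122223333",                # hypothetical account ID
    Name="analytics-team-ap",                # hypothetical access point name
    Bucket="shared-data-bucket",             # hypothetical existing bucket
    VpcConfiguration={"VpcId": "vpc-0abc1234def567890"},
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Each team or application can then be granted its own access point policy
# instead of sharing one increasingly complex bucket policy.
```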

With all the new features coming out of re:Invent, it was difficult to narrow down our top picks, but our team is quickly building expertise in the new capabilities and is already utilizing them to deliver first-class cloud infrastructure to our clients.

A Rundown on re:Invent 2019 Pt 1

Members of the engineering team had the opportunity to attend Amazon Web Services’ annual re:Invent conference in Las Vegas.

Every year, AWS announces dozens of customer-sought features at the event (and some leading up to the event in what the community has dubbed “pre:Invent”). This blog is the first in a two-part series related to re:Invent announcements from the 2019 conference:

  1. AWS Identity and Access Management (IAM) Access Analyzer provides an easy way to check permissions across the many policies applied at the resource level, the principal level, and across accounts (see the sketch after this list).
  2. A feature requested by customers since Amazon Elastic Kubernetes Service (EKS) was announced last year, AWS Fargate support for Amazon Elastic Kubernetes Service will revolutionize the way organizations use the popular Kubernetes container management tools in the cloud and radically reduce the maintenance required for running Kubernetes on AWS.
  3. A pre:Invent announcement that you might have missed if you blinked, CloudFormation Registry and third-party resource support adds the ability to manage virtually any third-party application resource using CloudFormation, an infrastructure as code tool helping organizations iterate faster with repeatable cloud resource definitions stored as code.
  4. Andy Jassy rocked the re:Invent stage in 2018 when he announced AWS Outposts, a new offering to take AWS’ computing capacity into your own data center. This service was made available in 2019, opening a wealth of potential for applications which need to stay local for regulatory or performance purposes.
  5. The Amazon Builders’ Library is a curated collection of content written by Amazon’s own technical leaders to illustrate how Amazon builds world-class services and infrastructure.
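
For the IAM Access Analyzer item above, here is a minimal boto3 (Python) sketch that creates an account-level analyzer and lists its findings. The analyzer name is a placeholder.

```python
# A minimal sketch of IAM Access Analyzer via boto3: create an account-level
# analyzer and list findings for resources shared outside the account.
import boto3

aa = boto3.client("accessanalyzer")

# Create the analyzer (name is a hypothetical placeholder).
analyzer = aa.create_analyzer(
    analyzerName="account-analyzer",
    type="ACCOUNT",
)

# List any findings the analyzer has produced and print a short summary.
findings = aa.list_findings(analyzerArn=analyzer["arn"])
for finding in findings.get("findings", []):
    print(finding.get("resource"), finding["status"])
```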

With all the new features coming out of re:Invent, it was difficult to narrow down our top picks, but our team is quickly building expertise in the new capabilities and is already utilizing them to deliver first-class cloud infrastructure to our clients.

Education and the Cloud

As cloud computing continues to grow within the State and Local Government sector, it has also become increasingly popular in the Education industry.

AWS started an initiative called AWS Educate to provide students and educators with the training and resources needed for cloud-related learning. Cloud computing skills are in high demand throughout the state of Texas, especially as an increasing number of state and local government agencies embark on migrating to the cloud. Migration to the cloud has been a slow process for government, but the education sector is ahead of it, driven by demand from students, teachers, faculty, staff, and parents who need access to critical information from any device, anywhere. Educators can benefit by migrating to the cloud: it is cost efficient, offers stable data storage, provides development and test environments, enables easier collaboration, enhances security without add-on applications, simplifies application hosting, minimizes resource costs, and delivers fast implementation and time-to-value.

With all the capabilities of cloud environments, the Education industry still has a long way to go. Certain school districts, and even Higher Education institutions, do not have the same level of access as some of their counterparts. Cloud vendors could make a difference and solidify cloud adoption by offering cloud education to urban neighborhood schools with laptops, computers, and access to training and certifications. As a start, the three major cloud providers offer cloud education assistance to students.

Given the rapid advancement of the IT industry, I encourage other young minorities, including my daughter, to pursue a career in technology. Children are the future, and cloud platforms will be the leading solution across all markets.

We offer a bundled package for new users that includes an assessment of their current infrastructure, which can be beneficial to any Higher Education institution or K-12 organization. We can build the future together and keep rising to greater heights!

Reach out to Thy Williams, [email protected], to learn more about our capabilities and discuss our starter package.

It’s Time for Pharma to Embrace the Cloud

Pharmaceutical companies have long been familiar with the pros and cons of cloud vs. non-cloud environments. The same discussions took place when companies in other industries began transitioning from on-premises to outsourced providers.

However, the pharmaceutical industry, and the data it manages, falls within the scope of the Food and Drug Administration (FDA). With the FDA’s purview increasing over the years, and the globalization of its compliance oversight (now including over 150 countries exporting FDA‑regulated products to the United States), the agency has put more of the onus of following regulations on the pharmaceutical companies.

Enhancing Security Through Strategy and Architecture

In response to the FDA asking questions regarding Title 21 of the Code of Federal Regulations, Part 11 (21 CFR Part 11), companies complying with GxP regulations have to ask themselves: “What risk level is acceptable for my business?” Often, this becomes a paralyzing exercise fraught with an unwillingness to change or, at best, test runs of cloud technology in safe areas like dev/test that are ultimately never implemented. This inaction leaves them behind more agile competitors who have clear, well-documented policies around adopting cloud technologies without adding significant risk. Lacking a defined cloud initiative does something that many companies may find surprising – it increases their risk and vulnerability as bad actors, security attacks, and attempts at gaining access to sensitive data become more sophisticated.

“What risk level is acceptable for my business?”
This often becomes a paralyzing exercise fraught with unwillingness to change.

Well-architected cloud environments are the best solution to keep up with those security challenges. According to Gartner, “…through 2020, public cloud infrastructure as a service (IaaS) workloads will suffer at least 60 percent fewer security incidents than those in traditional data centers.” This additional security is the result of the major cloud platform providers (AWS, Azure, Google, and Alibaba) having a virtually unlimited budget and tight controls of the underlying platform. While they do provide a secure platform, it is still up to the users to architect secure environments. Gartner also states: “through 2022, at least 95 percent of cloud security failures will be the customer’s fault.”

The Way Forward

So, what can you do to ensure that your FDA-regulated business remains competitive and secure in a world where change is constant, and breaches happen daily? The first step is also one of the most important: Secure the understanding and sponsorship of the entire executive team.

There should be unanimous and clear support from the executive team, and a realistic understanding of the benefits of a cloud solution. Without their support, any adoption challenges may cause the project to stall, create doubt, or even lead to abandoning your cloud initiatives altogether.

Once you have the executive team’s support, a company-wide policy for cloud initiatives needs to be developed. This policy should be created by those with a deep knowledge of cloud computing to take full advantage of the appropriate cloud services for your business requirements. At this point, engaging with a managed service provider or consultant can be highly beneficial and ensure that your cloud initiatives are realistic and follow best practices for cost, security, and compliance requirements.

Developing Effective Adoption Policies

At a minimum, a cloud adoption policy should address security and compliance requirements, workload elasticity and scaling demands, departmental ownership and responsibilities, risk assessment and remediation methodologies, and critical dependencies. You should also consider addressing storage retention, disaster recovery, and business continuity. The process of developing these comprehensive adoption policies allows your organization to gain a better understanding of how the cloud fits into each aspect of your business, while providing clear goals for your teams to pursue.

Having a clearly defined objective is best practice for implementing a cloud solution, but being too focused on the minutiae can lead to tunnel vision and increases the likelihood of creating an inflexible adoption plan. Designing a plan that functions more as a framework or a set of guidelines than a codified set of instructions, in a sense mirroring the flexible nature of the cloud, will help prevent your teams from losing sight of the advantages of cloud services or hindering innovation.

Another common pitfall to cloud adoption is the tendency to apply current, non-cloud policy to your cloud adoption initiatives. Adherence to legacy IT policies will prove challenging to cloud adoption and could make it impossible to fully realize the advantages of moving to a cloud solution. And outdated approaches could even result in greater costs, poor performance, and poorly secured environments. These risks can all be addressed with appropriate cloud-based policies that foster cloud-first approaches to new initiatives.

Becoming a secure, cloud-enabled organization requires consistent diligence from your internal teams and continuous adherence to the company cloud policy. In the end, the most significant risks to the security of your infrastructure are tied to your own policies and oversight, and the continued security of your cloud and data will require the involvement and cooperation of your entire organization. Clear communication and targeted training will help your teams understand their role in organizational security.

An Outsider’s Expertise

If you’re not sure about the effectiveness of your approach to cloud adoption, bringing in a third party to assist with policy creation or implementation can help save time and money while ensuring that best practice security is built into your approach. Outside organizations can also provide valuable assistance if you’ve already implemented cloud solutions, so it’s never too late to get guidance and insight from experts who can point out where processes or solutions can be improved, corrected, or optimized to meet your specific business requirements.

These third-party engagements have proven to be so useful that AWS has created the Well-Architected Framework and an associated Well-Architected Review program that gives their clients an incentive to have a certified third party review and then optimize their AWS solution (learn more about Effectual’s Well-Architected Review offering). Organizations such as the Society of Quality Assurance and the Computer Validation & Information Technology Compliance (CVIC) group (disclosure: I am a member of the CVIC) are also discussing these issues to provide guidance and best practices for Quality Assurance professionals.

Outside professional and managed services can provide an immense level of assistance through an objective assessment of your organization’s needs. Their focused expertise on all things cloud will lighten the load on your internal IT teams, help ease any fears you may have about cloud adoption, discover potential savings, and provide guidance to fortify the security of your cloud solution.

Mark Kallback is a Senior Account Executive at Effectual, Inc.

Considerations for AWS Control Tower Implementation

AWS Control Tower is a recently announced, console-based service that allows you to govern, secure, and maintain multiple AWS accounts based on best practices established by AWS.

What resources do I need?

The first thing to understand about Control Tower is that all the resources you need will be allocated to you by AWS. You will need AWS Organizations established, an account factory to create accounts per line of business (LOB), and AWS Single Sign-On (SSO), to name a few. Based on the size of your entity or organization, those costs may vary. With the Control Tower precursor, AWS Landing Zones, we found that costs for this collection of services could range from roughly $500 to $700 per month for large customers (50+ accounts), as deployed. Control Tower will likely cost something similar, possibly more depending on the size of your organization. Later in this post, I will address how to use Control Tower once you already have accounts set up (a brownfield situation). In a perfect world, it would be nice to set up Control Tower in a greenfield scenario but, sadly, 99% of the time that’s not the case.

If you’re a part of an organization that has multiple accounts in different lines of business, this service is for you.

What choices do I need to make?

In order to establish a Cloud Enablement Team to manage Control Tower, you need to incorporate multiple stakeholders. In a large organization, that might entail different people for roles such as:

  1. Platform Owner
  2. Product Owner
  3. AWS Solution Architect
  4. Cloud Engineer (Automation)
  5. Developer
  6. DevOps
  7. Cloud Security

You want to be as inclusive as possible in order to get the greatest breadth of knowledge. These are the people who will be making the decisions you need to migrate to the cloud and then, most importantly, to thrive and remain engaged once you are there. Now that we have the team, what can we do to make Control Tower work best for us?

Decisions for the Team

1. Develop a RACI

This is one of the most crucial aspects of operations. If you do not have accountability or responsibility, then you don’t have management. Everyone must be able to delineate their tasks from the rest of the team’s. Finalizing everyone’s role in the workflow will solve a lot of issues before they happen.

2. Shared Services

In the shared services model, we need to understand what resources are going to the cloud and what will stay. Everything from Active Directory to DNS to one-off internal applications will have to be figured out in a way that accommodates functionality and keeps the charge-back model healthy. One of Control Tower’s most redeeming and worthy qualities is knowing what each LOB costs and how it is helping the organization overall.

3. Charge Backs

Once the account factory (previously called the Account Vending Machine) is established, each LOB will have its own account. To see what an LOB costs, it must have its own account: AWS does not price by VPC, but by account. Leveraging Control Tower, tagging, and third-party cost management tools can combine to give an accurate picture of the costs incurred by a specific line of business.
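
One hedged example of what a per-LOB charge-back report might look like with this multi-account setup: the boto3 (Python) sketch below groups monthly spend by linked account using Cost Explorer. The date range is illustrative.

```python
# A sketch of a per-account charge-back report with Cost Explorer (boto3),
# grouping monthly spend by linked account so each LOB account's cost is visible.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-10-01", "End": "2019-11-01"},  # illustrative range
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "LINKED_ACCOUNT"}],
)

# Print each linked account's unblended cost for the period.
for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        account_id = group["Keys"][0]
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(f"{account_id}: ${float(amount):.2f}")
```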

4. Security

Security will have all logs from each account delivered into a centralized log bucket, which can be pointed to the tool of choice to analyze those logs. Other parties may perform audits by reading your logs with read-only permissions in an account that contains nothing else, another feature of Control Tower. The multi-account strategy not only allows for better governance, but also helps in case of compromise. If one account has been compromised, then the blast radius for all the other accounts is minimal. Person X may have accessed a bucket in a specific account, but they did not access anything anywhere else. The most important thing to remember is that you cannot treat cloud security like data center security.

There are plenty of choices to make as it relates to Control Tower moving forward for an organization, but if you plan correctly and make wise decisions, then you can secure your environment and keep your billing department happy. Hopefully this has helped you see what it takes in the real world to prepare. Good luck out there!

Network Virtualization – The Missing Piece of Digital Transformation

The cloud revolution continues to impact IT, changing the way digital content is accessed and delivered. It should come as no surprise that this revolution has affected the way we approach modern networking.

When it comes down to it, the goal of digital transformation is the same for all organizations, regardless of industry: increase the speed at which you’re able to respond to market changes and evolving business requirements, improve your ability to adopt and adapt to new technology, and enhance overall security. Digital strategies are maturing, becoming more thoughtful and effective in the process, as organizations understand that the true value of cloud adoption and increased virtualization isn’t just about cost savings.

Technology is more fluid than ever, and dedicated hardware is limiting individual progress and development more and more every day. Luckily, cloud and virtualized infrastructure have helped lay the groundwork for change, giving companies the opportunity to more readily follow the flow of technological progress. But in the same way that a chain is only as strong as its weakest link, these same companies are only as agile as their most rigid component. And that rigid chokepoint, more often than not, is hardware-based network infrastructure.

A lack of network agility was even noted by Gartner as being one of the Top 10 Trends Impacting Infrastructure and Operations for 2019.

A Bit of History
We likely wouldn’t have the internet as we know it today if not for the Department of Defense needing a way to connect large, costly research computers across long distances to enable the sharing of information and software. Early computers had no way to connect and transmit data to each other.
The birth of ARPANET, the world’s first packet-based network, in 1969 and its ensuing expansion were monumental in creating the foundation for the Information Age.

The Case for Virtualization

While some arguments can still be made about whether a business might benefit more from traditional, hardware-based solutions or cloud-based options, there’s an inarguable fact right in front of us: software moves faster than hardware. This is what drove industries toward server and storage virtualization. However, network infrastructure still tends to be relegated to hardware, with the same manual provisioning and configuration processes that have been around for decades. The challenge of legacy, hardware-based network infrastructure is a clear obstacle that limits an organization’s ability to keep up with changing technologies and business requirements.

The negative effect of hardware-based networking goes beyond the limitation of speed and agility. Along with lengthy lead times, the process of scaling, modifying, or refreshing network infrastructure can require a significant amount of CapEx since you have to procure the hardware, and a significant amount of OpEx since you have to manually configure the newly acquired network devices. In addition, manual configuration is well-known to be error-prone, which can lead to connectivity issues (further increasing deployment lead time) and security compromises.

Networking at the Speed of Business and Innovation

As organizations move away from silos in favor of streamlined and automated orchestration, approaches to network implementation need to be refreshed. Typical data center network requests can take days, even weeks to fulfill since the hardware needs to be procured, configured (with engineers sometimes forced to individually and painstakingly configure each device), and then deployed.

Software-defined networking (SDN), however, changes all of that. With properly designed automation, right-sized virtual network devices can be programmatically created, provisioned, and configured within seconds. And due to the reduced (or even fully eliminated) need for manual intervention, it’s easier to ensure that newly deployed devices are consistently and securely configured to meet business and compliance requirements.

Automation allows networking to match the pace of business by relying on standardized, pre‑defined templates to provide fast and consistent networking and security configurations. This lessens the strain and burden on your network engineers.
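
As a simplified illustration of networking as code (using AWS-native constructs via boto3 in Python rather than any specific SDN product), the sketch below provisions a VPC, a subnet, and a consistently configured security group programmatically. CIDR ranges and names are placeholders.

```python
# A minimal sketch of programmatic network provisioning with boto3: a VPC,
# a subnet, and a standardized security group created in seconds, the kind of
# task a pre-approved template would automate. All values are placeholders.
import boto3

ec2 = boto3.client("ec2")

# Create the virtual network and a subnet inside it.
vpc = ec2.create_vpc(CidrBlock="10.20.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.20.1.0/24")

# Apply a consistent, pre-defined security configuration.
sg = ec2.create_security_group(
    GroupName="web-tier-sg",                  # hypothetical group name
    Description="Standardized web tier rules",
    VpcId=vpc_id,
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```

Wrapped in a standardized template, the same definition can be applied again and again with no device-by-device manual configuration.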

Network teams have focused hard on increasing availability, with great improvements. However for future success, the focus for 2019 and beyond must incorporate how network operations can be performed at a faster pace.

Source: Top 10 Trends Impacting Infrastructure & Operations for 2019, Gartner

Embracing Mobility

Modern IT is focused on applications, and the terminology and methods for implementing network appliances reflect that – but those applications are no longer tied to the physical data center. Sticking to a hardware-focused networking approach severely restricts the mobility of applications, which is a limitation that can kill innovation and progress.

Applications are not confined to a single, defined location, and maturing digital and cloud strategies have led organizations to adopt multiple public and private clouds to achieve their business requirements. This has led to an increase in applications being designed to be “multi-cloud ready.” Creating an agile network infrastructure that extends beyond on-premises locations, matching the mobility of those applications, is especially critical.

Network capabilities have to function consistently across all locations, whether they’re hardware-based legacy platforms, virtual private cloud environments, or pure public cloud environments.

This level of agility is beneficial for all organizations, even if they’re still heavily invested in hardware and data center space, because it allows them to begin exploring, adopting, and benefiting from public cloud use. Certain technologies, like VMware Cloud on AWS, already enable organizations to bridge that gap and begin reaping the benefits of Amazon’s public cloud, AWS.

According to the RightScale 2019 State of the Cloud Report from Flexera, 84% of enterprise organizations have adopted a multi-cloud strategy, and 58% have adopted a hybrid cloud strategy utilizing both public and private clouds. On average, respondents reported using nearly five clouds.

A Modern Approach to Security

Digital transformation creates fertile ground for new opportunities – both business opportunities and opportunities for bad actors. Since traditional approaches to cybersecurity weren’t designed for the cloud, cloud adoption and virtualization have contributed to a growing need to overhaul information security practices.

Traditional, classical network security models focused on the perimeter – traffic entering or leaving the data center – but, as a result of virtualization, the “perimeter” doesn’t exist anymore. Applications and data are distributed, so network security approaches have to focus on the applications themselves. With network virtualization, security services are elevated to the virtual layer, allowing security policies to “follow” applications, maintaining a consistent security configuration to protect the elastic attack surface.

But whether your network remains rooted in hardware or becomes virtualized, the core of your security should still be based on this: Security must be an integral part of your business requirements and infrastructure. It simply cannot be bolted on anymore.

Picking the Right Tools and Technology for the Job

Choosing the right tools and technology to facilitate hybrid deployments and enable multi-platform solutions can help bridge the gap between legacy systems and 21st century IT. This level of interoperability and agility helps make cloud adoption just a little less challenging.

Addressing the networking challenges discussed in this post, VMware Cloud on AWS has an impressive set of tools that enable and simplify connectivity between traditionally hosted on-premises environments and the public cloud. This interconnectivity makes VMware Cloud on AWS an optimal choice for a number of different deployment use cases, including data center evacuations, extending on-premises environments to the public cloud, and improving disaster recovery capabilities.

Developed in partnership with Amazon, VMware Cloud on AWS allows customers to run VMware workloads in the cloud, and their Hybrid Cloud Extension (HCX) enables large-scale, bi-directional connections between on-premises environments and the VMware Cloud on AWS environment. In addition, VMware’s Site Recovery Manager provides simplified one-click disaster recovery operations with policy-based replication, ensuring operational consistency.

If you’re interested in learning more about VMware Cloud on AWS or how we can help you use the platform to meet your business goals, check out our migration and security services for VMware Cloud on AWS.

Ryan Boyce is the Director of Network Engineering at Effectual, Inc.

Interview with an AWS Champion Authorized Instructor

James Hirmas is co-founder of JHC Technology, an Effectual Company, and an Information Technology Subject Matter Expert with nearly 20 years of experience.

James was named an Amazon Web Services (AWS) Champion Authorized Instructor in April 2019 by demonstrating his deep understanding of AWS services and solutions and his ability to teach AWS content to others. We had the chance to discuss with James what it means to him to be an AWS Champion Instructor.

What certifications do you have?

I have earned the following AWS certifications in addition to becoming an AWS Champion Authorized Instructor:

  • AWS Developer
  • AWS SysOps Administrator
  • AWS Solutions Architect
  • AWS Professional DevOps Engineer
  • AWS Security Specialty

Why did you want to become an AWS Champion Instructor?

I pursued becoming an AWS Champion Authorized Instructor to help mentor and set an example for the JHC Technology team. As one of the co-founders of JHC Technology, I strongly believe that we should always lead by example. One of the foundational principles JHC Technology is built upon, as defined in our JHC ethos, is “Never stop Learning.” At JHC Technology, we truly embrace this principle by providing continuous training to our teams and deeply entrenching everyone in cloud technology.

I also wanted to give back to the AWS community. Over the past 10 years working on the AWS platform, cloud technology has given me a tremendous opportunity to help build a successful business, reshape and redefine the technology landscape, and advance my professional career. As an AWS Champion Authorized Instructor, I work with a very diverse group of organizations from different industries, and I can impart my decade-plus of real-world AWS experience to further promote, educate, and accelerate organizations’ journeys to AWS.

I also wanted to become an AWS Champion Authorized Instructor to work with organizations from other industries in order to better understand their requirements, challenges, and use cases so JHC Technology can stay up to date on the customer landscape and technology trends.

What was the process to become an AWS Champion Instructor?

In order to become an AWS Champion Authorized Instructor, I recommend multiple years of experience deploying advanced workloads on AWS. According to the AAI program guide, you need to hold at least four valid AWS Certifications at the Associate level or above, with at least one of them being a Professional-level certification. You also need to work with an AWS partner organization that is in the AWS Training Program, and you will need to complete an Instructor Delivery Workshop, in which you demonstrate the technical knowledge and ability to deliver an AWS authorized course and/or complete a co-teach overseen by a qualified AWS Mentor Instructor. AWS offers multiple official courses, from Architecting on AWS to Security Operations on AWS. For each course offered, the AWS Champion Authorized Instructor needs to have the appropriate mix of AWS certifications.

Describe your experience as an AWS Authorized Instructor.

So far, my experience has been very rewarding. I have been able to travel to different parts of the United States and to other countries to provide AWS training, and I have been able to provide value with real-world use cases and experiences.

Becoming an AWS Champion Authorized Instructor is an investment. When I am not training, I spend a lot of time reviewing course material, preparing examples to present in class, and constantly learning new AWS features and use cases.

Accelerating DevOps Cultural Adoption with GitLab

One year ago, our team made an investment into a self-hosted installation of GitLab.

We had been successful in delivering a managed GitLab installation at a customer site and saw the value in taking advantage of everything the platform had to offer for our internal workloads. As an AWS DevOps Competency partner, we have a successful track record of helping organizations adopt DevOps processes and we understand that the biggest challenge is often aligning an organization’s culture with DevOps principles.

GitLab has helped us bridge that gap by demonstrating the operational excellence that can be achieved with DevOps.

GitLab’s biggest strength is that it addresses all stages of the software development lifecycle. GitLab’s features align strongly with the stages and principles our team has outlined in our DevOps process. The cornerstone of this DevOps process is that everything is delivered as code and all code is continuously version controlled, tested, and cross-checked by peers. The marriage of GitLab’s repository tools with its built-in CI platform eliminates much of the overhead of setting up continuous integration and testing. Our team has built custom pipeline templates specifically designed around deployments using AWS, CloudFormation, Docker, Kubernetes, Terraform, and other platforms. These pipeline templates allow new projects to inherit shared knowledge and hit the ground running to deliver operational excellence with Agile development speed. We’ve also committed ourselves to sharing these templates and learned best practices with the community to aid others in quickly and efficiently adopting GitLab and the cloud and driving new development.
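
As a hypothetical sketch of how shared templates like these can be applied, the Python example below uses the python-gitlab library to point a project at a centrally maintained CI configuration and trigger a pipeline. The GitLab URL, token, project path, and template location are all placeholders, not our actual setup.

```python
# A hypothetical sketch using the python-gitlab library: point a project at a
# shared pipeline template kept in a central repository, then trigger a pipeline.
# URL, token, project path, and template location are placeholders.
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="YOUR_TOKEN")

project = gl.projects.get("engineering/new-service")

# Inherit the shared CI/CD template instead of maintaining a local .gitlab-ci.yml.
project.ci_config_path = ".gitlab-ci.yml@devops/pipeline-templates"
project.save()

# Kick off a pipeline against the default branch to validate the configuration.
project.pipelines.create({"ref": "main"})
```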

Our team has designed a one-click style deployment of GitLab on AWS with high availability and security out-of-the-box. We’re using this solution to help other organizations rapidly adopt GitLab and have been successful in doing so at several government and commercial organizations. We also have a one-click GitLab Runner on AWS solution available for scalable, secure GitLab CI runners and are actively working on a one-click deployment for GitLab Runner on Azure and GCP.

GitLab has been a cornerstone of our DevOps practice, and we are just getting started. We have empowered organizations to automate software testing and deployments using GitLab as the engine, and organizations have been able to move faster and better address end-users with those abilities. We’re excited to see what organizations can do with the power that DevOps’ operational excellence gives them, and we’ve partnered with GitLab to accelerate them along that journey.

If you or your organization have questions regarding GitLab or our DevOps process, reach out to [email protected] to set up some time to chat about your business goals.

Next Generation MSP: Integration

During the past nine years of delivering cloud solutions to government and industry, our team has identified a gap in the delivery of Managed Service Provider (MSP) solutions in a cloud environment.

As an AWS Premier Consulting Partner with the DevOps, Government, and Non-Profit competencies, as well as the GovCloud Skill Partner Status, there are few workloads that we haven’t helped customers migrate and manage in a cloud environment. We also deliver these solutions through Microsoft Azure, where we are a Silver Cloud Partner.

Our focus is on a structured, repeatable, five-step MSP process:

  1. Evaluation
  2. Automation
  3. Optimization
  4. Monitoring
  5. Integration

We previously discussed the Evaluation, Automation, Optimization, and Monitoring phases as part of the MSP process. In this blog – the final in a series of five on Next Generation MSP (NG-MSP) – we’ll touch on Integration, a critical component of a successful MSP strategy, as it brings everything together. As providers of cloud-based MSP services, we focus our efforts on automation, repeatability, and collaboration, since those are the same areas where an on-premises MSP environment loses a lot of capability.

Given the flexible nature of the cloud vs. on-premises, automation enables an NG-MSP provider like Effectual to react more quickly and with more structure, because resources can be located anywhere and the infrastructure backend is handled by AWS. You begin to gain efficiencies as technology gives you the ability to move faster, more simply, and more cheaply than in an on-premises environment. A traditional on-premises environment requires physical adjustments to infrastructure, may require additional infrastructure to be ordered, and depends on specific workers rather than a pool of resources coupled with packaged, scripted activities.

At the integration stage, we implement the Effectual toolchain with a focus on meeting Continuous Integration/Continuous Deployment (CI/CD) objectives. These toolsets include GitLab and JIRA integration, among other components. With the cloud-based toolchain in place, we can offload 70-80% of the current workload relative to an on-premises environment. However, as an NG-MSP provider, we’re not changing the workflow but simplifying the process of getting to production.

We’ve done this long enough to know that NG-MSP models cannot be a one-size-fits-all proposition. In our cloud model, as we reach the Integration component, we leverage best-in-breed toolchains to ensure security, efficiency, and an optimized infrastructure to support our customer workloads.

We stand ready to support your organization’s cloud environment with our NG-MSP services. To discuss options, please reach out to [email protected].

The Cloud-First Mindset

Across every industry, cloud-native businesses are disrupting legacy institutions that have yet to transform traditional IT platforms.

To remain competitive, industry giants need to change the way they think about technology and adopt a cloud-first mindset. Prioritizing cloud-based solutions and viewing them as the natural, default option is vital to the success of new projects and initiatives.

Migrating legacy systems to cloud has the added benefit of eliminating technical debt from older policies and processes. However, it is important to be mindful in order to avoid creating new technical debt when developing and deploying cloud systems. While adopting a cloud-first mindset may seem like an expected result of digital transformation, it requires significant changes to an organization’s culture and behavior, similar to those required for the effective adoption and implementation of DevOps methodologies.

We have to rethink the old way of doing things – cloud is the new normal

Evolving needs and capabilities

When “cloud” first entered the lexicon of modern business, it was incorrectly thought of as a cost‑cutting measure. Organizations were eager to adopt the cloud with the promise of savings – despite not fully understanding what it was or its ever-growing capabilities. These types of implementations were generally short-sighted: lacking a well-defined digital strategy and focused on immediate needs rather than long-term goals.

As adoption increased, it became apparent that adjusting the approach and redefining digital strategy are necessary for success. Optimizing applications for the cloud and developing comprehensive governance policies to rein in cloud sprawl, shadow IT, and uncontrolled (and unmonitored) spend are just part of the equation.

“…spending on data center systems is forecast to be $195 billion in 2019, but down to $190 billion through 2022. In contrast, spending on cloud system infrastructure services (IaaS) will grow from $39.5 billion in 2019 to $63 billion through 2021.”

Source: Cloud Shift Impacts All IT Markets, Gartner

A cloud-first approach reshapes the way an organization thinks about technology and helps mitigate the potential to recreate unnecessary technical debt that was eliminated through digital transformation initiatives.

The human element of digital transformation

Digital transformation should extend beyond technology. It’s a long-term endeavor to modernize your business, empower your people, and foster collaboration across teams. Transforming your systems and processes will have a limited impact if you don’t also consider the way your teams think, interact, and behave. This is especially important because the significant operational changes introduced by modernizing infrastructure and applications can present challenges to employees who feel comfortable maintaining the status quo. Before you can disrupt your industry, you have to be willing to disrupt the status quo within your own organization.

The fact is that change can be difficult for a lot of people, but you can ease the transition and defuse tension by actively engaging your teams. You cannot overstate the importance of clear, two-way communication. Letting your people know what you’re planning to do and why you’re doing it can help them understand the value of such a potentially massive undertaking. It’s also important to have a solid understanding of what your teams need, and creating open lines of communication will enhance requirements-gathering efforts. This level of communication ensures that whatever you implement will adequately address their needs and ultimately improve their workflow and productivity.

The introduction of new tools and technologies, even if they’re updated versions of the ones currently in use, will generally require some level of upskilling. Helping your teams bridge the technical gap is a necessary step.

Competition at its finest

Few sectors have seen the level of disruption faced by the finance industry. FinTech disruptors, born in the cloud and free from the chains of technical debt and bureaucratic overhead, have been able to carve out their place in the market. They’ve attracted customers by creating innovative offerings and customer-focused business models, competing with legacy institutions that seemed to have an unassailable dominance that barred any new entrants.

Legacy retail banking institutions, known for being risk averse, had a tendency to implement new technology very slowly. They were plagued by long development cycles, dedicated hardware solutions, and strict compliance requirements to safeguard highly sensitive data.

When Capital One turned its attention to the cloud, they created a holistic digital strategy that wasn’t limited to tools and systems. They understood that technology was not a line item on a budget, but an investment in the company’s future, and that successfully executing their strategy would require a culture shift. They focused on attracting technologists who could enhance the company’s digital capabilities to increase employee engagement, improve cybersecurity, and improve customer experience by using the latest technologies, including artificial intelligence and machine learning. They also created a cloud training program so their employees would understand the technology, regardless of whether or not they were in technical roles, reinforcing the company’s cloud-first mindset.

FinTech disruptors, born in the cloud and free from the chains of technical debt and bureaucratic overhead, have been able to carve out their place in the market.

Understanding your options

Developing a proper cloud-first mindset is not about limiting your options by using the cloud exclusively. A digitally transformed business doesn’t adopt the latest technology simply for the sake of adoption. In fact, the latest and greatest SaaS or cloud-based offerings may not always be the best option, but you have to know how to make that determination based on the unique needs and circumstances of your business. By objectively assessing business goals, considering all options (including traditionally hosted), and prioritizing agile, flexible solutions, you can redefine your approach to problem-solving and decision-making. This mindset means that cloud is no longer the “alternative.”

We have to rethink the old way of doing things – cloud is the new normal, and hardware-based options should only be implemented if they are truly the best way to meet business goals and overcome challenges. We don’t need to abandon on-premises or traditional IT to maintain or regain competitive edge. We just need to understand that it’s not always the right choice.

This approach will help you develop a macro view of your organization’s needs and prompt you to identify and treat the underlying cause of business challenges, not just the symptoms.

Building a foundation for disruption

Becoming a disruptor in your industry is not the goal of digital transformation – that takes more than just adopting the cloud. The goal is to free your organization from the restraints of costly, outdated legacy infrastructure and monolithic applications, and to enable your teams to scale and innovate. The flexibility of cloud and SaaS-based options reduces the risks associated with developing new products and services for your customers, and instilling a culture of cloud-first thinking gives your people the freedom to explore and experiment. That’s how you drive innovation and compete against new, lean, born-in-the-cloud competitors. That’s how you disrupt.

A Cloud Security Strategy for the Modern World

A Cloud Security Strategy for the Modern World

In the borderless and elastic world of the cloud, achieving your security and compliance objectives requires a modern, agile, and autonomous security strategy that fosters a culture of security ownership across the entire organization.

Traditional on-premises approaches to cybersecurity can put organizations at risk when applied to the cloud. An updated, modern strategy for cloud security should mitigate risk and help achieve your business objectives.

Cloud service providers such as Amazon Web Services (AWS) place a high priority on the security of their infrastructure and services, and are subject to regular, stringent third-party compliance audits. These CSPs provide a secure foundation, but clients are still responsible for securing their data in the cloud and complying with their data protection requirements. This theory is substantiated by Gartner, which estimates that, through 2020, workloads hosted on public cloud will have at least 60% fewer security incidents than workloads hosted in traditional data centers, and 95% of cloud security failures through 2022 will be the fault of customers.

Traditional approaches to cybersecurity weren’t designed for the cloud – it’s time for an update

Updating how you think about cybersecurity and the cloud

Despite the significant security advances made by CSPs since the birth of cloud, users still need a deep understanding of the cloud’s shared responsibilities, services, and technologies to align information security management systems (ISMS) to the cloud. Today, the majority of data breaches in the cloud are the result of customers not fully understanding their data protection responsibilities and adopting poor cloud security practices. As the public cloud services market and enterprise-scale cloud adoption continues to grow, organizations must have a comprehensive understanding of not just cloud, but cloud security specifically.

Through 2022, at least 95% of cloud security failures will be the customer’s fault

Source: Gartner

The cloud can be secure – but are your policies?

A poor grasp of the core differences between on-premises and cloud technology solutions resulted in a number of misconceptions during the early days of cloud adoption. This lack of understanding helped fuel one of the most notable and pervasive cloud myths of the past: that the cloud lacked adequate security. By now, most people have come to realize that cloud adoption and digital transformation do not require a security tradeoff. In fact, the cloud can provide significant governance, risk, and compliance (GRC) advantages over traditional on-premises environments. A cloud-enabled business can leverage the secure foundation of the cloud to improve its security posture, reduce regulatory compliance scope, and mitigate organizational responsibilities and risk.

It is common to see enterprise organizations lacking the necessary expertise to become cloud resilient. Companies can address this skills gap through prescriptive reference architectures. AWS, for example, has created compliance programs for dozens of regulatory standards, including ISO 27001, PCI DSS, SOC 1/2/3, and government regulations like FedRAMP, FISMA, and HIPAA in the United States and several European and APAC standards. Beyond these frameworks, consultants and managed service providers can work with organizations to provide guidance or architect environments to meet their compliance needs.

Regardless of the services leveraged, the cloud’s shared responsibility model ensures that the customer will always be responsible for protecting their data in the cloud.

Making the change

Similar to the challenges and benefits of implementing DevOps (discussed here by our CEO, Robb Allen), effective cloud security requires a culture change with the adoption of DevSecOps, shifting the thinking in how teams work together to support the business. By eliminating the barriers between development, operations, and security teams, organizations can foster a unified approach to security. When everyone plays a role in maintaining security, you foster collaboration and a shared goal of securely and reliably meeting objectives at the speed of the modern business.

When everyone plays a role in maintaining security, you foster collaboration and a shared goal of securely and reliably meeting objectives at the speed of the modern business.

Additionally, cloud-specific services and technologies can provide autonomous governance of your ISMS in the cloud. They can become a security blanket capable of automatically mitigating potential security problems, discovering issues faster, and addressing threats more quickly. These types of services can be crucial to the success of security programs, especially for large, dynamic enterprises or organizations in heavily regulated industries.

Implementing the right cloud tools can lead to significant reductions in security incidents and failures, giving your teams greater freedom and autonomy to explore how they use the cloud.
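As one small illustration of the kind of tooling involved, the hedged sketch below uses boto3 to pull higher-severity Amazon GuardDuty findings so they could feed an automated triage or notification workflow. The region and severity threshold are assumptions, and it presumes GuardDuty is already enabled in the account.

```python
# Minimal sketch: surface recent, higher-severity Amazon GuardDuty findings.
# Assumes boto3 is installed, AWS credentials are configured, and GuardDuty
# is already enabled; the region and severity cutoff are assumptions.
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")  # region is an assumption

for detector_id in guardduty.list_detectors()["DetectorIds"]:
    # Only surface findings with severity >= 7 (GuardDuty severities run roughly 1-8).
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},
    )["FindingIds"]

    if not finding_ids:
        continue

    findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)
    for finding in findings["Findings"]:
        # In a real workflow these would be routed to a ticketing or paging system.
        print(finding["Severity"], finding["Type"], finding["Title"])
```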

The way to the promised land

Security teams and organizations as a whole need to have a deep understanding of cloud security best practices, services, and responsibilities to create a strategic security plan governed by policies that align with business requirements and risk appetite. Ultimately, however, a proper cloud security strategy needs buy-in and support from key decision makers and it needs to be governed through strategic planning and sound organizational policies. Your cloud security strategy should enable your business to scale and innovate quickly while effectively managing enterprise risk.

Darren Cook is the Chief Security Officer of effectual, Inc.

The Promise of FinOps

The Promise of FinOps

Cloudability’s Cloud Economic Summit put the spotlight on the importance of accountability and cloud cost management.

Our partner, Cloudability, recently hosted the Cloud Economic Summit in San Francisco, providing a look into the current and future state of cloud cost management. Cloudability CEO Mat Ellis, CTO Erik Onnen, Co-founder J.R. Storment, 451 Research Director Owen Rogers, and AWS Worldwide Business Development lead Keith Jarrett presented alongside speakers from Autodesk and OLX Group, addressing the need for FinOps – a disciplined approach to managing cloud costs. Supporting the event, Cloudability published a press release, “FinOps Operating Model Codifies Best Practices of the World’s Largest Cloud Spenders, Enabling Enterprises to Bring Financial Accountability to the Variable Spend of Cloud.”

“Celebrate achievement, get better every day – this is FinOps.”

—Mat Ellis, CEO of Cloudability

Introducing the day, Mat set the stage by framing public cloud adoption as part of a much bigger trend seen in many industries throughout history – managing a supply chain. Milestone innovations disrupt at an astronomical scale: from the printing press, to rubber, to the internet, and now cloud computing. We’ve all felt the disruption created by cloud computing, and many of us have been part of the 21st-century IT revolution. As seen at AWS re:Invent last year, the adoption of DevOps culture to foster innovation and enable competitive advantage has been embraced by large insurance organizations like Guardian and the world-famous guitar manufacturer Fender.

However, with AWS now 13 years old, many cloud technology buying decisions are still based on an outdated model. There is a need for iterative, ongoing monitoring and accounting for cloud spend. Enter Cloudability. Analyzing hundreds of millions in cloud spend per month, and billions per year, Cloudability’s platform delivers keen insights and benchmarking tools that enable a clear path to cloud cost diligence and FinOps success.

Cost Management in The Cloud Age

Digging into the data behind the mission of FinOps, Owen Rogers, Research Director, 451 Research presented some stark realities about the current state of cloud cost management (Full report available here). The study found that more than half of large enterprises worry about cloud costs on a daily basis and 80% believe that poor cloud financial management has a negative impact on their business. These enterprises need a comprehensive platform to manage multi-million-dollar cloud budgets.

Owen Rogers, Research Director, 451 Research presented some stark realities about the current state of cloud cost management

Another eye-opening data point presented was that 85% of respondents overspend their budgets, with nearly 10% spending two to four times their allocated budget. Pair this with 18% of respondents that were unaware they were overspending, and the picture is not pretty. The biggest reasons cited for not addressing this issue were “too small of an overspend to resolve” and “not wanting to hinder innovation.”

While well-intentioned, the study showed that “not wanting to hinder innovation” and pushing off a responsible approach to cloud cost management does exactly what the respondents are trying to avoid: halts cloud adoption, cripples innovation, lowers quality of service, increases cost, and creates a sprawling underutilized cloud footprint.

Cloud Cost Management Directly Impacts Company Culture and Business Bottom Line

The reality is that cloud cost management directly impacts business. Thankfully, there are steps to take to mitigate the commonplace inefficiencies identified by Owen. For example, 33% of respondents are manually extracting and aggregating cloud costs in a spreadsheet – this is the epitome of anti-agile. Only 52% of instances are rightsized for their workload and, beyond that, only 52% of respondents are taking advantage of Reserved Instance discounts.

The tools and opportunities to improve the health and efficiency of your cloud environments are readily available. In fact, the 451 Research report shows that average savings of 27% were achieved through the use of a cost management platform. With an expected CAGR of 17% between 2017 and 2022, now is the time to implement the behavioral changes that instill a culture of FinOps within your organization.

The problem is shared accountability – The solution is a FinOps culture

What became apparent in the research presented by Owen Rogers is a distinct need for IT and Finance teams to come to the table together to discuss the path forward. The good news is there are companies that are pushing the envelope and leading the way in diligent and responsible cloud cost management. Those who have embraced a FinOps culture are utilizing performance benchmarking and have a clear understanding of the fully-loaded costs of their cloud infrastructure. This is the promise that we can aspire to and it starts with collaboration between IT, finance, and individual lines of business.

There is a distinct need for IT and Finance teams to come together to discuss the path forward

FinOps high performers have near real-time visibility of all cloud spend. Individual teams understand their portion of total spend, are enabled to budget and track against targets, and utilize Reserved Instances for 80-95% of their cloud services.

Similar to having a clear understanding of household finances, this level of diligence affords more benefits than just cost savings. A remarkable side effect of FinOps culture is a 10-40% improvement in operational efficiency within your organization.
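As a simple illustration of the kind of spend visibility described above, the sketch below uses the AWS Cost Explorer API via boto3 to break one month of spend down by service. This is an illustrative example, not a description of Cloudability’s platform, and the date range and account setup are placeholder assumptions.

```python
# Minimal sketch: break down one month of AWS spend by service with the
# Cost Explorer API - the raw visibility FinOps dashboards are built on.
# Assumes boto3 is installed, credentials are configured, and Cost Explorer
# is enabled for the account; the dates are placeholders.
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer uses the us-east-1 endpoint

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-03-01", "End": "2019-04-01"},  # placeholder period
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print spend per service for the month, highest-level FinOps reporting.
for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:,.2f}")
```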

FinOps Foundation

In addition to the information presented at the Cloud Economic Summit, Cloudability launched the FinOps Foundation. Composed of founding members from Atlassian, Nationwide, Spotify, Autodesk, letgo, and many others, the FinOps Foundation is a non-profit trade organization bringing people together to create best practices around cloud spend.

J.R. Storment, Cloudability Co-founder, takes on the role of President of The FinOps Foundation. J.R. describes the need for the organization here.

“…Why is the Foundation needed? At many companies I talk with, engineering teams spend more than needed with little understanding of cost efficiency.”

J.R. Storment, Cloudability

We are excited to see our partner defining this space and eager to participate in the FinOps Foundation. We are also looking forward to reading “Cloud Financial Management Strategies, Creating a Culture of FinOps,” their O’Reilly Media book which is slated to be published later this year.

Thanks again to Cloudability for hosting us at the event, we are looking forward to an exciting year together.

Robb Allen is the CEO of effectual, Inc.

Building Strength Through Partnership

Building Strength Through Partnership

The cloud partner ecosystem is changing. The days when organizations could act as a “Jack of all trades, master of none” are over.

Legacy IT resellers are going the way of the dinosaur in favor of partners who can deliver clear value-add with a track record of transformative success. Specialization is the order of the day. This cuts to the heart of what we mean by a partnership — and how it differs from simply being a “vendor.”

IT partnerships should allow your in-house team to remain focused on generating revenue and building the organization.

Why Specialized Partnerships Matter

Choosing the right partner is absolutely critical to executing a successful cloud transformation. We addressed this in a previous post. Every organization is necessarily limited by its own technical and human resources. The right partner brings expertise, experience, and proven processes to ensure that internal limitations don’t equal a failed transformation process.

A Successful Cloud Partnership

Let’s take a look at one of the most recent and important cloud partnerships: AWS and VMware. AWS brought their cloud platform services together with VMware’s virtualization expertise. The result was a specialized certification program, a robust migration service, a cost insight tool providing greater transparency into cloud spending, and a joint hybrid cloud product to incentivize customer adoption. Each partner brought highly specific value-add services and together they created a game-changing cloud solution for the enterprise.

Partners Versus Vendors

It’s worth exploring what we mean when we talk about being a partner as opposed to being a vendor. A vendor is easy enough to explain: it is a company providing a service. The point is, even the best vendors are not as invested in your success as a partner. They certainly wish their customers success and hope for continued business, but there is no strategic, long-term involvement or commitment to understanding their clients’ unique business goals.

In some cases, vendors may even push templated or cookie-cutter solutions that simply don’t fit. This isn’t to say that every vendor is out to take advantage of their customers; it’s simply a recognition that a generalized vendor offering tends to be limited, in contrast to a specialized partnership.

By comparison, a successful partnership is a more intimate relationship. In these engagements you’re not just purchasing IT services – you’re working hand-in-hand to grow the efficiency and effectiveness of your IT resources.

Cloud security, migration, and cost optimization are exactly the types of endeavors that call for partners.

The key difference is a subtle but important one — collaboration. It’s often thought that a good partner will “take care of everything” for you, but this is not true, nor should it be. A true partner requires your input to understand how your business defines success, and relies on this data to make informed decisions on the technologies they deploy. It is essential for your teams to be involved in this process, as they will adopt and learn new methodologies and processes throughout the engagement.

It’s not about choosing between vendors or partners. It’s about recognizing where more generalized vendors will fulfill your needs and where specialized partners are a better fit. Simple, straightforward tasks are fine for vendors. More involved and strategic endeavors, however, require a partner. Cloud security, migration, and cost optimization are exactly the types of endeavors that call for partners.

Extending Your In-House Capabilities

IT partnerships should allow your in-house team to remain focused on generating revenue and building the organization. Leaning on strong partners can in effect extend your IT team, expanding your resources and solving problems that might otherwise require additional training or experience beyond the expertise and skill sets of your internal teams.

Cloud security, migration, and cost optimization are exactly the types of endeavors that call for partners.

Keeping your teams focused on their core responsibilities has a highly desirable secondary effect – boosting in-house morale. Not only does this improve the workplace, it makes it easier for you to attract and retain top talent.

Cloud Confidently™

At effectual, we engage you as a partner, not a vendor, which is why we specialize in cloud, and not cloud-adjacent services like data center operations. Our deep experience in cloud enablement facilitates your digital transformation. This includes helping you to determine the best implementation strategy as well as establishing metrics to quantify and measure your success. But our specialization is in security and financial optimization.

The important thing is not to be just technologists, but to be able to understand the business goals [clients are] trying to achieve through the technology.

Cloud is a rapidly evolving ecosystem. AWS rolled out roughly 1,400 new services and features in 2017, another 800 throughout the first half of 2018, and an impressive number of new product and service announcements during re:Invent 2018. We understand that it can be difficult to wade through these waters to find the right solutions for your business challenges, including your specific security requirements. What’s more, your team is likely already fully committed to running core applications and tools. You need a partner who can keep your in-house team free to do what it does best.

RightScale’s 2018 State of the Cloud report found that most organizations believed they were wasting 30 percent of their cloud spend. In fact, the study found that 35 percent of their cloud spend was attributable to waste. We look for ways to help our partners not only get their invoices under control but also understand what is driving their cloud costs. Finally, we help organizations properly allocate their spend, ensuring that the right applications, business units, regions, or any other grouping of your business is spending exactly what it should and no more.

We strive to understand your long- and short-term goals by working closely with your organization and provide you with strategic solutions for sustained growth. Interested in learning more? Reach out and let us know what you are looking to solve – we love the hard questions.

Robb Allen is the CEO of effectual, Inc.

Next Up: Machine Learning on AWS

Next Up: Machine Learning on AWS

If you have been to AWS’s re:Invent, then you know the tremendous amount of excitement that cloud evangelists experience during that time of the year.

The events that AWS hosts in Las Vegas provide a surreal experience for first timers and are sure to excite even the most seasoned of veterans. Let’s talk about one of the exciting technologies that are sure to change the world as we know it, or at least the businesses we are familiar with – Amazon Machine Learning.

Introduced on April 9, 2015, Amazon Machine Learning (ML) has received a surge of attention in recent years given its capability to provide highly reliable and accurate predictions with a large dataset. From using Amazon ML to track next generation stats in the NFL, to analyzing real time race data in Formula 1, to enhancing fraud detection at Capital One, ML is changing the way we share experiences and interact with the world around us.

During re:Invent 2018, AWS made it clear that ML is here to stay, announcing many offerings that support the development of ML solutions and services. But you may be wondering: What exactly is Amazon ML?

According to AWS’s definition:

“Amazon Machine Learning is a machine learning service that allows you to easily build predictive applications, including fraud detection, demand forecasting, and click prediction. Amazon Machine Learning uses powerful algorithms that can help you create machine learning models by finding patterns in existing data and using these patterns to make predictions from new data as it becomes available.”

We, as a society, are at the point where machines are actively providing decisions for many of our day-to-day interactions with the world. If you’ve ever shopped as a Prime member on Amazon.com, you have already experienced an ML algorithm that is in tune with your buying preferences.

In our Engineer’s Corner, our very own Kris Brandt discusses the critical initial step toward implementing an ML project, Data Lake creation, in Amazon Web Service as a Data Lake. In that blog, Kris explores what a Data Lake is and provides some variations on its implementation. The development of a robust data lake is a prerequisite for an ML project that delivers the business value expected from the service’s capabilities. ML runs on data, and having plenty of it provides the foundation for an exceptional outcome.

Utilizing existing data repositories, we can work with business leaders to develop use cases for leveraging data and ML for strategic growth. You can connect with the Effectual team by emailing [email protected].

Because of ML’s proliferation throughout the market, AWS announced these ML solution opportunities during re:Invent 2018:

AWS Lake Formation
“This fully managed service will help you build, secure, and manage a data lake,” according to AWS. It allows you to point it at your data sources, crawl the sources, and pull the data into Amazon Simple Storage Service (S3). “Lake Formation uses Machine Learning to identify and de-duplicate data and performs format changes to accelerate analytical processing. You will also be able to define and centrally manage consistent security policies across your data lake and the services that you use to analyze and process the data,” says AWS.

Amazon Textract
“This Optical Character Recognition (OCR) service will help you to extract text and data from virtually any document. Powered by Machine Learning, it will identify bounding boxes, detect key-value pairs, and make sense of tables, while eliminating manual effort and lowering your document-processing costs,” according to AWS.
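For a rough sense of how this looks in code, the sketch below uses boto3 to run Amazon Textract’s synchronous text detection against a single image. The file name and region are placeholder assumptions, and the synchronous API accepts PNG or JPEG images.

```python
# Minimal sketch: extract lines of text from a scanned document image with
# Amazon Textract. Assumes boto3 is installed, credentials are configured,
# and the file path is a placeholder for your own PNG or JPEG.
import boto3

textract = boto3.client("textract", region_name="us-east-1")  # region is an assumption

with open("invoice-sample.png", "rb") as document:  # placeholder file
    response = textract.detect_document_text(Document={"Bytes": document.read()})

# Print each detected line of text with its confidence score.
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(f'{block["Confidence"]:.1f}%  {block["Text"]}')
```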

11 AWS Snowball Planning Considerations

11 AWS Snowball Planning Considerations

Data transfer/migration is a key consideration in any organization’s decision to move into the cloud.

If a sound strategy is applied, migration of on-premises data to the cloud is usually a seamless process. When an organization fails to do so, however, it risks running into challenges stemming from deficiencies in technical resources, inadequate planning, and/or incompatibility with legacy systems, to name a few.

Data transfer via AWS Snowball is no exception. If performed incorrectly or out of order, some of the seemingly insignificant tasks related to the data migration process can become substantial obstacles that adversely affect a timeline. The AWS Snowball device can be simple to use if one is familiar with other AWS data transfer services and/or follows all of the steps provided in the AWS Snowball User Guide. However, neglecting a single step can greatly encumber an otherwise ordinary data transfer process.

According to AWS on its service:

“AWS Snowball is used to transport terabytes or petabytes of data to and from AWS, or by organizations that want to access the storage and compute power of the AWS Cloud locally and cost effectively in places where connecting to the internet might not be an option.”

AWS

When preparing to migrate data from on-premises storage into AWS via a Snowball device, an organization should be aware of the importance of 11 easily overlooked tasks and considerations associated with planning for the data move. They are as follows:

1. Understanding the specifics of the data being moved to the cloud.

Ensure that it is compatible and can transfer seamlessly to the cloud via AWS Snowball. Follow a cloud migration model to help lay out specific details and avoid surprises during the data transfer process.

2. Verifying and validating the amount of data being transferred.

Snowball is intended for large data transfers (over 10 terabytes). Using it for smaller data transfers is not a cost-effective option.

3. Verifying that the workstation meets the minimum requirement for the data transfer.

It should have a 16-core processor, 16 GB of RAM, and an RJ45 or SFP+ network connection.

4. Performing a data transfer test on the workstation an organization plans to use to complete the task.

This will not only equip the organization with an understanding of the amount of time needed to perform the transfer, but will provide an opportunity to try various methods of transferring data. Additionally, it will assist with estimating the time the Snowball device will need to be in the organization’s possession, as well as its associated cost.

NOTE: The Snowball Client must be downloaded and installed before this step is performed.

5. Creating a specific administrative IAM user account for the data transfer process via the management console.

This account will be used to order, track, create, and manage Snowball import/export jobs and to return the device to AWS.

NOTE: It is important to avoid using personal IAM user accounts if individuals will be responsible for ordering the device and performing the data transfer.

6. Following the “Object Key Naming convention” when creating S3 buckets.

It is also important to confirm that the selected S3 bucket name aligns with the expectations of the stakeholders.

7. Confirming the point(s) of contact and shipping address for the Snowball device.

This is especially important if the individual ordering the device is different from the one performing the data transfer.

8. Setting up SNS notifications to help track the stages of the Snowball job.

This will keep the stakeholders informed of the shipping status and the importing of data to the S3 bucket.

9. Being aware of how holidays could affect the progress or process of the data-transfer timeline.

This is important because additional costs are accrued 10 days after the Snowball is delivered.

10. Considering the organization’s administrative processes that might hinder or delay the data transfer process.

Factoring internal processes (e.g., Change Request management, stakeholder buy-in, technical change moratoriums, etc.) into the time it will take to receive the device, start the job, and ship it back to AWS can help prevent unnecessary fees.

NOTE: The Snowball device has no additional cost if it is returned within 10 days from the date it is received. Following that time, however, a daily late fee of $15 is applied until the date AWS receives it.

11. Keeping the original source data intact until the data import is confirmed.

It is very important that source data remain intact until the Snowball device has been returned to AWS, the data import has been completed, and the customer has validated the data in the S3 bucket(s).

Transferring data from on-premises storage to an AWS Snowball can be an uneventful endeavor when thorough planning is done in advance of ordering the device. Taking these 11 planning tasks and considerations into account is essential to eliminating some of the potential headaches and stress occasionally associated with this type of activity.
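To illustrate how several of these planning steps (the dedicated IAM identity, SNS notifications, and the job itself) come together, here is a hedged boto3 sketch of creating a Snowball import job. Every ARN, ID, and option shown is a placeholder assumption created ahead of time, not a prescribed configuration.

```python
# Minimal sketch: create an AWS Snowball import job with SNS notifications so
# stakeholders can track job state changes. Assumes boto3 is installed and
# that the bucket ARN, address ID, role ARN, and SNS topic ARN below are
# placeholders for resources created in earlier planning steps.
import boto3

snowball = boto3.client("snowball", region_name="us-east-1")  # region is an assumption

response = snowball.create_job(
    JobType="IMPORT",
    Resources={
        "S3Resources": [{"BucketArn": "arn:aws:s3:::example-import-bucket"}]  # placeholder bucket
    },
    Description="On-premises archive import",
    AddressId="ADID00000000-0000-0000-0000-000000000000",                     # placeholder from create_address
    RoleARN="arn:aws:iam::123456789012:role/example-snowball-role",           # placeholder role
    SnowballCapacityPreference="T80",
    ShippingOption="SECOND_DAY",
    Notification={
        "SnsTopicARN": "arn:aws:sns:us-east-1:123456789012:example-snowball-updates",  # placeholder topic
        "NotifyAll": True,  # notify on every job state change
    },
)

print("Created Snowball job:", response["JobId"])
```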

Refer to AWS Snowball Documentation for additional information and specific instructions not covered in this article.

If you or your organization has more questions, reach out to us at [email protected].

Next Generation MSP: Monitoring

Next Generation MSP: Monitoring

During the past nine years of delivering cloud solutions to government and industry, our team has identified a gap in the delivery of Managed Service Provider (MSP) solutions in a cloud environment.

As an AWS Premier Consulting Partner with the DevOps, Government, and Non-Profit competencies, as well as the GovCloud Skill Partner Status, there are few workloads that we haven’t helped customers migrate and manage in a cloud environment. We also deliver these solutions through Microsoft Azure, where we are a Silver Cloud Partner.

Our focus with MSP is around a structured, repeatable, five-step MSP process

  1. Evaluation
  2. Automation
  3. Optimization
  4. Monitoring
  5. Integration

We previously discussed the Evaluation, Automation, and Optimization phases as part of the MSP process. In this blog – the fourth in a series of five on Next Generation MSP (NG-MSP) – we’ll touch on Monitoring, a critical component of a successful MSP strategy from a visibility perspective. Appropriate monitoring of infrastructure as part of our NG-MSP strategy gives both Effectual and our customers the ability to take proactive measures by collecting, analyzing, and acting on information efficiently.

Further, as we develop a custom NG-MSP playbook for our customers, we look at where we can integrate with existing Security or Network Operations Centers. We don’t see NG-MSP as a one-size-fits-all model, which is where many traditional MSP providers, and even cloud MSP providers, miss the mark.

Proper cloud-first, NG-MSP monitoring must focus on

  • Adherence to cost and operational best practices, including tooling from industry leaders such as CloudCheckr
  • The proper configuration and performance of services such as Amazon CloudWatch (a minimal example follows this list)
  • Incorporating logging tools such as AWS CloudTrail
  • Analyzing logs and data through native cloud services such as Amazon Macie or third-party tools such as Splunk
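The example referenced in the list above is sketched here: a single CloudWatch alarm created with boto3 that notifies an SNS topic when an EC2 instance’s CPU stays high. The instance ID, topic ARN, and thresholds are placeholder assumptions rather than recommended values.

```python
# Minimal sketch: a CloudWatch alarm that alerts an SNS topic when average
# EC2 CPU utilization stays above 80% for two consecutive 5-minute periods.
# Assumes boto3 is installed and credentials are configured; the instance ID
# and topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # region is an assumption

cloudwatch.put_metric_alarm(
    AlarmName="example-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
    Statistic="Average",
    Period=300,                 # evaluate in 5-minute windows
    EvaluationPeriods=2,        # two consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:example-ops-alerts"],  # placeholder topic
)
```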

Next Generation MSP: Optimization

Next Generation MSP: Optimization

During the past nine years of delivering cloud solutions to government and industry, our team has identified a gap in the delivery of Managed Service Provider (MSP) solutions in a cloud environment.

As an AWS Premier Consulting Partner with the DevOps, Government, and Non-Profit competencies, as well as the GovCloud Skill Partner Status, there are few workloads that we haven’t helped customers migrate and manage in a cloud environment. We also deliver these solutions through Microsoft Azure, where we are a Silver Cloud Partner.

Our focus with MSP is around a structured, repeatable, five-step MSP process

  1. Evaluation
  2. Automation
  3. Optimization
  4. Monitoring
  5. Integration

We previously discussed the Evaluation and Automation phases as part of the MSP process. In this blog – the third in a series of five on Next Generation MSP (NG-MSP) – we’ll touch on Optimization, a critical component of a successful MSP strategy from both a performance and a cost perspective. Optimization addresses infrastructure left over from “lift and shift” migration models and keeps abreast of best practices in instance usage, including right-sizing and auto-scaling.

Infrastructure alone isn’t always the full story, as licensing is a large driver in business cases given existing investments or upcoming requirements. This ability to go beyond infrastructure and into a holistic optimization discussion is an area where Effectual continues to provide value. NG-MSP may include the ability to transition from licensed databases to license-included database technologies. Cost savings from reducing or eliminating redundant storage in favor of fault-tolerant architectures are another step forward from a performance and fiscal perspective. It’s also an opportunity to evaluate deployment models to take advantage of microservices.

The cloud-first NG-MSP optimization component continues to focus on

  • Minimal cloud footprint to meet business requirements
  • Proper scaling architectures to serve growth and demand
  • Microservice utilization where appropriate
  • Built-in security monitoring through tools such as CloudCheckr
  • Custom NG-MSP delivery models to meet organizational and compliance requirements
  • Change management through scripting to ensure efficient deployment

We’ve done this long enough to know that NG-MSP models cannot be a one-size-fits-all proposition, which is why optimizing as much as possible within an environment will allow efficient operation from both a performance and a cost perspective.

We stand ready to support your organization’s cloud environment with our NG-MSP services. To discuss options, please reach out to [email protected].

Next Generation MSP: Automation

Next Generation MSP: Automation

During the past nine years of delivering cloud, our team has identified a gap in the delivery of MSP solutions in the cloud.

As an AWS Premier Consulting Partner with the DevOps, Government, and Non-Profit competencies, and GovCloud Skill Partner Status, there are few types of workloads we haven’t helped customers migrate and manage in a cloud environment.

Our focus with MSP is around a structured, repeatable, five-step MSP process

  1. Evaluation
  2. Automation
  3. Optimization
  4. Monitoring
  5. Integration

We previously discussed the Evaluation phase of the MSP process. In this blog – the second in a series of five on Next Generation MSP (NG-MSP) – we’ll touch on Automation, a critical component of a successful MSP strategy. Automation should include operational workflows, infrastructure provisioning, and patching and change management.

The cloud-first MSP automation component is focused on

  • Deploy Infrastructure as Code (IaC)
  • Establish and secure edge connectivity
  • Deploy Virtual Private Clouds (VPCs)
  • Deploy networking / security groups
  • Set-up application and data tiers
  • Move applications with scripted builds

Where applicable, it is also important that compliance with required regulatory standards is automated when possible to ensure legal compliance in relation to sensitive areas (e.g., SCI, HIPAA, NIST, SOC, etc.). In addition, Continuous Integration/Continuous Deployment (CI/CD) pipelines should be developed and implemented to support reliable delivery of code changes.
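To give a flavor of what scripted provisioning along the automation steps above can look like, the sketch below uses boto3 to stand up a VPC, a subnet, and a locked-down security group. In practice this would typically be expressed as Infrastructure as Code (for example, CloudFormation or Terraform); the calls here are illustrative only, and all names and CIDR ranges are placeholder assumptions.

```python
# Minimal sketch: scripted provisioning of a VPC, a subnet, and a security
# group that allows HTTPS only. Assumes boto3 is installed and credentials
# are configured; the region, CIDR ranges, and names are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Create the VPC and tag it so it is identifiable in the console.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "example-ng-msp-vpc"}])

# Carve out a single subnet for an application tier.
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# Create a security group that only permits inbound HTTPS.
sg_id = ec2.create_security_group(
    GroupName="example-app-tier",
    Description="Allow HTTPS only",
    VpcId=vpc_id,
)["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)

print("Provisioned:", vpc_id, subnet_id, sg_id)
```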

MSP models cannot be a one-size-fits-all proposition. Automating as much as possible within an environment will help offload some of that ongoing MSP administration, allowing the outlier components unique to different organizations to be a focus of the Effectual team.

We stand ready to support your organization’s cloud environment with our MSP services. To discuss options, please reach out to [email protected].

Next Generation MSP: Evaluation

Next Generation MSP: Evaluation

Cloud-based Managed Service Providers (MSPs) have changed the Managed Services landscape as we know it.

Unlike traditional on-premises MSPs, a cloud MSP offers an unmatched level of efficiency, scalability, and potential for innovation, improving an organization’s operations significantly. Through the optimization of processes, automation of routine tasks, monitoring of vital components, and integration of simpler workflows, resources are freed to focus on what is most important for the organization.

Our focus with MSP is around a structured, repeatable, five-step MSP process

  1. Evaluation
  2. Automation
  3. Optimization
  4. Monitoring
  5. Integration

This blog is the first in a five-part series related to our delivery of Next Generation MSP (NG-MSP).

To get an agency or organization started with Effectual’s MSP program, a proper evaluation is essential – specifically, an assessment of the security and network environments and of general cloud readiness. Assessing these areas before any MSP activities commence helps identify potential security vulnerabilities and network performance issues. It also validates existing technology, people, and processes while establishing a well-defined reference point.

Our evaluation focuses on the following

  • Identify apps to move
  • Develop project schedule
  • Refine budget / cost optimization
  • Project kickoff

MSP models cannot be a one-size-fits-all proposition. Automating as much as possible within an environment will help offload some of that ongoing MSP administration, allowing the outlier components unique to different organizations to be a focus of the Effectual team.

We stand ready to support your organization’s cloud environment with our MSP services. To discuss options, please reach out to [email protected].


Amazon Web Service as a Data Lake

Amazon Web Service as a Data Lake

“Cloud,” “Machine Learning,” “Serverless,” “DevOps,” – technical terms utilized as buzzwords by marketing to get people excited, interested, and invested in the world of cloud architecture.

And now we have a new one – “Data Lake.” So, what is it? Why do we care? And how are lakes better than rivers and oceans? For one, it might be harder to get swept away by the current in a lake (literally, not metaphorically).

A Data Lake is a place where data is stored regardless of type – structured or unstructured. That data can then have analytics or queries run against it. A useful analogy for a data lake is the internet itself. The internet, by design, is a collection of servers labeled with IP addresses so they can communicate with each other. Search engine web crawlers visit the websites associated with these servers, accumulating data that can then be analyzed with complex algorithms. The results allow a person to type a few words into a search engine and receive the most relevant information. This kind of indiscriminate data accumulation and the presentation of context-relevant results is the goal of data lake utilization.

However, anyone who wants to manage and present data in such a manner first needs a data store to create their data lake. A prime example of such a store is Amazon S3 (Simple Storage Service), where documents, images, files, and other objects are stored indiscriminately. Have logs from servers and services in your cloud environments? Dump them here. Do you have documents that relate to one subject but are in different formats? Place them in S3. The file type does not really matter for a data lake.

Elasticsearch can load data from S3, indexing your data through algorithms you define and providing ways to read and access that data with your own queries. It is a service designed to provide customers with search capability without the need to build their own search algorithms.

Athena is a “serverless interactive query service.” What does this mean? It means I can load countless CSVs into S3 buckets and have Athena return queried data as a tabular output. Think database queries without the database server. In practice, you would need to implement cost management techniques (such as data partitioning) to limit the cost per query, as you are charged based on the amount of data scanned by each query.
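For a sense of what that looks like in practice, here is a hedged sketch that starts an Athena query with boto3 and reads back the results. The database, table, and bucket names are placeholder assumptions that would already exist in your account.

```python
# Minimal sketch: run an Athena query against data in S3 and print the rows.
# Assumes boto3 is installed, credentials are configured, and the database,
# table, and buckets below are placeholders that already exist.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")  # region is an assumption

execution = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM web_logs GROUP BY status",  # placeholder query
    QueryExecutionContext={"Database": "example_lake"},                            # placeholder database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},        # placeholder bucket
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes (simplified; production code would add timeouts).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:  # the first row returned is the column header row
        print([col.get("VarCharValue") for col in row["Data"]])
```

Because Athena bills on data scanned, partitioning and columnar formats are the usual levers for keeping per-query costs down.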

Macie is an AWS service that ingests logs and content from across AWS and analyzes that data for security risks. From personally identifiable information in S3 buckets to high-risk IAM users, Macie is an example of the types of analysis and visualization you can do when you have a data lake.

These are just some examples of how to augment your data in the cloud. S3, by itself, is already a data lake – ‘infinite’, unorganized, and unstructured data storage – and the service is already hooked into numerous other AWS services. The data lake is here to stay and is a stepping stone to utilizing the full suite of technologies available now and in the future. Start with S3, add your data files, and use Lambda, Elasticsearch, Athena, and traditional web pages to display the results of those services. No servers, no OS configurations or security concerns; just development of queries, Lambda functions, API calls, and data presentation – serverless.

Our team is building and managing data lakes and the associated capabilities for multiple organizations and can help yours as well. Reach out to our team at [email protected] for some initial discovery.

The Right Partner is Better than a Crystal Ball (7 of 7)

The Right Partner is Better than a Crystal Ball (7 of 7)

Mistakes can create amazing learning opportunities and have even led to some of the most beneficial discoveries in human history, but they can also have far-reaching, time-consuming, and costly implications.

Luckily, someone else has probably already made the mistakes you’re bound to make, so why not reap the benefits of their errors and subsequent experience?

Know Your Strengths and Limitations

One of my passions in life is building things. Working with my hands to create something new and unique fills me with a sense of accomplishment beyond description. Over the years, I’ve taken on a variety of projects to scratch this itch; from baking, to renovating my kitchen (I’m really quite proud of the custom-built cabinets I made from scratch), to customizing various vehicles I’ve owned over the years. Along the way, I’ve built several Jeeps into serious off-road machines.

In the years before YouTube, when I was first developing the skills required to lift and modify a Jeep, I often encountered situations where I wasn’t confident in my knowledge or abilities. I knew that the vehicle I was working on would need to be just as safe and reliable on the freeway as it would be out in the middle of nowhere – places where the results of my efforts would be stress tested and the results of poor workmanship could be catastrophic. Each time I encountered an area where I had limited knowledge or experience, I would do all the research I could, then find someone with the right experience who could coach me through the process (and critique my work along the way).

Fortunately, I had a ready supply of trusted friends to advise me. In my time spent driving these heavily modified vehicles, I did encounter the occasional failure, but thanks to the skills I developed under the direction of these watchful eyes, none of them ever put me at significant risk.

“The wise man learns from the mistakes of others.”

Otto von Bismarck

As enterprises modernize their infrastructure, they should look at IT as a contributor to business innovation rather than a means to an end. Alongside this shift in view, the expertise required to define strategy, architect solutions, and deliver successful outcomes is increasingly difficult to acquire. In this dearth of available talent, many of the enterprises I’ve been dealing with are struggling with the decision between:

  1. Delaying modernization efforts
  2. Plowing ahead and relying on internal resources to get trained up and tackle the challenges
  3. Bringing in a third party to perform the work of modernization.

Unfortunately, none of these options is ideal.

Choosing the Best Path

The first option, delaying modernization, limits the enterprise’s ability to deliver products and services to their stakeholders and clients in innovative ways – opening the door for disruptive competitors to supplant them. For dramatic evidence of this, look at the contrasting stories of Sears, the first company to offer ‘shop at home’ functionality, and Amazon.com, the disruptor who has supplanted them as the go-to home shopping solution. The option of delaying presents significant risk, but the risk of assigning internal resources to address issues they’re not fully prepared to handle should not be underestimated.

Plowing ahead with a team that’s facing unique challenges for the first time means you’ll be lacking the benefits of experience and hindsight. In my previous posts I’ve discussed some of the hidden traps encountered along the modernization journey, many of which can only be seen in hindsight. These traps can only be avoided once you’ve had the inevitable misfortune and experience of having fallen into them before. “Fool me once…”

It’s similar to the Isla de Muerta from the Pirates of the Caribbean movies; “…an island that cannot be found – except by those who already know where it is.” Unlike the movie, there’s nothing magical about the pitfalls that litter the path to modernization. Many of the concepts that we have accepted as IT facts for decades are invalidated by modern approaches. So, the logical decision seems simple: outsource the effort to someone who has been there before.

Finding the Right Experience and Approach

Bringing in a partner that is experienced in the many areas of IT modernization is the safest of the three options, but your mode of engagement with this third party directly relates to the benefits your enterprise will enjoy. The best advice I can offer is to look for a provider who views modernization from a business perspective, not merely as a technical effort. Most providers will tout their technical expertise (we at effectual do too), but the reality is that nearly all competent providers have people with the same certifications. Technical certifications are no longer the differentiator they used to be. When you are interviewing these teams, ask how they plan to interact across your various enterprise teams outside of IT. If they look puzzled, or don’t have an answer, you know that they are not a business solution provider.

Once you have an idea around who is able to help with your modernization efforts, you need to make a decision regarding the methodology that will best suit your enterprise. One possible route is to completely turn the effort over to the outsourced team. While this is a fairly risk-free approach that leaves you with transformed IT when the project is over, you don’t gain any of the expertise required to manage your environment moving forward. I’ve found that the greatest benefits are realized when an enterprise and their provider partner together on the solution.

Providers as Partners
Partner resources collaborate with enterprise resources to deliver solutions, while also providing training, insight, oversight, and guidance.

In this scenario, the partner team takes the lead on the migration project under the executive direction of the enterprise, just like my friends who would help with my vehicle builds. Partner resources collaborate with enterprise resources to deliver solutions, while also providing training, insight, oversight, and guidance. At the end of the day the enterprise enjoys a better purpose-built solution and develops the expertise to enhance it as additional business requirements are identified, or existing requirements evolve.

What the Future Holds

As modernization efforts start to take hold and your teams gain confidence, you should not consider the journey complete. This is the point in the revolution where a modernized organization can truly view IT and engineering as the linchpin of your competitive advantage, whether it be through cloud adoption, big data, artificial intelligence, mobility, or other current technologies. Historically, the interaction between business and IT has been a two-step process. The business conceptualizes features that would benefit some constituency, whether it be internal or external, then directs IT to build it.

In the new world, where technological capabilities are rapidly evolving and growing, the competitive advantage comes primarily from changing that two-step process. The business first asks IT “What is possible?” and then business teams collaborate with IT to deliver forward‑thinking solutions. This is the behavior that enables innovation and disruption within industries. We’ll explore this topic in depth in a future post.

Learning from the Experts

What has made me a successful builder over the years has been my good fortune to have skilled artisans available to guide and coach me through the work as I was learning how to do it. As I learn tips and tricks from experts, I begin to behave like an expert and deliver high-quality work. As you look to your modernization efforts, your enterprise can Cloud Confidently™ and see similar growth by bringing in the right partners to help leverage your team’s skills and understanding.

The Reality of the Cloud and Company Culture for Financial Services – Part 2

The Reality of the Cloud and Company Culture for Financial Services – Part 2

Cloud transformation impacts more than just tech. It also requires a significant shift in company culture.

This is the second post of two in this series. The first post can be found here: The Reality of the Cloud for Financial Services.

The introduction of new technologies impacts workflow, changes the way your teams go about doing their jobs, and how they communicate with each other and customers. Your teams need to understand the day-to-day value of transformation, and they need to feel like part of the process.

Preparing your teams for potentially radical culture changes is especially critical for financial services organizations, which have historically been hesitant to adopt new and innovative technologies due to the heavily regulated nature of the FinServ industry.

  • Technical Preparedness: It’s common for IT teams to feel a loss of control due to cloud transformation. John Dodge of CIO has said that in-house IT can start feeling like a service broker without a sense of ownership. IT will confront a learning curve with interfaces, APIs, and provider management. Technical training and growth must be prioritized.
  • Skills Building: IT isn’t the only department that will have to acquire new skills. Technical skills are a given, and will be increasingly vital, but your team will likely also need to learn new project management skills and develop a keen understanding of the new realities of security and compliance in the cloud.
  • Frequent Communication and Transparency: Some IT departments can be very resistant to change. During a cloud transformation, there can be tension when your IT team is used to getting under the hood and having total control. Frequent, transparent communication, however, can mitigate resistance in IT. A simple email won’t do the trick. Your IT team needs to know what’s happening – and, more importantly, why – far in advance of it happening. Transparent communication can and should be part of a positive feedback loop that informs the track of ongoing training.

Considerations in Cloud Transformation

Proper planning enables a successful project.  Here are some key considerations to discuss internally and with your partners:

  • Compliance and Security: This is at the top of the list for a reason – third-party and government-mandated security requirements for financial services companies leave little to no room for error. Your organization, partners, and cloud service provider must all understand your regulatory and compliance requirements and address your overall cloud security posture so that you can maintain compliance. This is literally Job One.
  • Performance: Performance is of key importance to financial services organizations, which frequently need high-performance compute to power their transactions and data analysis. You need easily deployable, easily scalable high-performance processing resources for simulations and modelling, machine learning, financial analysis, and data transformations. This requires a detailed ROI and financial analysis to understand costs and avoid sticker shock.
  • Intellectual Property: What is more important to financial services than their intellectual property? Putting that anywhere outside of your own systems is a huge risk, but a properly architected cloud solution can ensure that your data is safer in the cloud than it is in legacy solutions.

Transitioning to the cloud can give your business a competitive edge. For financial services lacking large, experienced in-house IT teams, it’s worth considering a partner and leveraging their expertise to make your transition a success.

Robb Allen is the CEO of effectual, Inc.

What I Saw at AWS re:Invent That You May Have Missed

What I Saw at AWS re:Invent That You May Have Missed

It was an interesting re:Invent for me – launching our new company, the constant feeds from AWS and several partners filled with litanies of new feature and product announcements, multiple multi-hour keynote speeches (yes, I did sit through all of them), working to discern hype from reality, fighting through the throngs of people, and trying to convince the 12-year-old child inside me that I don’t need a Deep Racer.

In the midst of all of this, there were a couple of interesting success stories that really validated my recent lines of thought regarding the business ramifications of cloud transformation.

Andy Jassy’s wide-ranging keynote included product and feature announcements related to Database, Storage, Machine Learning, Security, Blockchain, and who can forget Outposts (so you can run the cloud in your closet).

Revolutionizing the Insurance Industry

In his keynote, Andy Jassy introduced and turned the stage over to Dean Del Vecchio, EVP, CIO and Head of Enterprise Shared Services for Guardian Insurance. Guardian is a great example of a legacy enterprise that has embraced the revolution despite being in a highly regulated industry. With the words, “I’m going to start in a place that may not be expected – I’m going to talk about our workplace Strategy”, Mr. Del Vecchio had my full attention. From there he went on to talk about Guardian’s multi-year transformation that needed to take place before they moved their first workload to the cloud. They changed their office environment, modernized their project methodology, trained their staff on new technologies, and ultimately revolutionized their culture. This was not a tale of a headlong sprint to the cloud; it was a thoughtful, self-aware, and measured approach.

Dean Del Vecchio, EVP, CIO and Head of Enterprise Shared Services for Guardian Insurance

Guardian is a great example of a legacy enterprise that has embraced the revolution despite being in a highly regulated industry.

Once cultural change had begun to take root, Guardian stood up AWS environments and ran Proof of Concept workloads for a full year before the first workload was moved. They identified gaps during this time, and worked with vendors to develop solutions that enabled Guardian to be bold and confident in their migration. They documented their technological biases, the core being a Cloud First posture (rather than an All-In on cloud directive). Because of their approach and inherent understanding of the revolutionary nature of 21st century technologies, they were able to take a Production First approach to moving to the cloud. There were undoubtedly significant pain points and localized failures throughout this journey, but having come through the bulk of it now, Guardian sees their adoption of cloud as a competitive advantage. They have revolutionized the way they interact with their customers and are excited to begin making use of forward-looking technologies like AI, AR, and VR in the cloud to further enhance client experience.

Guardian’s journey revolutionized a 158-year-old Fortune 250 enterprise, from top to bottom.

What was most interesting to me was what their journey was not. It was not exclusively an IT effort, a rapid lift and shift migration, or even primarily technical in nature. It was definitely not shortsighted. It did revolutionize a 158-year-old Fortune 250 enterprise, from top to bottom, and they are now well-positioned for another 158 years of success.

Fender Guitars – re:Invented

In Werner Vogels’ keynote, he started off by talking about minimizing blast radius, one of my favorite topics, and he wrapped up by talking about Deep Racer, the toy that my inner 12-year-old seems to think I need. In between, amongst some explanations of high-level database software design principles, he introduced Ethan Kaplan, Chief Product Officer of Fender Digital at Fender Musical Instruments, one of the premier guitar manufacturers in the world. Mr. Kaplan shared some of the results from research Fender conducted around their client base.

Basically, it boils down to three key points:

  1. 90% of all first-time guitar buyers quit playing within 6 months of the purchase
  2. Those who don’t quit will purchase 8-10 guitars over their lifetime
  3. Guitar players spend roughly 4x the value of their first guitar on lessons

From this data they realized that there is a significant market for guitar lessons, and that if they can improve on the traditional lesson model, they will likely sell many more guitars.

Fender Digital Took a Cloud First Approach for All New Application Development

Fender Digital Chief Product Officer, Ethan Kaplan, discussed the apps and teaching methodology created to help new guitarists see success. Now shooting 4K video 6 days a week, Fender worked alongside AWS in developing a full video processing pipeline architecture that brings terabytes of new content to their users every day.

This led them in two directions. They created a number of apps to help new players understand their new instrument, and they developed an app and teaching methodology that is more fun for the new player and helps them see musical success sooner. Combined, these convert a higher percentage of new players into lifetime players. The story I saw here is that the adoption of revolutionary 21st century IT technologies has enabled Fender to transform their business. They have always made great guitars and stringed instruments, and will continue to do so, but now their business is more and more about teaching people how to play. Fender is shooting 4K video on two soundstages six days a week to support this new mission, producing terabytes of content per day. Working alongside AWS to create a video processing pipeline architecture, Fender can now automatically transcode down to cellphones and up to 4K televisions, simultaneously auto-populate their CDN and CMS, and archive raw video to Glacier 24 hours a day, 7 days a week. Before long, the core of their business will be developing and delivering musical training curricula, with the manufacture of instruments being a sideline.
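For readers curious what the archival leg of a pipeline like that can look like, here is a minimal, hypothetical sketch using boto3 to add an S3 lifecycle rule that moves raw footage to Glacier a day after upload. The bucket name and key prefix are assumptions for illustration, not details of Fender’s actual implementation.

```python
# Hypothetical sketch (not Fender's actual pipeline): transition raw footage
# stored under a given S3 prefix to Glacier one day after upload.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-raw-footage",              # assumption: illustrative bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-raw-video",
                "Filter": {"Prefix": "raw/"},  # assumption: illustrative key prefix
                "Status": "Enabled",
                "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```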

Realizing 21st Century IT

For the past couple of months, I’ve been writing about how cloud and cloud-related technologies are revolutionary in nature, and how your decision to adopt cloud should be viewed as a business effort rather than a strictly technical effort. These two stories from re:Invent keynotes are great examples of similar success that I have seen with clients who have recognized and acted on this principle.

If you are reading this and you’ve already begun a cloud effort that looks like a data center move, it’s not too late to change course. Don’t fall into the trap of thinking that we’ll just Lift and Shift to the cloud now and transform later – this is a prime example of thinking of cloud as a technical rather than a business solution. I’ve yet to see an enterprise be successful at following through on this approach. If you’re not looking at 21st century IT as a business change agent and competitive advantage, it won’t be long before one of your competitors is.

The Reality of the Cloud for Financial Services – Part 1

The Reality of the Cloud for Financial Services – Part 1

According to a report from Markets and Markets, the financial services cloud market is set to reach a total of $29.47 billion by 2021, growing at a compound annual rate of 24.4 percent.

The study further found that most of this growth will be right here in North America. At the forefront of this trend, Capital One has made public declarations around their all-in cloud, all AWS strategy. Their significant presence on the expo floor at AWS re:Invent was further evidence of this commitment. The growth of cloud adoption in this sector is driven by a simple truth: Financial services are being transformed by the cloud.

Still, some Financial Services companies struggle with the challenge presented by the maze of legacy services built up over decades of mergers and acquisitions. It’s all too common for financial services organizations to still be using IBM mainframes from the 1960s alongside newer technologies. This can make the prospect of cloud transformation seem daunting to say the least.

However, the question is not if your organization will transition to the cloud, but when. Cloud transformation benefits greatly outweigh the risks — and your competitors are already moving. The longer you wait, the further behind you fall.

Benefits of Cloud Migrations from Legacy Solutions

There are specific benefits to financial services organizations migrating legacy environments to the cloud:

  • Efficiency: A cloud migration offers opportunity for increased efficiency at decreased operational costs. For financial services organizations looking to revolutionize their IT procurement model, a cloud migration from legacy environments is a quick and easy win, opening the door to more modern tools and services.
  • Decreased Storage Costs: The regulatory requirements for data retention can create enormous costs for financial services. Moving to AWS cloud storage solutions can significantly reduce costs, while still meeting stringent data security requirements. A well architected cloud storage solution will meet your needs for virtually unlimited scalability while improving cost transparency and predictability.
  • Increased Agility: To improve their competitive edge, Financial Services organizations are seeking ways to become more agile. A well-planned cloud transformation will result in applications and platforms that effortlessly scale up and down to meet both internal and customer-facing needs. What’s more, a successful cloud transformation means access to new and better tools and advanced resources.
  • Improved Security: Cloud solutions offer access to enterprise-level equipment and the security that comes along with that. This is normally only within the budget of very large organizations. Cloud services provide increased redundancy of data, even across wide geographical areas. They also offer built-in malware protection and best-in-class encryption capabilities.

In my next post we’ll discuss the “culture transformation” that will enable Financial Services to maximize the return on their cloud transformation investment.

Robb Allen is the CEO of effectual, Inc.

When Best Efforts Aren’t Good Enough (6 of 7)

When Best Efforts Aren’t Good Enough (6 of 7)

“Have you tried rebooting it?”

There was a time, not so long ago, when that was the first question a technician would ask when attempting to resolve an issue with a PC or with one of the servers descended from PC architecture. This was not limited to servers; IT appliances, network equipment, and other computing devices could all be expected to behave oddly if not regularly rebooted. As enterprise IT departments matured, reboot schedules were developed for equipment as a part of routine preventative maintenance. Initially, IT departments developed policies, procedures, and redundant architectures to minimize the impact of regular reboots on clients. Hardware and O/S manufacturers did their part by addressing most of the issues that caused the need for these reboots, and the practice has gradually faded from memory. While the practice of routine reboots is mostly gone, the architectures, metrics, and SLAs remain.

Five Nines (or 99.999%) availability SLAs became the gold standard for infrastructure, and are assumed in most environments today. As business applications have become more complex, integrated, and distributed, the availability of individual systems supporting them has become increasingly critical. Fault tolerance in application development is not trivial, and in application integration efforts it is orders of magnitude more difficult, particularly when the source code is not available to the team performing the integration. These complex systems are fragile and will behave in unpredictable ways if not shut down and restarted in an orderly fashion. If a single server supporting a piece of a large distributed application fails, it can cause system or data corruption that will take significant time to resolve, impacting client access to applications. The fragile nature of applications makes Five Nines architectures very important. Today, applications hosted in data centers rely on infrastructure and operating systems that are rock solid, never failing, and reliable to a Five Nines standard or better.

As we look at cloud, it’s easy to believe that there is an equivalency between a host in your data center and an instance in the cloud. While the specifications look similar, critical differences exist that often get overlooked. For example, instances in the cloud (as well as all other cloud services) have a significantly lower SLA standard than we are used to, some are even provided on a Best Effort basis. It’s easy to understand why this important difference is missed – the hardware and operating systems we currently place in data centers are designed to meet Five Nines standards, so it is assumed, and nobody asks about it anymore. Cloud-hosted services are designed to resemble systems we deploy to our data centers, and although the various cloud providers out there are clear and honest about their SLAs, they don’t exactly trumpet the difference between traditionally accepted SLAs and those they offer from their rooftops.

A Best Efforts SLA essentially boils down to your vendor promising to do whatever they are willing to do to make your systems available to you. There is no guarantee of uptime, availability or durability of systems, and if a system goes down, you have little or no legal recourse. Of course, it is in the interest of the vendor and their reputation to restore systems as quickly as possible, but they (not you) determine how the outage will be addressed, and how resources will be applied to resolve issues. For example, if the vendor decides that their most senior technicians should not be redirected from other priorities to address the outage, you’ll have more junior technicians handling the issue, who may potentially take longer to resolve it – a situation which is in your vendor’s self-determined best interest, not yours.

There are several instances where a cloud provider will provide an SLA better than the default of Best Efforts. An example of this is AWS S3, where Amazon is proud of their Eleven Nines of data durability. Don’t be confused by this; it is a promise that your data stored there won’t be lost, but not a promise that you’ll be able to access it whenever you want. You can find availability SLAs for several AWS services, but none of them exceed Four Nines. This represents effectively 10x the potential outage time of Five Nines, and applies only to the services provided by the cloud provider, not the infrastructure you use to connect to them or the applications which run on top of them.

The nature of a cloud service outage is also different than one that happens in a data center. In your data center, catastrophic all-encompassing outages are rare, and your technicians will typically still have access to systems and data while your users do not. They can work on both restoring services and “Plan B” approaches concurrently. When systems fail in the cloud, often there is no access for technicians, and the work of restoring services cannot begin until the cloud provider has restored access. This typically leads to more application downtime. Additionally, when systems go down in your data center, your teams can typically provide an ETA for restoration and status updates along the way. Cloud providers are notorious for not offering status updates while systems are down, and in some cases, the systems they use to report failures and provide status updates rely on the failed systems themselves – meaning you’ll get no information regarding the outage until it is resolved. Admittedly, these types of events are rare, but the possibility should still give you pause.

So, you’ve decided to move your systems to the cloud, and now you’re wondering how you are going to deal with the inevitable outages. There are really only a few options available to you:

  1. Do nothing and hope for the best. For some business applications this may be the optimal (although most risky) path.
  2. Design your cloud infrastructure the way your data centers have been designed for years. My last two posts explored how expensive this path is, and depending on how you design, it may not offer you the availability that you desire anyway.
  3. Implement cloud infrastructure automation and develop auto-scaling/healing designs that identify outages as they happen and often respond before your team is even aware of a problem. This option is more cost-effective than the second, but it requires significant upfront capital, and its effectiveness requires people well-versed in deploying this type of solution – people who are in high demand and hard to find right now.
  4. Finally, the ideal way to handle this challenge is to rewrite application software to be cloud-native – modular, fault-tolerant applications that are infrastructure aware, able to self-deploy and self-redeploy through CI/CD patterns and embedded infrastructure as code. For most enterprise applications this would be a herculean effort and a bridge too far.

Over the past several decades, as we’ve made progress in IT towards total availability of services, you’ve come to rely on, take comfort in, and expect your applications and business features to be available all the time. Without proper thought, planning, and an understanding of the revolutionary nature of cloud-hosted infrastructure, that availability is likely to take a step backward.
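To put numbers on the Four Nines versus Five Nines gap described above, here is a quick back-of-the-envelope calculation of the downtime each level allows per year, plus a reminder that availability compounds across the services an application depends on:

```python
# Downtime budgets implied by common availability SLAs, in minutes per year.
MINUTES_PER_YEAR = 365 * 24 * 60

for label, availability in [
    ("Three Nines (99.9%)", 0.999),
    ("Four Nines (99.99%)", 0.9999),
    ("Five Nines (99.999%)", 0.99999),
]:
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{label}: ~{downtime:.1f} minutes of allowed downtime per year")

# Four Nines allows ~52.6 minutes per year versus ~5.3 for Five Nines --
# roughly the 10x gap noted above. Availability also compounds: an application
# that depends on three services, each at 99.99%, is bounded by their product.
print(f"Composite of three 99.99% services: {0.9999 ** 3:.4%}")
```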
Don’t be like so many others and pay a premium for lower uptime. Be aware that there are hazards out there and bring in experienced people to help you identify the risks and mitigate them. You’re looking for people who view your moves toward the cloud as a business effort, not merely a technical one. Understand the challenges that lie ahead, make informed decisions regarding the future of your cloud estate, and above all, Cloud Confidently™!

Don’t Take Availability for Granted
Over the past several decades, as we’ve made progress in IT towards total availability of services, you’ve come to rely on, take comfort in, and expect your applications/business features to be available all the time. Without proper thought, planning, and an understanding of the revolutionary nature of cloud hosted infrastructure, that availability is likely to take a step backward. Bring in experienced people to help you identify the risks and mitigate them.

Adopting DevOps Methodology

Adopting DevOps Methodology

The successful adoption of DevOps is about much more than accepting a new methodology – it means embracing a new culture.

This is the third post in a series of three by effectual CEO, Robb Allen. The first two posts can be read here:
     • Embracing DevOps to Solve IT Tension
     • Defusing the Tension

DevOps methodology requires seeing engineering and IT more holistically. They work together throughout the product lifecycle, with some organizations merging them into a single team.

DevOps culture decreases time to market, while increasing reliability, scale, and security through closer collaboration between development (engineering) and operations (IT). It can require a sea change in your company’s culture, but again — the costs of not adopting DevOps can be far greater.

DevOps methodology requires seeing engineering and IT more holistically.

Fostering Communication and Cooperation

  • Create a unified reporting structure for DevOps teams. Set clear expectations and guidelines, driven by fundamental values.
  • Build cross-functional teams including both engineers and IT technicians.
  • Leverage engineers to assist IT technicians in understanding complexities involved in delivering business features.
  • Leverage IT technicians to help engineers understand how involved 24/7 support of those features is.
  • Automate repetitive tasks, freeing up engineers and IT technicians to perform more creative, value-adding work while reducing human error.
  • Make feedback a fundamental feature of workflow to encourage communication-driven action. Create a culture of data sharing.

Driving Improvements through DevOps

While DevOps is generally part of product development, it can also be an invaluable tool during cloud migration. No less an authority than Bina Khimani, Global Head of Partner Ecosystem (Cloud Migrations) at Amazon Web Services, made this case in late 2017. A survey of 450 C-level and VP/director-level executives in the United States, Canada, and the UK found 54 percent already leveraged DevOps methodologies.

This is not an easy direction to move in, even with the best circumstances. It requires unwavering leadership in the truest sense of the word. However, the rewards are wide-ranging, including:

  • Continuous development with shorter development timetables.
  • Decreased complexity.
  • Faster problem resolution.
  • More productive teams with higher morale and engagement.
  • More opportunities for professional development.
  • More time spent creating and innovating, as opposed to fixing and maintaining.

The Undeniable Benefits

The 2016 State of DevOps Report from Puppet found that high-performing DevOps organizations deploy 200 times more frequently, recover from failures 24 times faster, and have a three times lower change failure rate than their lower-performing peers.

Cloud deployment can be the perfect time to start adopting these changes in your organizational culture. A successful adoption means that not only does everyone win, but everyone will feel like they’ve won as well. That’s a value you just can’t put a price on.

Robb Allen is the CEO of effectual, Inc.

Defusing the Tension

Defusing the Tension

Knowing the source of interdepartmental tension isn’t enough.

This is the second post in a series of three by effectual CEO, Robb Allen. The first post can be read here: Embracing DevOps to Solve IT Tension.

A conscious and concerted effort is required to get everyone on the same page, pulling in the same direction.

  • Complementary Missions
    IT and Engineering have different purposes, but their mission statements can and should complement one another, rather than causing them to butt heads. Involve both IT and Engineering team members when crafting mission statements to make sure that one team doesn’t dominate the other. The purpose here is gathering their input and voice to create mutually beneficial missions.
  • Open Communication
    You need to encourage open, clear, and direct communication between departments, especially during disruptive periods. There’s nothing wrong with formalizing this diplomatic process. Having regular meetings where the two departments can hash out disagreements and sticking points can be worth its weight in gold, helping to diminish or eliminate tension altogether. Remember the Agile principle that face-to-face communication is the best.
  • Complementary Needs
    Emphasize points of agreement. Where they don’t exist, try to find needs and wants that are complementary. When it’s not possible to make everyone win, it might be possible for each department to feel like they’ve won enough.
  • Cooperative Culture
    Create a culture where one department sees the victories of another as successes by fostering a workplace where departments see themselves as part of a greater holistic whole rather than competing factions. Encourage self-organization in keeping with Agile principles.

Cloud migration is about more than just “lifting and shifting” applications and data or modernizing applications for the cloud, it’s about transforming the culture of your entire organization.

Cloud migration is about transforming the culture of your entire organization.

How a Cloud Migration Can Inspire a DevOps Culture

You can use your cloud migration as an opportunity to implement Agile development principles. Defusing tension is an important element of a successful cloud migration. Once you have the broader strokes squared away, you can start drilling down into the specifics.

  • Automation
    There’s at least one thing both IT and DevOps immediately have in common: they both want to identify and address manual, inefficient, error-prone tasks. Automating these tasks can free up engineering and IT to focus on more creative and value-adding work. Embracing automation is often the way to gain cross-departmental wins that increase cooperation and morale amongst these teams.
  • Framing
    How you frame a cloud migration can go a long way toward garnering goodwill from both departments. Cloud migrations can be framed as the inevitable march of progress, something we all have to “deal with,” or they can be framed as a challenge requiring input from both departments. The former is a quick way to create resentment, but the latter is a valuable way to inspire the best in your people, including healthy competition and creative cooperation.

In my next post, I’ll explore the adoption of DevOps methodology, which sees engineering and IT not as opposing factions, but two parts of a greater whole.

Robb Allen is the CEO of effectual, Inc.

Bridging a Gap

Bridging a Gap

One of the most difficult challenges facing businesses today is bridging the “Skills Gap” in IT.

This is a phenomenon whereby a business’s desire to leverage new and expanding technologies is hindered by the lack of available talent and skillsets to architect, implement, and manage these technologies.

Businesses commonly navigate this gap through the use of outside consultants, relying on a team of high-performing individuals holding extensive experience within a specific domain. Engaging a consultancy to assist in the architecture and implementation of desired technologies accelerates adoption and provides breathing room for a company’s own staff to learn and practice the new technologies. This time to practice and learn should not be overlooked and is essential to the success of any transformative process. The “Building a car while driving” metaphor comes to mind.

This skills gap is not limited to businesses looking to implement new technologies. Ironically, this conundrum has not bypassed IT vendors, and it is fairly common to see the same gap between IT Sales Professionals and the technologies they’re selling.

It is fairly common to see a gap between IT Sales Professionals and the technologies they’re selling.

The Team Approach

IT vendors separate the skills necessary to govern a sales cycle (commercial skills) from the knowledge and expertise of the products they’re selling (technical skills) by partnering a commercially skilled employee with a technically skilled employee. This approach has permeated the market to such an extent that prospective clients now expect their vendors to show up Noah’s Ark style, two by two.

There is still a struggle to find technically skilled people to accompany commercially skilled sales staff. Mitigation strategies have a certain effectiveness; teaming four or five commercially skilled individuals with a single technically skilled employee allows IT vendors to maximize the effectiveness of their expertise. Technically skilled people are not super-human, although some of the most brilliant deserve to be classified as such, and they can only support so many client conversations, differing sales cycles, and solutions. Eventually “Something’s Gotta Give”. This mitigation is merely a stop-gap solution. The underlying issue is still prevalent; all businesses need a certain number of technically skilled individuals to remain competitive. The fight for talent continues.

IT vendors are also trying to address this gap by putting a ‘paywall’ between prospective clients and technically skilled individuals, charging clients for their employees’ time and experience to assess their current needs. This solution lines up neatly with the consultancy approach. If you have something a business wants, don’t give it away for free. This also places more responsibility and pressure on the commercially skilled individual, forcing them to ensure they qualify a true need with their clients before requesting a potentially limited resource. When a client opts to pay for the time and experience of a technically skilled individual, they are showing real intent to form a business relationship.

This approach fails when commercially skilled individuals don’t qualify the needs of a client in detail and overpromise on the capabilities of their technical counterparts. Again, these technologists are not superhuman and come with a caveat – ‘Magic Wand Not Included.’

It is now the responsibility of a commercially skilled individual to gain some measure of technical skill.

Without a technically skilled individual in the room when qualifying questions are being asked, commercially skilled individuals can be forgiven for misunderstanding the technical requirements of a client. In these scenarios, sales may inadvertently offer a solution to a client need that their product or service does not solve. This can lead to perpetuating the classic salesperson stereotype – just agree to anything in order to close a deal.

At effectual, our approach to sales, which we believe is shared by the majority of our peers, is to avoid this at all costs. At no point do we want to have a client conversation that starts with “But you said…”. This stems from a desire to do business openly, honestly, and with integrity. Recognizing that not every client is in need of what you are selling is vital.

The Importance of Professional Development

With this context, I’d like to address the crux of how I personally went about bridging the gap. In order to avoid overpromising, underdelivering, and starting a client relationship that is doomed from day one, it is my belief that it is now the responsibility of a commercially skilled individual to gain some measure of technical skill. In fact, I’ll phrase that more strongly: gain as much technical skill as necessary so you can speak with confidence or reply with “I don’t know but I’ll find out.”

Commercially skilled individuals may have raised their eyebrows, adopted an incredulous look, or stopped reading altogether, and that’s fine. Nowhere in a commercially skilled employee’s job description does it state they need to be technically qualified or expected to perform two job roles for a single salary. There is a logical line of thinking that emerges here: if they were hired for the skills they have already acquired, why should they be expected to develop skills in a different discipline for a role they have no intention of performing?

My argument however is simple: why not? Broadening your horizons is never a negative, and learning new skills, gaining empathy for other people’s challenges, and respecting their achievements is never a bad thing. Meeting your current and potential clients halfway or the whole way is never a bad thing.

With technically skilled individuals being in such high demand, and without enough of them to go around, surely being able to operate without them creates a competitive advantage. I’m not suggesting that the commercially skilled pivot entirely and embark on a drastic career change, only that a little knowledge is empowering, and a lot of knowledge is powerful. Watch as the atmosphere of client meetings changes from “I’m being sold to” to “this person knows what they’re talking about.”

A little knowledge is empowering, and a lot of knowledge is powerful.

You can gain a better understanding of exactly how a product or service can assist a prospective client, identify incompatibility early on, and qualify out with confidence. It takes an expanded skill set to understand when the fit is not right, and it takes integrity to step away.

Earning the Respect of Your Clients and Your Team

Technically skilled individuals who play the yin to the commercial individual’s yang will be Sales’ biggest supporters. I know this from experience and have made some wonderful friendships as a result (I hope they’re reading this). Operating with more autonomy doesn’t mean you are trying to replace them, just that you are helping to ease their burden. When you finally do request help, they know it will be an interesting challenge, an opportunity for them to impart some knowledge on someone keen to absorb it, or a chance to get creative with a solution because the standard approaches aren’t working.

How do they know all this? Because they know that when you ask, you’ve already ruled out the most common technical approaches. You’ve met them halfway.

I don’t presume to speak on behalf of my better qualified and more experienced technical counterparts, but I hope they’ll agree that they get the biggest kicks out of solving tough challenges, not answering the same questions over and over.

Yes, I feel it’s important for commercially skilled individuals to bridge the gap between commercial and technical skills. Selfishly, it’s for their own benefit for now, but it’ll soon become a necessity as the market-wide skills gap continues to grow.

Tom Spalding is a Strategic Account Manager at effectual, Inc.

Embracing DevOps to Solve IT Tension

Embracing DevOps to Solve IT Tension

Tension. It can be uncomfortable, disconcerting, even a bit scary.

This is the first post in a series of three by effectual CEO, Robb Allen. The rest of the posts can be read here:
     • Defusing the Tension
     • Adopting DevOps Methodology

But it’s not unusual for the team supporting your digital infrastructure (IT) and your team developing new applications and refining legacy apps (software engineering) to exist in a state of tension at the best of times. Cloud migrations can heighten this tension to the extreme.

The needs and wants of Engineering and IT often come into conflict during a cloud migration. What’s more, the heightened tension can linger long after the migration is done. That’s not just unpleasant. It can seriously impact your bottom line. So how do you mitigate the tensions between IT and Software Engineering?

Embracing DevOps methodology is a good way to move toward a world where Engineering and IT are seen as two parts of a more cohesive whole. This might require significant change in your organization. But the cost of complacency can be much higher.

Embracing DevOps methodology is a good way to move toward a world where Engineering and IT are seen as two parts of a more cohesive whole.

Understand IT Tension

Common causes of IT tension include:

  • Conflicting Missions
    The distinct missions of IT and Engineering can put them instantly into conflict. Engineering is focused on innovating and getting new applications out on time. IT is focused on providing a stable and secure environment for everyone from accounting to the front desk. So, while Engineering is one of many balanced priorities for IT, Engineering believes it should be the top priority. This belief isn’t entirely unjustified. Satisfying clients and delivering software are first on the list of Agile principles for a reason.
  • Lack of Communication and Understanding
    Different missions, scopes, and priorities can make communication difficult even during the best of times. In some cases, there may not even be enough of a shared language for communication. Disputes between IT and Engineering are often communicated and “negotiated” through third parties, adding further opportunity for creating tension. Lack of communication frequently leads to the blame game when things go wrong. This, in turn, deepens mistrust and animosity between the two departments.

Interdepartmental animosity between IT and Engineering doesn’t have to be a fact of life. Management can take steps to move two adversarial departments onto the same page. There might still be rivalries, but that can be healthy. Rivalries that are tempered by mutual respect inspire harder work and greater creativity. Organizations capable of this level of honesty, self-awareness and accountability, will gain a significant competitive advantage over those that are not.

This is the first post in a series of three that discusses the benefits of moving to a DevOps methodology. Next time (Defusing the Tension) we’ll explore some of the ways DevOps can address and defuse tensions within IT departments.

Robb Allen is the CEO of effectual, Inc.

Cloud: The Mirage of Massive Cost Savings (Ketchup on the side) (5 of 7)

Cloud: The Mirage of Massive Cost Savings (Ketchup on the side) (5 of 7)

“Why are you moving to the cloud?” is a question I’ve asked more times than I can count. It’s one of the first questions posed to a potential client, for multiple reasons.

The two most important reasons are: first, I want to get a little insight into how thoughtful and educated this potential client is in relation to cloud, and second, I want to understand what metrics will be used to determine the success or failure of the project we are considering undertaking. Potential clients respond to this question in various ways, but almost always, one of their first answers is around saving money and/or cutting costs. When I hear this response, I ask a couple of follow-up questions to clarify how they plan on accomplishing this ambiguous goal. More often than not, they have no idea how they will recognize cost savings, and many just expect it to be a natural benefit of moving their VMs to the cloud.

This near-universal acceptance of a broad notion, with little factual basis, reminds me of the story of ketchup. In the mid-19th century, a doctor took the ketchup of the time (which was basically fermented mushroom sauce or ground-up fish innards – further reading on this if you are so inclined) and added tomatoes to it. He made some somewhat dubious claims regarding the maladies that could be cured by his new ketchup, which were picked up by the press. By the later part of the 19th century, with the help of unscrupulous hucksters along the way, nearly everyone believed that ketchup cured all ills. While ketchup does have some definite health benefits – it’s rich in Vitamin C and antioxidants – and is a very tasty condiment, a cure-all it is most definitely not.

The truth is, simple cloud migration, even when instances are right-sized and Reserved Instances (RIs) are purchased, is unlikely to produce significant cost efficiency for infrastructure that isn’t properly architected to take advantage of cloud services. In my last post I shared an example of two different cloud deployment strategies for a sample application. The five-year total operating costs were roughly $350k for one strategy and $14k for the other. The difference: to recognize the operational cost savings of $336k over 5 years, an enterprise would need to spend roughly $100k and several months of effort upfront. Enterprises are wary of the upfront costs, 125% of the expensive model’s projected first year operating costs, or 571% of those first quarter operating costs, so they make a short-term financial decision to proceed with the $350k option. More often than not, this decision is made in the IT department, based on their limited budget visibility, not at the executive level where greater budgetary visibility and enterprise strategy are handled.

Another dirty little secret about cloud cost management that doesn’t often percolate up to executive levels is that the flexibility of cloud allows your IT teams to immediately spin up services that generate significant cost with little or no financial oversight until the invoice comes due. For example, common compute instances at AWS can cost from $6 – $10 per hour, with specialized services available through the marketplace costing several times that. An otherwise well-meaning IT employee (with no purchasing authority) could spin up a single $8/hour resource with no oversight, which by the time the bill has been received could total $6k-$7k in charges. While this situation would likely be recognized and addressed at the time the invoice was reviewed, dozens and dozens of smaller instances could take years of invoice cycles to clear up and, over time, have a much greater but less initially obvious impact. I have been involved in several remediation efforts for clients’ IT departments that were spending $500k+ annually on unaccounted-for cloud services.
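The math behind that runaway-instance example is worth spelling out. A minimal sketch, assuming roughly one monthly billing cycle plus some review lag before anyone notices:

```python
# Back-of-the-envelope cost of the runaway resource described above.
hourly_rate = 8.00      # USD per hour, from the example
hours_per_day = 24

for days_unnoticed in (30, 35):   # assumption: one billing cycle plus review lag
    cost = hourly_rate * hours_per_day * days_unnoticed
    print(f"{days_unnoticed} days unnoticed: ${cost:,.2f}")

# Prints ~$5,760 and ~$6,720 -- in line with the $6k-$7k figure above, and that
# is a single instance, not the dozens of smaller ones that tend to slip through.
```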

Cross-functional governance

The most successful way I’ve seen enterprises address this new requirement is to create a cross-functional governance committee that includes representation from finance, IT and core business units, with the charter of managing cloud related costs.

Contrast this with the IT Provisioning Model, where costs were governed prior to the time of purchase. When an IT department needed additional infrastructure, a Capital Expenditure process is/was in place that required finance approval. Budgets and expenditures were relatively easily managed, and without proper authority, individual purchasing power was limited. In the revolutionary world of 21st Century IT, we need revolutionary methods of governance.

The most successful way I’ve seen enterprises address this new requirement is to create a cross-functional governance committee that includes representation from finance, IT, and core business units, with the charter of managing cloud-related costs. In enterprises that have Cloud Steering or Governance Committees or a Cloud Center of Excellence, this cross-functional group works under their direction. My good friends at Cloudability, who have developed what is probably the most comprehensive cloud cost reporting and management toolset in the industry, refer to this committee as the Cloud Financial Office (CFO – I believe the pun is intended). This committee evaluates the needs of the business, the reporting and cost management/containment requirements of finance, and the operational/support requirements of IT, to determine the best approach for meeting the needs of all three stakeholders. They develop strategy, policies, and procedures for IT, finance, and business that lead to a deployed cloud infrastructure that is manageable from a cost perspective. As I mentioned above, there are tools that support this mission, but without the insights of the entire committee to interpret and act on the data, you will not recognize the value of the tools or succeed in being cost efficient with the capital you spend on cloud-based infrastructure. Tools are not a silver bullet.

Just like there’s a nugget of truth underlying the health benefits of ketchup, when thoughtfully planned, considered, and executed, you can recognize significant IT cost reductions as well as several other powerful benefits from transforming your infrastructure to the cloud. On the other hand, just like drinking a bottle of ketchup a day won’t cure or prevent any maladies, ignoring the revolutionary nature of the cloud and how your enterprise must adapt in order to “Cloud Confidently™” won’t lead to any promised savings. It will likely result in higher costs for fewer benefits than you enjoy now. As you approach cloud adoption, remember, not everyone making claims of free and instant IT savings has your particular best interests at heart. Many of them, much like the 19th century ketchup hucksters, benefit handsomely as you overspend in the cloud. It’s the 21st Century – don’t drink the ketchup, Cloud Confidently™!
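As a simplified illustration of the kind of reporting such a committee might review, here is a hedged sketch that pulls a month of spend grouped by a cost-allocation tag from the AWS Cost Explorer API. The BusinessUnit tag key and the date range are assumptions, and in practice a dedicated tool like the ones mentioned above would do the heavy lifting.

```python
# Hypothetical sketch: a month of spend grouped by a "BusinessUnit"
# cost-allocation tag, via the AWS Cost Explorer API. The tag key and date
# range are assumptions; dedicated tooling would normally sit on top of this.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-01-01", "End": "2019-02-01"},  # assumed range
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "BusinessUnit"}],         # assumed tag key
)

for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        tag = group["Keys"][0]   # formatted as "BusinessUnit$<value>"
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{tag}: ${amount:,.2f}")
```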

A Tale of Two Models: Provisioning vs. Capacity (4 of 7)

A Tale of Two Models: Provisioning vs. Capacity (4 of 7)

A couple of weeks ago, I wrote about current IT trends being ‘revolutionary’ as opposed to ‘evolutionary’ in nature.

Today, I want to expand on that concept and share one of the planning models that make cloud systems in particular and automated infrastructure in general more cost effective and efficient. When talking to clients I refer to this as “The Provisioning vs. Capacity Model”. First let’s look at the Provisioning Model, which, with some adaptation, has underpinned infrastructure decisions for the last five decades of IT planning. The basic formula is fairly complex, but looks something like this:

((((CurrentPeakApplicationRequirements * GrowthFactor) * HardwareLifespan) + FudgeFactor) * HighAvailability) * DisasterRecovery

Let’s look at a practical example of what this means. As an IT leader, being asked to host a new application, I would work with the app vendor and/or developers to understand the compute, storage and networking configurations they recommend per process/user. Let’s say that we determine that a current processor core can support 10 concurrent users and a single user creates roughly 800K of data per day.

I would then work with the business to identify the number of users we expect to begin with, their estimate for peak concurrent users and what expected annual growth will be. Ultimately, we project that we will start with 20 users who may all be using the system at the same time. Within the first year, they anticipate scaling to 250 users, but only 25% of them will be expected to be using the system concurrently. By year five (our projected hardware lifespan) they are projecting to have 800 users, 300 of whom may be using the system at any given time. I can now calculate the hardware requirements of this application:  

Year  Users  Storage (GB)  Concurrent Users  Cores
1     250    49.59         63                6
2     450    138.85        135               14
3     600    257.87        228               23
4     700    396.73        259               26
5     800    555.42        300               30

Being an experienced IT leader, I ‘know’ that these numbers are wrong, so I’m going to pad them. Since the storage is inconsequential in size (I’ll likely use some of my heavily over-provisioned SAN), from here on out I’ll focus on compute. The numbers tell me that I’ll need 2 servers, each with 4 quad-core processors, for a total of 32 cores. Out of caution I would probably increase that to 3 servers. Configuring memory would follow a similar pattern. Because the application is mission critical, it’ll be deployed in a Highly Available (HA) configuration, so I’ll need a total of six servers in case there is a failure with the first three. This application will also require infrastructure in our DR site, so we’ll replicate those six servers there, for a total order of twelve servers. In summary, on day one, this business would have a dozen servers in place to support 20 users.
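For anyone who wants to trace the arithmetic, this short sketch reproduces the sizing above: cores from projected concurrent users, padding out of caution, then doubling for HA and doubling again for DR.

```python
import math

# Reproduce the provisioning math from the example above.
concurrent_users_year_5 = 300   # projected peak concurrency at end of life
users_per_core = 10             # from the vendor sizing guidance
cores_per_server = 16           # 4 quad-core processors per server

cores_needed = math.ceil(concurrent_users_year_5 / users_per_core)  # 30 cores
servers = math.ceil(cores_needed / cores_per_server)                # 2 servers
servers_padded = servers + 1      # "out of caution" padding
servers_ha = servers_padded * 2   # Highly Available configuration
servers_total = servers_ha * 2    # replicated again to the DR site

print(f"{cores_needed} cores -> {servers_total} servers on day one for 20 users")
# 30 cores -> 12 servers, the dozen described above.
```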

The Provisioning Model can lead to overkill

Under the provisioning model a Highly Available solution with sufficient Disaster Recovery infrastructure could result in a large server deployment to support a very small number of users.

I know what you’re thinking: “This is insanity – if my IT people are doing this, they are robbing me blind!” No, they aren’t robbing you blind; they are following a “Provisioning” model of IT planning. The reason they plan this way is simple: it usually takes months from the time that an infrastructure need is identified to the time that it is deployed in production. It looks something like this in most enterprises:

  • 1-2 weeks – Identify a need and validate requirements
  • 1 week – Solicit quotes from 3 approved vendors (if the solution comes from a non-approved vendor, add 3 months to a year for vendor approval)
  • 2-3 weeks – Generate a Capital Request with documented Business Justification
  • 2 weeks – Submit Capital Request to Finance for approval
  • 2-3 weeks – Request a PO from purchasing & submit to vendor
  • 2-3 weeks – Wait for vendor to deliver hardware & Corporate receiving to move equipment to configuration lab
  • 3-4 weeks – Manually configure solution (Install O/S & Applications, request network ports, firewall configurations, etc)
  • 2 weeks – Install and Burn-In

The total turnaround time here is 15-20 weeks. Based on the cost, time, pain and labor it takes to provision new infrastructure, we want to do it right and be prepared for the future, and there is no quick fix if we aren’t. Using a provisioning model, the ultimate cost in deploying a solution is not in the hardware being deployed, but rather in the process of deploying it.

The upshot of all this is: most of your IT infrastructure is sitting idle or nearly idle most, if not all, of the time. As we assess infrastructure, it is not uncommon for us to see utilization numbers below 10%. Over the past 15 years, as configuration management, CI/CD, virtualization, and containerization technologies have been adopted by IT, the math above has changed, but because those technologies are evolutionary in nature, the planning process hasn’t. In the Provisioning model, we are always planning for and paying for capacity that we will need in the future, not what we need today.

Enter Cloud Computing, Infrastructure Automation, Infrastructure as Code (IaC), and AI. Combined, these technologies have ushered in a revolutionary way to plan for IT needs. IaaS and PaaS platforms provide nearly limitless compute and storage capability with few geographic limitations. Infrastructure Automation and IaC allow us to securely and flawlessly deploy massive server farms in minutes. AI and Machine Learning can be leveraged to autonomously monitor utilization patterns, identify trends, and predictively trigger scaling activities to ensure sufficient compute power is delivered “Just in Time” to meet demand, then scaled back as demand wanes. In cases where IaaS and PaaS providers experience localized outages, the same combination of IaC and AI can deploy your infrastructure in an unaffected region, likely before most of your user base or IT is even aware that an outage has occurred. Software updates and patches can be deployed without requiring system outages. The possibilities and opportunities are truly mind-boggling.

Taking advantage of these capabilities requires a complete change in the way our IT teams think about planning and supporting the applications our users consume. As I mentioned above, the incremental hardware costs of over-provisioning in the data center are inconsequential when compared with the often unaccounted-for cost of deploying that hardware. In forward-looking IT, where IaaS and PaaS are billed monthly and provided on a cost-per-deployed-capacity model, and infrastructure can be nearly instantly deployed, we need to abandon the Provisioning Model and adopt the Capacity Model. Before I proceed, you need to understand that these three pillars – IaaS/PaaS, Infrastructure Automation, and AI – must all be in place to effectively take advantage of the cost savings and efficiency of the Capacity Model while still delivering secure, reliable services to your users. Merely moving (often referred to as “Lift and Shift”) your servers to the cloud and optimizing them for utilization may provide some initial cost savings, but at significant risk to the security, availability, and reliability of services.
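As one small example of what that automation can look like in practice, the sketch below attaches a target-tracking policy to a hypothetical EC2 Auto Scaling group so capacity follows demand. It is a simpler, rule-based cousin of the predictive scaling described above, and the group name and target value are illustrative assumptions rather than recommendations.

```python
# Hypothetical sketch: a target-tracking scaling policy on an EC2 Auto Scaling
# group, so capacity scales out as demand grows and back in as demand wanes.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="example-app-asg",   # assumption: illustrative group name
    PolicyName="track-average-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        # Leave enough headroom for new capacity to spin up before saturation.
        "TargetValue": 80.0,
        "DisableScaleIn": False,
    },
)
```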

3 Pillars of the Capacity Model

IaaS/PaaS, Infrastructure Automation, and AI must all be in place to effectively take advantage of the cost savings and efficiency of the Capacity Model.

Following the Capacity Planning model, we try to align deployed infrastructure to utilization requirements as closely as we can, hour by hour. You may have noticed that in my Provisioning example above I was primarily concerned with, and planning for, the required capacity at the end of the lifespan of the infrastructure supporting the application. I was also building to a standard that no system would ever exceed 35%-40% utilization. In the new capacity planning model, I want every one of my services running as close to 90% utilization as possible, ideally with only enough headroom to support an increase in utilization for as long as it takes to spin up a new resource (typically only a few minutes). As demand wanes, I want to be able to intelligently terminate services as they become idle. I use the word “intelligently” here for a reason: it’s important to understand that many of these resources are billed by the hour, so if I automatically spin up and terminate a resource in 15 minutes, I am billed for a full hour – if I do it 3 times in a single hour, I’m billed for 3 hours.

Let’s look at a sample cost differential between Provisioning and Capacity modelling in the cloud. For this exercise, I’m just using the standard rack rates for AWS infrastructure. I am not applying any of the discounting mechanisms that are available, and I’m using simple calculations to illustrate the point.

It’s also important to remember that Revolution is neither free nor easy; developing and refining the technologies to support this potential savings for this new application will cost $50k-$100k over the five years.

Provisioning Model – 5 Year Costs:

Year  Instance   Cost/Hour  Qty  Hours/Month  Annual Cost
1     c5.xlarge  0.17       48   720          $70,502.40
2     c5.xlarge  0.17       48   720          $70,502.40
3     c5.xlarge  0.17       48   720          $70,502.40
4     c5.xlarge  0.17       48   720          $70,502.40
5     c5.xlarge  0.17       48   720          $70,502.40
Total Cost: $352,512.00

Capacity Model – 5 Year Costs:

Year  Instance   Cost/Hour  Qty  Hours/Month  Annual Cost
1     c5.xlarge  0.17       2    410          $1,672.80
2     c5.xlarge  0.17       4    293          $2,390.88
3     c5.xlarge  0.17       6    255          $3,121.20
4     c5.xlarge  0.17       7    245          $3,498.60
5     c5.xlarge  0.17       8    237          $3,867.84
Total Cost: $14,551.32
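For anyone checking the arithmetic, both tables follow the same formula – hourly rate × instance count × hours per month × 12 months – which this small sketch reproduces:

```python
# Quick check of the arithmetic behind both tables:
# annual cost = hourly rate x instance count x hours per month x 12 months.
RATE = 0.17  # c5.xlarge standard rate used in the example

provisioning = [(48, 720)] * 5                                  # (qty, hours/month)
capacity = [(2, 410), (4, 293), (6, 255), (7, 245), (8, 237)]   # years 1-5

def five_year_total(rows):
    return sum(RATE * qty * hours * 12 for qty, hours in rows)

print(f"Provisioning model: ${five_year_total(provisioning):,.2f}")  # $352,512.00
print(f"Capacity model:     ${five_year_total(capacity):,.2f}")      # $14,551.32
```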

In the model above, for simplicity of understanding, I only adjusted the compute requirements on a yearly basis. In reality, with the ability to dynamically adjust both instance size and quantity hourly based on demand, actual spend would likely be closer to $8k over 5 years. It’s also important to remember that Revolution is neither free nor easy; developing and refining the technologies to support this potential savings for this new application will cost $50k-$100k over the five years, depending on the application requirements. At the end of the day, or at the end of five years, following the capacity model may result in spending well less than half the cost of the provisioning model, while enjoying much higher security, reliability, and availability of applications with a significantly lower support cost.

To wrap up this very long post: yes, it is true that massive cost savings can be realized through 21st Century IT Transformation, but it will require a Revolution in the way you think about supporting your business applications. Without people experienced in these very new technologies, you’re not likely to be happy with the outcome. Finally, if you encounter anyone who leads the charge to cloud with words like “Lift and Shift”, please don’t be hesitant to laugh in their face. If you don’t, you may end up spending $350,000+ for what could otherwise cost you $8,000.

Cloud, All In or All Out? (3 of 7)

Cloud, All In or All Out? (3 of 7)

I recently spoke with a good friend of mine who is a Finance SVP with a publicly traded North American manufacturer.

He was very excited to tell me that his executive team had been strategizing over the past couple of quarters and was getting ready to publicly announce they would be moving all IT services to the cloud and would be 100% complete by Q4 2019. As our conversation progressed, I asked him why they had made this decision, and he offered several reasons, some of which were more valid than others. Ultimately, with some prior knowledge of how large, diverse, and (in several significant areas) outdated their technology estate was, I asked what their IT teams thought of this initiative. I’m pretty sure my jaw actually dropped at his reply: “Outside of the CIO’s office, nobody knows yet.”

It’s not just all or nothing

There are a whole host of issues with the direction this executive team was headed, but for this particular blog I want to focus on one particularly poor decision that I see play out over and over with potential clients: they are either All In or All Out on cloud adoption.

I would argue that the vast majority who take either of those two positions have a fundamental misunderstanding of what “Cloud” is. Since the term “Cloud” has been co-opted by nearly every vendor to mean almost anything, this misunderstanding is not surprising. For the purpose of today’s blog, I’ll be referencing the key components of cloud computing: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). At its core, each of these offerings supports the delivery of business application features to users.

At the end of the day, delivering business application features to users as effectively and efficiently as possible is, and should be, a primary concern of every executive. Executives hire very smart and talented teams of Architects, Analysts, Product/Program/Project Managers, Engineers, and Administrators to accomplish this delivery. These teams work with their respective vendors to understand the nature of the various applications they must operate, then design, build, and configure systems to support them. To support diverse applications, these teams need diverse tools, and Cloud (IaaS, PaaS & SaaS) is only one of them. As powerful as the cloud may be, it is not always ideal, or in some cases even suitable, for every situation or application.

All In, or All Out on cloud adoption?

I would argue that the vast majority who take either of those two positions have a fundamental misunderstanding of what “Cloud” is.

Imagine for a moment that you are hiring the best contractor in your area to build you a home. You’ve worked with them to determine the ideal design, select the desired finishes, and come up with a budget and timeline. At what point would you think it was in your best interest to dictate to this contractor, the best in the area, what tools they may or may not use to deliver your finished home? Wouldn’t it be better to let them use the best tool for each individual job that needs to be done? If so, why do we as executives think it is in our best interest to sit in board rooms and determine what tools our IT teams may or may not use, without understanding the nature of the applications they need to operate? The simple answer: it is not.

Rather than limiting the tools their teams can make use of, executives, as the primary visionaries and strategists of the enterprise, should develop guidelines for their teams. These guidelines help teams identify appropriate tools and strategies, and ultimately align themselves with the overriding executive vision. In our practice, we break these guidelines into two primary sections: Outcome and Bias statements. Outcome statements generally speak to requirements related to availability, reliability, durability, and usability of applications, while Bias statements are a prioritized list of preferences for how the application is delivered. For example, an Outcome statement might require that a customer-facing application remain available around the clock, while a Bias statement might express a preference for managed services over self-managed infrastructure. This construct provides for executive oversight while also empowering teams to ultimately understand what is right and do it.

“Outside of the CIO’s office, nobody knows yet.”

There are many enterprises that have gone all in on cloud adoption and many that have avoided it altogether. In all my experience, I have yet to encounter an enterprise that has gone All In on the cloud without making significant compromises or undergoing supernatural gymnastics to get everything in (except for businesses that were born in the cloud). Likewise, I have yet to work with a business that has completely opted out of cloud that couldn’t benefit from having some of its systems residing there.

“Outside of the CIO’s office, nobody knows yet.” As you can imagine, this jaw-dropping statement was not the end of our conversation. We discussed the nature of Cloud Services, and he invited me to consult a bit with several of his peers and superiors within the organization, where I was able to provide a little of my perspective and insight into the path they were preparing to undertake. The jury is still out on what their plan for cloud adoption will be, but I have not seen them make a public announcement regarding 100% cloud adoption by EOY 2019.

Revolution, not Evolution (2 of 7)

Revolution, not Evolution (2 of 7)

In my previous blog, I stated:

21st century IT is a revolution – not an evolution of what we have been doing for decades. The skills required to transform through this revolution are different than those required to operate the existing state, which are different still from those required to operate the new state.

A fundamental misunderstanding of this concept underlies almost every troubled or failing IT transformation project. It took me a few years of assisting enterprises in their cloud migrations to fully understand the ramifications of the difference between evolution and revolution as it applies to IT initiatives and how they impact business.

First, let’s consider an earlier technological revolution

The combined impact of the Personal Computer, the GUI, and desktop publishing was revolutionary and had a transformative impact on the enterprise. Prior to this revolution, enterprises had specially trained computer operators to handle I/O functions for the mainframes and steno pools with typewriters for document creation. As a result of this revolution, there was a fundamental change to the way business was done. Employees, managers, and executives alike were able, and quickly required, to generate their own documents, manage their own calendars, and perform their own data I/O. Within a short time, everything changed. Successfully navigating this change required different IT staff with completely different skills. It ushered in and set the tone for the next 3-4 decades of IT practices.

Within a short time, everything changed

Successfully navigating this change required different IT staff with completely different skills. It ushered in and set the tone for the next 3-4 decades of IT practices.

It’s interesting to contrast this with the evolution of virtualization that really took hold at the turn of the century. While virtualization had significant impact on IT departments and how compute power was provisioned in the data center, it did not significantly change what IT staffs did or how they did it. The skill sets required after a move to virtualization were mostly the same as those required prior to the move, and the virtualization of data centers was primarily performed by existing IT staff. The impacts of this transformation effort were barely felt, if recognized at all, by those outside of IT.

I’ve spoken with countless enterprise leaders who view the transformation to 21st Century IT as nothing more than a data center migration – something that is normal to the ongoing operations of an IT department. While this can technically work, it’s unlikely to provide the ideal outcomes promised and sought after. The reality is that an IT estate moved in this manner will most probably cost more over the long term while negatively impacting security, availability, and performance. The great news is that it doesn’t have to be this way!

If you embrace this transformation as a revolution, impacting all aspects of how your enterprise does business, you’ll be taking the first important step.

A few years ago, my team and I were brought into a large financial services company. They were looking to contain IT costs and, as a result, were investigating the cloud as a way to accomplish that goal – but this is not the start of the story. Over the previous several decades, this enterprise had become one of a couple of “800lb Gorillas” in their particular vertical. They had thousands of employees, all the major customers, massive amounts of data, and an annual IT spend nearing 9 digits. A few years prior to our involvement, a couple of start-up companies with few staff, no customers, no data, and extremely limited IT budgets entered their vertical and started disrupting it. Initially, these start-ups were ignored by the enterprise, then they were mocked, and ultimately, as they began to take market share away, they were feared. The enterprise started playing defense, leading to the cost-cutting exercise we were brought in to assist with.

If you embrace the transformation to 21st Century IT as a revolution, impacting all aspects of how your enterprise does business, you’ll be taking the first important step.

As we worked with this client and helped them understand the revolutionary nature of the cloud and the wide-ranging impacts it could have on the way they do business, they began to reevaluate their posture with regard to the insurgent companies. With the transformation to new technologies came a culture change. These changes positively impacted customer interactions and the speed with which our client was able to respond to feature requests. Eventually, our “800lb Gorilla” client became the disruptive innovator in their vertical. Today, aside from the name on the building and the vertical they serve, they don’t look much like they did when we first met them. The way they do business has fundamentally changed, across their entire enterprise.

Your enterprise may or may not face similar challenges, and you may not need or want such sweeping change, but regardless, understanding that your transformation is revolutionary, not evolutionary, will position you well for success. Don’t be surprised if embracing the revolution helps address some of the issues you are already facing.

Just be aware that it isn’t easy or free — revolution never is.

How to Tell Ahead of Time If Your IT Transformation Project is Going to Fail (1 of 7)

How to Tell Ahead of Time If Your IT Transformation Project is Going to Fail (1 of 7)

We’ve started using a phrase in my office that encompasses all of the various migrations, reorganizations and modernization trends going on in IT departments today: “21st century IT transformation.”

Yes, I know. It’s a mouthful. Someone smarter than me will probably coin something more concise. And catchier. The phrase refers to the adoption and combination of technologies such as Continuous Integration/Continuous Delivery (CI/CD), Infrastructure as Code, and Artificial Intelligence (AI); methodologies such as Agile and DevOps; and service models such as cloud hosting. Combined, these areas help IT organizations meet business requirements and deliver business value.

Learning from Failures

I’ve been working in the IT Ops space for more than 30 years. The last 10 of those years have been spent helping clients of all sizes understand and ultimately make the transformation to 21st century IT models and approaches. While I have a great winning percentage overall, I am experienced enough that my scorecard also reveals a few failed projects over the decade; that is, efforts that produced neither the desired business outcomes nor the desired technical outcomes.

Success requires the support of key leadership

There is a fundamental misunderstanding at the executive level of the potential impacts of new concepts such as DevOps and CI/CD, and principles such as “Fail fast, fix fast.”

During my journey, I’ve been able to learn from my own mistakes as well as from those of others – I’ve observed many projects without being involved. And while every project is unique in its own way, I’ve recognized that there are a few hallmarks that almost always foretell failure. In this week’s blog, I’ll highlight several of the most prevalent warning signs from a high level. Then, in each subsequent post over the coming months, I’ll go into progressively more depth.

I’ll write primarily from a business perspective in this series because most transformation projects that will ultimately fail can be identified before the first engineer is assigned. That failure stems from a fundamental misunderstanding – at the executive level – of the potential impacts of new concepts such as Cloud Adoption, DevOps, and CI/CD, and principles such as “Fail fast, fix fast.”

A quick note before I go into this week’s list: your particular cloud initiative is not necessarily doomed to failure because one or two of the factors described below apply. That being said, the odds can quickly tip in favor of failure as more and more of these issues appear.

Your particular cloud initiative is not necessarily doomed to failure because one or two of the factors described below apply.

The List of Dreaded Pitfalls

1. You did not start your transformation with an agnostic, application- or business-feature-based assessment of your current IT estate.

This, more so than any other item in this list, will – if not properly performed – lead to budget/timeline overruns, failed deployments and ultimately unhappy internal/external clients. A properly performed assessment should answer the following questions, at a minimum:

  • What are the business requirements of my IT estate?
  • Why should I transform my IT/What do I want out of it?
  • What is my current application/business feature inventory?
  • For each business feature, what is (or are) the:
    • Actual infrastructure requirements
    • Currently deployed infrastructure
    • Licensing requirements
    • Cost of operation per month
    • Business Continuity/Disaster Recovery posture
    • Actual cost (in lost productivity/revenue) of unavailability, per hour and per workday
    • Governance model
    • Security/compliance requirements
    • Scalability requirements
    • Associated development, Quality Assurance, Configuration and sandbox environments
    • Integrated applications, and
    • Ideal post-transformation destination (i.e., SaaS, Cloud (AWS/Azure/GCP), Physical/Virtual Infrastructure, or other).
  • Based on the inventory above, what functionality or performance-related issues need to be proven out through Proof of Concept efforts before final decisions are made?
  • What is the appropriate high-level budget/timeline required to complete this work?

It’s easy to get lulled into the complacent thought that you’ve been operating your infrastructure and applications for a long time, so you know them very well. In practice, however, the knowledge required to operate is not the same as the knowledge required to transform. One way to capture this inventory, feature by feature, is sketched below.
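This is a hypothetical sketch only; the record layout and every field name are assumptions drawn from the bullet list above, not a prescribed schema or tool.

```python
# Hypothetical sketch of a per-feature inventory record based on the
# assessment questions above. Field names are illustrative, not a standard.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FeatureInventory:
    name: str
    infrastructure_required: str           # actual infrastructure requirements
    infrastructure_deployed: str           # currently deployed infrastructure
    licensing: str
    monthly_operating_cost: float          # USD per month
    bc_dr_posture: str                     # business continuity / disaster recovery
    downtime_cost_per_hour: float          # lost productivity/revenue, USD
    governance_model: str
    security_compliance: List[str] = field(default_factory=list)
    scalability_requirements: str = ""
    associated_environments: List[str] = field(default_factory=list)  # dev, QA, config, sandbox
    integrated_applications: List[str] = field(default_factory=list)
    target_destination: Optional[str] = None  # e.g. SaaS, AWS/Azure/GCP, physical/virtual
```

Even a simple structure like this makes gaps obvious: if you can’t fill in a field for a given feature, you’ve found an assessment question you still need to answer.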

This leads to the next red flag:

2. You consider the transformation effort to be an evolution of your previous IT models, practices and tools.

21st century IT is a revolution – not an evolution of what we have been doing for decades. The skills required to transform through this revolution are different than those required to operate the existing state, which are different still from those required to operate the new state. Many efforts have failed because the executives responsible did not understand that a cloud migration does not and should not resemble a data center migration – regardless of what the various migration tool vendors and cloud hosting partners will tell you.

3. Without first performing an assessment to evaluate the skills or effort required, you and your senior IT staff dictated the timeline, budget and technology decisions.

This seems so absolutely ridiculous that it can’t possibly be true. Could you imagine going to a heart surgeon and demanding a transplant, without understanding the impacts or even whether a transplant was needed? Of course not. But somehow enterprises do this every day with the beating heart of their businesses – a.k.a. IT – without giving it a second thought.

4. Prior to starting your transformation, you didn’t have a complete understanding of the financial and operational models behind 21st century IT in the enterprise.

Yes, you understand OpEx vs CapEx, and are maybe even able to make a solid “Net Present Value of Cash” argument regarding your future IT directions. But have you considered Provisioning vs. Capacity planning models? Do you understand the cost and value related to Infrastructure as Code? Can you articulate the risks inherent in Best Efforts Availability as opposed to Five Nines? How will you manage costs in a world where a single button click can result in thousands of dollars in Monthly Recurring Costs?
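As one concrete illustration of the “Best Efforts vs. Five Nines” question, the sketch below shows how much downtime per year each availability target allows. The targets listed are common industry shorthand rather than figures from this post, but the gap between them is exactly the kind of risk you should be able to articulate.

```python
# Allowed downtime per year implied by common availability targets.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} availability -> about {downtime_minutes:,.1f} minutes of downtime per year")
```

Five nines allows roughly five minutes of downtime a year; a “best efforts” posture closer to 99% allows several days. Knowing which one each application actually needs is part of the financial model, not just the technical one.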

5. You view the transformation as a strictly technical effort.

This is a big one. To make IT more efficient through transformation, you have to address two areas: the How and the What. Assigning the work to your IT organization addresses the How. Involving business resources drives the definition of the What. Without both groups working together, operational efficiencies won’t be realized, budgets will be blown, and opportunities lost.

If some of the issues listed above are factors in your current transformation effort, it’s never too late to try to resolve them. In the coming months, I’ll expand on the thoughts above and share some anonymous war stories while providing pointers on how to avoid pitfalls in the business of 21st century IT transformation.

21st century IT is a revolution – not an evolution

The skills required to transform through this revolution are different than those required to operate the existing state, which are different still from those required to operate the new state.