Convey Services: Modern cloud management strategy supports scaling of virtual event platform

Modern cloud management resolves scalability, reliability, and performance – opening new market opportunities for growth

Cloud Conventions migrated their virtual event services to a new Amazon Web Services (AWS) environment to improve performance and reliability, as well as to rapidly scale in response to customer demand. However, the company soon realized that the ongoing monitoring, optimization, and management of their new AWS environment involved complexities beyond their internal capabilities. Cloud Conventions engaged Effectual as an experienced cloud partner to provide modern managed services expertise.

About Cloud Conventions

Cloud Conventions from Convey Services is an enterprise virtual/hybrid event management platform that redefines the exhibitor and attendee experience, allowing companies to provide easy access to in-depth product information, showcase their brands with graphics and videos, create calls to action, and generate immediate sales leads. Used around the world for large managed events as well as smaller self-directed meetings, conferences, and corporate kickoffs, Cloud Conventions automates exhibitor and virtual booth management, continuing education, speaker sessions and reminders, and invitations and email communication, while producing detailed analytics on attendee, session, and exhibitor activity.

The Challenge

When the COVID-19 pandemic hit, Cloud Conventions was in a unique position to provide a sophisticated virtual event platform to replace a growing list of live events cancelled due to the new restrictions on live gatherings. While demand for their services was growing rapidly, the company began encountering performance and reliability challenges – including outages on their hosted infrastructure.

Despite Cloud Conventions’ repeated requests to address these urgent issues, the existing hosting provider was unresponsive. The company learned that the provider was running its workloads on an obsolete hypervisor that could not be upgraded for over 30 days. With numerous high-profile conferences in the US and around the globe scheduled within the next several weeks, this response was unacceptable.

Choosing the AWS Cloud for improved scalability and reliability

To resolve these issues and establish a secure, scalable environment capable of growing with customer demand, Cloud Conventions made the decision to migrate immediately to the AWS Cloud as a lift-and-shift move – with plans to pursue application and database modernization efforts in the near future.

Engaging Effectual to manage, modernize, and optimize workloads

Soon after Cloud Conventions’ development team finished migrating workloads to AWS, they realized they needed a partner with advanced cloud expertise to properly manage the new environment.

An AWS Premier Consulting Partner, Effectual is a cloud-first professional and managed services company whose primary objective is to enable modernization and solve business challenges with cloud technologies. Effectual’s Modernization Engineers™ deliver expert support and critical guidance on workload performance, acting as an extension of customers’ internal IT resources. The company also identifies opportunities to optimize security, operations, and cloud spend.

For Cloud Conventions, Effectual’s expertise and capabilities matched their needs precisely.

Adopting Effectual’s Modern Cloud Management Platform

After conducting an initial analysis of Cloud Conventions’ new AWS environment to ensure it was configured correctly, Effectual helped the company develop and implement a modern cloud strategy for governing the use of cloud services across their organization. This included onboarding Cloud Conventions to Effectual’s Modern Cloud Management platform to manage and monitor the health of their workloads, as well as to guide their cloud usage.

Effectual’s platform was developed using controls recommended by the Center for Internet Security (CIS) in conjunction with the AWS Well-Architected Framework. The solution also applies Infrastructure as Code automation to deploy and manage customer environments.

As a Modern Cloud Management customer, Cloud Conventions receives the following services and support:

  • Security & Compliance Management
    Effectual helps Cloud Conventions maintain a trusted security posture across dynamic cloud workloads by leveraging DevOps automation and continuous asset discovery, offering complete visibility into the managed environment.
  • Operations Management
    Effectual ensures that Cloud Conventions is maximizing cloud usage with streamlined procurement, cost management, monitoring, and support, including monthly and quarterly reviews to identify areas for improvement.
  • Shared Responsibility
    Effectual has helped Cloud Conventions establish a clear delineation of responsible, accountable, consulted, and informed (RACI) tasks across their organization and the Modern Cloud Management platform, including a 24/7/365 Security Operations Center (SOC) and customer support.

Solutions & Outcomes

  • Guided the company through Effectual’s Cornerstone Process, establishing the foundation of the customer relationship as well as a baseline understanding of Cloud Conventions’ existing environment
  • Onboarded Cloud Conventions to Effectual’s Modern Cloud Management service to ensure security, compliance, availability, and cost optimization for the company’s variable workloads
  • Assumed management of the existing AWS environment including Amazon EC2 instances, Amazon RDS, and other web delivery services
  • Launched 24/7/365 support
  • Improved availability and fault tolerance by scaling out to multiple Availability Zones (AZs) in Amazon RDS, providing a highly available database solution
  • Utilized Amazon ElastiCache for Redis to store application session data, allowing Cloud Conventions to scale services more economically and handle unpredictable traffic spikes
  • Deployed Amazon CloudFront for secure, fast delivery of Cloud Conventions’ static, dynamic, and video-on-demand content at a data transfer volume of 1 GB/month
  • Implemented Amazon S3 for increased scalability and data availability
  • Identified CIS compliance benchmarks that were non-compliant and prioritized them for remediation
  • Leveraged Qualys patch management to ensure that EC2 instances remained hardened after deployment
  • Deployed AWS Control Tower to establish guardrails and security control policies for continuous compliance – if changes move the environment into non-compliance, Cloud Conventions receives an automated email notification
  • Set up application monitoring and logging with Amazon CloudWatch for a holistic view of operational health as well as the ability to respond to changes and optimize resources
  • Deployed AWS CloudTrail to log and monitor access to AWS services through the console
  • Used AWS Config to run continuous compliance checks against CIS, PCI, HIPAA, and GDPR benchmarks by means of conformance packs applicable to Cloud Conventions’ needs
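
The alarm-driven monitoring described above can be sketched in a few lines. This is a hypothetical illustration, not Effectual’s actual configuration: the instance ID, SNS topic ARN, and threshold are placeholder assumptions. The dict mirrors the parameters CloudWatch’s PutMetricAlarm API expects (it would be passed to boto3’s `put_metric_alarm`), but is built locally here so the structure is easy to inspect:

```python
def build_cpu_alarm(instance_id: str, topic_arn: str, threshold: float = 80.0) -> dict:
    """Build the parameter set for CloudWatch's PutMetricAlarm API.

    In practice the returned dict would be passed to
    cloudwatch.put_metric_alarm(**params) via boto3; it is constructed
    locally here for illustration.
    """
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,                # evaluate over 5-minute windows
        "EvaluationPeriods": 2,       # two consecutive breaches before alarming
        "Threshold": threshold,       # percent CPU
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],  # notify an SNS topic (email, paging, etc.)
    }

# Placeholder identifiers, for illustration only
params = build_cpu_alarm(
    "i-0123456789abcdef0",
    "arn:aws:sns:us-east-1:111122223333:ops-alerts",
)
print(params["AlarmName"])
```

Wiring alarms like this to an SNS topic is one common way an operations team gets notified of changes in operational health before customers notice them.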


Effectual’s modern cloud management services not only resolved Cloud Conventions’ scalability, reliability, and performance issues, but also allowed their development team to refocus time and energy on other core initiatives and opportunities. 

In addition, by moving to AWS with ongoing cloud management support from Effectual, the company can meet strict regulatory compliance requirements for PCI, HIPAA, CIS, and GDPR. This has opened new opportunities for Cloud Conventions to bid on contracts and expand its business into international markets.

Most importantly, Cloud Conventions can now onboard new customers and deliver engaging, high-quality virtual event services with the confidence that their infrastructure is well-managed, closely monitored, and optimized for the AWS Cloud.

Interested in learning more about our Modern Cloud Management services? Contact us

Tricon Residential: IT modernization helps industry leader keep pace with accelerated growth

Tricon Residential is the fourth largest single-family rental company in the US. Founded in 2012, the publicly owned company has a portfolio of over 20,000 homes in ten states. As one of the country’s fastest-growing real estate companies, Tricon has gained a competitive advantage by offering highly responsive, personalized customer service and translating it into profitable long-term relationships.


Since 2016, the company has more than doubled its rental home portfolio. This accelerated growth highlighted the need to formalize and streamline processes, reduce costs, and optimize its operational efficiencies. With properties spread out over large geographic areas, Tricon was also in search of scalable solutions for managing, servicing, and maintaining their homes as well as a responsive communication platform for delivering a high-touch, seamless experience to its residents.

Challenges to pursuing these strategies included limited internal development resources as well as a lack of off-the-shelf solutions for the single-family vertical market. As the company expanded, Tricon partnered with a specialized team of solutions architects at Effectual to integrate their business requirements with DevOps expertise and take advantage of evolving Amazon Web Services (AWS) solutions.

Key objectives:

  • Improving operational efficiencies while scaling teams and services quickly
  • Combining multiple data sources to create complete and holistic reporting
  • Innovating continuously to optimize costs and meet market demand
  • Creating a DevOps culture focused on automation, cross department communication and collaboration

During the last four years, Effectual has supported these goals by designing, developing, and deploying numerous solutions for the company leveraging the AWS Cloud. These include applying IoT capabilities, integrating smart home technologies, and utilizing AI/ML managed services for revenue enhancements.

Solutions & Benefits

Operational Efficiencies
One of Effectual’s first projects was to streamline Tricon’s existing rental process and leverage automation to integrate existing administrative functions with custom business applications. In addition, the team developed and launched a 200-home smart home pilot with BeHome247, a cost-saving program Tricon is rolling out to its entire portfolio.

Continuous Integration – Continuous Deployment (CI/CD)
To speed deployments to market, increase release reliability, and provide a secure environment, the team also created a CI/CD stack and pipeline that takes new feature requests from ideation to deployment.

Performance & Functionality
To reduce AWS costs and increase scalability, Effectual streamlined Tricon’s application payments, built a performant dashboard, and deployed Amazon S3 for highly scalable cloud object storage.

Monitoring & Logging
Finally, by integrating all aspects of its custom applications with CloudWatch, Tricon can now easily monitor and quickly troubleshoot issues without affecting the customer experience.

The success of these initial improvements has led Tricon to further expand its partnership with Effectual, including exploring new AWS services and developing additional custom applications to better serve its residents. The company is rapidly becoming an industry leader in new technologies.

“Effectual has been an extension of our team for several years, and we appreciate their focus on implementing scalable, innovative, and well-architected solutions in partnership with us. They continually go above and beyond to ensure that new projects are successful by asking questions and clarifying assumptions to truly understand our business objectives. We utilize their knowledge to evaluate new technologies and services to ensure that our technology stack is optimized.”      
                  – Dawn Dalton, VP of Business Systems

Implementing Modern Cloud Management & Optimization

As the team addressed Tricon’s development requirements, it recognized the company was going to need ongoing management to monitor and maintain the security and performance of its AWS platform as well as to identify opportunities for cost optimization and business intelligence strategies.

As Tricon’s estate of applications running on AWS continued to grow, it became apparent that opportunities existed to improve the maintenance and security of the company’s environments. As a cloud-first, security-first Modernization Service Provider, Effectual provided the experience and expertise to keep the company on a path of continued innovation.

In particular, Tricon had experienced disruptions within critical business systems and wanted to improve response times with greater visibility into what was causing errors. By establishing automated monitoring and alerting, Effectual has helped Tricon respond quickly and resolve issues as they occur, reducing downtime and improving the customer experience.

Additional modern cloud management solutions implemented by Effectual include: 

  • Deployed AWS Control Tower to establish guardrails and security control policies
  • Used Qualys patch management to ensure EC2 instances remain hardened after deployment
  • Utilized automated secure configuration audit tools to assess the security posture of Tricon’s environment configuration
  • Leveraged configuration tools to generate automated alerts and email notifications for any change that would compromise Tricon’s security posture or move the company into non-compliance (Tricon’s unique compliance and regulatory requirements determine the acceptable baseline configuration standards)
  • Provided ongoing metrics along with recommendations for improvement
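
The continuous-compliance alerting described in the list above boils down to comparing a live configuration against an approved baseline and flagging any drift. The sketch below is a simplified, hypothetical stand-in (the setting names are invented for illustration), not the actual tooling Effectual uses:

```python
def find_drift(baseline: dict, current: dict) -> dict:
    """Return settings whose current value differs from the approved baseline.

    Any drifted key is a candidate compliance violation; in a real
    pipeline each one would trigger an alert or email notification.
    """
    return {
        key: current.get(key)
        for key, expected in baseline.items()
        if current.get(key) != expected
    }

# Hypothetical baseline configuration standards
baseline = {
    "s3_public_access_blocked": True,
    "ebs_encryption": True,
    "mfa_on_root": True,
}
# Hypothetical snapshot of the live environment
current = {
    "s3_public_access_blocked": True,
    "ebs_encryption": False,  # someone disabled encryption: drift!
    "mfa_on_root": True,
}

for setting, value in find_drift(baseline, current).items():
    print(f"NON-COMPLIANT: {setting} = {value}")  # in practice, send an alert
```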

“We’re coming to resolution much faster now on issues. Before working with Effectual, it was taking us longer to figure out the root cause.”

– Gregg Knutson, Sr. VP of Information Technology

With consistent reporting, Effectual’s delivery team has also been able to uncover patterns affecting Tricon’s costs. For example, while reviewing the company’s six-month cost trends during a quarterly business review, Effectual identified an unutilized Reserved Instance (RI) that had become orphaned. Resolving the issue helped Tricon take advantage of the significant discount RIs offer – a key strategy for cost optimization. This proactivity is one of the most important benefits of a long-term partnership with a Modernization Service Provider.

Future Forward

As a trusted advisor, Effectual’s overall goal is to set Tricon on a path of scalability and growth with the confidence it can securely and reliably serve its customers. This includes aligning the company’s business goals with new tools, methodologies, and strategies to support their growing business.

“We try to be a forward-thinking organization in terms of technology, and really want to leverage modern IT systems. With this partnership, we’ve deepened our ability to meet the growing needs of our technology roadmap.”                                                         

– Gregg Knutson

Ryan Hughes Named Effectual Chief Operating Officer

Ryan Hughes will advance the strategic delivery of professional and managed cloud services to create positive business outcomes for Effectual’s customers

Jersey City, NJ – August 10, 2021 – Effectual, a modern, cloud first, managed and professional services company, has named Ryan Hughes as Chief Operating Officer. Focused on creating positive business outcomes for Effectual’s customers, he will lead the advancement and strategic delivery of the company’s professional and managed cloud services.

Hughes was drawn to Effectual’s ability to engage enterprise customers and translate their unique business challenges into innovative, cloud-based solutions. A 20-year professional services veteran, he holds an MBA in Project Management from Penn State University. Most recently, Hughes oversaw the modernization of over 300 critical applications for Nationwide Insurance where he was Vice President of Cloud Enablement and DevOps. Prior to Nationwide, he held the role of Cloud Executive Advisor at Amazon Web Services (AWS). 

“Effectual’s leadership and team of technologists are ideally positioned to deliver on the transformation and modernization initiatives of public sector and enterprise organizations,” said Hughes. “I saw this role as an opportunity to help build a company focused on maximizing the value of cloud technologies for its customers. I am genuinely excited to wake up in the morning knowing that I have access to a portfolio of services that solve for the most challenging scenarios.”

A five-time speaker at AWS re:Invent, Hughes has led enterprises and government organizations through some of the most industry-recognized public cloud transformations to date – implementing new cloud platforms in regulated Fortune 100 companies, building highly productive cloud enablement teams from scratch, and creating innovative migration mechanisms. The recipient of the 2011 Ziff-Davis Cloud Enabler of the Year award, he led cloud enablement efforts in the Federal, State, and Local Government sectors at Unisys and Dewberry, helping them compete for and win awards under the first IDIQ cloud contract of its kind (WSCA/NASPO Cloud Services, circa 2009).

“Ryan’s enthusiasm for the work we are doing at Effectual is inspiring. With extensive real-world experience modernizing the IT estates of large enterprises, he can directly relate to our customers and empathize with the challenges they are facing every day. We are excited to have his expertise and welcome him as part of the Effectual team.” 

Robb Allen, CEO

About Effectual

Effectual is a modern, cloud-first managed and professional services company that works with commercial enterprises and the public sector to mitigate risk and enable IT modernization. Its deeply experienced and passionate team of problem solvers applies proven methodologies to business challenges across Amazon Web Services and VMware Cloud on AWS. Effectual is a member of the Cloud Security Alliance and the PCI Security Standards Council.

Alexis Breslin Joins Effectual as Chief Human Resources Officer

Alexis Breslin will lead and evolve Effectual’s Human Resources initiatives, including benefits, recruiting, onboarding, training, and company culture

Jersey City, NJ – Aug 3, 2021 – Effectual, a modern, cloud first, managed and professional services company, has named Alexis Breslin as Chief Human Resources Officer. Breslin will lead and evolve Effectual’s Human Resources initiatives, including benefits, recruiting, onboarding, training, and company culture.

Honored as one of the 2017 Leading Women Intrapreneurs in NJ, Breslin brings an entrepreneurial approach to talent recruitment and development. A Rutgers University alum, she has extensive experience across a broad range of HR Management functions and holds both the SHRM Senior Certified Professional (SHRM-SCP) and Senior Professional in Human Resources from HRCI (SPHR) certifications.

“Effectual was founded with a clear vision to deliver excellence and be a meaningful, relevant partner to our customers,” said Breslin. “These core values combined with a passion to continually learn, grow, and succeed are key to our company culture and finding exceptional talent to join our team. I’m excited to help establish Effectual as a destination employer for technologists, thought leaders, and cloud services professionals.”

Breslin spent the past ten years at Solidia Technologies as VP of Human Resources where she developed and scaled the company’s HR program. Her initiatives earned the company recognition as “Best Places to Work in New Jersey” in 2014 and again in 2020.

“Alexis has a strong background in developing strategic HR initiatives that uphold the entrepreneurial spirit of a startup environment through multiple stages of growth,” said Robb Allen, CEO Effectual. “We look forward to her leadership in the evolution and enhancement of the Effectual employee experience.”

Robb Allen, CEO

About Effectual

Effectual is a modern, cloud-first managed and professional services company that works with commercial enterprises and the public sector to mitigate risk and enable IT modernization. Its deeply experienced and passionate team of problem solvers applies proven methodologies to business challenges across Amazon Web Services and VMware Cloud on AWS. Effectual is a member of the Cloud Security Alliance and the PCI Security Standards Council.

VMware Cloud on AWS: Migrating & Modernizing Infrastructure

VMware Cloud on AWS offers numerous advantages for migrating and modernizing infrastructure.

Whether you’re considering a data center expansion, full migration, disaster recovery solutions, or building brand new applications, your organization needs flexibility to execute successfully. 

Our Managed VMware Cloud on AWS services allow you to commence a modernization strategy on your terms. This includes cost-effective cloud capacity for vSphere with resources that can be right-sized, choice in storage and where workloads run, firewall policies, and access to AWS services when needed.

Effectual Names Jeff Carson as Vice President of Public Sector Technology

Jeff Carson will lead Effectual’s Public Sector Solution Architects and support the development of the company’s government market offering

Jersey City, NJ – July 13, 2021 – Effectual, a modern, cloud first, managed and professional services company, has named Jeff Carson as Vice President of Public Sector Technology. Carson will be leading Effectual’s Public Sector Solutions Architect team and will be supporting the development of the company’s government market offerings.

An Amazon Web Services (AWS) APN Ambassador with over 13 years of technology and public sector expertise, Carson has built a successful career implementing solutions and consulting for the Bureau of Prisons, USGS, the Centers for Medicare & Medicaid Services, the Federal Reserve Board of Governors, the FBI, the Air Force, the U.S. Census Bureau, and more. Prior to joining Effectual, he led a global technical team at AWS responsible for supporting 30% revenue growth, reaching $100 million in 12 months. Carson is also an AWS-certified public speaker.

“Given Effectual’s reputation within AWS as a strong public sector partner and the company’s recent contract award with Ginnie Mae, I am thrilled to have the opportunity to join the Effectual Public Sector team,” said Carson. “Effectual’s solutions architects are some of the most qualified technologists in the industry, and I am looking forward to working with them to develop new modernization services for our government customers.”

A well-rounded leader, Carson is highly technical with a strong grasp of the requirements for conducting business in the public sector. He has extensive engineering and architecture experience as well as a customer-focused mindset rooted in performance-based service delivery. Carson holds seven AWS certifications, including AWS Solutions Architect Professional, AWS DevOps Professional, and the AWS Certified Security Specialty. 

“Jeff’s knowledge of AWS, the federal market, and the requirements of FedRAMP compliance equip him to address multiple layers of complexity and execute successful solutions for the public sector. He is that rare combination of a great technologist who also has a deep understanding of the unique business challenges facing public sector organizations.” 

Robb Allen, CEO

About Effectual

Effectual is a modern, cloud-first managed and professional services company that works with commercial enterprises and the public sector to mitigate risk and enable IT modernization. Its deeply experienced and passionate team of problem solvers applies proven methodologies to business challenges across Amazon Web Services and VMware Cloud on AWS. Effectual is a member of the Cloud Security Alliance and the PCI Security Standards Council.

Ginnie Mae: Modernizing Infrastructure Using the AWS Cloud

To support Ginnie Mae’s digital transformation, Effectual is supporting its cloud infrastructure modernization with enhanced security, DevSecOps, and AWS cloud native innovation.

As the winner of the 2021 AWS Global Public Sector Partner Award for Best Partner Transformation, Effectual joined SiliconANGLE and theCUBE to discuss how it is helping Ginnie Mae get the most out of AWS Cloud services.

Josh Dirsmith, Vice President of Public Sector at Effectual, and Jeremy Yates, Deputy Technology Architect at Ginnie Mae, talk with SiliconANGLE’s Natalie Erlich for the AWS Global Public Sector Partner Awards 2021.

Effectual Awarded 10-Year Cloud Services Contract with Ginnie Mae

Modernization Services Provider Named as Co-Prime

Jersey City, NJ – April 14, 2021 – Effectual, a modern, cloud first, managed and professional services company, has been awarded a 10-year dedicated cloud services contract with the Government National Mortgage Association (GNMA or Ginnie Mae). The $140M single award includes Effectual as co-prime, in collaboration with Amazon Web Services (AWS). An earlier announcement on October 27, 2020 by Ginnie Mae’s contracting office listed JHC Technology as the co-prime. Effectual completed the acquisition of JHC Technology in January 2020 and will therefore be delivering the services awarded to JHC.

For the next decade, Effectual will support Ginnie Mae’s modernization initiatives and requirements as the corporation consolidates legacy platforms and leverages cloud native solutions to deliver more reliable, efficient, and higher quality services. Effectual will build a modern cloud infrastructure to integrate native AWS services with third party tools to achieve compliance with government requirements. Modernizing Ginnie Mae’s platform will provide enhanced security, DevSecOps, monitoring, and service delivery across its entire application portfolio.

“We look forward to bringing our cloud expertise to Ginnie Mae’s modernization efforts,” said Effectual CEO Robb Allen. “This is an excellent opportunity to develop a modern cloud strategy that not only ensures the security and reliability of their program, but also unlocks new capabilities for serving their customers using the innovation of AWS.”

An AWS Premier Consulting Partner, Effectual has deep public sector experience building solutions across AWS and VMware Cloud on AWS as well as expertise in federal standards such as FedRAMP and FISMA. The company’s portfolio of modern cloud services includes strategy and design, migration, app development, and modern cloud management.

With 200+ AWS Certifications, Effectual has achieved the AWS Migration Competency, AWS DevOps Competency, AWS Mobile Competency, AWS SaaS Competency, and AWS Government and Nonprofit Competency designations. Effectual is also a member of the AWS Well-Architected and AWS Public Sector Partner Programs as well as the AWS GovCloud (US) and Authority to Operate on AWS Programs. In addition, Effectual holds the VMware Master Services Competency in VMware Cloud on AWS.  

Josefina Amaro, Cloud Data Analyst: Making the Cloud More Accessible Through Data Analytics

When I first became interested in coding, I was working as a senior accountant creating month-end financial packages. Half of my month was spent on accounting, while the other half went to business analytics and building intricate dashboards in Excel. I quickly noticed that many of my deliverables required a dynamic approach to accurately tell the data’s story.

Ultimately, my research focused on analytical strategies and the power of using a variety of input fields and statistics to create formulas specific to the department’s unique variables. My coworker and boss at the time saw my potential and recommended I seek out training in a technical field, which is when I discovered data analytics. Confident and driven in my decision, I left my job as a senior accountant to commit myself to Columbia University’s certified Data Analytics program.

Even though it felt like a gamble and I was faced with an overwhelming amount of knowledge, languages, and software to learn, pursuing my education left me feeling empowered and positive that my hard work would lead to a brighter future. 

Now that I’ve found my place at Effectual as a cloud data analyst, I’ve been introduced to so many new opportunities and always volunteer to take on ambitious tasks. My programming skillset also strengthens my ability to make the cloud more accessible for Effectual customers.

Rethinking your cloud spend

The cloud can sometimes seem like an unapproachable, ethereal concept, but it often only requires a mindset shift in how we approach it.

As a cloud data analyst, I get to dive into how different companies are using their cloud spend, or how they’re preparing for a move to the cloud. I like to tackle the cloud like I tackle a coding project: by looking at the problem I’m trying to solve and seeing if there’s a more efficient way to do things.

Companies will often look to reduce their cloud spend. But that’s not always the right choice – you don’t want to spend less money if it means you’re not progressing. Instead, I look at where inefficiencies can be shored up.

I like to tackle the cloud like I tackle a coding project: by looking at the problem I’m trying to solve and seeing if there’s a more efficient way to do things.

For example, many AWS users are familiar with Amazon RDS or Amazon Redshift and use them frequently. As AWS continues its rapid pace of innovation, it regularly releases new services that might be beneficial for certain projects.

One of those releases is Amazon Athena, a query service that lets you analyze data directly in Amazon Simple Storage Service (S3). Alternatively, you can use Amazon Kinesis Data Firehose or Amazon EMR; all of these services let you process data without running a database. If you can keep your data in S3 and use that mixture of services for your end product, you can reduce the processing time needed – and thus streamline your cloud costs.
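
As a rough sketch of what querying S3 data with Athena looks like — a hypothetical example with placeholder database, table, and bucket names, not a production query — the following builds the parameters that would be handed to Athena’s StartQueryExecution API (via boto3’s `start_query_execution`):

```python
def build_athena_query(database: str, table: str, day: str, results_bucket: str) -> dict:
    """Assemble parameters for Athena's StartQueryExecution API.

    Athena scans data in place in S3, so no database cluster has to be
    provisioned; in practice the returned dict would be passed to
    athena.start_query_execution(**params) via boto3. All names here
    are illustrative placeholders.
    """
    sql = (
        f"SELECT status, COUNT(*) AS hits "
        f"FROM {table} WHERE day = '{day}' "
        f"GROUP BY status ORDER BY hits DESC"
    )
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": database},
        # Athena writes query results back to S3
        "ResultConfiguration": {"OutputLocation": f"s3://{results_bucket}/athena/"},
    }

params = build_athena_query("weblogs", "access_logs", "2021-08-01", "example-results-bucket")
print(params["QueryString"])
```

Because you pay per amount of data scanned rather than for an always-on database, this pattern is one way keeping data in S3 can streamline costs.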

By looking at where inefficiencies lie, you’re increasing the accessibility of the cloud to help where it would be most impactful. Taking a step back and really looking at the “why” behind your solution is critical.

Putting your data to work

Once you have that understanding of your cloud spend, you can also look for other ways to make your processes more efficient. For example, a lot of us working with infrastructure are building services that could be standardized, so we’re exploring AI and machine learning solutions. Those solutions use infrastructure as code to build without having to involve an entire cloud ops team.
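
Infrastructure as code in this sense just means describing resources as data that a service like AWS CloudFormation can build repeatably, without a cloud ops team provisioning things by hand. Below is a minimal, hypothetical sketch (the bucket name is a placeholder) that renders a CloudFormation template for a private, versioned S3 bucket:

```python
import json


def s3_bucket_template(bucket_name: str) -> str:
    """Render a minimal CloudFormation template for a private, versioned
    S3 bucket. Describing infrastructure as data like this is what lets
    environments be rebuilt on demand instead of configured by hand."""
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,  # placeholder name
                    "VersioningConfiguration": {"Status": "Enabled"},
                    # Block all forms of public access by default
                    "PublicAccessBlockConfiguration": {
                        "BlockPublicAcls": True,
                        "BlockPublicPolicy": True,
                        "IgnorePublicAcls": True,
                        "RestrictPublicBuckets": True,
                    },
                },
            }
        },
    }
    return json.dumps(template, indent=2)


print(s3_bucket_template("example-analytics-bucket"))
```

The rendered JSON could be deployed with CloudFormation (or versioned in source control and run through a CI/CD pipeline), which is how the same environment gets built the same way every time.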

At Effectual, we also use AI and machine learning to help us make agile projections that aren’t based solely on what a customer has shared. Our solutions are, of course, tailored to each individual customer, but that doesn’t mean they have to rely on incomplete information.

By looking at customers with similar platforms or customer bases and using data from those situations, we are able to make better recommendations for that customer when they’re moving to the cloud. And we present those recommendations in ways that are easily understandable and demonstrate why our strategy is the correct decision.

Continuing worldwide growth

One of the reasons I love working at Effectual is the way we empower companies to come into the market. I’m always excited to hear about new customers getting into the cloud or expressing an interest in coding or data analytics. Even if it’s a smaller business or a government agency, we want people to feel like the cloud is accessible.

For new developers looking to get into the field, this work can be challenging at first. But when you get into it, you can own your skillset and create unique solutions.

Despite not having the most “traditional” background in coding, I’ve always felt welcomed and invited to be a part of new projects and groups in my career, and it’s taught me more than I ever would have thought.

I’d encourage everyone with any interest in technology or learning how to code to go after those passions — you never know what you’ll build!

Josefina Amaro is a Cloud Data Analyst at Effectual, Inc. 

Effectual Recognized with Amazon Web Services Global Public Sector Partner Award

Effectual, an AWS Partner Network Premier Consulting Partner, named Best Partner Transformation winner for 2021

Jersey City, NJ – June 22, 2021 – Effectual announced today that it has received the 2021 Global AWS Partner Network (APN) Public Sector Partner Award for Best Partner Transformation. The AWS Global Public Sector Partner Awards recognize leaders in the channel who play a key role in helping customers drive innovation and build solutions using AWS Cloud technology.

For Effectual, strategic alignment with the AWS Partner Transformation Program (PTP) team allowed the company to identify new public sector opportunities for its modernization services, supporting Effectual’s rapid growth. The process also elevated awareness of its expertise and experience as an AWS Partner Network (APN) Premier Consulting Partner. Shortly after completing the PTP, Effectual was named co-prime for a $140M single award with Ginnie Mae.

“The Partner Transformation Program is a high value process that resulted in improved alignment of our service strategy and product portfolio with AWS,” said Effectual CEO Robb Allen. “Working with AWS to refine our public sector initiatives provided the opportunity to enhance our capabilities as Modernization Engineers™. We have already seen the benefits of strengthening our relationship as a partner.”

“Every year we are impressed by how our partners continue to innovate using cloud technology, helping their customers raise the bar on mission success, and this year is no different. The 2021 AWS Public Sector Partner award winners display a sincere commitment to impact the lives of our customers around the globe.” 

Sandy Carter, Vice President – Worldwide Public Sector Partners and Programs, AWS

Effectual, along with the other award winners, will be recognized at a special online event hosted by theCUBE on June 30, 2021. To learn more about Effectual’s Public Sector Services and its successful implementation for Ginnie Mae, register to attend The AWS Public Sector Partner Awards 2021 on theCUBE.

The APN is dedicated to helping partners build, market, and sell their offerings so they can grow successful cloud businesses. The 2021 Global APN Public Sector Awards recognize the partners who leaned into innovation and customer obsession to deliver amazing results. Winners were selected based on their demonstration of Amazon Leadership Principles, engagement and success with the APN, and delivery of innovative solutions to public sector customers in a customer-obsessed way.

About Effectual

Effectual is a modern, cloud first managed and professional services company that works with commercial enterprises and the public sector to mitigate their risk and enable IT modernization. A deeply experienced and passionate team of problem solvers applies proven methodologies to business challenges across Amazon Web Services and VMware Cloud on AWS. Effectual is a member of the Cloud Security Alliance and the PCI Security Standards Council.

Josh Dirsmith, VP of Public Sector: Leadership – An Essential Trait for Cloud Success

Early in my career, I served as a Platoon Sergeant and Network Manager in the U.S. Marine Corps under Jim Mattis, who later became Secretary of Defense. I was stationed in Egypt, and one aspect of my role consisted of doing network checks every morning before we headed up to parts unknown in the Middle East.

My interactions with General Mattis, who was a one-star general at the time, were brief. They were mainly two-minute conversations where we made sure everything was good to go for the day, talked about general training and discussed our Meals, Ready-to-Eat (MREs). The conversations never really veered from these topics. 

On my 24th birthday, however, General Mattis stopped me before I left. “Hey, Josh,” he said. 

I was shaved bald, but if I had hair on my head and neck, it would have been standing up straight. Marine generals do not address Marine corporals by their first name, so I merely responded, “Sir?” 

“Today’s your birthday, isn’t it?” 

General Mattis had me sit down and we talked about life, liberty, and the pursuit of happiness. Then, he wrote me a recommendation for Officer Candidates School. It was one of the finest moments of my career, and I still have that letter framed. 

We had never had a lengthy conversation before, but him knowing that it was a special day for me and gifting me that time showed me how much he cared about his team. I’ve carried that mindset with me throughout my career.  

My number one job is setting my team up for success, learning how to leverage their strengths and mitigate their weaknesses. 

Mission accomplishment, troop welfare

The U.S. Marine Corps has a mantra of “mission accomplishment, troop welfare.” That’s what drives me as a leader. My number one job is setting my team up for success, learning how to leverage their strengths and mitigate their weaknesses. 

I’ve had people I’ve worked with previously come work with me at Effectual. I think that speaks volumes to the respect they have for me as a leader. 

What’s my secret? I treat them as people. I respect them and their goals, I set them up to succeed, and we have a bit of fun doing it. 

The value of leadership within the cloud

That leadership serves me well in the public sector as I work with federal, state, and local governments, educational organizations, and nonprofits. Leadership is an essential trait within the cloud. When we work with companies and organizations, we start by assessing where they are. Again, it’s getting to know them, their strengths, their weaknesses, and figuring out what’s best for their unique situation. 

Once we’ve determined that, we’re ready to move them into the cloud through some combination of professional and managed services. But that move doesn’t go smoothly without the measured leadership of someone who fully understands their challenge areas and objectives. 

You may not know your exact mission within the cloud just yet, and that’s okay. Partnering with someone who can get you there is the first step. I’m excited to continue to grow, refine, and expand our customer partnerships, and to introduce the public sector to all the cloud can help them achieve.

Josh Dirsmith is the Vice President of the Public Sector at Effectual, Inc.

Learn more about Effectual’s public sector expertise >>

Effectual Awarded 10-Year Cloud Services Contract with Ginnie Mae

Modernization Services Provider Named as Co-Prime

Jersey City, NJ – April 14th, 2021 – Effectual, a modern, cloud first, managed and professional services company, has been awarded a 10-year dedicated cloud services contract with the Government National Mortgage Association (GNMA or Ginnie Mae). The $140M single award includes Effectual as co-prime, in collaboration with Amazon Web Services (AWS). An earlier announcement on October 27, 2020 by Ginnie Mae’s contracting office listed JHC Technology as the co-prime. Effectual completed the acquisition of JHC Technology in January 2020 and will therefore be delivering the services awarded to JHC.

For the next decade, Effectual will support Ginnie Mae’s modernization initiatives and requirements as the corporation consolidates legacy platforms and leverages cloud native solutions to deliver more reliable, efficient, and higher quality services. Effectual will build a modern cloud infrastructure to integrate native AWS services with third party tools to achieve compliance with government requirements. Modernizing Ginnie Mae’s platform will provide enhanced security, DevSecOps, monitoring, and service delivery across its entire application portfolio.

“We look forward to bringing our cloud expertise to Ginnie Mae’s modernization efforts,” said Effectual CEO Robb Allen. “This is an excellent opportunity to develop a modern cloud strategy that not only ensures the security and reliability of their program, but also unlocks new capabilities for serving their customers using the innovation of AWS.”

An AWS Premier Consulting Partner, Effectual has deep public sector experience building solutions across AWS and VMware Cloud on AWS as well as expertise in federal standards such as FedRAMP and FISMA. The company’s portfolio of modern cloud services includes strategy and design, migration, app development, and modern cloud management.

With 200+ AWS Certifications, Effectual has achieved the AWS Migration Competency, AWS DevOps Competency, AWS Mobile Competency, AWS SaaS Competency, and AWS Government and Nonprofit Competency designations. Effectual is also a member of the AWS Well-Architected and AWS Public Sector Partner Programs as well as the AWS GovCloud (US) and Authority to Operate on AWS Programs. In addition, Effectual holds the VMware Master Services Competency in VMware Cloud on AWS.  

About Effectual

Effectual is a modern, cloud first managed and professional services company that works with commercial enterprises and the public sector to mitigate their risk and enable IT modernization. A deeply experienced and passionate team of problem solvers applies proven methodologies to business challenges across Amazon Web Services and VMware Cloud on AWS. Effectual is a member of the Cloud Security Alliance and the PCI Security Standards Council.

Stephanie Lanning, Director of Channel Sales: The Importance of Keeping Good Company

I’ve been at Effectual for almost a year, but I have a long history with the leadership and engineering teams who founded the company. I followed the company’s story from the start, because I knew the people here were talented and I always appreciated how communicative and transparent the leadership team was when we worked together in the past. I also knew this was a group of people I wanted to work with again.

I am inspired by the people around me, and it is easy to find that inspiration given the talented people I get to work with every day. We all value collaboration and know how important it is to share information. It gives us constant opportunities to learn from each other, making the overall team stronger. One of our core values is having a “Can Do” attitude, and it is immediately apparent that part of that is built on a foundation of working together. Collaboration is truly how you find the best solution to a challenge.

I am inspired by the people around me, and it is easy to find that inspiration given the talented people I get to work with every day.

People who are drawn to IT seem to have a strong thirst for learning. I have that thirst and like to understand how things work, so a career in technology has been a great fit for me even though I have a different set of skills than some of my colleagues. IT changes quickly, and that’s not news to anyone. Being successful in this industry requires a broad skillset to fully grasp technology’s potential. You need to be willing to constantly learn and expand your skillset to remain relevant.

It’s great to be surrounded by so many intelligent, ambitious, and enthusiastic people you can trust – people who share your same professional goals, motivations, and appreciation of technology. It helps you keep pushing toward self-improvement and continual learning. You’ll realize you can push beyond what you thought were limits and you’ll keep growing, day after day.

Stephanie Lanning is the Director of Channel Sales at Effectual, Inc.

Effectual Launches Managed DevOps Platform Optimized for AWS

Managed Platform Provides Governance for Secure DevOps Pipelines

Jersey City, NJ – March 10, 2021 – Effectual, a modern, cloud first managed and professional services company, has launched a Managed DevOps Platform optimized for the Amazon Web Services (AWS) Cloud. Built with a security first approach and designed to meet industry compliance standards, the platform integrates automated testing, secure coding guardrails, monitoring, and task management.

“Successful DevOps depends on the right mix of culture, processes, technology, and risk mitigation in an organization’s software assembly line,” said Al Sadowski, Effectual SVP Product Management. “One of our key platform design principles was to ensure that security and compliance are baked into every phase of the DevOps lifecycle. We also support customers from initial set up to monthly coaching and ongoing platform management so they can sustain DevOps culture and continuous application modernization.” 

Effectual’s Managed DevOps Platform supports widely used programming languages and deploys Infrastructure as Code to speed cloud provisioning and application deployment in AWS EC2, container, or serverless environments. The flexibility of the platform allows for a highly customized CI/CD toolchain to meet the precise needs of any application. 

“Enterprises are adopting cloud with specific business outcomes in mind but often lack the resources and skills to address them. ‘Going faster’ to cloud will require third-party expertise as the deficit in engineering and DevOps skills to build, operate and secure cloud environments becomes the biggest cause of migration delays. Securing and automating testing by design during the entire software lifecycle will become an imperative.” 

William Fellows, Founder and VP Research, 451 Research, part of S&P Global Market Intelligence

Optimized to take advantage of the AWS Cloud, the platform addresses the governance, security, and compliance risks many commercial enterprises and public sector agencies face as they seek to scale a consistent DevOps practice across their organization.

Benefits of Effectual’s Managed DevOps Platform:

  • Optimized for AWS Cloud
  • Integrates with most task management software
  • Accelerated release of new software features
  • Reduced defects and the cost of rework
  • Automated security and improved reliability
  • Simplified compliance
  • Enhanced collaboration between developers and operations
  • Automated infrastructure and software release processes at scale
  • Ongoing training for improving DevOps culture, process, coding standards, and metrics

Learn more about Effectual’s Managed DevOps Platform.

About Effectual

Effectual is a modern, cloud first managed and professional services company that works with commercial enterprises and the public sector to mitigate their risk and enable IT modernization. A deeply experienced and passionate team of problem solvers applies proven methodologies to business challenges across Amazon Web Services and VMware Cloud on AWS. Effectual is a member of the Cloud Security Alliance and the PCI Security Standards Council.

Effectual Achieves 200 AWS Certifications

AWS Premier Consulting Partner Continues to Deepen its AWS Expertise

Jersey City, NJ – March 4, 2021 – Effectual, a modern, cloud first managed and professional services company, has collectively achieved 200 AWS Certifications. The milestone reflects the company’s deepening AWS app development skills and experience implementing solutions and solving tough technology challenges for its enterprise and public sector customers.

“The continual expansion of our AWS expertise is one of our top priorities,” said Robb Allen, Effectual CEO. “Keeping pace with AWS innovation and staying relevant to our customers requires an ongoing commitment to learning that extends throughout our organization. These certifications are foundational to how we help our customers modernize.”

The AWS Certification Program validates cloud expertise to help professionals highlight in-demand skills and to help organizations build effective, innovative teams for cloud initiatives using AWS. Role-based certifications include those in Cloud Practitioner, Architect, Developer, and Operations roles, as well as Specialty AWS certifications in specific technical areas.

“Keeping pace with AWS innovation and staying relevant to our customers requires an ongoing commitment to learning that extends throughout our organization.”

Robb Allen, Effectual CEO

An AWS Premier Consulting Partner, Effectual holds the AWS Migration, DevOps, Mobile, SaaS, Government, and Nonprofit Competencies and the Well-Architected and Public Sector Partner designations, as well as the AWS Lambda, Amazon EC2 for Microsoft Windows Server, and AWS GovCloud (US) Service Delivery designations, and participates in the Authority to Operate on AWS program. The company’s portfolio of modern cloud services includes strategy and ideation, migration, app development, and modern cloud management.

About Effectual

Effectual is a modern, cloud first managed and professional services company that works with commercial enterprises and the public sector to mitigate their risk and enable IT modernization. A deeply experienced and passionate team of problem solvers applies proven methodologies to business challenges across Amazon Web Services and VMware Cloud on AWS. Effectual is a member of the Cloud Security Alliance and the PCI Security Standards Council.

Todd Helfter, Director of Database Services: Lessons Learned as a Long-Time Remote Worker

Working remotely is not a new concept. Until the Industrial Revolution, most people didn’t travel for their livelihood. It was only when production shifted from farms and local workshops to mills and factories that the nature of work changed overnight. It wasn’t until the Digital Revolution that new technology once again allowed us to reevaluate how (and where) work was done.

I’ve been working remotely full-time for over 15 years, and it’s significantly easier now than it was back then. That’s not to say there aren’t challenges, but the technology we all rely on for telecommuting is a lot more advanced than it was even just a few years ago.

I’ve been working remotely full-time for over 15 years, and it’s significantly easier now than it was back then.

When I started working remotely, VPN and remote desktop technology didn’t have the necessary capabilities to properly support telecommuting. Instead, we had to put together our own solutions to fill in the gaps however we could.

For example, we customized open-source instant messaging tools to allow people using *nix operating systems to communicate easily. Even having an office phone took a little ingenuity. We used double-sided tape to attach a dedicated VPN router to the back of an office phone to create a secure tunnel. We called it “The Frankenphone,” but it worked!

Today I can do more on my smartphone than I could on my old desktop, and I even have a comparable work experience because the tools are designed for multiple platforms. I definitely don’t miss the experience of working with email attachments on an old BlackBerry. I’m thankful we have all kinds of SaaS tools to keep us connected and properly equipped to do our jobs.

With COVID, more people are working remotely than ever before. If this is the first time you’ve spent a significant stretch working from home, you’ve probably noticed that the technologies and infrastructure available are more than capable of letting us do our jobs remotely. However, we are also facing the unique challenges of sharing our home office with children and partners adjusting to working and learning from home.

Even during these times, I think the most important aspect of making remote work successful is finding (and maintaining) your work/life balance. I know it can be difficult to step away from your computer when you work from home and you’re a stone’s throw from your desk – especially when you genuinely like what you do.

Even during these times, I think the most important aspect of making remote work successful is finding (and maintaining) your work/life balance.

Still, simple things like keeping regular working hours can make a huge difference. It was hard for me to do this at first, but I make sure to have dinner with my family every night. That’s not to say I don’t put in the occasional late night or early morning, but I prioritize keeping that balance. It can be really easy to fall into the trap of thinking “I must sit at my desk 100% of the time so that it’s clear that I’m working and being productive.” Be mindful of these thoughts and don’t feel guilty about taking breaks. You have to establish healthy boundaries and realistic expectations so you don’t burn yourself out.

Technology has made the world smaller, and as companies become more global their employees are likely already working with people they don’t see in person on a day-to-day basis. Telecommuting obliterates the limits of geography, giving people access to opportunities regardless of location as well as giving companies access to a much larger pool of talent. In addition, the same technology that facilitates remote work makes it easier to engage with customers.

I think the general perception of remote work has changed for the better. Companies are starting to realize what my team and I have known for well over a decade – if done properly, remote work won’t have a negative impact on job performance and can even result in increased productivity. Providing employees with the right tools (including SaaS- or cloud-based applications), maintaining consistent communication, and trusting them to do their jobs remotely has been proven to not only increase productivity and engagement, but can also reduce stress and increase general well-being. It’s win-win for companies and their employees.

I’m curious to see if the trend toward remote work will continue once people start feeling more comfortable working out of office spaces again. I certainly hope it does.

Todd Helfter is the Director of Database Services at Effectual, Inc.

Shelby Cunningham, Director of Professional Services: Building High Performing Teams & Trusted Customer Relationships

When I look back at my extra-curricular activities as a kid, I realize they were probably foundational for my current role leading app development teams at Effectual. From tap dancing to cheerleading, piano playing to water skiing, I learned coordination, team-building, creativity, and how to adapt quickly in a “fluid” environment. Most of all, they were both challenging and fun, which is how it feels to lead a large development team for a growing entrepreneurial company.

Tech has always felt like a natural fit for me. I started my career managing marketing and overseeing development teams for software startups, which evolved into having my own strategic consulting agency. Those experiences led to management roles in client success, partner relations, business development, and product marketing – positions that required a balance of tech know-how, business skills, and relationship-building with customers, partners, and employees.

I joined Five Talent as a program manager several years before the company was acquired by Effectual. As a custom software developer, we always had a high volume of projects with really diverse use cases. This pipeline and our partnership with Amazon Web Services (AWS) made continual learning a strong cultural value for us. To deliver the best solutions for our customers, we had to keep pace with innovation. This is why our developers hold so many high level AWS certifications and why they continue to pursue this expertise as part of Effectual.

Today, I lead Effectual’s professional services app development team. My responsibilities include driving project management and continuous improvement as well as ensuring we are meeting (and exceeding) the expectations of our customers.

The best part of my role here is cultivating long term relationships with customers knowing that our team can build solutions that will have a real impact on the success of their businesses. From the time we start our discovery process to when we launch a product, I want our customers to know we are partners working towards the same goal. That kind of authentic collaboration yields amazing results, and it isn’t hard when your team includes some of the smartest, most talented people in the industry.

The best part of my role here is cultivating long term relationships with customers knowing that our team can build solutions that will have a real impact on the success of their businesses.

Our team is expanding fast as Effectual grows. Because of this, I am focused on building high functioning teams where people are engaged and challenged in their work but also have time to explore new technologies that interest them. Keeping up with innovation is critical to what we do here.

My new interest outside of work is riding motorcycles with my husband. I’ve got a Yamaha now but have my Harley picked out. It isn’t water skiing, but it’s the perfect metaphor for where I am in my career. Moving fast, enjoying the ride, with miles of open road ahead.

Effectual Strengthens Public Sector Modernization Services with Completion of AWS Partner Transformation Program (PTP)

100-Day program aligns Effectual’s procurement, professional, and managed services offerings with AWS public sector initiatives

Jersey City, NJ – February 9, 2021 – Effectual, a modern, cloud first, managed and professional services company, has successfully completed the Amazon Web Services (AWS) Partner Transformation Program (PTP).

The AWS PTP is a 100-day program that guides public sector Amazon Partner Network (APN) Partners through a collaborative process to accelerate their AWS skills, strengthen their technical knowledge base, and better serve government, education, and nonprofit customers on their journey to the cloud.

Throughout the intensive program, Effectual worked closely with the AWS PTP team to further refine its service offerings and identify additional public sector opportunities for the company’s end-to-end modernization portfolio. This strategic alignment supports Effectual’s rapid growth and elevates awareness of its capabilities as an AWS Partner Network (APN) Premier Consulting Partner. 

“We are focused on helping our customers derive the highest value from the AWS cloud as they modernize their infrastructure,” said Effectual CEO Robb Allen. “The Partner Transformation Program process and the insights offered by the PTP team have been valuable to our strategic considerations, our service strategy, and have galvanized our relationship with AWS.” 

“Effectual has a bias for action and deep AWS experience that was evident throughout the PTP process,” said Sandy Carter, Vice President of Worldwide Public Sector Partners and Programs at AWS. “We appreciate their commitment to supporting customers with modernization solutions that leverage AWS services and innovation.” 

Effectual has extensive public sector experience building solutions across AWS and VMware Cloud on AWS as well as expertise in federal standards such as FedRAMP and FISMA. The company’s portfolio of modern cloud services includes strategy and ideation, migration, app modernization, modern cloud management, and optimization.  

With 200+ AWS Certifications, Effectual has achieved the AWS Migration, DevOps, Mobile, SaaS, Government, and Nonprofit Competencies and the Well-Architected and Public Sector Partner designations, as well as the AWS Lambda, Amazon EC2 for Microsoft Windows Server, and AWS GovCloud (US) Service Delivery designations, and participates in the Authority to Operate on AWS program.

About Effectual

Effectual is a modern, cloud first managed and professional services company that works with commercial enterprises and the public sector to mitigate their risk and enable IT modernization. A deeply experienced and passionate team of problem solvers applies proven methodologies to business challenges across Amazon Web Services and VMware Cloud on AWS. Effectual is a member of the Cloud Security Alliance and the PCI Security Standards Council.

Ryan Comingdeer, CTO: Problem Solving Through Continual Learning

My introduction to computer science was a 7th-grade programming class where I learned BASIC. The class opened up a new world to explore that had me immediately hooked. By the time I started high school I knew I was going to follow a career in technology. Some people have a more circuitous path to their vocation, but I was fortunate to discover mine early on and seize the opportunity to build a really fulfilling career doing what I love.

In my opinion, the most important skill in this field is the ability to solve problems through continual learning. Technology is always changing, and the pace of innovation demands constant attention in order to stay ahead of new tools, services, and solutions. That is why 40% of my job every single day is dedicated to learning – and why it is such a huge part of our company culture.

Technology is always changing, and the pace of innovation demands constant attention in order to stay ahead of new tools, services, and solutions.

Learning and curiosity are values that also extend to my role as a father to five daughters, ages 8 to 14. By teaching them computer science, I am trying to give them the confidence to learn about technology and apply it to real world situations and challenges. This is one of the ways I stay involved with their personal lives and hopefully prepare them for the future. It also helps me stay relevant because I have to do my research to understand what their world is going to look like when they become adults.

Teaching my own kids about technology has led to other opportunities to inspire the next generation of innovators. To support students in the Bend community, I teach a 5th-grade technology-focused STEM program at my children’s school, host local Hour of Code events, and work with the Oregon Department of Education to integrate computer science into the K-12 curriculum. At work, I mentor recent computer science college grads starting their tech careers as part of the Apprenti internship program.

For myself, continual learning includes going after new AWS certifications, training in other cloud platforms, understanding the pros and cons of multiple stacks, testing new services, and keeping current on industry trends. However, the best opportunity for ongoing learning is working with our really talented developers on a broad portfolio of diverse projects.

Our professional services team typically has 40+ projects underway at any given time (IoT, mobile apps, web apps, big data, system integrations) that use 10-12 languages and multiple cloud providers. Even if I am not working directly on a project, I meet with my technical leadership team every week to review what we did, what worked, what did not, and to figure out what we can do better. This gives me a chance to learn alongside them and gather lessons learned as reference points for when I am talking to customers or recommending a new architecture.

For in-depth analysis, I like to pick a topic such as AI and do as much research as I can to understand what the top 5 vendors offer, the benefits of their solutions, the use cases, and the lessons learned thus far. I also follow a dozen blogs that cover new design patterns so I can compare technology stacks and spend at least an hour a night researching how to stay forward thinking on cloud native architecture.

If you want to deliver a relevant, valuable technology solution, you have to start by understanding the problem you are helping your customer solve.

Still, it is not enough to be a technical expert. As professional services providers, our job is to enable business outcomes with measurable results. If you want to deliver a relevant, valuable technology solution, you have to start by understanding the problem you are helping your customer solve. This includes pain points, opportunities, target audiences, business requirements, the competitive landscape, and more. That is why our solutions architects and developers are skilled technologists as well as big picture thinkers interested in how businesses and market dynamics work. I encourage us to ask WHY we are building something as much as HOW.

After spending the last 15 years focused on professional services, I am excited to embark on the next chapter of my career. Working with Effectual’s Modernization Engineers™ is giving me a whole new understanding of the life cycle of a technology solution. I am gaining a more comprehensive view of how to properly manage and monitor the solutions we build in a cloud environment for the long term. The collaboration is making me a better architect and a better technologist, with more learning ahead.

Ryan Comingdeer is the Chief Technology Officer at Effectual, Inc.

Microsoft Immersion Day


Modernizing .NET applications on AWS

Learn how to run your Microsoft .NET applications and workloads on AWS for greater reliability, faster performance and improved security, with access to the deep functionality of AWS cloud services. Explore the benefits of using containers to accelerate your modernization journey from monolithic to microservices architectures. 

  • Overview of .NET on AWS 
  • Getting started with AWS tools 
  • Hosting .NET apps 
  • Migrating .NET workloads 
  • Containerizing legacy ASP.NET apps with Amazon ECS

This event includes a hands-on Modernization Lab on containerizing legacy ASP.NET applications using AWS App2Container.


Ryan Comingdeer, Facilitator & AWS Partner Network Ambassador
Chief Technology Officer

Jignesh Suthar
Senior Solutions Architect 
Amazon Web Services (AWS)


.NET Software Engineers, Architects, DevOps Engineers, and Microsoft IT Professionals

Date: January 2021 – TBA

Time: TBA

Virtual Meeting Details: Zoom link will be sent in registration confirmation

Cost: Free

SpiraLinks: Rapid migration to AWS unlocks new cloud-native capabilities


SpiraLinks offers tailored consulting services for project, technical event, and implementation management to Fortune 500 companies, including designing, installing, and hosting secure web-based systems for human resources, compensation, and finance teams. The company’s FocalReview® planning suite is a leader in compensation and performance management, supporting customers in the US and beyond.

The Challenge

Driven primarily by the upcoming consolidation and closure of the data center hosting its product platform, SpiraLinks had made the strategic decision to migrate its infrastructure to the AWS Cloud. This included three application servers, a legacy Oracle database environment, and an older standalone Windows application. The company also had several virtual machines that were being retired by their MSP.

SpiraLinks recognized that a successful migration would provide an opportunity to modernize its technology stack and leverage new AWS capabilities to better serve its customers. However, without the internal resources to accomplish the move, the company needed to engage a partner with the technical resources and expertise to achieve the migration.

Benefits of the AWS Cloud

The company chose to migrate to an AWS environment to increase efficiencies, improve security and compliance, and optimize costs. In addition, SpiraLinks wanted to access new AWS native services to modernize and evolve its business.

Outsourcing Migration Expertise to Effectual

To achieve these business objectives, SpiraLinks partnered with Effectual to lead its migration and modernization efforts. Effectual is a cloud first, security first managed and professional services company and AWS Premier Consulting Partner with deep expertise leading complex migrations and managing modern cloud environments across VMware, VMware Cloud on AWS, and native AWS environments.

Solutions & Outcomes

  • Completed a full migration of customer-facing applications from on-prem infrastructure to a new, modern, secure AWS environment in less than a month.
  • For applications:
    • Deployed all new modern Linux and Windows servers in separate VPCs for improved security
    • Configured Amazon Elastic Block Store (EBS) for the three Linux EC2 instances hosting WildFly (formerly JBoss)
  • For Oracle database server:
    • Migrated all data from legacy Oracle environment
    • Upgraded and deployed the database into a new Amazon Relational Database Service (RDS) instance, allowing for adoption of Session Manager for accessing application servers (improving security and decreasing costs) and providing added functionality with real-time performance insights
    • Improved the security posture of the data environment by isolating it in its own private subnet and restricting access
    • Restricted access via approved ports from application servers
    • Deployed to a single AWS RDS instance with individual database schemas
  • Replicated and enhanced mail sending capabilities to utilize Fluent Ltd. mail relay service.
  • Increased security due to inherited ISO certification from AWS.
  • Created an AWS Identity and Access Management (IAM) group and defined the IAM policy to provide SpiraLinks developers with access to the AWS Systems Manager Agent (SSM). Once the IAM groups and policies were configured, Effectual shared initial login credentials with the primary SpiraLinks contact and configured Multi-Factor Authentication (MFA) to enhance solution security.
  • Deployed and configured CloudTrail and CloudWatch EC2 log streams to monitor instances, and also configured email alerting for these services.
  • Configured Amazon Data Lifecycle Manager (DLM) to take snapshots, with a rolling 14-day retention period.
  • Established a clearer understanding of data needs as well as the specific benefits of AWS environment and services in order to make informed choices.
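The IAM work described above comes down to a policy document of the shape boto3’s `iam.create_policy` accepts. The sketch below is a minimal illustration, not SpiraLinks’ actual policy; the statement names and the per-user session-naming convention are assumptions:

```python
import json

def build_ssm_session_policy(region: str, account_id: str) -> dict:
    """Illustrative IAM policy granting Session Manager access to EC2 instances,
    while limiting session termination to the user's own sessions."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowStartSession",
                "Effect": "Allow",
                "Action": ["ssm:StartSession"],
                # Scope session starts to instances in this account/region
                "Resource": f"arn:aws:ec2:{region}:{account_id}:instance/*",
            },
            {
                "Sid": "AllowEndOwnSessions",
                "Effect": "Allow",
                "Action": ["ssm:TerminateSession"],
                # Convention: sessions are named after the IAM user
                "Resource": f"arn:aws:ssm:{region}:{account_id}:session/${{aws:username}}-*",
            },
        ],
    }

policy_json = json.dumps(build_ssm_session_policy("us-east-1", "123456789012"))
```

A document like this would be passed as the `PolicyDocument` argument and attached to the developer group, with MFA enforced separately on the group’s users.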


Through its partnership with Effectual, SpiraLinks was able to achieve a rapid migration of its infrastructure to the AWS Cloud and avoid the unexpected downtime associated with the closure of its MSP’s data center. The migration to an AWS environment provided opportunities to improve security, increase efficiencies, and optimize costs while opening new pathways to modernization using AWS native services and capabilities.

Next Steps

Moving forward, SpiraLinks will utilize the newer, more secure AWS environment and its many tools and benefits to meet the constantly changing business and operational requirements of its client base. Compliance and data protection/privacy in particular will be evolving challenges for SpiraLinks and its clients, and the AWS environment provides an excellent “base of operations” to meet them.

SpiraLinks will continue to work with Effectual as a Modernization Service Provider to utilize their expertise in addressing the company’s long-term goals and challenges. In addition, SpiraLinks and Effectual have developed an evolving roadmap that includes further modernization efforts to increase automation, availability, reliability, and security – further establishing the position of SpiraLinks as an industry leader.

About Time Tours: Guiding a Successful SaaS Journey


About Time Tours is a Pacific Northwest startup redefining how the real estate industry plans, organizes, and coordinates home tours between agents and homebuyers. With market expertise but only a general business idea, the company asked us for help developing a SaaS-based solution. We guided them on their SaaS journey from a basic app concept to a scalable production-ready launch using AWS SaaS services and best practices.

As a startup, About Time had already identified key pain points facing both realtors and home buyers for scheduling home tours. For all involved, the existing process was time consuming, cumbersome, and fraught with unnecessary complexity. About Time saw an opportunity to streamline scheduling and communication and capture feedback. The company also wanted to maximize the market opportunity and go to market as quickly as possible.

Building the business case & defining the product vision

Given that almost 50% of our professional services engagements are SaaS-focused, we have deep experience implementing the SaaS business model for clients. We started with About Time by embarking on a full discovery process, beginning with building a well-defined business case for their solution and outlining their product strategy.

This included evaluating customer pain points, developing user stories, and creating a seamless UX/UI experience. We also conducted a competitive analysis of off-the-shelf solutions to determine what problems they solved, how they solved them, and their challenges. After establishing the business case and product vision, we built a series of wire frames showing the functionality of features and workflow before moving on to mockups of the app.

Aligning the MVS with AWS SaaS best practices

For SaaS clients, defining a Minimum Viable Service (MVS) always poses the greatest challenge. It is also the most critical stage of the SaaS journey where resources are concerned, as you can easily over-architect your solution and run up costs. We worked with About Time to decide on the right MVS, knowing they would receive important feedback after going live that would likely change the app in future sprints.

Once we had defined the MVS, the AWS SaaS Enablement Framework provided a clear, thorough process for us to evaluate tenancy, security compliances, and compare cost models against the company’s revenue objectives. We also helped About Time prepare documentation and collateral in support of their efforts to secure investor funding.

From development to launch – Leveraging the Well-Architected Framework

In the next phase, our development efforts followed an agile process with milestones sprint by sprint and continual, transparent communication with About Time’s founders and investors. We used the Well-Architected Framework to ensure we were properly evaluating tradeoffs and applying cost optimization strategies when it came to reliability and security. We also segregated their personally identifiable information (PII) data in a multi-tenant environment to meet security compliances.

In addition, we built their app on a fully serverless architecture so it can scale rapidly as user traffic increases, and utilized a pay-as-you-go model to keep costs per user in line with their profit margin and revenue expectations for sustainability and growth.

For testing, our team conducted performance tests to ensure the app could handle expected traffic and security tests to confirm there were no exploitable vulnerabilities. We also held an informal game day to ensure support documentation is in place in case the app goes down in production. Last, our SLA with the company sets expectations regarding our response time and the steps we will take to ensure they are back up and running quickly.

At launch, About Time’s final mobile and web app represents a highly scalable SaaS solution capable of growing with market demand without compromising on security or cost. Our next steps include capturing feedback and optimizing features and workflows to keep customers satisfied with their solution.

Working with the Effectual team to refine our MVS gave us an objective view of how to align our revenue goals with the right cost model so we could take an informed approach to choosing the best SaaS strategy. As we scale, we know we can meet our business objectives and deliver a high-quality customer experience.

Chris Mergenthaler, Co-Founder

Empowering Marketers and Driving Customer Engagement with Amazon Pinpoint


In an increasingly virtual world of remote work, online learning, and digital interfacing, successful customer engagement can differentiate you from competitors and provide deeply valuable insights into the features and innovations important to your users. A well-designed, well-managed user experience not only helps you gain market share, but also uncovers new revenue opportunities to grow your business.

At Effectual, we begin projects with an in-depth discovery process that includes persona development, customer journey mapping, user stories, and UX research to design solutions with engaging, meaningful user experiences. Post-launch, your ability to capture, iterate, and respond to user feedback is just as essential to your success.

In our experience, many SaaS-based companies simply miss this opportunity to stay engaged with their customers. Reasons for this include the complexity and cost of designing, deploying, and managing customized marketing campaigns across multiple channels as well as the lack of real time data analytics to inform them. The result is a tidal wave of generic emails, poorly-timed push notifications, and failed initiatives that impact customer retention and engagement.

Amazon Pinpoint is a scalable outbound and inbound marketing communications service that addresses these challenges and empowers marketers to engage with customers throughout their lifecycle. The service provides data insights and a marketing dashboard inside the Amazon Web Services (AWS) admin console for creating and managing customized communications, leveraging automation, data analytics, filters, and integrations with other AWS products and third-party solutions.

Easy to use and scale

  • Manage campaigns from a user-friendly marketing dashboard
  • Scale reliably in a secure AWS environment

Targeted customer groups

  • Segment audiences from mobile and web application data or an existing customer list

Customized messaging across email, SMS, push notifications

  • Personalize content to engage customers using static and dynamic attributes
  • Create customer journeys that automate multi-step campaigns, bringing in endpoints from your app, API or directly from a CSV
  • Engage customers with targeted emails and push notifications from the AWS admin portal using rich text editor and customizable templates

Built-in analytics

  • Set up customer endpoints by user email, phone number, or user ID to track user behavior within your app
  • Use real time web analytics and live data streams to capture immediate feedback
  • Measure campaign data and delivery results against business goals
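Segmenting audiences by attribute, as described above, comes down to the `WriteSegmentRequest` payload that boto3’s `pinpoint.create_segment` accepts. The following is a minimal sketch; the custom user attribute `PlanTier` is a hypothetical example, not part of any specific application:

```python
def build_segment_request(attribute: str, values: list[str]) -> dict:
    """Build a Pinpoint WriteSegmentRequest targeting users whose custom
    attribute matches any of the given values (illustrative sketch)."""
    return {
        "Name": f"{attribute}-segment",
        "SegmentGroups": {
            "Groups": [
                {
                    "Dimensions": [
                        {
                            # Match on a custom user attribute
                            "UserAttributes": {
                                attribute: {
                                    "AttributeType": "INCLUSIVE",
                                    "Values": values,
                                }
                            }
                        }
                    ],
                    "SourceType": "ANY",
                    "Type": "ANY",
                }
            ],
            "Include": "ALL",
        },
    }

# e.g. target paying users for a retention journey
request = build_segment_request("PlanTier", ["pro", "enterprise"])
```

In practice this dict would be passed as `WriteSegmentRequest` along with your `ApplicationId`, and the resulting segment can then feed a journey or campaign.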

Integrations with other AWS services and third-party solutions

For marketers, Amazon Pinpoint is a powerful tool for improving digital engagement – particularly when integrated with other AWS services that utilize machine learning and live stream analytics. Organizations that invest in designing engaging user experiences for their solutions will only benefit from continually improving and innovating them.

Have an idea or project to discuss? Contact us to learn more about using Amazon Pinpoint to improve your customer engagement.

Zolo Media: Integrating custom solutions to activate and capture a media-hungry regional market


Zolo is a media broadcaster and production company based in Central Oregon. The company provides broadcast and advertising solutions through local, network, and original programming to Central Oregon viewers. Zolo is owned by Telephone and Data Systems, Inc. [NYSE: TDS].

Recognizing that viewers are increasingly consuming content online, Zolo positioned itself to offer viewers a robust online viewing experience with daily video content, live streaming, and real-time weather information. The company asked Effectual to help them evaluate approaches for building their new online media platform.

Strategic consulting reveals clear requirements and high level objectives

Effectual and Zolo had a collaborative, in-depth discovery process that uncovered the complex business requirements and technical needs for the new platform. Zolo was particularly concerned about meeting the rigorous corporate security, legal and risk compliance requirements of its parent company TDS.


  • Scope, budget and project phasing recommendations matched Zolo’s requirements and business objectives.
  • Complied with all security, legal and risk requirements of publicly-traded parent company.

“Effectual has been an amazing partner over the past year – always finding a way to make “what if” happen and showing us the possibilities.”
– Michele O’Hara / Marketing & Creative Services, Zolo Media

Amazon Web Services (AWS) empowers reliable performance and website scalability

With certified AWS solutions architects on the team, Effectual quickly saw that AWS could provide Zolo performance reliability for its streaming content while allowing the company to dramatically scale its offerings.


  • Caching and scalability of up to 2 billion unique visitors per month
  • Consistent, uninterrupted streaming content
  • Hosting bandwidth and cost scales with site traffic
  • Ability for multi-authors to add content
  • Custom integrated technical solutions build user community
  • Custom, responsive website design for desktop, tablet, and mobile that complemented the platform offerings

Xenon, Inc: IoT Proof of Concept Accelerates New Market Opportunities


Xenon, Inc. is a custom hardware provider offering full-service engineering, integration, and field service solutions for the oil and energy industries. The company provides process and environmental analytics, industrial instrumentation and automation, and electrical systems.

Though Xenon primarily serves industrial markets, the firm was approached in 2018 by a new customer interested in applying their industrial engineering background to building IoT solutions for optimizing home maintenance, monitoring, and asset protection. An institutional single-family residence company with a portfolio of thousands of homes, the client was particularly interested in testing automated door locks, water sensors, and other smart devices for secure access and efficient maintenance. Their proposed plan included deploying devices in vacant properties each month with a three-year installation phase.

Partnering with Xenon provided an opportunity to explore and validate the impact of installing IoT smart home solutions for improved customer experience and reduced operational costs. For Xenon, the project presented a new market outside of its industrial focus. To respond, they needed a proof of concept to test in the first 200 homes and present to executives.

Leveraging expert advice for faster proof of concept

Xenon began building the IoT platform in Amazon Web Services (AWS) on its own, but soon encountered issues. As hardware engineers, they realized they were outside their core competency and needed help from experienced solutions architects on software integration with their client’s property management system. They engaged Effectual to review their existing architecture and implement Well-Architected best practices.

Xenon’s primary challenge was creating a cost-efficient cloud architecture that could scale. When the Effectual team conducted an initial review of the company’s environment, we confirmed the existing software layer would require fundamental changes to meet their cost requirements. In addition, our evaluation revealed the platform was built on one computer with no staging environment and no redundancy. This existing environment jeopardized the long-term reliability and scalability of the platform.

Based on this analysis, our team estimated Xenon would quickly outgrow the capacity of their existing environment at 100 homes. This was insufficient, as they needed to prove they could scale rapidly to service the client’s expanding property portfolio. Effectual also felt Xenon’s small development team could benefit from mentoring and guidance on key concepts and AWS IoT Core best practices.

Key recommendations and outcomes included:

  • Built a scalable, reliable proof of concept that met the client’s business requirements and budget
  • Confirmed AWS as the right solution for expanding their offering
  • Established DevOps best practices and trained internal team on processes
  • Educated the company on the costs and complexity of creating an IoT solution on traditional infrastructure with EC2 instances and load balancers, and showed them the significant benefits of using a serverless framework to process IoT events from Amazon Kinesis and manage device commands.
  • Developed 187 AWS Lambda functions for an estimated 40,000,000 events per month.
  • Implemented Amazon Kinesis to collect, process, and analyze 60,000 incoming records per day (30 MB of streaming data per day) to provide reliable, real-time insights and rapid response capabilities.
  • Deployed AWS API solution with an advanced logging and control layer for Xenon’s large scale IoT system to handle a high volume of burstable requests. Designed one gateway to ingest IoT alarms and events, and another to receive commands from external systems and applications.
  • Implemented Amazon DynamoDB as the primary storage mechanism for scalability, with all tables using on-demand capacity mode.
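A Lambda function consuming IoT events from a Kinesis stream, as in the outcomes above, typically follows the shape below. This is a minimal sketch; the `device_id` and `alarm` fields are hypothetical, not Xenon’s actual event schema:

```python
import base64
import json

def handler(event, context):
    """Process a batch of Kinesis records containing IoT events and
    collect the devices that raised an alarm (illustrative sketch)."""
    alarms = []
    records = event.get("Records", [])
    for record in records:
        # Kinesis delivers each record's payload base64-encoded
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if payload.get("alarm"):
            alarms.append(payload["device_id"])
    # Downstream steps (DynamoDB writes, notifications) would go here
    return {"processed": len(records), "alarms": alarms}
```

At roughly 60,000 records per day, a handful of functions like this behind a Kinesis event source mapping can absorb the stream without any servers to manage.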

With Effectual’s help, Xenon responded quickly with a functional, reliable proof of concept that addressed their client’s pain points and met their business requirements. They validated AWS as the best cloud solution for propelling their project forward and gained a solid understanding of AWS IoT services.

Results & Next Steps

For their client, the project provided a better grasp of the costs and resources needed to deploy smart home systems in their properties. It also revealed what checks and balances they need to put in place for their operations.

From Effectual’s perspective, these outcomes are precisely what a successful proof of concept project should accomplish. If the client does decide to roll out these systems to its entire portfolio, we look forward to helping Xenon revisit its current configuration with some new approaches to further unlock the potential of the AWS Cloud.


Wingo IoT: AWS IoT Solutions Position Startup for Rapid, Secure Scalability


Wingo IoT is an Oregon-based startup that integrates inexpensive IoT and legacy automation systems into one intelligent solution for industrial applications. Its key value proposition lies in collecting critical data for operational analytics, AI and ML modeling, and insurance premium and claim reductions.

Established in April 2018 by an experienced technical team, Wingo focused its early development efforts on local sensor networks and isolated edge devices for data collection. The company’s hybrid IoT solution included 100% offline monitoring at sites and low-cost methods for collecting and managing facility data.

From the beginning, Wingo was aware their initial cloud architecture would require major improvements to meet stringent availability and security requirements for modern enterprise applications. A growing pipeline of large industrial customers motivated CTO Glynn Fouche to approach Effectual for a Well-Architected Framework Review as well as expert advice on Amazon Web Services (AWS) IoT solutions.

Starting with Well-Architected best practices to build long term success

Fouche recognized Wingo needed to properly leverage cloud services in order to best serve their customers. In particular, he wanted to set the young venture up for success from the start by aligning Wingo’s development process with the 5 Pillars of the AWS Well-Architected Framework.

As an AWS Advanced Consulting Partner and authorized Well-Architected reviewer, Effectual frequently helps early-stage companies leverage cloud-based solutions for projects ranging from proof of concept to full-scale custom software development. In this case, it was clear that with few developers, limited resources, and impending customer rollouts, Wingo was on a tight schedule to identify critical issues for remediation, improve real-time reporting, and operationalize its development process.

Given the company’s aggressive timeline, our team of solutions architects completed a thorough Well-Architected review and remediation in less than three months. During the process, we uncovered 34 high-risk issues requiring attention.

In the process of addressing these key issues, Effectual’s contributions include:

  • Developing cost predictions for company revenue model
  • Adopting a flexible consumption model to reduce development to cost ratio and increase product margins
  • Designing and implementing DevOps process for long term scalability
  • Establishing data storage plan leveraging a combination of Amazon DocumentDB, Amazon S3, AWS Glue and Amazon Redshift for quickly indexing data with instant access
  • Ensuring security compliance in a multi-tenant environment by isolating sensitive data
  • Creating a NOC dashboard using AWS Lambda for real-time monitoring and business logic for pulling analytics
  • Deploying Amazon CloudFront to move small JSON payloads of dynamic content
  • Leveraging API Gateway as the medium for mobile and web apps to trigger backend API services in AWS Lambda
  • Providing security and disaster recovery analysis as well as recommendations for a secure, highly available, and fault-tolerant architecture
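The API Gateway–to–Lambda pattern above can be sketched as a handler using Lambda proxy integration, where API Gateway passes `httpMethod` and a string `body` and expects `statusCode` and a string `body` back. The `site_id` field and response shapes are illustrative assumptions, not Wingo’s actual API:

```python
import json

def command_handler(event, context):
    """Backend API handler behind API Gateway (proxy integration) that
    accepts commands from mobile/web apps (illustrative sketch)."""
    if event.get("httpMethod") != "POST":
        return {"statusCode": 405,
                "body": json.dumps({"error": "method not allowed"})}

    command = json.loads(event.get("body") or "{}")
    if "site_id" not in command:
        return {"statusCode": 400,
                "body": json.dumps({"error": "site_id required"})}

    # ...dispatch the command to the device/edge layer here...
    return {"statusCode": 202,
            "body": json.dumps({"accepted": command["site_id"]})}
```

Returning 202 (accepted) rather than 200 reflects that the command is queued for the edge devices rather than executed synchronously.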

In addition, implementing Well-Architected best practices has strengthened Wingo’s confidence it can serve larger customers and meet their strict business and compliance requirements. Based on past experience, Fouche believes Wingo is much better prepared to handle comprehensive due diligence and security audits. The review process and documentation will also have a significant impact on the company’s ability to raise capital and could add significant value in the event of a purchase.

In collaboration with Effectual, Wingo’s next steps include documenting security practices as well as failover and recovery recommendations for performance reliability. These steps are critical as the company develops its cloud-based data architecture, user interfaces, and API gateways for external integrations.

Results & Next Steps

With the Well-Architected review complete, Wingo is now positioned to approach both new customers and potential investors with greater confidence in its ability to receive, process and store data in the cloud and offer powerful data insights for driving optimal business outcomes.


Warm Welcome: Replicating the SaaS delivery model with a smart Proof of Concept (POC)


With a history of successful SaaS ventures for the photography and real estate industries, entrepreneur David Jay launched Warm Welcome as a Proof of Concept (POC) in early 2019. The product delivers highly personalized video messages through email to support customer onboarding and retention. After investing nine months to gather user feedback, Jay had developed a clear pricing model, a list of MVP features, and a go-to-market strategy. However, he needed Effectual’s help refactoring the POC to address security and reliability concerns and make the solution production-ready.

Evaluating trade-offs, defining priorities

Highly skilled at building strong, loyal user communities, Jay and his team are adept at responding to user requests, defining focused MVPs, and gathering valuable customer feedback. With Effectual’s support, they have also learned how to use the AWS Well-Architected Framework to evaluate trade-offs and determine priorities for their POCs.

For Warm Welcome, the team decided that time-to-market in the POC phase was a priority. Their goal was to quickly capture user feedback in order to understand the product’s business value.

Aligning pricing and marketing strategies

The first version of Warm Welcome was a small, low fidelity MVP tightly focused on solving the customer’s biggest problems, which were closing a sale and onboarding a new client. Based on analytics, user surveys, phone calls, and focus groups, the company gained key insights into the value of the product. This helped them align their messaging and marketing with the needs of their customers.

In addition, they carefully tracked their actual costs, allowing them to build a pricing model to accurately fit their cost model.

Refactoring architecture for reliability and security

While reliability and security were acceptable trade-offs during the POC phase, they needed to be addressed prior to moving into production. Effectual began refactoring the POC by conducting a Well-Architected Framework Review (WAFR), resulting in a re-evaluation of the initial trade-off decisions. In addition, Effectual offered the following recommendations:

  • Configure the environment to autoscale using Amazon EC2, AWS Elastic Beanstalk, and Auto Scaling groups
  • Set up a CI/CD pipeline with parallel environments to increase agility and lower risk
  • Leverage additional AWS tools and services, including Amazon CloudFront, Amazon S3, Amazon Aurora, and Elastic Transcoder
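The autoscaling recommendation above maps to Elastic Beanstalk option settings, the structure boto3’s `elasticbeanstalk.update_environment(OptionSettings=...)` accepts. The namespaces are standard Elastic Beanstalk ones, but the sizing values below are placeholder assumptions:

```python
def autoscaling_option_settings(min_size: int, max_size: int) -> list[dict]:
    """Option settings that put an Elastic Beanstalk environment behind a
    load balancer with an Auto Scaling group (illustrative sketch)."""
    return [
        # Bounds for the environment's Auto Scaling group
        {"Namespace": "aws:autoscaling:asg",
         "OptionName": "MinSize", "Value": str(min_size)},
        {"Namespace": "aws:autoscaling:asg",
         "OptionName": "MaxSize", "Value": str(max_size)},
        # Load-balanced (vs. single-instance) environment
        {"Namespace": "aws:elasticbeanstalk:environment",
         "OptionName": "EnvironmentType", "Value": "LoadBalanced"},
    ]

settings = autoscaling_option_settings(2, 8)
```

Keeping these settings in code (or `.ebextensions` config files) also supports the CI/CD recommendation, since parallel environments can be stamped out with identical scaling behavior.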

Warm Welcome is one of many projects Effectual has worked on with Jay over the last several years. Throughout the client relationship, Effectual has provided strategic advice and technical expertise through all phases of discovery, development, and deployment.


Verdant Web Technologies: Seamless Integration with Amazon EC2 for Microsoft Windows


An enterprise-level company, Verdant provides management software solutions that track, access, and update facility Environmental Health & Safety (EH&S) compliance and sampling information. With a rapidly growing customer base, the company was beginning to confront scalability, reliability, and performance challenges that could not be solved with their existing on-premises infrastructure.

Verdant engaged Effectual to conduct a Well-Architected Framework review as well as to develop and execute a strategy for migrating their Microsoft tech stack to the Amazon Web Services (AWS) Cloud.

Migrating to AWS with an existing Microsoft technology stack

Verdant recognized the importance of leveraging AWS services to address increasing customer demand and scalability issues. At the same time, it was critical that the company migrate with its existing Microsoft technology stack intact. This was a requirement for two reasons:

  1. All of Verdant’s technical engineers already used Microsoft (.NET and SQL Server)
  2. The company’s industry clients had tools integrated with Microsoft and were loyal to their solutions

With an experienced team of Microsoft and AWS certified developers, Effectual was confident a seamless integration with AWS services was possible.

Using Amazon EC2 to build a scalable framework

Using Amazon EC2 for Microsoft Windows Server, we migrated Verdant’s entire on-premises workload to AWS and built a scalable framework that addressed their business challenges. To accomplish this, our team:

  • Leveraged IIS and the .NET framework
  • Kept the company’s SQL Server but hosted it on Amazon RDS for SQL Server to improve capacity, decrease costs, and reduce database administration
  • Automated their deployment using Jenkins and Elastic Beanstalk. Like their enterprise clients, Verdant used Active Directory for user authentication, which we easily integrated via SAML

As a result, Verdant was able to stay with a Microsoft technology stack their internal team and customers trusted and were familiar with while taking advantage of the compute capacity of the AWS cloud. Because we began with an existing framework, we were also able to complete the migration far more quickly and focus on adding value with a customized solution.

Verdant Web Technologies: DevOps & AWS tools improve scalability, profitability, and customer experience


Verdant offers management software solutions to track, access, and update facility Environmental Health & Safety (EH&S) compliance and sampling information. With a growing customer base and a maturing product, the company was starting to encounter big DevOps and infrastructure challenges that threatened to slow its market momentum.

Verdant’s evolving product vision demanded a far more scalable model. To the Effectual team, it was clear that the AWS platform could help them pivot and evolve.

Standardized architecture improves DevOps

Verdant’s primary pain point was architecture. With six different code bases unique to each client, the company updated code changes manually, published them out to 10+ web servers, and ran its SQL scripts on multiple databases. The process was overwhelming their team, impacting scalability, and preventing them from writing new features. The company’s IP also lived with a single developer, creating a significant vulnerability. Our team immediately got to work rewriting the company’s software with multi-tenant support, allowing different organizations to manage their data separately but with a standardized code base.
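
The multi-tenant pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Verdant’s actual code: a single shared code base serves every organization, and every data access is scoped to a tenant identifier so client data stays isolated.

```python
# Hypothetical sketch of multi-tenant data isolation: one code base and one
# store serve all clients, with every query filtered by tenant ID.
# Class and field names are illustrative assumptions.

class TenantStore:
    """In-memory stand-in for a shared, multi-tenant data store."""

    def __init__(self):
        self._rows = []  # each row carries its owning tenant_id

    def add(self, tenant_id, record):
        self._rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # Every read is filtered by tenant, so a standardized code base
        # can serve all organizations without mixing their data.
        return [r for r in self._rows if r["tenant_id"] == tenant_id]

store = TenantStore()
store.add("acme", {"sample": "lead", "result": 0.012})
store.add("globex", {"sample": "lead", "result": 0.004})
print(store.query("acme"))   # only acme's rows come back
```

With this structure, a schema change or new feature is written once and immediately available to every client, instead of being hand-applied to six separate code bases.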


  • Streamlined DevOps by automating the deployment/development process with a build server and rapid deployment tools
  • Created a faster, more reliable migration to the AWS Cloud
  • Leveraged AWS for greater security and global redundancies to safeguard against potential downtime
  • Distributed IP knowledge across Verdant’s entire team, so the company is no longer reliant on one person to protect its IP
  • Enabled rapid scaling to meet customer demand

“Effectual has been an amazing partner in the development of our enterprise platform which is now our life blood. Along with their responsiveness, solution engineering depth and capabilities we appreciate their tight management of project budgets and schedules. Effectual is a valued resource and critical part of the Verdant Team!”  
– Ron Petti / President, Verdant Web Technologies

Eliminating hardware lowers cost of customer acquisition

Before deploying AWS, it took Verdant weeks to onboard new clients with a process that required significant hardware investments. Infrastructure was a fixed asset regardless of the number of clients. Our solutions turned infrastructure into an operating cost and eliminated hardware altogether.


  • Reduced new client onboarding from 2 weeks to 1 hour
  • Eliminated need for costly hardware
  • Decreased customer acquisition costs

Scalable solution allows for fast response to market demand

For Verdant, the timing for the project couldn’t have been better. Shortly after its completion, the company’s client base exploded overnight when schools around the US were compelled to perform extensive drinking water testing in reaction to the national crisis in Flint, Michigan. The revelation resulted in stricter reporting requirements and EH&S monitoring across Oregon, driving sudden intense demand for Verdant’s software solutions. With Effectual’s help, the company was well positioned to capitalize on incoming project opportunities, which resulted in a national award from the Environmental Business Journal.


  • AWS solutions such as Elastic Beanstalk support continuous development and innovation and help Effectual manage multiple application environments for the development/testing/release cycle
  • Increased customer satisfaction with ability to quickly add new functionalities

Tourvast: Building SaaS Solutions Using Scalable Innovation


Tourvast is a Software as a Service (SaaS) provider with a marketing platform that offers real estate photographers tools for creating property presentations and virtual tours that showcase their skills, leverage their art, and build their business. The platform also offers agents the opportunity to enhance their brand across social networks with high-end, quality photography and video assets.

While the platform had been in existence for over a decade as licensed software, Tourvast executives wanted to evaluate the company’s intellectual property and consider options for writing their own application for greater usability. With new business requirements and a new go-to-market strategy, they contacted Effectual for help with their decision-making process and next steps. Our team provided insights and strategic advice and ultimately implemented a more scalable platform based on the secure, reliable infrastructure of the Well-Architected Framework.

Recalibrating the pricing model

Effectual solutions architects began a discovery process that included wire-framing and architecture planning. This exposed one of Tourvast’s primary challenges, which was the inability to scale its pricing model. Due to the unpredictability of its customers’ large media files, the current architecture was not consistently covering costs.


  • Conducted an in-depth revenue modeling analysis to identify average costs based on number of photos uploaded as well as the number of videos, pdfs, and other assets created.
  • Designed new architecture for cost objectives with pay-for-use pricing to reduce capital expenses.
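
The revenue modeling above comes down to knowing what each uploaded asset costs to store and serve, then pricing above that floor. The sketch below illustrates the idea; every rate and average in it is a made-up assumption for demonstration, not Tourvast’s actual figures.

```python
# Illustrative pay-for-use cost model: estimate a customer's monthly
# infrastructure footprint from their asset volume. All rates below are
# assumed values for demonstration only.

STORAGE_COST_PER_GB = 0.023   # assumed monthly storage rate, USD/GB
TRANSFER_COST_PER_GB = 0.09   # assumed data-transfer rate, USD/GB

def monthly_cost(photos, avg_photo_gb, videos, avg_video_gb):
    stored_gb = photos * avg_photo_gb + videos * avg_video_gb
    # Assume each asset is downloaded roughly twice per month.
    transferred_gb = stored_gb * 2
    return stored_gb * STORAGE_COST_PER_GB + transferred_gb * TRANSFER_COST_PER_GB

cost = monthly_cost(photos=1000, avg_photo_gb=0.01, videos=20, avg_video_gb=0.5)
print(round(cost, 2))
```

Once per-asset costs are known, per-upload pricing can be set so that even customers with unpredictably large media files consistently cover their own costs.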

Improving performance, scaling for demand:

In addition, tenant activity was slowing performance and impacting overall customer satisfaction and retention.

The existing workflow began with a transaction outside the platform between a real estate agent and a photographer to secure photos for marketing deliverables. Photographers paid a subscription fee in advance for a specific number of properties, banked like a credit system. After photographing the agent’s identified property, they uploaded their images to the website and organized them into deliverables such as slideshows, virtual tours, and flyers. Once complete, they provided their realtor customers with links to those assets. Upon approving the copyrighted materials, the agent paid the photographer the invoiced amount through Tourvast to release the media for use.

The challenge was that each time a photographer uploaded their image files, the software would immediately resize them and create a slideshow. This process would tie up the site for up to 10 minutes while the photographer waited for it to complete, and it froze platform functionality for all customers on the site.
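
Blocking work like this is exactly what moves naturally off the web tier. A minimal sketch, assuming an S3-triggered AWS Lambda function (the event shape follows S3 notifications; the resizing itself is stubbed and all names are illustrative):

```python
# Hypothetical sketch of offloading image resizing to an S3-triggered
# AWS Lambda function, so uploads no longer block the web application.

def target_size(width, height, max_edge=1600):
    """Scale dimensions down to fit within max_edge, preserving aspect ratio."""
    scale = min(1.0, max_edge / max(width, height))
    return round(width * scale), round(height * scale)

def handler(event, context=None):
    # Each uploaded object triggers its own invocation, so one
    # photographer's batch never freezes the site for other tenants.
    keys = [rec["s3"]["object"]["key"] for rec in event["Records"]]
    return {"processed": keys, "thumbnail": target_size(4000, 3000)}

# Simulated S3 notification for a single uploaded photo
fake_event = {"Records": [{"s3": {"object": {"key": "uploads/house-01.jpg"}}}]}
print(handler(fake_event))
```

Because each invocation is independent and scales out automatically, a burst of uploads fans out across parallel executions instead of queueing behind a single web server.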


  • Leveraged serverless architecture using S3 and AWS Lambda for media and multi-tenant loads, resulting in greater flexibility and stability.
  • Implemented CloudFront for streaming videos to deliver content to end users with lower latency.
  • Deployed a blue-green architecture on AWS creating a continuous integration/continuous deployment (CI/CD) pipeline, including up to 10 servers for burstable traffic.
    • Code is now developed and deployed to an AWS Elastic Beanstalk environment, with two separate, but identical, environments (blue and green) to increase availability and reduce risk. This allows the application to continue to run seamlessly while new code is deployed, without impacting the user experience.
  • Implemented DevOps strategies and best practices with parallel development, testing, staging, and production environments.
    • Ensured that no development takes place in production
    • Created a testing environment for internal QA
    • Enhanced reliability with a staging environment built for “friends and family” releases with a copy of production data scrubbed for security reasons (with scale of data to mimic what happens in production)
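
The blue-green mechanic above can be simulated in a few lines. This is an illustrative model, not the Beanstalk API: traffic points at one environment while new code deploys to the idle twin, then the router flips atomically (Beanstalk accomplishes this with a CNAME swap).

```python
# Illustrative simulation of blue-green deployment: deploy to the idle
# environment, then cut traffic over atomically. Names are assumptions.

class BlueGreenRouter:
    def __init__(self):
        self.envs = {"blue": "v1.0", "green": None}
        self.live = "blue"

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version):
        # New code only ever lands on the idle environment,
        # so users are never served a half-deployed release.
        self.envs[self.idle] = version

    def swap(self):
        # Atomic cutover; the previous environment stays warm for rollback.
        self.live = self.idle

router = BlueGreenRouter()
router.deploy("v1.1")   # users still see v1.0 on blue
router.swap()           # users now see v1.1 on green
print(router.envs[router.live])
```

If the new version misbehaves, a second `swap()` is an instant rollback to the last known-good environment.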

Today, Tourvast is a SaaS company that owns its own intellectual property, with full control over its roadmap. With support from Effectual, it owns its maintenance backlog and understands its third party dependencies and costs. Last, our team continues to help the company innovate and build improvements using proof of concepts fueled by cost-effective AWS tools.

Robert Axle Project: eCommerce website redesign increases sales and customer connections


Robert Axle Project is the authority on 12mm and 15mm thru axles for bikes. They provide the highest quality products that allow families, adventurers, commuters and recreationists to enjoy traveling by bicycle. Sales are international, online through its ecommerce website as well as through dealers, distributors and OEM partners.

As the “RAP” products have grown in popularity, and in “fit” complexity with an expanding variety of different bike styles and brands, company founders recognized they needed a stronger e-commerce website platform.

Thoughtful content approach increases customer engagement

RAP has a strong customer following and sense of community through social media channels, trade shows, and industry connections. During website planning we identified four customer personas and tailored messaging not only around the product itself, but also around the narratives and imagery of owning and using a RAP product.

Thoughtful website copy and messaging were matched with imagery to draw visitors into a compelling company narrative about experience, recreation, and possibility with the RAP product.

The transition from a heavy product-and-features focus to a customer benefits and aspirations orientation with the new e-commerce site increased website engagement significantly, with a more than 110% increase in page views and a 23% decrease in the site-wide bounce rate.

“Effectual worked hard to understand our business needs and develop custom features that increased traffic, conversions and engagement with our customers. With excellent project management and communication throughout the process!”  
– Katy Bryce & Chris Kratsch, Founders, Robert Axle Project

Custom fit selector streamlines visitor experience

With increasing complexity in matching product ‘fit’ to a growing variety of bike and trailer types, RAP’s sales support team knew there had to be a better way. They asked Effectual to build a custom “fit selection” tool compatible with the new website CMS.

Automating the ‘fit’ process has made it easy for customers to secure the right product through the website, dramatically reducing sales support calls and product returns.

Fuller engagement platform increases visits

Actively engaged with their community, RAP’s new website supports this conversation – through regular story content (blog posts) and integration with social media feeds. Fresh, regular and fun content that’s attractive to search engines and industry forums brings a 150% increase in website traffic.

Online product sales have increased 125% since the website’s launch.

Pole Pedal Paddle: Cost Optimization Strategies Maximize Impact for Local Non-Profit


Held in Bend, Oregon, the SELCO Pole Pedal Paddle (PPP) event is a popular annual multi-sport relay race benefiting Mt. Bachelor Sports Education Foundation (MBSEF). Over 2,500 amateur and pro athletes compete solo or as a team in the race each year. The six legs of the race include Alpine skiing, Nordic skiing, biking, running, a canoe/kayak/SUP leg, and a sprint to the finish. Race Manager Molly Cogswell-Kelley asked Effectual for a custom solution to improve race registration, team management, and results reporting as well as to reduce PPP costs.

One of the main cost drivers was an expensive yearly subscription service with a third-party provider. With two races (the adult PPP and the kids’ PPP), the event was paying for two subscriptions. Even though their production only ran 4½ months each year, they were being charged for an entire year. The off-the-shelf service also lacked needed functionality for managing PPP’s different categories and unique team structures. On top of growing costs, the organization had a new legal requirement for obtaining parent signatures for child waivers.

Applying Well-Architected Framework best practices for cost optimization, our team worked on several strategies targeting the PPP production environment and development process to address business requirements and meet cost objectives.

First, it was clear that an annual subscription was a poor pricing model for the once-a-year event. We shifted to a more flexible, pay-as-you-go solution with AWS to ensure usage and costs matched PPP’s short production timeline. Using AWS allowed PPP to run servers and pay for usage only when their environment was turned on. Given that our development team has nine months each year to develop new features and push out code, we also decided to use spot instances for the development and CI/CD environment, representing a roughly 90% discount.
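
The spot-instance savings are easy to see with back-of-the-envelope arithmetic. The hourly rate and hours below are assumptions for illustration; actual spot discounts vary by instance type and region.

```python
# Illustrative cost comparison: on-demand vs. spot pricing for a
# development environment that runs only during working hours.
# Rates and hours are assumed values, not PPP's actual figures.

ON_DEMAND_HOURLY = 0.10   # assumed on-demand rate, USD/hour
SPOT_DISCOUNT = 0.90      # ~90% discount, as cited above

def dev_env_cost(hours, spot=True):
    rate = ON_DEMAND_HOURLY * (1 - SPOT_DISCOUNT) if spot else ON_DEMAND_HOURLY
    return hours * rate

# Roughly nine months of weekday development: ~195 days * 8 hours.
hours = 195 * 8
print(round(dev_env_cost(hours, spot=False), 2))  # on-demand cost
print(round(dev_env_cost(hours, spot=True), 2))   # spot cost
```

Because a development/CI environment tolerates interruption, spot capacity is a natural fit; the production race-weekend environment would still run on standard instances.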

For the same amount PPP was paying each year to manage its waivers, we were able to build a custom application with far greater functionality and flexibility to accommodate ongoing development needs. Due to the nonprofit’s budget constraints, we focused on the adult PPP race as the first phase of the project. We were able to build on this framework two years later for the kids’ PPP.

To ensure full cost optimization, Effectual leveraged their existing knowledge of the .NET membership provider, .NET user provider, and .NET MVC frameworks and libraries. This allowed Effectual to focus only on the custom business logic and leverage out-of-the-box solutions for logins, account creation, registrations, and password creation. It also meant we did not have to re-engineer a security feature, which kept development costs lower. For enterprise reporting and integration, Effectual chose Microsoft SQL Server and IIS.

By consolidating expenses, utilizing spot instances, and integrating AWS services with existing Microsoft technologies, we deployed custom applications that have significantly reduced PPP’s costs and improved their overall user experience year after year.


Mak Grills: Well-Architected Review Improves Scalability & Reliability of IoT Solution


Several years ago, Effectual worked with BBQ manufacturer MAK Grills on a product ideation project for a new web app giving owners remote operational control of their grills. Prior to our engagement, the company’s outsourced development process had stalled and they needed help salvaging the project. After reviewing their existing code, we were able to address their wishlist and launch the app on an aggressive timeline.

While the app ran successfully for the first few years, the company began to experience performance issues as its customer base expanded. MAK Grills President Bob Tucker re-engaged Effectual for a Well-Architected Framework Review to evaluate their solution, which was crashing daily and shutting down all of their grills.

For the review, Tucker had the following objectives:

  • Stabilize the production environment
  • Ensure scalability
  • Build an affordable solution using their existing Microsoft technology stack (.NET, SQL Server, IIS)

In our Well-Architected Review, we discovered that the company had recently hired someone to rewrite their firmware. During the rollout, the firmware had 10,000 BBQs sending messages to the MAK Grills website every 5 seconds. This increase in traffic was causing their server to crash at least once a day. They had tried to fix the issue but it was still unresolved. With owners who expected their mobile service to be available 24/7, it was clear the company had a serious customer service problem on their hands.
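
The scale of the problem is easy to quantify: 10,000 grills each reporting every 5 seconds is a sustained 2,000 requests per second against a single web server, far beyond what one IIS box is typically sized for. A quick sanity check:

```python
# Back-of-the-envelope load calculation for the firmware rollout
# described above: 10,000 devices polling every 5 seconds.

def sustained_rps(devices, interval_seconds):
    """Steady-state requests per second from a fleet of polling devices."""
    return devices / interval_seconds

rps = sustained_rps(10_000, 5)
print(rps)            # 2000.0 requests per second
print(rps * 86_400)   # 172,800,000 requests per day
```

Numbers like these are why the remediation below pairs auto scaling with moving log writes off the web server: both the request volume and the logging it generates grow linearly with the installed base.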

Actions & Recommendations from the Well-Architected Review

  • Analyzed the MAK Grills Microsoft server (.NET technology stack with a SQL Server on the backend) to identify what was crashing.
  • Refactored the architecture based on new performance requirements using Amazon RDS for SQL Server, a Jenkins build server, and an Amazon EC2 Auto Scaling group.
  • Moved all logs from the IIS server to Amazon CloudWatch Logs, rotating logs out every other day. This allows MAK to review logs for problems without additional costs and without crashing their server.
  • Leveraged AWS CodeCommit for their CI/CD pipeline.
  • Utilized AWS Elastic Beanstalk with a blue/green deployment method to eliminate downtime.
  • Consulted with their firmware developer to provide guidance on IoT best practices.
  • Coordinated all of the company’s outsourced engineering teams to ensure they are on the same page in terms of cost objectives and best practices for scalability and reliability.

Our team also recommended that MAK Grills capture their market metrics to understand the business value of their offering. We installed Google Analytics to evaluate customer behavior and created a company dashboard for greater visibility into user data. In addition, we suggested they evaluate switching their business model to monthly subscription pricing (versus charging a $300 upfront cost for the app at time of purchase).

Based on user feedback and customer data, the MAK Grills sales team is now testing a monthly subscription pricing model with new customers. Effectual’s remediation has stabilized a production environment that can scale automatically, and the company can focus on new product innovation to keep its customers engaged and happy.


FinTech Startup: Maintaining security and meeting compliance in a fast-growing, innovative company


One of our clients is a fast-growing FinTech company that provides payroll card solutions for US businesses of all sizes. Their primary product offering is a direct deposit debit card that maximizes direct deposit participation among unbanked employees, eliminating the hassle of cashing paper checks.

Prior to a recent acquisition, the startup was enjoying success as a market leader with a wave of new customer acquisition. Its growth trajectory was also attracting new investors keen to enter the FinTech market. At the same time, the 100-employee company was facing challenges meeting its PCI DSS (Payment Card Industry Data Security Standard) compliance in a rapidly changing regulatory environment. Deep into their growth mode, the company’s leadership was told by investors they could not commit significant funding until new compliances were met.

For FinTech startups, PCI fines can threaten critical cash flow and bottom-line profitability. Companies that fail to pass their audits can be fined anywhere from $5k to $100k per month depending on their size. Given their aggressive first-to-market strategy, the pressure was on the team to operationalize solutions and meet compliance immediately.

Originally engaged through a third-party security company for custom software development, Effectual was introduced by the startup’s auditing firm to help address its regulatory and security concerns. As an Amazon Web Services (AWS) Advanced Consulting and Well-Architected Partner, Effectual has in-depth experience identifying security vulnerabilities. More importantly, the firm’s core expertise is translating those recommendations into clear, pragmatic steps for operationalizing long-term solutions.

Rapid growth and changing internal roles

As the startup expanded to service its widening customer base, internal roles and operational responsibilities were continually changing. The result was an unclear separation of permissions and duties as well as a lack of capacity or direction for detailed oversight. While former consultants had provided high-level recommendations for mitigating security concerns, they had not provided the firm with practical, specific solutions for implementing them, leaving the team uncertain how to proceed.


  • Reviewed all seven workloads – particularly related to Primary Account Number (PAN) data – to ensure the company had change management in place. This included security encryption, data storage, and permissions access.
  • Isolated workloads to keep access separate, creating an AWS account for each workload.
  • Outlined clear separation of duties for auditing changes in their environment, with segmented duties and workloads.
  • Documented and aligned policies, processes, and permissions with internal changes and promotions to provide stability of roles and what tools each will use consistently going forward.

Managing multiple 3rd party vendors and outsourced workloads

The growing company had also become 100% reliant on third-party vendors for its workloads. Keeping eight different vendors informed of its regulatory and compliance requirements and ensuring necessary standards were met had become extremely difficult for the inexperienced team to manage. In addition, the client was at the mercy of its vendors’ competing timelines and unpredictable capacities. This was dramatically slowing its ability to respond to crucial deadlines for compliance. Effectual’s Well-Architected Framework Review quickly surfaced these issues as well as the need for remediation.


  • Coordinated project management with all third-party vendors to remedy immediate issues affecting compliance.
  • Built a secure cardholder data environment (CDE) to store PAN data.
  • Reduced the number of outside vendors to be more manageable and complementary.
  • Migrated two PCI-compliant workloads to AWS using AWS Lambda, Amazon DynamoDB, Amazon GuardDuty, and Amazon API Gateway.
  • Outlined a plan for migrating remaining workloads to AWS in the next seven months.

Meeting compliance as an everyday activity

Working with Effectual, the client succeeded in passing its crucial PCI audit in less than 3 months. More importantly, the company has built a DevOps foundation for its future growth and regulatory compliance with everyday operations that ensure its continued success.

As a result, the startup is now skilled at the following:

  • Understanding its separation of duties, including how many people are involved and needed to facilitate a change in its environment
  • Documenting and aligning policies, processes, and permissions with internal changes and promotions to create greater efficiencies and security
  • Strategically utilizing third-party vendors and keeping them informed as to its compliance needs

“At first, we brought Effectual on board to build an onboarding web application. But they’ve been far more than just a software development firm. Their DevOps infrastructure expertise, ability to build products in a PCI compliant manner, and emphasis on data security has been a game changer for us.”
Evan, VP of Operations


Economic Development for Central Oregon: Digital solutions that adapt and evolve with a growing organization


As a regional non-profit that helps companies move, start, and grow, Economic Development for Central Oregon (EDCO) had just stepped into a rebranding process to capture its leadership role as an information and networking hub for the region’s business community. At the top of the list: redesigning and overhauling their website.

Built years before on a closed, proprietary platform, their non-responsive site didn’t reflect their own progress and was time-consuming for staff to update and keep current.

“The Effectual team has the talent and creativity to respond to whatever you can dream up – so think big.”
– Brian Vierra, Venture Catalyst, EDCO / Bend Venture Conference

Content strategy informs brand messaging

Working closely with tech companies and entrepreneurs, EDCO wanted a digital presence as dynamic as its clients. The group’s new branding aligned with its long-term strategic plan but hadn’t been translated into an effective content strategy that engaged visitors. During an extensive collaboration, we developed the voice and lexicon, user personas, and calls to action that ultimately shaped brand messaging throughout the site as well as in other marketing initiatives.


  • 78% increase – average time on site
  • 30% increase – number of pages viewed
  • Content strategy process refined overall brand messaging

Discover Your Forest + US Forest Service: Strategic consulting uncovers new opportunities to engage visitors


Discover Your Forest (DYF) promotes the discovery of Deschutes and Ochoco National Forests and Crooked River National Grassland by enriching the experience of visitors, building community support and creating the next generation of environmental stewards.

DYF’s new leadership team was ready to explore using digital technology to connect visitors and volunteers to its services and expand its donor base. Our discovery process uncovered strategic opportunities for integrating digital solutions that DYF hadn’t considered possible or affordable – launching them into a new phase of innovation and expansion.

Empowering visitors with easy access to information

DYF wanted a digital kiosk at the new Cascade Lakes Welcome Center that gave visitors simple access to trail and permitting info. Our team built a custom web app leveraging their existing US Forest Service databases, making trail and use information user-friendly and instantly available to visitors. Directly after launch, the Forest Service began evaluating the web app for regional offices in the Northwest and beyond.


  • Easy, 24/7 access to visitor information
  • Increased permitting revenue
  • New digital solution for Forest Service visitor services

“Partnership with Effectual helped us engage with a wider audience than we’d ever imagined. Their strategic guidance was invaluable and it shows in our final product.”  
– Rika Nelson, Executive Director, Discover Your Forest

Mobile app transforms visitor engagement

As conversations evolved, Effectual encouraged DYF to look beyond the web app to a mobile solution that could engage visitors anywhere. The Forest Service had shelved the idea of a mobile app in the past due to cost and technical issues. New research and some collective problem-solving revealed that going mobile was within reach and within budget.


  • Simple UX makes trail and permit info easy to discover
  • Expanded engagement and access beyond the brick-and-mortar experience
  • Leveraged existing technology platforms with little added cost
  • Created a standardized, clean set of data deployable across other Forest Service locations

Deep dive business strategy delivers outstanding online experience

Last, the DYF static website needed a complete redesign to boost engagement and connect visitors and volunteers to the group’s mission. Effectual guided their team through an in-depth planning process to identify key personas and calls to action that would drive design and user experience and deliver desired outcomes.


  • Finely tuned UX development and design aligned with business goals
  • Responsive web design and implementation
  • Improved analytics

App Modernization: Strategic Leverage for Managing Rapid Change


The last few months of the COVID crisis have made the case for modernization even more evident, dramatically exposing security faults and the limitations of outdated monolithic applications and costly on-premises infrastructure. This lack of modernization is preventing many businesses and agencies from adapting to new economic realities and finding a clear path forward.


Whether improving efficiencies with a backend process or creating business value with a new customer-facing app, modernizing your IT solutions helps you respond quickly to changing conditions, reduce your compliance risk, and optimize costs to match your needs. Applications that are already architected to take advantage of the cloud also provide flexibility to address scalability and performance challenges as well as to explore new opportunities without disrupting budgets and requiring heavy investment.

First, what defines technologies that are NOT modern?

  • Inflexible monolithic architectures
  • Inability to scale up or down with changes in demand
  • Security only implemented on the outside layer, not at the component layer
  • Costly on-premises infrastructure
  • Legacy hardware burdens
  • Waterfall development approaches

Maintaining legacy technologies is more expensive than modernizing them

Some of the most striking examples of the complexity, costs, and failures associated with legacy technologies have recently been seen in the public sector. Some state unemployment systems have failed to handle the overwhelming increase in traffic and demand, impacting those in greatest need of assistance. Others in the public sector are already taking measures: Beth Cappello, acting CIO of the US Department of Homeland Security, recently stated that had her predecessors not taken steps to modernize their infrastructure and adopt cloud technologies, the ability of DHS personnel to remain connected during the pandemic would have been severely impacted.

Many government applications run on 30+ year-old mainframe computers using an antiquated programming language, creating a desperate need for COBOL developers to fix the crippled technologies. What the situation reveals is the dire need to replatform, refactor, and rearchitect these environments to take advantage of the scalability, reliability, and performance of the cloud.

Benefits of modernization:

  • Security by design
  • Resilient microservices architecture
  • Automated CI/CD pipeline
  • Infrastructure as code
  • Rapid development, increased pace of innovation
  • Better response to customer feedback and market demands
  • Flexible, pay-as-you-go pricing models
  • Automated DevOps processes
  • Scalable managed services (e.g., serverless)
  • In-depth analytics and data insights

The realities of preparing for the unknown

As a result of shelter-in-place orders since early March, we have seen both the success of customers who have modernized as well as the struggles of those still in the process of migrating to the cloud.

Food for All is a customer with a farm-to-table grocery app that experienced a 400x increase in revenue as people rushed to sign up for their service during the first few weeks of the pandemic. Because we had already built their architecture for the Amazon Web Services (AWS) cloud, the company’s technology environment was able to scale easily to meet demand. In addition, they have a reliable DevOps environment that allowed them to immediately onboard more developers to begin building and publishing new features based on user feedback.

Unfortunately, other customers have not been able to adapt as quickly.

When one of our retail clients lost a large number of customers in the wake of COVID, they needed help scaling down their environment as rapidly as possible to cut their costs on AWS. However, the inherited architecture had been written almost 10 years ago, making it expensive and painfully time-consuming to implement adjustments or changes. As a result, the company is currently weighing whether to turn off their app and lose revenue or invest in modernizing it to recover their customers.

In fact, many early cloud adopters have not revisited their initial architectures to ensure they are taking advantage of the hundreds of new features and services released by AWS each year

For another large enterprise customer, the need to reduce technology costs meant laying off a third of their workforce. Though our team is helping them make progress on refactoring their AWS workloads, they were still unable to scale down 90% of their applications in time to avoid such a difficult decision. The situation has significantly increased their urgency to modernize.

The need for a cloud-first modernization service provider

With AWS now 14 years old, it is important to realize that modernization matters just as much for early adopters as it does for the public sector’s legacy workloads. In fact, many early cloud adopters have not revisited their initial architectures to ensure they are taking advantage of the hundreds of new features and services released by AWS each year (during Andy Jassy’s 2019 re:Invent keynote alone, he announced 30 new capabilities in 3 hours). For these reasons, and many more, our Modernization Engineers help customers make regular assessments of their cloud infrastructure and workloads to maintain a forward-looking, modern IT estate.

Whether migrating out of an on-premises data center or colo, rearchitecting an existing cloud workload, or developing with new cloud-native features, it has never been more important to implement a modern cloud strategy. This is particularly true for optimizing services across your organization and embracing security as a core pillar.

According to Gartner, 99% of cloud security failures through 2025 will be the customer’s fault. Clearly, no organization wants to be a part of this statistic. Ongoing management of your critical workloads is a worthy investment that ensures your mission-critical assets are secure. The truth is that if security isn’t done right, nothing else matters.

We work frequently with customers looking to completely exit their data center infrastructure and migrate to an OPEX model in the cloud. In these engagements, we identify risks and dependencies using a staged approach to ensure the integrity of data and functionality of applications. However, this migration or “evacuation” is not an end state. In fact, it is often the first major milestone on a client’s journey toward continuous improvement and optimization. It is also nearly impossible to do efficiently without modern technology and the cloud.

Modern cloud management mitigates risk and enables modernization

While some workloads and applications may be considered cloud-ready for a relatively straightforward lift and shift migration, they can usually benefit from refactoring, rearchitecting, or replatforming based on a thorough assessment of usage patterns. Cloud adoption on its own will only go so far to improve performance and organizational flexibility.

Effectual is a Modernization Service Provider that understands how to modernize applications, their metrics, operational costs, security implications, and compliance requirements

A modern digital strategy allows you to unlock the true capabilities of the cloud, increasing scalability, agility, efficiency, and one of the most critical benefits of any modernization initiative – improved security. Modernized technologies can also utilize cutting edge security protocols and continuous compliance tools that are simply not available with physical infrastructure.

Unlike traditional MSPs (Managed Service Providers) who manage on-premises servers in physical data centers, Effectual is a cloud-first Modernization Service Provider that understands how to modernize applications, their metrics, operational costs, security implications, and compliance requirements. When our development team finishes a project, our customers can Cloud Confidently™ knowing that their environment is in experienced hands for ongoing management.

Most importantly, the path to modernization is not necessarily linear, whether you are developing an application specifically for the cloud, refactoring or rearchitecting as part of a data center migration, or updating and securing an existing cloud environment. New ideas, priorities, and changes to the world we live in require that we adapt, innovate, and rethink our approach to solving business challenges in even the most uncertain times.

When your organization or team needs the power to pivot, we have the Modernization Engineers, systems, tools, and processes to support that change.

Ready to begin your modernization journey?
Contact us to get started.

Ryan Comingdeer is the Chief Cloud Architect at Effectual.

Using Proofs of Concept to Increase Your ROI


Not so long ago, R&D departments had to fight for internal resources and justify capital expenditures in order to explore new technologies. Developing on-premises solutions was expensive and time-consuming, and decisions were focused on ensuring success and avoiding failure.

In the past 5 years, cloud platforms have radically sped up the pace of innovation, offering companies of all sizes the ability to build, test, and scale solutions at minimal cost. Technology is now a tool to differentiate yourself from your competitors, increase your margins, and open up new markets.

Small investments, big payoffs

By committing only a small portion of your budget to R&D, you can now leverage plug and play cloud services to experiment and test Proofs of Concept (POCs) with potentially huge bottom line payoffs. For large companies, utilizing POCs requires a shift away from risk-averse waterfall development to an agile approach that embraces failure as a path to innovation.

Enterprise organizations can learn from entrepreneurs, who’ve been natural first adopters when it comes to cloud solutions. Startups aren’t afraid of using pay-as-you-go services to build quick POCs for validating markets, testing technical features, and collecting customer feedback. Far more comfortable with agile development, successful early stage companies like Effectual customer Warm Welcome are adept at taking calculated risks and viewing failure as an invitation for learning.

In contrast, enterprise customers may struggle at first to embrace an agile approach and accept failure as an opportunity for insight. As established businesses, they also make the mistake of assuming reputation alone will ensure successful outcomes and often downplay the importance of customer feedback. However, this changes quickly after companies gain experience with POCs and understand the value of testing their assumptions before committing to building out final solutions.

POC vs MVP: What’s the difference?

A Proof of Concept is the first phase of designing a software application. A POC allows you to quickly solve a business challenge for a specific use case in order to:

  • Evaluate tradeoffs
  • Measure costs
  • Test technical functionality
  • Collect user feedback 
  • Determine market acceptance

POCs are time-boxed (defined by a set number of hours), with clear KPIs (key performance indicators) for measuring your results. This keeps costs low and provides rapid insight into what changes need to be made before you invest significant resources to scale the solution.

POCs are rolled out to a controlled, focused group of users (“friends and family”) with the goal of quickly figuring out cost and technical issues. It’s not uncommon to go through 3-4 POCs before finding the one you’re ready to advance. Failure is an accepted and necessary part of this process.

For example, one of our large retail customers has dedicated $4k/month to its backlog pipeline for R&D. At the beginning of the year, we sat down with their team to identify 4-5 business problems the company wanted to tackle. For one particular business problem, we developed and tested two different POCs (one cloud-based, one on-premises) before finding a hybrid solution that struck the right balance between cost and functionality.

To minimize risk, they rolled out their hybrid POC to a single store location in order to collect user feedback. Only after making recommended changes did the company commit to moving forward with an MVP at multiple locations across several states. Within 18 months, they saw a significant return on their investment in both higher sales and increased customer retention.

A Minimum Viable Product (MVP) is a feature-boxed solution that turns your proven concept into a functional basic product you can test with a wider user base. While it resides outside of a critical business path, an MVP usually requires greater investment and takes longer to evaluate. The goal of an MVP is to:

  • Increase speed to market
  • Establish loyal users
  • Prove market demand
  • Collect broader customer feedback

Organizations of any size can use Proofs of Concept to ensure the fast delivery of a final product that meets the needs of customers and provides a measurable return on investment. Learn more about how a POC can drive your business forward.

Have an idea or project to discuss? Contact us to learn more.

AWS IoT Solutions Accelerate the Potential of Edge Computing


IoT is revolutionizing consumer markets, large-scale manufacturing and industrial applications at an incredible pace. In virtually every industry, these technologies are becoming synonymous with a competitive advantage and winning corporate strategies.

We’re witnessing the same trend with our own customers, as companies integrate IoT solutions with their offerings and deploy edge devices to improve customer experience, reduce costs, and expand opportunities.

Installing these smart devices and collecting data is relatively easy. However, processing, storing, analyzing, and protecting massive volumes of that data is where (Internet of) Things gets complicated.

As an AWS Premier Consulting Partner, Effectual guides customers on how to leverage Amazon Web Services (AWS) innovation for their IoT solutions. This includes building performant, cost-effective cloud architectures based on the 5 Pillars of the AWS Well-Architected Framework that scale quickly and securely process real-time streaming data. Most importantly, we apply AI and machine learning (ML) to provide clients with meaningful analytics that drive informed business decisions.

Two of the most common AWS services we deploy for IoT projects are AWS Lambda and Amazon DynamoDB.

AWS Lambda: Serverless computing for continuous scaling
A fully-managed platform, AWS Lambda runs code for your applications or backend services without requiring any server management or administration. It also scales automatically to workloads with a flexible consumption model where you pay for only the computing resources you consume. While Lambda is an excellent environment for any kind of rapid, scalable development, it’s ideal for startups and growing companies who need to conserve resources while scaling to meet demand.
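To illustrate the programming model (a generic, hypothetical sketch, not code from any customer project), a Lambda function is simply a handler that receives an event and returns a response; AWS provisions and scales the compute around it:

```python
# Minimal AWS Lambda handler sketch (hypothetical example).
# Lambda invokes this function once per event; AWS scales the underlying
# compute automatically, and you pay only for the time the code runs.

import json


def lambda_handler(event, context):
    """Count and acknowledge the records in an incoming event."""
    records = event.get("Records", [])
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": len(records)}),
    }
```

The same handler serves one request or thousands of concurrent requests without any capacity planning on your part.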

We successfully deployed AWS Lambda for a project with Oregon-based startup Wingo IoT. With an expanding pipeline of industrial customers and limited resources, Wingo needed a cost-efficient, flexible architecture for instant monitoring and powerful analytics. We used Lambda to build a custom NOC dashboard with a comprehensive view of real-time operations.

DynamoDB: Fast access with built-in security
We use DynamoDB with AWS services such as Amazon Kinesis and AWS Lambda to build key-value and document databases with unlimited capacity. Offering low latency and high performance at scale, DynamoDB can support over 10 trillion requests a day with secure backup and restore capabilities.
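As a minimal sketch of the key-value model (the table name, key schema, and attribute names below are illustrative assumptions), a sensor reading can be written as typed attributes through the low-level boto3 client:

```python
# Hypothetical sketch: storing IoT sensor readings in Amazon DynamoDB.
# Table name and attributes are illustrative, not from any real project.


def to_dynamodb_item(reading: dict) -> dict:
    """Convert a plain reading into DynamoDB's typed attribute format
    (S = string, N = number) used by the low-level client API."""
    return {
        "device_id": {"S": reading["device_id"]},
        "timestamp": {"N": str(reading["timestamp"])},
        "value": {"N": str(reading["value"])},
    }


def store_reading(reading: dict, table_name: str = "SensorReadings") -> None:
    """Write one reading; DynamoDB scales throughput transparently."""
    import boto3  # deferred so the pure helper above has no AWS dependency

    client = boto3.client("dynamodb")
    client.put_item(TableName=table_name, Item=to_dynamodb_item(reading))
```

Because writes are single-digit-millisecond operations, this pattern holds up even when a Kinesis stream is fanning out thousands of records per second to the table.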

When Effectual client Dialsmith needed a transactional database to handle the thousands of records per second created by its video survey tool, we used DynamoDB to solve its capacity constraints. The service also provided critical backup and restore capabilities for protecting its sensitive data.

In our experience, AWS IoT solutions both anchor and accelerate the potential of edge computing. Services such as AWS Lambda and Amazon DynamoDB can have a lasting, positive impact on your ability to scale profitability. Before you deploy IoT technologies or expand your fleet of devices, we recommend a thorough evaluation of the cloud-based tools and services available to support your growth.

Have an idea or project to discuss? Contact us to learn more.

5 Reasons Your Development Team Should be Using the Well-Architected Framework


Amazon Web Services (AWS) offers the most powerful platforms and innovative cloud technologies in the industry, helping you scale your business on demand, maximize efficiencies, minimize costs, and secure data. But in order to take full advantage of what AWS has to offer, you need to start with a clear understanding of your workload in the cloud.

How to Build Better Workloads
Whether you’re working with an internal team or an outsourced consulting partner, the AWS Well-Architected Framework is an educational tool that builds awareness of the steps and best practices for architecting on the AWS Cloud. We begin all of our development projects with a Well-Architected Review to give clients full visibility into their workload. This precise, comprehensive process provides them with essential insights for comparing strategies, evaluating options, and making informed decisions that add business value. Based on our experience, using well-architected best practices and design principles helps you:

1 – Plan for failure
One of the primary Well-Architected design principles is to architect for failure. This means knowing how to mitigate risks, eliminate downtime, prevent data loss, and protect against security threats. The Well-Architected process uncovers potential security and reliability vulnerabilities long before they happen so you can either avoid them or build a plan proactively for how you’ll respond if they do. This upfront effort can save you considerable time and resources. For example, having a disaster recovery plan in place can make it far easier for you to spin up another environment if something crashes.

Clients who plan for failure can shrink their Recovery Time Objective (downtime) and their Recovery Point Objective (data loss) by as much as 20x.

2 – Minimize surprises
Mitigating your risks also means minimizing surprises. The Well-Architected Framework offers an in-depth and comprehensive process for analyzing your choices and options as well as for evaluating how a given decision can impact your business. In our Well-Architected Reviews, we walk you through in-depth questions about your workload to create an accurate and holistic view of what lies ahead. When the review answers and recommendations are shared with all departments and stakeholders of a workload, they’re often surprised by the impacts of decisions on costs, performance, reliability and security.

3 – Understand the trade-offs of your decisions
Building well-architected workloads ensures you have options for responding to changing business requirements or external issues, with a structure for evaluating the trade-offs of every one of those choices. If you feel your application isn’t performant, you may have 10 different possible solutions for improving performance. Each one has a tradeoff, whether it be cost, maintainability, or more. The Well-Architected Framework can help your team decide the best option.

Identifying and executing refactoring options based on modern technologies and services can save up to 60% of architecture costs.

As an organization, you should never feel boxed in when it comes to options for improving your workload. The process and questions presented in the Well-Architected Framework can help both your technical and business departments look at all options and identify which ones will have the most positive business impact.

In 75% of the workloads we encounter, the technology department is making the decisions, which means there is no input from business stakeholders as to impacts.

4 – Develop KPIs to monitor the overall health of your application
Choosing KPIs that integrate both technical and business indicators gives you valuable insights into your application’s health and performance. With a Well-Architected approach, you can automate monitoring and set up alarms to notify you of any deviance from expected performance. Once you’ve established this baseline, you can start exploring ways to improve your workload.
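As an example of codifying such a KPI (the function name, threshold, and notification topic below are hypothetical), a p95 latency alarm on a Lambda-backed workload can be defined once and created through the CloudWatch API:

```python
# Hypothetical sketch: turning a latency KPI into a CloudWatch alarm.
# Function name, threshold, and SNS topic are illustrative assumptions.


def latency_alarm_params(function_name: str, threshold_ms: float,
                         sns_topic_arn: str) -> dict:
    """Build put_metric_alarm parameters for a p95 latency KPI,
    alerting when the established baseline is exceeded."""
    return {
        "AlarmName": f"{function_name}-p95-latency",
        "Namespace": "AWS/Lambda",
        "MetricName": "Duration",
        "Dimensions": [{"Name": "FunctionName", "Value": function_name}],
        "ExtendedStatistic": "p95",
        "Period": 300,            # evaluate in 5-minute windows
        "EvaluationPeriods": 3,   # alarm after 15 minutes out of range
        "Threshold": threshold_ms,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],
    }


def create_alarm(**kwargs) -> None:
    import boto3  # deferred so the pure helper above has no AWS dependency

    boto3.client("cloudwatch").put_metric_alarm(**latency_alarm_params(**kwargs))
```

Keeping the alarm definition in code means the KPI baseline is versioned and reviewable by business stakeholders, not just the IT team.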

KPIs should be driven by the business and should include all areas of your organization, including Security, Finance, Operations, IT, and Sales. The Well-Architected Framework provides a well-rounded perspective of workload health.

After a Well-Architected Review, it’s common to have 90% of KPIs defining the health of your application come from other business departments – not just from the IT team.

5 – Align your business requirements with engineering goals
Following well-architected best practices also facilitates a DevOps approach that fosters close collaboration between business managers and engineers. When the two are communicating effectively, they understand both the engineering concepts and the business impacts of their decisions. This saves time and resources and leads to more holistic solutions.

To fully leverage the AWS Cloud, make sure your development team has a strong foundation in the Well-Architected Framework. You’ll be able to build better workloads for the cloud that can take your business further, faster.

Have an idea or project to discuss? Contact us to learn more.

Enabling IT Modernization with VMware Cloud on AWS


Cloud and virtualization technologies offer a broad range of platform and infrastructure options to help organizations address their operational needs, no matter how complex or unique, and reduce their dependence on traditional data centers.

As the demand for cloud and cloud-compatible services continues to grow across departments within organizations, cloud adoption rates are steadily rising and IT decision makers are realizing that they no longer need to be solely reliant on physical data centers. This has led countless organizations to shrink their data center footprints.

The benefits unlocked by VMC on AWS can have significant impacts on your organization…including the impressive performance of a VMware environment sitting on top of the AWS backbone.

VMware Cloud on AWS is unique in bridging this gap, as it utilizes the same skill sets many organizations have in-house to manage their existing VMware environments. Sure, there are considerations when migrating, but ultimately the biggest change in moving to VMware Cloud (VMC) on AWS is the underlying location of the software-defined data center (SDDC) within vCenter. The benefits unlocked by VMC on AWS can have significant impacts on your organization – eliminating the need to worry about the security and maintenance of physical infrastructure (and the associated hands-on work to address device failure) while delivering the impressive performance of a VMware environment sitting on top of the AWS backbone.

Technology That Suits Your Needs

Full and partial data center evacuations are becoming increasingly common and, while there are instances of repatriation (organizations moving workloads from the cloud back to the data center), the majority of organizations are sticking with “cloud-first” policies to gain and maintain business agility. Sometimes, however, even a company that’s begun their IT modernization efforts may still have systems and applications hosted on-premises or in a data center.

This may seem to indicate some hesitance to fully adopt the cloud, but it’s usually due to long-term strategy, technical barriers to native cloud adoption, or misconceptions about cloud security and compliance requirements. It’s rare to find an organization that isn’t loaded with technical debt, fully committed to specific software, tied to lengthy data center commitments – or all of the above.

Mission-critical legacy applications may not be compatible with the cloud, and organizations may lack the resources or expertise to refactor those applications so that they can properly function in a native cloud environment. Or perhaps there’s a long-term digital strategy to eventually move all systems and applications to the cloud but, in the meantime, they’re still leashed to the data center. Scenarios like these, and many more, are ideal for VMware Cloud on AWS, which allows organizations to easily migrate legacy VMware workloads with minimal refactoring or rearchitecting, or extend their existing data center systems to the cloud.

New, But Familiar

VMware Cloud on AWS was developed in collaboration between VMware, a pioneer and global leader in server virtualization, and AWS, the leading public cloud provider, to seamlessly extend on-premises vSphere environments to SDDCs built on AWS. VMC on AWS makes it easier for organizations to begin or expand their public cloud adoption by enabling lift and shift migration capabilities for applications running in the data center or on-premises VMware environments.

VMC on AWS also has a relatively minimal learning curve for in-house operations staff because, despite being hosted on AWS, it’s still VMware vSphere at its core and the environments are managed using the vCenter management console. This familiar toolset allows IT teams to begin utilizing the cloud without any major workforce retraining and upskilling initiatives because they can still use VMware’s suite of server virtualization and management tools.

The Right Tools for the Job

The vSphere suite of server virtualization products and vCenter management console may be familiar, but they’re far from outdated or limited. VMware continues to invest in the future, strengthening its cloud and virtualization portfolio by enhancing their existing offerings and developing additional services and tools to further enable IT modernization and data center evacuations.

These efforts mean we can expect VMware to continue playing a major role in helping organizations achieve and maintain agility by ensuring secure workload mobility across platforms, from public cloud to private cloud to hardware.

Cloud adoption doesn’t happen overnight, and organizations have to ensure disparate technologies mesh well.

VMware HCX essentially consists of a series of integrations that establish connectivity across systems and platforms, allowing workloads to be migrated without any code or configuration changes, and it is regularly updated to enhance its functionality. HCX can perform live migrations using vMotion and bulk migrations of up to 100 VMs at a time. It can also provide a secure, accelerated network extension which, beyond enabling a seamless migration experience and minimizing the operational impacts usually associated with migrating workloads, helps improve the environment’s resiliency through workload rebalancing. This same functionality plays a critical role in disaster recovery and business continuity by replicating data across multiple locations.

A Thoughtful Approach to Modernization

Whether an organization is prioritizing the optimization of spend, revenue growth, streamlining operations, or revitalizing and engaging their workforce, a mature and robust digital strategy should be at the heart of the “how.” Cloud adoption will not solve these business challenges on its own – that requires forethought, planning, and expertise.

It can be challenging to make the right determinations about what’s best for your own unique business needs without a clear understanding of those needs. And for organizations still relying on old school hardware-based systems, the decision to remain with on-premises deployments, move to the cloud, or lift and shift to a platform like VMC on AWS requires a comprehensive assessment of their applications, hardware, and any existing data center/real estate commitments.

Internal teams may not have the specific technical expertise, experience, or availability to develop suitable digital strategies or perform effective assessments, especially as they focus on their primary day to day responsibilities. As an AWS Premier Consulting Partner with the VMware Master Services Competency in VMware Cloud on AWS, Effectual has established its expertise in VMware Cloud on AWS, making us an ideal partner to help ease that burden.

Cloud adoption doesn’t happen overnight, and organizations have to ensure disparate technologies, which may be at very different stages of their respective lifecycles, mesh well. They need to develop an appropriate modernization strategy and determine the best fit for each application and workload. The right partner can play a critical role in successfully overcoming these challenges.

Hetal Patel is a Senior VMware Technical Lead and co-founder at Effectual, Inc.

FISMA Moderate Requirements met with AWS Infrastructure


Effectual led a Federal Government client in their journey from on-premises infrastructure to a secure cloud environment in AWS.

The Challenge

This Federal Government customer required a move from its on-premises infrastructure to a centralized cloud environment. This move was predicated on the requirement for increased security, flexibility in provisioning infrastructure, and a refresh of technology. The new AWS infrastructure also had to be assessed at a FISMA Moderate level for production.

The Solution

Our team led the discovery, architecture, and implementation of the agency’s new infrastructure. We designed a multi-region, international architecture that allowed end users to quickly access virtual desktops at the regions closest to them. The centralized management and region-based architecture allowed devices to move outside the boundary, the virtual desktop infrastructure scaled as users joined around the world, and the agency was able to provision lower-cost technology, such as thin clients, to achieve a refresh.

The Benefits

Increased Security

Our team supported the agency in its ATO efforts by provisioning compliant infrastructure and services in alignment with FISMA Moderate controls, then produced documentation supporting the architecture, allowing the agency to get a full ATO.

Provisioning Infrastructure

Our AWS-based architecture and deployment supported configuration of infrastructure to meet minimum workloads, which then scaled as users came online. Additionally, multiple user desktops could be provisioned on a single server, cutting down on associated costs.

Network Efficiency

The agency’s network needed to be overhauled as a result of security concerns. With the AWS backbone and multi-region architecture, users experienced a decrease in latency and the zero-trust model improved network security.

Satellite Imagery Analysis Simplified with Serverless Infrastructure


Effectual worked with a Federal Government customer to provide a mission-critical solution that simplified its land-satellite sensor processing software, which captures imagery of the Earth’s land surface.

The images provide uninterrupted data to help land managers and policymakers make informed decisions about our natural resources and the environment.

The Challenge

A Federal Government customer looked to us to migrate its on-premises infrastructure to a Serverless infrastructure in AWS to ensure cost optimization, availability, and application performance while logging satellite images.

The Solution

Our team implemented AWS Lambda, AWS Batch, Kubernetes, and Amazon EKS. This ensured the client’s ability to collect satellite images that would be used to help scientists track land change due to climate, urbanization, drought, wildfire, and biomass changes.

The Benefits

Cost Optimization

We implemented AWS Lambda to run code without provisioning or managing servers. By implementing serverless infrastructure, the client was able to reduce costs by 80%.


Availability

Our team implemented Kubernetes to provide automated container orchestration and higher availability across multiple regions. This allowed users – both domestic and abroad – to access satellite photos more efficiently via the web for personal and private use.

Application Performance

We set up serverless storage, reducing the client’s satellite imagery retrieval process from 2 weeks to 2 hours.

Application Migration: Service Employees International Union


When flooding took out SEIU’s New York data center, the national nonprofit needed a plan for migrating to the AWS cloud.

Through third-party and cloud-native tools, we provided the infrastructure, resources, and products necessary for an efficient application migration.


The Challenge

The national nonprofit serves branches of the organization with centralized IT based out of its New York offices. When NYC was hit by Hurricane Sandy in 2012, the organization’s data center, housed in the basement of the building, flooded. The resulting outage took a week to recover from. The nonprofit needed a cloud-based backup solution and rapid application migration to ensure it would be prepared for future disasters.


The Solution

We began with an assessment of the organization’s data center posture, then created a migration plan and proposed architecture to support the nonprofit moving forward in AWS. We configured VPCs, subnets, and networking, and set up access policies. We also connected a third-party disaster recovery service to ensure consistent syncing of information between on-premises and cloud servers.

The Benefits

Peace of Mind

After going without its critical IT infrastructure for a week, the nonprofit had confidence its cloud infrastructure would be highly available.

Data Replication

The AWS infrastructure included VPN connectivity to the on-premises network in order to replicate Active Directory and SQL databases to ensure ongoing operations.

VPN Tunneling

In addition to an initial VPN connection, our team configured remote VPN connectivity from field offices in seven east coast cities to ensure all users could access the environment in the event of a failure.

GenomeNext DevOps Process


GenomeNext is a genomic informatics company dedicated to accelerating the promise and capability of predictive medicine and scientific discovery. It commercializes genomic analysis tools and integrated systems for the evaluation of genetic variation and function.

The advanced informatics and data management solutions are designed to simplify, expedite and enhance genetic analysis workflows. GenomeNext solutions provide the market with genomic data and analysis at an unprecedented combination of performance, quality, cost and scale without requiring the investment in high-performance computing resources and specialized personnel. The proprietary platforms address a broad range of highly interconnected markets, including sequencing, genotyping, gene expression, and molecular diagnostics. GenomeNext customers include leading genomic research centers, academic institutions, government laboratories, and clinical research organizations, as well as pharmaceutical, biotechnology, agrigenomics, and consumer genomics companies.

The Challenge

GenomeNext needed a more efficient way to develop and deploy application changes to its Amazon Web Services Genomics Cloud Platform while maintaining a high level of security and compliance.

The Solution

We worked with GenomeNext to design an efficient development and agile management process, set up internal DevOps software and AWS infrastructure components, map processes to the appropriate security and compliance controls, integrate third-party DevOps tools with the GenomeNext Cloud platform, implement development lifecycle environments (Dev, QA, and Prod) on AWS, monitor and reduce AWS costs, and architect for high availability and disaster recovery. Our solution enhanced GenomeNext’s ability to quickly and securely roll out application and infrastructure changes with minimal to zero downtime, using tools such as Elastic Load Balancing, Amazon CloudWatch, AWS CloudFormation, and AWS CodeDeploy.
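The zero-downtime rollout the paragraph above describes follows a standard rolling pattern: take one instance out of the load balancer, deploy the new revision, health-check it, and put it back before touching the next. This in-process simulation is only a sketch of that pattern; the instance names are hypothetical, and in production CodeDeploy and Elastic Load Balancing perform these steps.

```python
# Sketch of a rolling deployment: the fleet keeps serving traffic because
# only one instance is ever out of rotation at a time.
def rolling_deploy(instances, deploy, healthy):
    """Deploy to one instance at a time; raise to trigger a rollback."""
    in_service = set(instances)
    for inst in instances:
        in_service.discard(inst)      # deregister from the load balancer
        deploy(inst)                  # push the new revision
        if not healthy(inst):         # health check before rejoining
            raise RuntimeError(f"rollback: {inst} failed health check")
        in_service.add(inst)          # back into rotation
    return in_service

deployed = []
fleet = ["web-1", "web-2", "web-3"]   # hypothetical instance names
result = rolling_deploy(fleet, deploy=deployed.append, healthy=lambda i: True)
print(sorted(result))
```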

The Benefits


Deployment Automation

GenomeNext realized the advantages of DevOps automation through a significant increase in deployment frequency, a dramatic decrease in deployment failures, immediate recovery from failed deployments, and a reduction in the time required for changes.

Disaster Recovery

By combining AWS and DevOps, GenomeNext can automate the deployment of an exact copy of its Production solution within minutes into any AWS region, allowing it to meet its recovery time objectives.

Cost Savings

GenomeNext realized cost savings by utilizing DevOps and AWS: a smaller staff, higher product quality, reduced deployment complexity, and faster time to market.

Supporting the Delivery of Early Warning Signs for Earthquakes

Effectual delivered a mission-critical solution to a Federal Government client, ensuring the delivery of life-saving early warning alert notifications for earthquakes and other natural disasters across multiple geographical locations.

This could not have been done without a Cloud-based solution to ensure a resilient system.

The Challenge

This Federal Government customer required a move from its on-premises infrastructure to a centralized Cloud environment. The client looked to our team to design a highly available, fault-tolerant architecture able to meet workload demands across multiple geographical locations immediately after a natural disaster. The solution required improved resilience and redundancy capabilities, application performance, and control monitoring.

The Solution

Our team built out a highly available and scalable infrastructure to meet demand in the wake of a disaster. We utilized the customer’s containerized solution and created a pipeline leveraging a GitLab Runner in Amazon Web Services (AWS) to manage the Amazon Elastic Kubernetes Service (EKS) deployments. This ensured the client’s ability to deliver early warnings for natural disasters through their application.
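A pipeline of the shape described above has two essential stages: build and push a container image, then have the runner roll it out to EKS. The sketch below assembles that pipeline definition in Python (JSON is valid YAML, so it could be written out as a `.gitlab-ci.yml`); the stage names, cluster name, and deployment name are illustrative assumptions, not the client’s actual configuration.

```python
import json

# Sketch of a GitLab CI pipeline that builds an image and deploys it to EKS.
# Cluster/deployment names and the exact commands are illustrative.
pipeline = {
    "stages": ["build", "deploy"],
    "build-image": {
        "stage": "build",
        "script": [
            "docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .",
            "docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA",
        ],
    },
    "deploy-eks": {
        "stage": "deploy",
        "script": [
            "aws eks update-kubeconfig --name alert-cluster",
            "kubectl set image deployment/alert-app app=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA",
        ],
    },
}

# JSON output is also valid YAML, so this doubles as a .gitlab-ci.yml draft.
print(json.dumps(pipeline, indent=2))
```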

The Benefits


High Availability

Our team configured Amazon CloudWatch metrics to identify a surge in traffic in the event of a disaster, with the resilience of a fully managed AWS service. Kubernetes was implemented to provide automated container orchestration and higher availability across multiple regions.
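Surge detection of the kind described above boils down to a CloudWatch alarm: a metric, a threshold, and a number of consecutive breaching periods. The parameters below (alarm name, namespace, threshold) are illustrative assumptions, and the evaluation function is a simplified local model of how such an alarm fires.

```python
# Sketch of a CloudWatch alarm watching request volume for a traffic surge.
# All names and thresholds are illustrative placeholders.
surge_alarm = {
    "AlarmName": "alert-traffic-surge",
    "Namespace": "AWS/ApplicationELB",
    "MetricName": "RequestCount",
    "Statistic": "Sum",
    "Period": 60,            # one-minute evaluation windows
    "EvaluationPeriods": 3,  # three consecutive breaches before alarming
    "Threshold": 10000,
    "ComparisonOperator": "GreaterThanThreshold",
}

def breaches(datapoints, alarm):
    """Simplified alarm logic: fire after N consecutive threshold breaches."""
    run = 0
    for value in datapoints:
        run = run + 1 if value > alarm["Threshold"] else 0
        if run >= alarm["EvaluationPeriods"]:
            return True
    return False

print(breaches([2000, 15000, 16000, 17000], surge_alarm))  # True
```

In practice these keys map onto CloudWatch’s `PutMetricAlarm` parameters, with an alarm action notifying the team or triggering scaling.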

Application Performance

We created a proprietary AWS-hosted Git solution to handle all of the linting, testing, and delivery of code. Our solution increased the rate at which the client released updates to the solution by 90%.

Control Monitoring

We deployed a GitLab Runner in conjunction with GitLab Continuous Integration to ensure all applications were provisioned through a pipeline. These changes provided strict version control and expedited developer updates.

Predictive Analytics: Volcanic Activity Analyzed Through Moving Magma

Effectual delivered a mission-critical solution to a federal government client that ensured their sensor processing software was able to predict volcanic activity through moving magma.

This information is used to help scientists forecast seismic activity over multiple geographical locations. This could not have been done without a Cloud-based solution to ensure a resilient system.

The Challenge

Our customer required a move from its on-premises infrastructure to a centralized Cloud environment in AWS. They looked to our team to design a highly available, fault-tolerant architecture able to meet workload demands across many geographical locations immediately after a natural disaster.

The Solution

We provided a highly available and scalable infrastructure that ensured efficiency in the wake of volcanic eruptions and other natural disasters. This sensor processing solution delivered predictive analytics, resilience, and scalability.

The Benefits

Predictive Analytics

We worked with the customer to create a solution for collecting volcano data that could be analyzed and fed into machine learning models to better predict when volcanoes will erupt.


Resilience

Our team configured Amazon CloudWatch metrics to identify a surge in traffic in the event of a disaster. Kubernetes was implemented to provide automated container orchestration and higher availability across multiple regions.


Scalability

We configured EC2 instances to ensure adequate capacity to meet traffic and compute demands. Our team automated launch configurations to allow the client to quickly launch and scale application servers in target environments in the future.

Bird Conservation Science Enabled by Automated Monitoring and Analysis of Migration Patterns

Effectual led a Federal Government client in need of automation, reliability, and efficiency for their bird identification website.

The customer supports the collection, archiving, management and dissemination of information from banded and marked birds in North America. This information is used to monitor the status and trends of resident and migratory bird populations.

The Challenge

This Federal Government customer required a move from its on-premises infrastructure to a centralized cloud environment. The client looked to our team to redesign their website, creating a system that would produce automated checks to save time and manual effort when registering banded and marked birds into the database.

The Solution

Our team assisted the customer in creating a system that would require minimal effort to keep running for years. The system saved time and manual effort by using Amazon Elastic Compute Cloud (Amazon EC2) to automate cron jobs for repetitive tasks, pushing all submitted web surveys from bird hunters and enthusiasts to the on-premises database. When banded birds were checked in, the system could verify the identification automatically, eliminating the need to check that information manually.
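The scheduled sync described above can be sketched as a cron entry plus a small push routine that skips records already registered. The schedule, field names, and sample records below are illustrative assumptions, not the actual database schema.

```python
# Sketch of a cron-driven sync: push newly submitted band reports to the
# database and skip duplicates. Schedule and fields are illustrative.
CRON_SCHEDULE = "*/15 * * * *"  # e.g. run every 15 minutes on the EC2 host

def sync_surveys(submitted, database):
    """Insert each unseen band report; return the IDs that were new."""
    pushed = []
    for report in submitted:
        band_id = report["band_id"]
        if band_id not in database:      # automated duplicate check
            database[band_id] = report
            pushed.append(band_id)
    return pushed

db = {"B-100": {"band_id": "B-100", "species": "Mallard"}}
new = sync_surveys(
    [{"band_id": "B-100", "species": "Mallard"},       # already registered
     {"band_id": "B-101", "species": "Snow Goose"}],   # new report
    db,
)
print(new)  # ['B-101']
```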

The Benefits


Automation

We utilized Amazon EC2 to automate database syncing, allowing the bird banding lab to be more efficient when a bird was reported on their website. The client no longer needed to manually log and input the bird species. AWS CloudFormation was implemented to reduce manual work when building environments, improving productivity when debugging issues.


Continuous Integration

We used GitLab Continuous Integration in conjunction with GitLab Continuous Deployment to check code for errors, expediting developer changes.


Application Performance

Our team implemented Amazon CloudWatch Events to trigger Lambda functions in a serverless workflow. Without having to provision or manage servers, the client kept functions warm with a scheduled CloudWatch Event, reducing response times from 3 seconds to a few hundred milliseconds.
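The warm-keeping pattern above works because the scheduled event pings the function regularly, and the handler short-circuits those pings before doing any real work. This is a generic sketch of that pattern; the event fields checked and the placeholder response are assumptions, not the client’s actual handler.

```python
# Sketch of a Lambda handler that recognizes scheduled warm-up pings from
# CloudWatch Events and returns immediately, keeping an instance warm.
def handler(event, context=None):
    if event.get("source") == "aws.events":   # scheduled warm-up ping
        return {"warmed": True}               # skip all real work
    # ... real request processing would go here (placeholder) ...
    return {"statusCode": 200, "body": "processed"}

print(handler({"source": "aws.events"}))   # warm-up path
print(handler({"path": "/report"}))        # normal request path
```

The corresponding CloudWatch Events rule is just a schedule expression such as `rate(5 minutes)` with the function as its target.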

Ensuring Least Privilege Access: Implementing an Active Directory Federation Service

Effectual led the implementation of an enterprise-grade Active Directory Federation Service (ADFS) for a large Federal Government client.

Effectual enabled reliable and secure cyberspace capability by providing a highly innovative network architecture, engineering, integration, and simulation services with unrivaled expertise and commitment.

The Challenge

The client looked to our team to transform its highly disparate environment into a highly collaborative one by implementing federated access to the Amazon Web Services environment.

The Solution

We worked with the client to set up AWS Identity and Access Management (IAM), federated sign-in through Active Directory (AD), and Active Directory Federation Services (ADFS). This ensured least privilege access for client users.
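At the center of this setup is an IAM role whose trust policy allows ADFS-authenticated users to assume it via SAML. The sketch below builds such a trust policy; the account ID and SAML provider name are illustrative placeholders, not the client’s values.

```python
import json

# Sketch of an IAM role trust policy for SAML federation through ADFS.
# Account ID and provider name are illustrative placeholders.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # The SAML identity provider registered in IAM for ADFS
        "Principal": {
            "Federated": "arn:aws:iam::123456789012:saml-provider/ADFS"
        },
        "Action": "sts:AssumeRoleWithSAML",
        # Only honor assertions addressed to the AWS sign-in endpoint
        "Condition": {
            "StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}
        },
    }],
}

print(json.dumps(trust_policy, indent=2))
```

Least privilege then comes from the permissions policy attached to the role, which grants only what each federated group actually needs.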

The Benefits


Remote Collaboration

Our team enabled reliable, collaborative connectivity for a cadre of remote workers who needed access to the system, utilizing the ADFS PIV card solution.

Increased Security

We were able to meet all security requirements by using a federated solution, allowing the client to set permissions and access levels across different systems. The federated solution also improved auditing and credential management.


Operational Efficiency

We implemented AWS CloudFormation to create a template used whenever multiple accounts register in the system. This increased efficiency and ensures consistent configurations over time.

A Rundown on re:Invent 2019 Pt 2

Members of the engineering team had the opportunity to attend Amazon Web Services’ annual re:Invent conference in Las Vegas.

Every year, AWS announces dozens of customer-sought features at the event (and some leading up to the event in what the community has dubbed “pre:Invent”). In this blog, the second in a two-part series on re:Invent (you can read the first here), we’ll touch on new announcements from the 2019 conference:

  1. Amazon excited data scientists with the announcement of Amazon SageMaker Studio which provides an easier experience for building, training, debugging, deploying and monitoring machine learning models with an integrated development environment (IDE).
  2. Amazon Athena federated queries turn almost any data source into a query-able data repository, opening opportunities to gather insights based on data from many different sources in different formats.
  3. Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities by using machine learning, statistical analysis, and graph theory.
  4. Automate code reviews with Amazon CodeGuru, a machine learning service which helps development teams identify the most expensive lines of code in their applications and receive intelligent recommendations on how to fix or improve their code.
  5. Amazon Simple Storage Service (S3) adds additional security measures and flexibility to share data with others by introducing Amazon S3 Access Points.

With all the new features coming out of re:Invent, it was difficult to narrow down our favorites, but our team is quickly mastering them and already utilizing them to deliver first-class cloud infrastructure to our clients.

A Rundown on re:Invent 2019 Pt 1

Members of the engineering team had the opportunity to attend Amazon Web Services’ annual re:Invent conference in Las Vegas.

Every year, AWS announces dozens of customer-sought features at the event (and some leading up to the event in what the community has dubbed “pre:Invent”). This blog is the first in a two-part series related to re:Invent announcements from the 2019 conference:

  1. AWS Identity and Access Management (IAM) Access Analyzer provides an easy way to check permissions across the many policies provided at the resource level, principal level, and across accounts.
  2. A feature requested by customers since Amazon Elastic Kubernetes Service (EKS) was announced last year, AWS Fargate support for Amazon Elastic Kubernetes Service will revolutionize the way organizations use the popular Kubernetes container management tools in the cloud and radically reduce the maintenance required for running Kubernetes on AWS.
  3. A pre:Invent announcement that you might have missed if you blinked, CloudFormation Registry and third-party resource support adds the ability to manage virtually any third-party application resource using CloudFormation, an infrastructure as code tool helping organizations iterate faster with repeatable cloud resource definitions stored as code.
  4. Andy Jassy rocked the re:Invent stage in 2018 when he announced AWS Outposts, a new offering to take AWS’ computing capacity into your own data center. This service was made available in 2019, opening a wealth of potential for applications which need to stay local for regulatory or performance purposes.
  5. The Amazon Builder’s Library is a curated list of content written by Amazon’s own technical leaders to illustrate how Amazon builds world-class services and infrastructure.

With all the new features coming out of re:Invent, it was difficult to narrow down our favorites, but our team is quickly mastering them and already utilizing them to deliver first-class cloud infrastructure to our clients.

Data Analytics Offer Insights for Tsunami Disaster Response

Effectual worked with a federal government customer to provide information for local land-use and emergency response planning to avoid development in hazardous zones and to plan evacuation routes to communities along low-lying coastlines vulnerable to tsunamis.

The Challenge

The customer engaged our team to quickly and effectively move their public-facing web applications and internal applications to the AWS cloud for greater resiliency and availability as well as to implement real-time logging, data analytics, and continuous monitoring for tsunami data.

The Solution

To collect data that would help scientists understand tsunamis and develop effective strategies for improving tsunami preparedness and disaster response, our team implemented a solution utilizing Amazon CloudWatch, AWS CloudTrail, alarms, and serverless storage.

The Benefits

Continuous Monitoring

We implemented Amazon CloudWatch to schedule data collection that self-triggers when a tsunami is detected. This continuous monitoring and observability service allows you to detect anomalous behavior in environments, set alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, and discover insights to keep applications running smoothly.

Data Analytics

By deploying AWS CloudTrail, we provided the customer greater access to critical tsunami data analytics, helping scientists understand the sources of local tsunamis and mitigate the impacts of future events.

Real Time Logging

Our team set up serverless storage to collect data from seismic networks and process key components of tsunami impact.

Education and the Cloud

As cloud computing continues to grow within the State and Local Government industry, it has become increasingly popular in the Education industry as well.

AWS started an initiative called AWS Educate to provide students and educators with the training and resources needed for cloud-related learning. Cloud computing skills are in high demand throughout the state of Texas, especially as an increasing number of state and local government agencies embark on migrating to the cloud. Government migration to the cloud has been a slow process, but the education sector is ahead of it, driven by demand from students, teachers, faculty, staff, and parents who need access to critical information from any device, anywhere. Educators benefit by migrating to the cloud: it is cost efficient, offers stable data storage, provides development and test environments, eases collaboration, enhances security without add-on applications, simplifies application hosting, minimizes resource costs, and speeds implementation and time-to-value.

With all the capabilities of Cloud environments, the Education industry still has a long way to go. There are certain school districts, and even Higher Education institutions, that do not have the same level of access as their counterparts. Cloud vendors could make a difference and solidify cloud adoption by offering cloud education to urban neighborhood schools with laptops, computers, and access to training and certifications. As a start, the three major Cloud providers all offer cloud education assistance to students.

Given the rapid advancement of the IT industry, I encourage other young minorities, including my daughter, to pursue a career in technology. Children are the future, and Cloud platforms will be the leading solution across all markets.

We offer a bundled package for new users that includes an assessment of their current infrastructure, which can be beneficial to any Higher Education institution or K-12 organization. We can build the future together and keep rising to greater heights!

Reach out to Thy Williams to learn more about our capabilities and discuss our starter package.

It’s Time for Pharma to Embrace the Cloud

Pharmaceutical companies have long been familiar with the pros and cons of cloud vs. non-cloud environments. The same discussions took place when companies in other industries began transitioning from on-premises to outsourced providers.

However, the pharmaceutical industry, and the data it manages, falls within the scope of the Food and Drug Administration (FDA). With the FDA’s purview increasing over the years and its compliance oversight globalizing (now covering more than 150 countries exporting FDA‑regulated products to the United States), the agency has put more of the onus of following regulations on the pharmaceutical companies themselves.

Enhancing Security Through Strategy and Architecture

In response to the FDA asking questions regarding the Code of Federal Regulations Title 21, Part 11 (CFR 21 Part 11), companies complying with GxP regulations have to ask themselves: “What risk level is acceptable for my business?” Often, this becomes a paralyzing exercise fraught with an unwillingness to change or, at best, test runs of cloud technology in safe areas like dev/test that are ultimately never implemented. This inaction leaves them behind more agile competitors who have clear, well-documented policies around adopting cloud technologies without adding significant risk. Lacking a defined cloud initiative does something that many companies may find surprising – it increases their risk and vulnerability as bad actors, security attacks, and attempts at gaining access to sensitive data become more sophisticated.

“What risk level is acceptable for my business?”
This often becomes a paralyzing exercise fraught with unwillingness to change.

Well-architected cloud environments are the best solution to keep up with those security challenges. According to Gartner, “…through 2020, public cloud infrastructure as a service (IaaS) workloads will suffer at least 60 percent fewer security incidents than those in traditional data centers.” This additional security is the result of the major cloud platform providers (AWS, Azure, Google, and Alibaba) having a virtually unlimited budget and tight controls of the underlying platform. While they do provide a secure platform, it is still up to the users to architect secure environments. Gartner also states: “through 2022, at least 95 percent of cloud security failures will be the customer’s fault.”

The Way Forward

So, what can you do to ensure that your FDA-regulated business remains competitive and secure in a world where change is constant, and breaches happen daily? The first step is also one of the most important: Secure the understanding and sponsorship of the entire executive team.

There should be unanimous and clear support from the executive team, and a realistic understanding of the benefits of a cloud solution. Without their support, any adoption challenges may cause the project to stall, create doubt, or even lead to abandoning your cloud initiatives altogether.

Once you have the executive team’s support, a company-wide policy for cloud initiatives needs to be developed. This policy should be created by those with a deep knowledge of cloud computing to take full advantage of the appropriate cloud services for your business requirements. At this point, engaging with a managed service provider or consultant can be highly beneficial and ensure that your cloud initiatives are realistic and follow best practices for cost, security, and compliance requirements.

Developing Effective Adoption Policies

At minimum, a cloud adoption policy should address security and compliance requirements, workload elasticity and scaling demands, departmental ownership and responsibilities, risk assessment and remediation methodologies, and critical dependencies. In addition, you should also consider addressing storage retention, disaster recovery, or business continuity. The process of developing these comprehensive adoption policies allows your organization to gain a better understanding of how the cloud fits into each aspect of your business, while providing clear goals for your teams to pursue.

Having a clearly defined objective is best practice for implementing a cloud solution, but being too focused on the minutiae can lead to tunnel vision and increases the likelihood of creating an inflexible adoption plan. Designing a plan that functions more as a framework or a set of guidelines than a codified set of instructions, in a sense mirroring the flexible nature of the cloud, will help prevent your teams from losing sight of the advantages of cloud services or hindering innovation.

Another common pitfall to cloud adoption is the tendency to apply current, non-cloud policy to your cloud adoption initiatives. Adherence to legacy IT policies will prove challenging to cloud adoption and could make it impossible to fully realize the advantages of moving to a cloud solution. And outdated approaches could even result in greater costs, poor performance, and poorly secured environments. These risks can all be addressed with appropriate cloud-based policies that foster cloud-first approaches to new initiatives.

Becoming a secure, cloud-enabled organization requires consistent diligence from your internal teams and continuous adherence to the company cloud policy. In the end, the most significant risks to the security of your infrastructure are tied to your own policies and oversight, and the continued security of your cloud and data will require the involvement and cooperation of your entire organization. Clear communication and targeted training will help your teams understand their role in organizational security.

An Outsider’s Expertise

If you’re not sure about the effectiveness of your approach to cloud adoption, bringing in a third party to assist with policy creation or implementation can help save time and money while ensuring that best practice security is built into your approach. Outside organizations can also provide valuable assistance if you’ve already implemented cloud solutions, so it’s never too late to get guidance and insight from experts who can point out where processes or solutions can be improved, corrected, or optimized to meet your specific business requirements.

These third-party engagements have proven to be so useful that AWS has created the Well-Architected Framework and an associated Well-Architected Review program that gives their clients an incentive to have a certified third party review and then optimize their AWS solution (learn more about Effectual’s Well-Architected Review offering). Organizations such as the Society of Quality Assurance and the Computer Validation & Information Technology Compliance (CVIC) group (disclosure: I am a member of the CVIC) are also discussing these issues to provide guidance and best practices for Quality Assurance professionals.

Outside professional and managed services can provide an immense level of assistance through an objective assessment of your organization’s needs. Their focused expertise on all things cloud will lighten the load on your internal IT teams, help ease any fears you may have about cloud adoption, discover potential savings, and provide guidance to fortify the security of your cloud solution.

Mark Kallback is a Senior Account Executive at Effectual, Inc.

Serverless Infrastructure Enables Data Access Related to Environmental Issues

The Challenge

This Federal Government customer looked to our team to migrate its on-premises infrastructure to a serverless infrastructure on AWS. The client needed a centralized data catalog, a user management solution, and data access for environmental issues.

The Solution

We supported the client with a serverless solution that consisted of Amazon API Gateway, Amazon Cognito User Pools, AWS Lambda, and AWS Step Functions. This ensured the customer’s ability to make high-volume, complex data accessible to stakeholders, policymakers, and managers to facilitate data-driven conversations about environmental issues in a secure setting.

The Benefits

Application Performance

Our team implemented API Gateway to handle the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls to process any surge of traffic on its website.

User Identification

We implemented Amazon Cognito User Pools for control over user authentication and user access for the website. This allowed for secure token handling and management of authenticated users from all identity providers.

Cost Optimization

We implemented Lambda functions to run code in a serverless environment and process its large data sets related to environmental issues. The client was able to reduce cost by 80%.

TNTP Application Migration

TNTP’s mission is to end the injustice of educational inequality by providing excellent teachers to the students who need them most and by advancing policies and practices that ensure effective teaching in every classroom.


The Challenge

In the wake of a flood, TNTP looked to Effectual to quickly and effectively move their public-facing and internal applications to the AWS cloud for lower cost, better scalability, disaster recovery capabilities, and better application performance.


The Solution

Effectual worked with TNTP to define a migration strategy, set up the infrastructure in accordance with best practices to take advantage of the full feature set of the cloud, and provided scripts to automate future updates and deployments. Effectual introduced TNTP to the Infrastructure as Code model so that they could version control the state of their infrastructure using AWS CloudFormation templates and take advantage of AWS’ built-in resource dependency definitions to perform rolling updates with minimal downtime or system impact.
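The rolling-update behavior described above is usually expressed in CloudFormation as an `UpdatePolicy` on an Auto Scaling group: replace instances in small batches while keeping a minimum number in service. The sketch below shows that resource shape; the sizes and pause time are illustrative assumptions, not TNTP’s actual settings.

```python
import json

# Sketch of an Auto Scaling group resource carrying a rolling-update policy
# so CloudFormation replaces instances in small batches. Sizes illustrative.
asg_resource = {
    "Type": "AWS::AutoScaling::AutoScalingGroup",
    "Properties": {"MinSize": "2", "MaxSize": "6", "DesiredCapacity": "2"},
    "UpdatePolicy": {
        "AutoScalingRollingUpdate": {
            "MaxBatchSize": 1,           # replace one instance at a time
            "MinInstancesInService": 1,  # keep serving during the update
            "PauseTime": "PT5M",         # wait for health before next batch
        }
    },
}

print(json.dumps(asg_resource, indent=2))
```

Because the policy lives in the template alongside the resource, every future stack update inherits the same low-impact rollout behavior.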

The Benefits

Cost Efficiency

TNTP experienced lower costs running their workloads in the cloud compared to on-premises IT hardware and maintenance. Effectual assisted TNTP in utilizing cloud purchasing options and offerings to meet TNTP’s technical requirements while remaining cost-efficient.


Scalability

The AWS cloud provided a flexible infrastructure that accommodates workloads of various sizes. The infrastructure used AWS Auto Scaling capabilities along with custom settings in Amazon CloudWatch to automatically scale for larger workloads while keeping scaling activities transparent to the end user.
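The scaling behavior above reduces to a simple decision: compare a CloudWatch metric (here, average CPU) against custom thresholds and adjust the instance count within fixed bounds. The thresholds and bounds below are illustrative assumptions; in AWS this logic lives in the scaling policy itself rather than user code.

```python
# Sketch of threshold-based scaling: custom CloudWatch alarms at `high` and
# `low` drive scale-out and scale-in within [min_n, max_n]. Values illustrative.
def desired_capacity(current, avg_cpu, low=30.0, high=70.0, min_n=2, max_n=10):
    """Return the new instance count given average CPU utilization."""
    if avg_cpu > high:
        return min(current + 1, max_n)   # scale out on sustained load
    if avg_cpu < low:
        return max(current - 1, min_n)   # scale in when idle
    return current                       # inside the comfort band: no change

print(desired_capacity(2, 85.0))  # 3
print(desired_capacity(3, 10.0))  # 2
print(desired_capacity(5, 50.0))  # 5
```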

Disaster Recovery

Failover capabilities and strategies, such as the use of Elastic Load Balancing within AWS, were implemented to protect the system, maximize uptime, and minimize data loss in the event of a disaster. Notifications, alarms, and safeguards were put in place to ensure immediate notification of any abnormal behavior.

Applications Rearchitected in AWS to Automate Security Triggers

Effectual led a Federal Government client in their journey from on-premises to AWS by extending their data center into the cloud and rearchitecting their applications.

Effectual provided guidance in the following areas:

  • Implementing automation for the client.
  • Creating a new AWS infrastructure and environment.
  • Updating and retooling current applications.
  • Building the solution as a receiver and retooling specific applications to function in the new environment.
  • Interpreting and explaining new and in-development features as they pertain to the client’s issues.

Our team leveraged the following technologies:

  • AWS CloudFormation templates were created for DevOps.
  • AWS Organizations and AWS Config were used for management of the system.
  • AWS CloudTrail and Amazon CloudWatch were utilized to automate security recommendations.
  • Amazon CloudWatch was programmed to alert the client if changes were made in their system; the response would trigger the system to return to its original configuration and alert security to the changes.
  • AWS infrastructure resources, including EC2 instances and RDS database infrastructure.
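The self-healing response in the list above, detect a configuration change, revert to the recorded baseline, and alert security, can be sketched as a small handler. The baseline store, event shape, and callbacks below are stand-ins for what would be AWS Config data, a CloudWatch event, and SNS notifications.

```python
# Sketch of a config-drift responder: compare the observed configuration
# against a recorded baseline, revert on drift, and notify security.
# Resource names and the event shape are illustrative placeholders.
BASELINE = {"sg-web": {"ingress": ["443"]}}   # recorded "original" config

def on_config_change(event, apply_config, notify_security):
    resource, observed = event["resource"], event["configuration"]
    expected = BASELINE.get(resource)
    if expected is not None and observed != expected:
        apply_config(resource, expected)       # restore original settings
        notify_security(f"{resource} drifted; reverted to baseline")
        return "reverted"
    return "ok"

actions, alerts = [], []
result = on_config_change(
    {"resource": "sg-web", "configuration": {"ingress": ["443", "22"]}},
    apply_config=lambda r, c: actions.append((r, c)),
    notify_security=alerts.append,
)
print(result)   # reverted
print(alerts)
```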

The Benefits

Migration to the Cloud

We rebuilt client applications in the AWS Cloud to connect to their on-premises data. This made their applications more accessible to all users and created a working hybrid environment for their data.

Security Improvements

We deployed AWS infrastructure services, including Amazon CloudWatch to monitor resources and trigger responses to changes in the environment.

Management of Resources

The services in AWS monitor both on-premises and AWS cloud environments. The time to build components in the environment was significantly reduced, and instances were saved as templates for repeatability.


Considerations for AWS Control Tower Implementation

AWS Control Tower is a recently announced, console-based service that allows you to govern, secure, and maintain multiple AWS accounts based on best practices established by AWS.

What resources do I need?

The first thing to understand about Control Tower is that all the resources you need will be allocated to you by AWS. You will need AWS Organizations established, an account factory to create accounts per line of business (LOB), and Single Sign-On (SSO), to name a few. Based on the size of your entity or organization, those costs may vary. With Control Tower’s precursor, AWS Landing Zones, we found that costs for this collection of services could range from $500 to $700 monthly for large customers (50+ accounts), as deployed. Control Tower will probably cost a similar amount, possibly more depending on the size of your organization. Later in this post, I will address how to adopt Control Tower once you already have accounts set up, a brownfield situation. In a perfect world, it would be nice to set up Control Tower in a greenfield scenario but, sadly, 99% of the time that’s not the case.

If you’re a part of an organization that has multiple accounts in different lines of business, this service is for you.

What choices do I need to make?

In order to establish a Cloud Enablement Team to manage Control Tower, you need to incorporate multiple stakeholders. In a large organization, that might entail different people for roles such as:

  1. Platform Owner
  2. Product Owner
  3. AWS Solution Architect
  4. Cloud Engineer (Automation)
  5. Developer
  6. DevOps
  7. Cloud Security

You want to be as inclusive as possible in order to get the most breadth of knowledge. These are the people who will make the decisions you need to migrate to the cloud and, most importantly, to thrive and remain engaged once there. We have the team, so now what can we do to make Control Tower work best for us?

Decisions for the Team

1. Develop a RACI

This is one of the most crucial aspects of operations. Without accountability and responsibility, you don’t have management. Everyone must be able to delineate their tasks from the rest of the team’s. Finalizing everyone’s role in the workflow up front will solve a lot of issues before they happen.

2. Shared Services

In the shared services model, we need to understand which resources are going to the cloud and which will stay. Everything from Active Directory to DNS to one-off internal applications has to be accounted for in a way that preserves functionality and keeps the charge-back model healthy. One of Control Tower’s most valuable qualities is showing what each LOB costs and how it contributes to the organization overall.

3. Charge Backs

Since the account factory (previously called the Account Vending Machine) is established, each LOB has its own account. AWS prices by account, not by VPC, so an LOB must have its own account for its costs to be visible. Leveraging Control Tower, tagging, and third-party cost management tools together gives an accurate picture of the costs incurred by a specific line of business.
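As a sketch of how per-LOB costs can be pulled once each line of business has its own account, the Cost Explorer API can group spend by linked account. The query below is a minimal Python example using boto3-style parameters; the date range is an illustrative assumption.

```python
# Minimal Cost Explorer query grouping monthly spend by linked account,
# so each LOB account's charge-back amount is reported separately.
# The date range is an illustrative placeholder.
query = {
    "TimePeriod": {"Start": "2021-01-01", "End": "2021-02-01"},
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    "GroupBy": [{"Type": "DIMENSION", "Key": "LINKED_ACCOUNT"}],
}

# With AWS credentials configured, the report would be fetched with:
#   import boto3
#   boto3.client("ce").get_cost_and_usage(**query)
```

Adding a second GroupBy entry keyed to a cost-allocation tag refines the breakdown further, which is where tagging discipline pays off.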

4. Security

Security will have all logs from each account dumped into a centralized log bucket that can be pointed at the tool of choice for analysis. Another feature of Control Tower is that other parties can audit your logs using read-only functions in a dedicated account that contains nothing else. The multi-account strategy not only allows for better governance but also helps in case of compromise: if one account is breached, the blast radius for the other accounts is minimal. Person X may have accessed a bucket in a specific account, but they did not access it anywhere else. The most important thing to remember is that you cannot treat cloud security like data center security.
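A sketch of the read-only audit access described above: an IAM policy that grants nothing but list and read permissions on the centralized log bucket. This is a minimal illustration; the bucket name is a hypothetical placeholder.

```python
import json

# Read-only IAM policy for an audit account: list and read access to the
# centralized log bucket, and nothing else. Bucket name is a placeholder.
audit_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AuditLogReadOnly",
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::central-log-archive",
                "arn:aws:s3:::central-log-archive/*",
            ],
        }
    ],
}
policy_document = json.dumps(audit_policy)

# With AWS credentials configured, the policy would be created with:
#   import boto3
#   boto3.client("iam").create_policy(
#       PolicyName="audit-read-only", PolicyDocument=policy_document)
```

Because the policy grants no write or delete actions, auditors can read every log without any ability to alter the evidence.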

There are plenty of choices to make as it relates to Control Tower moving forward for an organization, but if you plan correctly and make wise decisions, then you can secure your environment and keep your billing department happy. Hopefully, this has helped you see what it takes in the real world to prepare. Good luck out there!

Network Virtualization – The Missing Piece of Digital Transformation

The cloud revolution continues to impact IT, changing the way digital content is accessed and delivered. It should come as no surprise that this revolution has affected the way we approach modern networking.

When it comes down to it, the goal of digital transformation is the same for all organizations, regardless of industry: increase the speed at which you’re able to respond to market changes and evolving business requirements, improve your ability to adopt and adapt to new technology, and enhance overall security. Digital strategies are maturing, becoming more thoughtful and effective in the process, as organizations understand that the true value of cloud adoption and increased virtualization isn’t just about cost savings.

Technology is more fluid than ever, and dedicated hardware is limiting individual progress and development more and more every day. Luckily, cloud and virtualized infrastructure have helped lay the groundwork for change, giving companies the opportunity to more readily follow the flow of technological progress. But, in the same way that a chain is only as strong as its weakest link, these companies are only as agile as their most rigid component. And that rigid chokepoint, more often than not, is hardware-based network infrastructure.

A lack of network agility was even noted by Gartner as being one of the Top 10 Trends Impacting Infrastructure and Operations for 2019.

A Bit of History
We likely wouldn’t have the internet as we know it today if not for the Department of Defense needing a way to connect large, costly research computers across long distances to enable the sharing of information and software. Early computers had no way to connect and transmit data to each other.
The birth of ARPANET in 1969, the world’s first packet-switched network, and its ensuing expansion were monumental in creating the foundation for the Information Age.

The Case for Virtualization

While some arguments can still be made about whether a business might benefit more from traditional, hardware-based solutions or cloud-based options, there’s an inarguable fact right in front of us: software moves faster than hardware. This is what drove industries toward server and storage virtualization. However, network infrastructure still tends to be relegated to hardware, with the same manual provisioning and configuration processes that have been around for decades. The challenge of legacy, hardware-based network infrastructure is a clear obstacle that limits an organization’s ability to keep up with changing technologies and business requirements.

The negative effect of hardware-based networking goes beyond the limitation of speed and agility. Along with lengthy lead times, the process of scaling, modifying, or refreshing network infrastructure can require a significant amount of CapEx since you have to procure the hardware, and a significant amount of OpEx since you have to manually configure the newly acquired network devices. In addition, manual configuration is well-known to be error-prone, which can lead to connectivity issues (further increasing deployment lead time) and security compromises.

Networking at the Speed of Business and Innovation

As organizations move away from silos in favor of streamlined and automated orchestration, approaches to network implementation need to be refreshed. Typical data center network requests can take days, even weeks to fulfill since the hardware needs to be procured, configured (with engineers sometimes forced to individually and painstakingly configure each device), and then deployed.

Software-defined networking (SDN), however, changes all of that. With properly designed automation, right-sized virtual network devices can be programmatically created, provisioned, and configured within seconds. And due to the reduced (or even fully eliminated) need for manual intervention, it’s easier to ensure that that newly deployed devices are consistently and securely configured to meet business and compliance requirements.
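As a small illustration of “programmatically created within seconds,” the sketch below carves a VPC CIDR range into equal subnets in pure Python; the commented boto3 calls show where the actual provisioning would happen. The CIDR range and prefix sizes are illustrative assumptions, not a recommended addressing plan.

```python
import ipaddress

# Illustrative VPC range; real values would come from an IP address plan.
vpc_cidr = "10.20.0.0/16"

# Carve the VPC range into four equally sized /18 subnets (e.g. one per AZ).
subnets = [str(net) for net in ipaddress.ip_network(vpc_cidr).subnets(new_prefix=18)]

# With AWS credentials configured, provisioning would look like:
#   import boto3
#   ec2 = boto3.client("ec2")
#   vpc = ec2.create_vpc(CidrBlock=vpc_cidr)
#   for cidr in subnets:
#       ec2.create_subnet(VpcId=vpc["Vpc"]["VpcId"], CidrBlock=cidr)
```

Because the layout is computed rather than hand-typed, every deployment gets the same consistent, reviewable network structure.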

Automation allows networking to match the pace of business by relying on standardized, pre‑defined templates to provide fast and consistent networking and security configurations. This lessens the strain and burden on your network engineers.

Network teams have focused hard on increasing availability, with great improvements. However for future success, the focus for 2019 and beyond must incorporate how network operations can be performed at a faster pace.

Source: Top 10 Trends Impacting Infrastructure & Operations for 2019, Gartner

Embracing Mobility

Modern IT is focused on applications, and the terminology and methods for implementing network appliances reflect that – but those applications are no longer tied to the physical data center. Sticking to a hardware-focused networking approach severely restricts the mobility of applications, which is a limitation that can kill innovation and progress.

Applications are not confined to a single, defined location, and maturing digital and cloud strategies have led to organizations adopting multiple public and private clouds to achieve their business requirements. This has led to an increase in applications being designed to be “multi-cloud ready.” Creating an agile network infrastructure that extends beyond the on-premises locations, matching the mobility of those applications, is especially critical.

Network capabilities have to function consistently across all locations, whether they’re hardware-based legacy platforms, virtual private cloud environments, or pure public cloud environments.

This level of agility is beneficial for all organizations, even if they’re still heavily invested in hardware and data center space, because it allows them to begin exploring, adopting, and benefiting from public cloud use. Certain technologies, like VMware Cloud on AWS, already enable organizations to bridge that gap and begin reaping the benefits of Amazon’s public cloud, AWS.

According to the RightScale 2019 State of the Cloud Report from Flexera, 84% of enterprise organizations have adopted a multi-cloud strategy, and 58% have adopted a hybrid cloud strategy, utilizing both public and private clouds. On average, respondents reported using nearly five clouds.

A Modern Approach to Security

Digital transformation creates fertile ground for new opportunities – both business opportunities and opportunities for bad actors. Since traditional approaches to cybersecurity weren’t designed for the cloud, cloud adoption and virtualization have contributed to a growing need to overhaul information security practices.

Traditional, classical network security models focused on the perimeter – traffic entering or leaving the data center – but, as a result of virtualization, the “perimeter” doesn’t exist anymore. Applications and data are distributed, so network security approaches have to focus on the applications themselves. With network virtualization, security services are elevated to the virtual layer, allowing security policies to “follow” applications, maintaining a consistent security configuration to protect the elastic attack surface.

But whether your network remains rooted in hardware or becomes virtualized, the core of your security should still be based on this: Security must be an integral part of your business requirements and infrastructure. It simply cannot be bolted on anymore.

Picking the Right Tools and Technology for the Job

Choosing the right tools and technology to facilitate hybrid deployments and enable multi‑platform solutions can help bridge the gap between legacy systems and 21st century IT. This level of interoperability and agility helps make cloud adoption just a little less challenging.

Addressing the networking challenges discussed in this post, VMware Cloud on AWS has an impressive set of tools that enable and simplify connectivity between traditionally hosted on-premises environments and the public cloud. This interconnectivity makes VMware Cloud on AWS an optimal choice for a number of different deployment use cases, including data center evacuations, extending on-premises environments to the public cloud, and improving disaster recovery capabilities.

Developed in partnership with Amazon, VMware Cloud on AWS allows customers to run VMware workloads in the cloud, and their Hybrid Cloud Extension (HCX) enables large-scale, bi-directional connections between on-premises environments and the VMware Cloud on AWS environment. In addition, VMware’s Site Recovery Manager provides simplified one-click disaster recovery operations with policy-based replication, ensuring operational consistency.

If you’re interested in learning more about VMware Cloud on AWS or how we can help you use the platform to meet your business goals, check out our migration and security services for VMware Cloud on AWS.

Ryan Boyce is the Director of Network Engineering at Effectual, Inc.

FISMA Compliance Requirements Met for Self-Service Cloud Solution

Effectual enabled a Federal Government customer to set up a self-service cloud solution which is secure, compliant, and automated to scale up and down as necessary.

Customer Needs

The Customer wanted to scale out compliant accounts to address security concerns such as accessing only approved services, protecting centrally managed resources, and ensuring logging and change activity were captured. The overall issue was the ability to consistently provision AWS accounts in a scalable fashion and manage them over time, keeping them up to date with newly approved AWS services. The goal was to provide secure and compliant cloud hosting options while setting up a customer self-service solution.

Our Approach

We assisted the client in creating their entire environment from Infrastructure as Code while implementing a strict change control process via GitLab. Custom pipelines were created based on the CI/CD framework for structured code. The entire process was automated, eliminating the scalability issue of provisioning accounts. Our resources worked directly alongside the agency’s resources to document and achieve a FISMA Moderate ATO.
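As a hedged sketch of what an environment built from Infrastructure as Code can look like, the snippet below defines a minimal CloudFormation template inline and shows the deploy call a GitLab CI job might make on merge. The stack name and the single resource are hypothetical, not the customer’s actual baseline.

```python
import json

# Minimal CloudFormation template defined as code. The single resource
# here (a versioned S3 bucket for logs) is an illustrative placeholder.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AuditTrailBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "VersioningConfiguration": {"Status": "Enabled"},
            },
        }
    },
}

# In a pipeline job with AWS credentials, the stack would be deployed with:
#   import boto3
#   boto3.client("cloudformation").create_stack(
#       StackName="account-baseline",
#       TemplateBody=json.dumps(template),
#   )
```

Because the template lives in the repository, every change to the environment goes through the same merge-request review and change control as application code.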

The Benefits


The customer was able to quickly provision accounts in a consistent method across multiple geographical locations and regions. The entire environment can be deployed in one hour.


We enabled the customer to securely provision their own infrastructure using a standardized methodology and a least-privilege architecture. This methodology ensures security in the cloud for the client.

Management of Resources

The services in AWS monitor both on-premises and AWS cloud environments. The time to provision new accounts was reduced from a month to one minute. The deployments are now consistent and can be saved for later use.

Disaster Response: UAV Imagery Alerts

Effectual delivered a mission-critical solution that ensured the delivery of UAV imagery taken from infrastructure towers, used to alert high-risk areas to wildfires and other natural disasters.

The Challenge

Our customer required a move from its on-premises infrastructure to a centralized cloud environment in AWS. They looked to us to deliver a highly available, fault-tolerant architecture to meet workloads across many geographical locations. We automated common activities such as change requests, monitoring, patch management, security, and backup services, and provided full-lifecycle services to provision, run, and support enterprise infrastructure.

The Solution

We provided the client with Amazon Web Services infrastructure architecture to deliver a comprehensive, secure, and cost-effective hosting solution supporting their efforts with Pacific Power. In addition, our team delivered managed services for the customer’s AWS environment. This supported the client’s ability to deploy drones to inspect the infrastructure of electrical towers and ensure their performance in the wake of natural disasters.

The Benefits


We implemented Amazon CloudWatch Events in a serverless workflow to trigger Lambda functions. Drones are programmed to deploy and inspect electrical towers to ensure that they are performing correctly.
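A minimal sketch of that serverless trigger: a CloudWatch Events (now EventBridge) schedule rule wired to a Lambda target. The rule name, schedule, and function ARN are hypothetical placeholders, not the customer’s actual configuration.

```python
# CloudWatch Events (EventBridge) rule that fires on a schedule and
# invokes a Lambda function. Names and ARNs are illustrative placeholders.
rule = {
    "Name": "uav-imagery-ingest",
    "ScheduleExpression": "rate(15 minutes)",  # check for new UAV imagery
    "State": "ENABLED",
}
targets = {
    "Rule": rule["Name"],
    "Targets": [
        {
            "Id": "imagery-processor",
            "Arn": "arn:aws:lambda:us-west-2:123456789012:function:process-imagery",
        }
    ],
}

# With AWS credentials configured, the rule would be wired up with:
#   import boto3
#   events = boto3.client("events")
#   events.put_rule(**rule)
#   events.put_targets(**targets)
```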

Cost Optimization

We created a proprietary AWS-hosted solution that lowers the customer’s costs by running their workloads in the cloud. Our team helped the client utilize cloud purchasing options and offerings to meet their technical requirements while remaining cost-efficient.


We configured EC2 instances to ensure adequate compute capacity to meet traffic demands, and implemented automated launch configurations that allow the client to quickly launch and/or scale application servers in target environments in the future.

Accelerating DevOps Cultural Adoption with GitLab

One year ago, our team made an investment into a self-hosted installation of GitLab.

We had been successful in delivering a managed GitLab installation at a customer site and saw the value in taking advantage of everything the platform had to offer for our internal workloads. As an AWS DevOps Competency partner, we have a successful track record of helping organizations adopt DevOps processes and we understand that the biggest challenge is often aligning an organization’s culture with DevOps principles.

GitLab has helped us bridge that gap by demonstrating the operational excellence that can be achieved with DevOps.

GitLab’s biggest strength is that it addresses all stages of the software development lifecycle. GitLab’s features align strongly with the stages and principles our team has outlined in our DevOps process. The cornerstone of this DevOps process is that everything is delivered as code and all code is continuously version-controlled, tested, and cross-checked by peers. The marriage of GitLab’s repository tools with their built-in CI platform eliminates much of the overhead of setting up continuous integration and testing. Our team has built custom pipeline templates specifically designed around deployments using AWS, CloudFormation, Docker, Kubernetes, Terraform, and other platforms. These pipeline templates allow new projects to inherit shared knowledge and hit the ground running to deliver operational excellence with Agile development speed. We’ve also committed ourselves to sharing these templates and the best practices we’ve learned with the community, to aid others in quickly and efficiently adopting GitLab and the cloud and driving new development.

Our team has designed a one-click style deployment of GitLab on AWS with high availability and security out-of-the-box. We’re using this solution to help other organizations rapidly adopt GitLab and have been successful in doing so at several government and commercial organizations. We also have a one-click GitLab Runner on AWS solution available for scalable, secure GitLab CI runners and are actively working on a one-click deployment for GitLab Runner on Azure and GCP.

GitLab has been a cornerstone of our DevOps practice, and we are just getting started. We have empowered organizations to automate software testing and deployments using GitLab as the engine, and organizations have been able to move faster and better address end-users with those abilities. We’re excited to see what organizations can do with the power that DevOps’ operational excellence gives them, and we’ve partnered with GitLab to accelerate them along that journey.

If you or your organization has more questions regarding GitLab or our DevOps process, reach out to set up some time to chat about your business goals.

The Cloud-First Mindset

Across every industry, cloud-native businesses are disrupting legacy institutions that have yet to transform traditional IT platforms.

To remain competitive, industry giants need to change the way they think about technology and adopt a cloud-first mindset. Prioritizing cloud-based solutions and viewing them as the natural, default option is vital to the success of new projects and initiatives.

Migrating legacy systems to cloud has the added benefit of eliminating technical debt from older policies and processes. However, it is important to be mindful in order to avoid creating new technical debt when developing and deploying cloud systems. While adopting a cloud-first mindset may seem like an expected result of digital transformation, it requires significant changes to an organization’s culture and behavior, similar to those required for the effective adoption and implementation of DevOps methodologies.

We have to rethink the old way of doing things – cloud is the new normal

Evolving needs and capabilities

When “cloud” first entered the lexicon of modern business, it was incorrectly thought of as a cost‑cutting measure. Organizations were eager to adopt the cloud with the promise of savings – despite not fully understanding what it was or its ever-growing capabilities. These types of implementations were generally short-sighted: lacking a well-defined digital strategy and focused on immediate needs rather than long-term goals.

As adoption increased, it became apparent that adjusting the approach and redefining digital strategy were necessary for success. Optimizing applications for the cloud and developing comprehensive governance policies to rein in cloud sprawl, shadow IT, and uncontrolled (and unmonitored) spend are just part of the equation.

“…spending on data center systems is forecast to be $195 billion in 2019, but down to $190 billion through 2022. In contrast, spending on cloud system infrastructure services (IaaS) will grow from $39.5 billion in 2019 to $63 billion through 2021.”

Source: Cloud Shift Impacts All IT Markets, Gartner

A cloud-first approach reshapes the way an organization thinks about technology and helps mitigate the potential to recreate unnecessary technical debt that was eliminated through digital transformation initiatives.

The human element of digital transformation

Digital transformation should extend beyond technology. It’s a long-term endeavor to modernize your business, empower your people, and foster collaboration across teams. Transforming your systems and processes will have a limited impact if you don’t also consider the way your teams think, interact, and behave. This is especially important because the significant operational changes introduced by modernizing infrastructure and applications can present challenges to employees who feel comfortable maintaining the status quo. Before you can disrupt your industry, you have to be willing to disrupt the status quo within your own organization.

The fact is that change can be difficult for a lot of people, but you can ease the transition and defuse tension by actively engaging your teams. You cannot overstate the importance of clear, two-way communication. Letting your people know what you’re planning to do and why you’re doing it can help them understand the value of such a potentially massive undertaking. It’s also important to have a solid understanding of what your teams need, and creating open lines of communication will enhance requirements-gathering efforts. This level of communication ensures that whatever you implement will adequately address their needs and ultimately improve their workflow and productivity.

The introduction of new tools and technologies, even if they’re updated versions of the ones currently in use, will generally require some level of upskilling. Helping your teams bridge the technical gap is a necessary step.

Competition at its finest

Few sectors have seen the level of disruption faced by the finance industry. FinTech disruptors, born in the cloud and free from the chains of technical debt and bureaucratic overhead, have been able to carve out their place in the market. They’ve attracted customers by creating innovative offerings and customer-focused business models, competing with legacy institutions that seemed to have an unassailable dominance that barred any new entrants.

Legacy retail banking institutions, known for being risk averse, had a tendency to implement new technology very slowly. They were plagued by long development cycles, dedicated hardware solutions, and strict compliance requirements to safeguard highly sensitive data.

When Capital One turned its attention to the cloud, they created a holistic digital strategy that wasn’t limited to tools and systems. They understood that technology was not a line item on a budget, but an investment in the company’s future, and that successfully executing their strategy would require a culture shift. They focused on attracting technologists who could enhance the company’s digital capabilities to increase employee engagement, improve cybersecurity, and improve customer experience by using the latest technologies, including artificial intelligence and machine learning. They also created a cloud training program so their employees would understand the technology, regardless of whether or not they were in technical roles, reinforcing the company’s cloud-first mindset.

FinTech disruptors, born in the cloud and free from the chains of technical debt and bureaucratic overhead, have been able to carve out their place in the market.

Understanding your options

Developing a proper cloud-first mindset is not about limiting your options by using the cloud exclusively. A digitally transformed business doesn’t adopt the latest technology simply for the sake of adoption. In fact, the latest and greatest SaaS or cloud-based offerings may not always be the best option, but you have to know how to make that determination based on the unique needs and circumstances of your business. By objectively assessing business goals, considering all options (including traditionally hosted), and prioritizing agile, flexible solutions, you can redefine your approach to problem-solving and decision-making. This mindset means that cloud is no longer the “alternative.”

We have to rethink the old way of doing things – cloud is the new normal, and hardware-based options should only be implemented if they are truly the best way to meet business goals and overcome challenges. We don’t need to abandon on-premises or traditional IT to maintain or regain competitive edge. We just need to understand that it’s not always the right choice.

This approach will help you develop a macro view of your organization’s needs and prompt you to identify and treat the underlying cause of business challenges, not just the symptoms.

Building a foundation for disruption

Becoming a disruptor in your industry is not the goal of digital transformation – that takes more than just adopting the cloud. The goal is to free your organization from the restraints of costly, outdated legacy infrastructure and monolithic applications, and to enable your teams to scale and innovate. The flexibility of cloud and SaaS-based options reduces the risks associated with developing new products and services for your customers, and instilling a culture of cloud-first thinking gives your people the freedom to explore and experiment. That’s how you drive innovation and compete against new, lean, born-in-the-cloud competitors. That’s how you disrupt.

RFD & Associates

RFD & Associates, Inc., is an IT Technical Services Company with over 30 years of experience delivering IT solutions to public and private sector clients.

RFD delivers IT solutions from mainframe to mobile and everything in between. They have helped hundreds of organizations design, build, purchase, and implement optimal technology solutions to achieve business goals. RFD needed help designing and developing a scalable, Amazon Web Services (AWS) cloud-hosted, multi-tenant, web- and mobile-friendly application. The proposed solution had a requirement to integrate with external APIs to ensure flexibility for future enhancements and integration with third-party tools. The application was also required to comply with Personally Identifiable Information (PII) protection and U.S. Health Insurance Portability and Accountability Act (HIPAA) security requirements.

Effectual Provided Guidance in the following areas

  • AWS design and architectural services to include making RFD’s multi-tenant hosting environment PII/HIPAA compliant
  • Provided AWS Training and best practices guidance on how to leverage AWS resources
  • Assisted in helping RFD achieve its defined goals:
    • Identify the challenges presented in third-party hosting of AWS.
    • Evaluate the use of cloud services to meet RFD business and technical requirements.
    • Determine portable containerization services.
    • Evaluate architectural decisions in AWS Commercial and GovCloud Regions.

Our Approach

A four-phased approach was developed to implement an AWS hosted environment for RFD:

  • Phase 1: Discovery, AWS Service Selection, and PII/HIPAA Security Requirements Determination.
  • Phase 2: AWS Foundation Build. Provisioned appropriate environments and access; established AWS accounts
  • Phase 3: AWS Service Build. Provisioned AWS services to include: EC2, Route53, S3, WAF, etc.
  • Phase 4: Process Documentation and Environment Review. Created AWS documentation of resources and provided reports on overall solution, security and cost.

The Benefits


We configured EC2 instances that are PII/HIPAA compliant, ensuring adequate compute capacity to meet traffic demands. In addition, we implemented automated launch configurations to allow RFD to quickly launch and/or scale application servers in target environments in the future.

Security & Compliance

The implementation of AWS Compute, Storage, and PII and HIPAA compliant Database services to ensure the security of sensitive data used in the environment.

Monitoring Services

To maximize the functionality of these services, Amazon CloudWatch was configured to help RFD set thresholds and alarms to monitor custom metrics for auto-scaling needs.
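A brief sketch of publishing a custom metric that an alarm threshold could then watch for auto-scaling decisions. The namespace, metric name, and value are hypothetical examples, not RFD’s actual metrics.

```python
# Publish a custom application metric that a CloudWatch alarm can then
# watch against a threshold to drive auto-scaling. Names are placeholders.
metric = {
    "Namespace": "RFD/Application",
    "MetricData": [
        {
            "MetricName": "ActiveSessions",
            "Value": 420.0,   # current session count, reported periodically
            "Unit": "Count",
        }
    ],
}

# With AWS credentials configured, the data point would be published with:
#   import boto3
#   boto3.client("cloudwatch").put_metric_data(**metric)
```

An alarm on this metric (for example, sustained high ActiveSessions) can then trigger an Auto Scaling policy to add application servers.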

A Cloud Security Strategy for the Modern World

In the borderless and elastic world of the cloud, achieving your security and compliance objectives requires a modern, agile, and autonomous security strategy that fosters a culture of security ownership across the entire organization.

Traditional on-premises approaches to cybersecurity can put organizations at risk when applied to the cloud. An updated, modern strategy for cloud security should mitigate risk and help achieve your business objectives.

Cloud service providers such as Amazon Web Services (AWS) place a high priority on the security of their infrastructure and services and are subject to regular, stringent third-party compliance audits. These CSPs provide a secure foundation, but clients are still responsible for securing their data in the cloud and complying with their data protection requirements. This theory is substantiated by Gartner, which estimates that, through 2020, workloads hosted on public cloud will have at least 60% fewer security incidents than workloads hosted in traditional data centers, and 95% of cloud security failures through 2022 will be the fault of customers.

Traditional approaches to cybersecurity weren’t designed for the cloud – it’s time for an update

Updating how you think about cybersecurity and the cloud

Despite the significant security advances made by CSPs since the birth of cloud, users still need a deep understanding of the cloud’s shared responsibilities, services, and technologies to align information security management systems (ISMS) to the cloud. Today, the majority of data breaches in the cloud are the result of customers not fully understanding their data protection responsibilities and adopting poor cloud security practices. As the public cloud services market and enterprise-scale cloud adoption continues to grow, organizations must have a comprehensive understanding of not just cloud, but cloud security specifically.

Through 2022, at least 95% of cloud security failures will be the customer’s fault

Source: Gartner

The cloud can be secure – but are your policies?

A poor grasp of the core differences between on-premises and cloud technology solutions resulted in a number of misconceptions during the early days of cloud adoption. This lack of understanding helped fuel one of the most notable and pervasive cloud myths in the past: that it lacked adequate security. By now, most people have come to realize that cloud adoption and digital transformation do not require a security tradeoff. In fact, the cloud can provide significant governance, risk, and compliance (GRC) advantages over traditional on-premises environments. A cloud-enabled business can leverage the secure foundation of the cloud to increase security posture, reduce regulatory compliance scope, and mitigate organizational responsibilities and risk.

It is common to see enterprise organizations lacking the necessary expertise to become cloud resilient. Companies can address this skills gap through prescriptive reference architectures. AWS, for example, has created compliance programs for dozens of regulatory standards, including ISO 27001, PCI DSS, SOC 1/2/3, and government regulations like FedRAMP, FISMA, and HIPAA in the United States and several European and APAC standards. Beyond these frameworks, consultants and managed service providers can work with organizations to provide guidance or architect environments to meet their compliance needs.

Regardless of the services leveraged, the cloud’s shared responsibility model ensures that the customer will always be responsible for protecting their data in the cloud.

Making the change

Similar to the challenges and benefits of implementing DevOps (discussed here by our CEO, Robb Allen), effective cloud security requires a culture change with the adoption of DevSecOps, shifting the thinking in how teams work together to support the business. By eliminating the barriers between development, operations, and security teams, organizations can foster a unified approach to security. When everyone plays a role in maintaining security, you foster collaboration and a shared goal of securely and reliably meeting objectives at the speed of the modern business.

Additionally, cloud-specific services and technologies can provide autonomous governance of your ISMS in the cloud. They can become a security blanket capable of automatically mitigating potential security problems, discovering issues faster, and addressing threats more quickly. These types of services can be crucial to the success of security programs, especially for large, dynamic enterprises or organizations in heavily regulated industries.

Implementing the right cloud tools can lead to significant reductions in security incidents and failures, giving your teams greater freedom and autonomy to explore how they use the cloud.

The way to the promised land

Security teams and organizations as a whole need to have a deep understanding of cloud security best practices, services, and responsibilities to create a strategic security plan governed by policies that align with business requirements and risk appetite. Ultimately, however, a proper cloud security strategy needs buy-in and support from key decision-makers and it needs to be governed through strategic planning and sound organizational policies. Your cloud security strategy should enable your business to scale and innovate quickly while effectively managing enterprise risk.

Darren Cook is the Chief Security Officer of Effectual, Inc.

The Promise of FinOps

Cloudability’s Cloud Economic Summit put the spotlight on the importance of accountability and cloud cost management.

Our partner, Cloudability, recently hosted the Cloud Economic Summit in San Francisco, providing a look into the current and future state of cloud cost management. Cloudability CEO Mat Ellis, CTO Erik Onnen, Co-founder J.R. Storment, 451 Research Director Owen Rogers, and AWS Worldwide Business Development lead Keith Jarrett presented alongside speakers from Autodesk and OLX Group, addressing the need for FinOps – a disciplined approach to managing cloud costs. Supporting the event, Cloudability published a press release, “FinOps Operating Model Codifies Best Practices of the World’s Largest Cloud Spenders, Enabling Enterprises to Bring Financial Accountability to the Variable Spend of Cloud.”

“Celebrate achievement, get better every day – this is FinOps.”

—Mat Ellis, CEO of Cloudability

Introducing the day, Mat set the stage that public cloud adoption is part of a much bigger trend seen in many industries throughout history – managing a supply chain. Milestone innovations disrupt at an astronomical scale: from the printing press, to rubber, to the internet, and now cloud computing. We’ve all felt the disruption created by cloud computing and many of us have been part of the 21st Century IT revolution. As seen at AWS re:Invent last year, the adoption of DevOps culture to foster innovation and enable competitive advantage has been embraced by large insurance organizations like Guardian and the world-famous guitar manufacturer Fender.

However, with AWS now 13 years old, many cloud technology buying decisions are still based on an outdated model. There is a need for iterative, ongoing monitoring and accounting for cloud spend. Enter Cloudability. Analyzing hundreds of millions in cloud spend per month, and billions per year, Cloudability’s platform delivers keen insights and benchmarking tools that enable a clear path to cloud cost diligence and FinOps success.

Cost Management in The Cloud Age

Digging into the data behind the mission of FinOps, Owen Rogers, Research Director at 451 Research, presented some stark realities about the current state of cloud cost management (full report available here). The study found that more than half of large enterprises worry about cloud costs on a daily basis and 80% believe that poor cloud financial management has a negative impact on their business. These enterprises need a comprehensive platform to manage multi-million-dollar cloud budgets.

Another eye-opening data point presented was that 85% of respondents overspend their budgets, with nearly 10% spending two to four times their allocated budget. Pair this with 18% of respondents that were unaware they were overspending, and the picture is not pretty. The biggest reasons cited for not addressing this issue were “too small of an overspend to resolve” and “not wanting to hinder innovation.”

While well-intentioned, the study showed that “not wanting to hinder innovation” and pushing off a responsible approach to cloud cost management does exactly what the respondents are trying to avoid: halts cloud adoption, cripples innovation, lowers the quality of service, increases cost, and creates a sprawling underutilized cloud footprint.

Cloud Cost Management Directly Impacts Company Culture and Business Bottom Line

The reality is that cloud cost management directly impacts business. Thankfully, there are steps to take to mitigate the commonplace inefficiencies identified by Owen. For example, 33% of respondents are manually extracting and aggregating cloud costs in a spreadsheet – this is the epitome of anti-agile. Only 52% of instances are rightsized for their workload and, beyond that, only 52% of respondents are taking advantage of Reserved Instance discounts.

The tools and opportunities to improve the health and efficiency of your cloud environments are readily available. In fact, the 451 Research report shows average savings of 27% were achieved through the use of a cost management platform. With an expected CAGR of 17% from 2017 to 2022, now is the time to implement the behavioral changes that instill a culture of FinOps within your organization.

The problem is shared accountability – The solution is a FinOps culture

What became apparent in the research presented by Owen Rogers is a distinct need for IT and Finance teams to come to the table together to discuss the path forward. The good news is there are companies that are pushing the envelope and leading the way in diligent and responsible cloud cost management. Those who have embraced a FinOps culture are utilizing performance benchmarking and have a clear understanding of the fully-loaded costs of their cloud infrastructure. This is the promise that we can aspire to and it starts with collaboration between IT, finance, and individual lines of business.

FinOps high performers have near real-time visibility of all cloud spend. Individual teams understand their portion of total spend, are enabled to budget and track against targets, and utilize Reserved Instances for 80-95% of their cloud services.

Similar to having a clear understanding of household finances, this level of diligence affords more benefits than just cost savings. A remarkable side effect of FinOps culture is a 10-40% improvement in operational efficiency within your organization.

FinOps Foundation

In addition to the information presented at the Cloud Economic Summit, Cloudability launched the FinOps Foundation. Comprised of founding members from Atlassian, Nationwide, Spotify, Autodesk, letgo, and many others, the FinOps Foundation is a non-profit trade organization bringing people together to create best practices around cloud spend.

J.R. Storment, Cloudability Co-founder, takes on the role of President of The FinOps Foundation. J.R. describes the need for the organization here.

“…Why is the Foundation needed? At many companies I talk with, engineering teams spend more than needed with little understanding of cost efficiency.”

J.R. Storment, Cloudability

We are excited to see our partner defining this space and eager to participate in the FinOps Foundation. We are also looking forward to reading “Cloud Financial Management Strategies, Creating a Culture of FinOps,” their O’Reilly Media book, which is slated to be published later this year.

Thanks again to Cloudability for hosting us at the event, we are looking forward to an exciting year together.

Robb Allen is the CEO of Effectual, Inc.

Building Strength Through Partnership

The cloud partner ecosystem is changing. The days when organizations could act as a “Jack of all trades, master of none” are over.

Legacy IT resellers are going the way of the dinosaur in favor of partners who can deliver clear value-add with a track record of transformative success. Specialization is the order of the day. This cuts to the heart of what we mean by a partnership — and how it differs from simply being a “vendor.”

IT partnerships should allow your in-house team to remain focused on generating revenue and building the organization.

Why Specialized Partnerships Matter

Choosing the right partner is absolutely critical to executing a successful cloud transformation. We addressed this in a previous post. Every organization is necessarily limited by its own technical and human resources. The right partner brings expertise, experience, and proven processes to ensure that internal limitations don’t equal a failed transformation process.

A Successful Cloud Partnership

Let’s take a look at one of the most recent and important cloud partnerships: AWS and VMware. AWS brought their cloud platform services together with VMware’s virtualization expertise. The result was a specialized certification program, a robust migration service, a cost insight tool providing greater transparency into cloud spending, and a joint hybrid cloud product to incentivize customer adoption. Each partner brought highly specific value-add services and together they created a game-changing cloud solution for the enterprise.

Partners Versus Vendors

It’s worth exploring what we mean when we talk about being a partner as opposed to being a vendor. A vendor is easy enough to explain: it is a company providing a service. The point is that even the best vendors are not as invested in your success as a partner is. They certainly wish their customers success and hope for continued business, but there is no strategic, long-term involvement or commitment to understanding their clients’ unique business goals.

In some cases, vendors may even push templated or cookie-cutter solutions that simply don’t fit. This isn’t to say that every vendor is out to take advantage of their customers; it’s simply a recognition that a generalized vendor offering tends to be limited, in contrast to a specialized partnership.

By comparison, a successful partnership is a more intimate relationship. In these engagements you’re not just purchasing IT services – you’re working hand-in-hand to grow the efficiency and effectiveness of your IT resources.

The key difference is a subtle but important one — collaboration. It’s often thought that a good partner will “take care of everything” for you, but this is not true, nor should it be. A true partner requires your input to understand how your business defines success, and relies on this data to make informed decisions on the technologies they deploy. It is essential for your teams to be involved in this process, as they will adopt and learn new methodologies and processes throughout the engagement.

It’s not about choosing between vendors or partners. It’s about recognizing where more generalized vendors will fulfill your needs and where specialized partners are a better fit. Simple, straightforward tasks are fine for vendors. More involved and strategic endeavors, however, require a partner. Cloud security, migration, and cost optimization are exactly the types of endeavors that call for partners.

Extending Your In-House Capabilities

Strong partners can in effect serve as an extension of your IT team, expanding your resources and solving problems that might otherwise require additional training or experience beyond the expertise and skill sets of your internal teams – all while your in-house team remains focused on generating revenue and building the organization.

Keeping your teams focused on their core responsibilities has a highly desirable secondary effect – boosting in-house morale. Not only does this improve the workplace, it makes it easier for you to attract and retain top talent.

Cloud Confidently™

At effectual, we engage you as a partner, not a vendor, which is why we specialize in cloud rather than cloud-adjacent services like data center operations. Our deep experience in cloud enablement facilitates your digital transformation, from helping you determine the best implementation strategy to establishing metrics that quantify and measure your success. Our deepest specialization is in security and financial optimization.

The important thing is not to be just technologists, but to be able to understand the business goals [clients are] trying to achieve through the technology.

Cloud is a rapidly evolving ecosystem. AWS rolled out 1,400 new services in 2017, another 800 throughout the first half of 2018, and an impressive number of new product and service announcements during re:Invent 2018. We understand that it can be difficult to wade through these waters to find the right solutions for your business challenges, including your specific security requirements. What’s more, your team is likely already fully committed to running core applications and tools. You need a partner who can keep your in-house team free to do what it does best.

RightScale’s 2018 State of the Cloud report found that most organizations believed they were wasting 30 percent of their cloud spend; in fact, the study found that 35 percent of their cloud spend was attributable to waste. We look for ways to help our partners not only get their invoices under control but also understand what is driving their cloud costs. Finally, we help organizations properly allocate their spend, ensuring that the right applications, business units, regions, or any other grouping of your business is spending exactly what it should and no more.

We strive to understand your long- and short-term goals by working closely with your organization and provide you with strategic solutions for sustained growth. Interested in learning more? Reach out and let us know what you are looking to solve – we love the hard questions.

Robb Allen is the CEO of Effectual, Inc.

Next Up: Machine Learning on AWS

If you have been to AWS’s re:Invent, then you know the tremendous amount of excitement that cloud evangelists experience during that time of the year.

The events that AWS hosts in Las Vegas provide a surreal experience for first-timers and are sure to excite even the most seasoned of veterans. Let’s talk about one of the exciting technologies that are sure to change the world as we know it, or at least the businesses we are familiar with – Amazon Machine Learning.

Introduced on April 9, 2015, Amazon Machine Learning (ML) has received a surge of attention in recent years given its capability to provide highly reliable and accurate predictions with a large dataset. From using Amazon ML to track next-generation stats in the NFL, to analyzing real-time race data in Formula 1, to enhancing fraud detection at Capital One, ML is changing the way we share experiences and interact with the world around us.

During re:Invent 2018, AWS made it clear that ML is here to stay, announcing many offerings that support the development of ML solutions and services. But you may be wondering: what exactly is Amazon ML?

According to AWS’s definition:

“Amazon Machine Learning is a machine learning service that allows you to easily build predictive applications, including fraud detection, demand forecasting, and click prediction. Amazon Machine Learning uses powerful algorithms that can help you create machine learning models by finding patterns in existing data and using these patterns to make predictions from new data as it becomes available.”

We, as a society, are at the point where machines are actively making decisions in many of our day-to-day interactions with the world. If you’ve ever shopped as a Prime member, you have already experienced an ML algorithm that is in tune with your buying preferences.

In our Engineer’s Corner, our very own Kris Brandt discusses the critical initial step toward implementing an ML project – Data Lake creation – in Amazon Web Service As A Data Lake. In that blog, Kris explores what a Data Lake is and provides some variations on its implementation. The development of a robust data lake is a prerequisite for an ML project that delivers the business value expected from the service capabilities. ML runs on data, and having plenty of it provides a foundation for an exceptional outcome.

Utilizing existing data repositories, we can work with business leaders to develop cases for leveraging that data and ML for strategic growth. You can connect with the Effectual team by email.

Because of ML’s proliferation throughout the market, AWS announced these ML solution opportunities during re:Invent 2018:

AWS Lake Formation
“This fully managed service will help you build, secure, and manage a data lake,” according to AWS. It allows you to point it at your data sources, crawl the sources, and pull the data into Amazon Simple Storage Service (S3). “Lake Formation uses Machine Learning to identify and de-duplicate data and performs format changes to accelerate analytical processing. You will also be able to define and centrally manage consistent security policies across your data lake and the services that you use to analyze and process the data,” says AWS.

Amazon Textract
“This Optical Character Recognition (OCR) service will help you to extract text and data from virtually any document. Powered by Machine Learning, it will identify bounding boxes, detect key-value pairs, and make sense of tables, while eliminating manual effort and lowering your document-processing costs,” according to AWS.
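
For a concrete sense of what “extracting text and data” looks like in practice, here is a minimal sketch of parsing a Textract-style response. The response shape (Blocks / BlockType / Text) follows the documented DetectDocumentText output, but the sample data and the boto3 call mentioned in the comments are illustrative assumptions, not output from a real run:

```python
# Sketch: pulling detected lines out of a Textract-style response.
# A real response comes from, e.g.:
#   boto3.client("textract").detect_document_text(Document={"Bytes": ...})
# The sample dict below only imitates that shape for illustration.

def extract_lines(response: dict) -> list[str]:
    """Return the text of every LINE block in a Textract response."""
    return [
        block["Text"]
        for block in response.get("Blocks", [])
        if block["BlockType"] == "LINE"
    ]

# Illustrative (made-up) response fragment:
sample = {
    "Blocks": [
        {"BlockType": "PAGE"},
        {"BlockType": "LINE", "Text": "Invoice #1001"},
        {"BlockType": "WORD", "Text": "Invoice"},
        {"BlockType": "LINE", "Text": "Total: $42.00"},
    ]
}

print(extract_lines(sample))  # ['Invoice #1001', 'Total: $42.00']
```

Downstream steps – detecting key-value pairs or tables – work the same way over other BlockType values in the same Blocks list.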

11 AWS Snowball Planning Considerations

Data transfer/migration is a key consideration in any organization’s decision to move into the cloud.

If a sound strategy is applied, migration of on-premises data to the cloud is usually a seamless process. When an organization fails to do so, however, it risks running into challenges stemming from deficiencies in technical resources, inadequate planning, and/or incompatibility with legacy systems, to name a few.

Data transfer via AWS Snowball is no exception. If performed incorrectly or out of order, some of the seemingly insignificant tasks related to the data migration process can become substantial obstacles that adversely affect a timeline.  The AWS Snowball device can be simple to use if one is familiar with other AWS data transfer services and/or follows all of the steps provided in the AWS Snowball User Guide. However, neglecting a single step can greatly encumber an otherwise ordinary data transfer process.

According to AWS on its service:

“AWS Snowball is used to transport terabytes or petabytes of data to and from AWS, or by those who want to access the storage and compute power of the AWS Cloud locally and cost effectively in places where connecting to the internet might not be an option.”


When preparing to migrate data from on-premises storage into AWS via a Snowball device, an organization should be aware of the importance of 11 easily overlooked tasks and considerations associated with planning for the data move. They are as follows:

1. Understanding the specifics of the data being moved to the cloud.

Ensure that it is compatible and can transfer seamlessly to the cloud via AWS Snowball. Follow a cloud migration model to help lay out specific details and avoid surprises during the data transfer process.

2. Verifying and validating the amount of data being transferred.

Snowball is intended for large data transfers (over 10 terabytes). Using it for smaller data transfers is not a cost-effective option.

3. Verifying that the workstation meets the minimum requirement for the data transfer.

It should have a 16-core processor, 16 GB of RAM, and an RJ45 or SFP+ network connection.

4. Performing a data transfer test on the workstation an organization plans to use to complete the task.

This will not only equip the organization with an understanding of the amount of time needed to perform the transfer but will provide an opportunity to try various methods of transferring data. Additionally, it will assist with estimating the time the Snowball device will need to be in the organization’s possession, as well as its associated cost.

NOTE: The Snowball Client must be downloaded and installed before this step is performed.
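
A back-of-the-envelope calculation can frame the test transfer’s results. The sketch below is our own illustration, not an AWS tool; the throughput and overhead figures are assumptions you would replace with numbers measured in your own test:

```python
# Rough transfer-time estimate for sizing a Snowball job window.
# Throughput and overhead are assumptions to replace with figures
# measured during your own test transfer (step 4 above).

def transfer_hours(data_tb: float, throughput_mbps: float,
                   overhead: float = 0.25) -> float:
    """Hours to copy `data_tb` terabytes at `throughput_mbps` (megabits/s),
    padded by `overhead` for retries, verification, and restarts."""
    bits = data_tb * 1e12 * 8              # decimal terabytes -> bits
    seconds = bits / (throughput_mbps * 1e6)
    return seconds / 3600 * (1 + overhead)

# e.g. 20 TB over a sustained 1 Gbps local link:
print(round(transfer_hours(20, 1000), 1))  # 55.6
```

An estimate like this feeds directly into steps 9 and 10: it tells you how many of the device’s on-site days the copy itself will consume.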

5. Creating a specific administrative IAM user account for the data transfer process via the management console.

This account will be used to order, track, create and manage Snowball Import/Export jobs and return the device to AWS.

NOTE: It is important to avoid using personal IAM user accounts if individuals will be responsible for ordering the device and performing the data transfer.

6. Following the “Object Key Naming convention” when creating S3 buckets.

It is also important to confirm that the selected S3 bucket name aligns with the expectations of the stakeholders.
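
Alongside object key conventions, bucket names themselves must satisfy S3’s naming rules (3–63 characters; lowercase letters, digits, hyphens, and periods; beginning and ending with a letter or digit). A small pre-flight check, sketched here as an illustration, can catch violations before stakeholders sign off on a name:

```python
import re

# Sketch of a pre-flight check against the published S3 bucket naming
# rules: 3-63 characters; lowercase letters, digits, hyphens, periods;
# must start and end with a letter or digit; no consecutive periods.
# Your organization's own key-naming conventions layer on top of this.

BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_RE.match(name)) and ".." not in name

print(valid_bucket_name("acme-events-2020"))  # True
print(valid_bucket_name("Acme_Events"))       # False (uppercase, underscore)
```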

7. Confirming the point(s) of contact and shipping address for the Snowball device.

This is especially important if the individual ordering the device is different from the one performing the data transfer.

8. Setting up SNS notifications to help track the stages of the Snowball job.

This will keep the stakeholders informed of the shipping status and the importing of data to the S3 bucket.

9. Being aware of how holidays could affect the progress or process of the data-transfer timeline.

This is important because additional costs are accrued 10 days after the Snowball is delivered.

10. Considering the organization’s administrative processes that might hinder or delay the data transfer process.

Factoring internal processes (e.g., Change Request management, stakeholder buy-in, technical change moratoriums, etc.) into the timeframe it will take to receive the device, start the job, and ship it back to AWS can help prevent unnecessary fees.

NOTE: The Snowball device has no additional cost if it is returned within 10 days from the date it is received. Following that time, however, a daily late fee of $15 is applied until the date AWS receives it.
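
The fee structure in the note above is simple enough to sanity-check in a few lines. The sketch below hard-codes the 10-day window and $15 daily fee as described here; confirm current AWS pricing before budgeting from these numbers:

```python
# Sketch of the Snowball cost note above: the first 10 days on site are
# included; each additional day accrues a $15 late fee. Figures are as
# stated in this article -- verify against current AWS pricing.

FREE_DAYS = 10
DAILY_LATE_FEE = 15  # USD

def late_fee(days_on_site: int) -> int:
    """Late fee in USD for keeping the device `days_on_site` days."""
    return max(0, days_on_site - FREE_DAYS) * DAILY_LATE_FEE

print(late_fee(9))   # 0  -- returned within the free window
print(late_fee(14))  # 60 -- four chargeable days
```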

11. Keeping the original source data intact until the data import is confirmed.

It is very important that source data remain intact until the Snowball device has been returned to AWS, the data import has been completed, and the customer has validated the data in the S3 bucket(s).

Transferring data from on-premises to an AWS Snowball can be an uneventful endeavor when thorough planning is done in advance of ordering the device. Taking these 11 planning tasks and considerations into account is essential to eliminating some of the potential headaches and stress occasionally associated with this type of activity.

Refer to AWS Snowball Documentation for additional information and specific instructions not covered in this article.

If you or your organization has more questions, reach out to us.

Amazon Web Service as a Data Lake

“Cloud,” “Machine Learning,” “Serverless,” “DevOps” – technical terms utilized as buzzwords by marketing to get people excited, interested, and invested in the world of cloud architecture.

And now we have a new one – “Data Lake.” So, what is it? Why do we care? And how are lakes better than rivers and oceans? For one, it might be harder to get swept away by the current in a lake (literally, not metaphorically).

A Data Lake is a place where data is stored regardless of type – structured or unstructured. Analytics or queries can then be run against that data. An analogy for a data lake is the internet itself. The internet, by design, is a collection of servers labeled by IP addresses so that they can communicate with each other. Search engine web crawlers visit the websites associated with these servers, accumulating data that can then be analyzed with complex algorithms. The results allow a person to type a few words into a search engine and receive the most relevant information. This kind of indiscriminate data accumulation, paired with the presentation of contextually relevant results, is the goal of data lake utilization.

However, for anyone who wants to manage and present data in such a manner, they first need a data store to create their data lake. A prime example of such a store is Amazon S3 (Simple Storage Service) where documents, images, files, and other objects are stored indiscriminately. Have logs from servers and services from your cloud environments? Dump them here. Do you have documentation that is related to one subject, but is in different formats? Place them in S3. The file type does not really matter for a data lake.
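
Indiscriminate storage still benefits from consistent key prefixes. The sketch below builds Hive-style `key=value` partition paths, a common convention that downstream services such as Athena and Glue understand; the dataset, bucket, and file names are placeholders:

```python
from datetime import date

# Sketch: a consistent key-prefix scheme keeps an S3 data lake queryable.
# Hive-style `key=value` folders are a common convention recognized by
# Athena and Glue; all names here are placeholders.

def partitioned_key(dataset: str, day: date, filename: str) -> str:
    """Build a Hive-style partitioned S3 key for one data file."""
    return (
        f"{dataset}/year={day.year}/month={day.month:02d}/"
        f"day={day.day:02d}/{filename}"
    )

key = partitioned_key("app-logs", date(2019, 3, 14), "events-001.json")
print(key)  # app-logs/year=2019/month=03/day=14/events-001.json

# The upload itself would then be something like:
#   boto3.client("s3").upload_file(local_path, "my-lake-bucket", key)
```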

Elasticsearch can load data from S3, indexing your data through algorithms you define and providing ways to read and access that data with your own queries. It is a service designed to give customers search capability without the need to build their own search algorithms.

Athena is a “serverless interactive query service.” What does this mean? It means I can load countless CSVs into S3 buckets and have Athena return queried data as a data table output – think database queries without the database server. In practice, you will need to implement cost management techniques (such as data partitioning) to limit per-query costs, as you are charged based on the amount of data scanned by each query.
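
To make the partitioning point concrete, here is a sketch of a query template that prunes to a single day of a table partitioned by `year`/`month`/`day` columns. The table and column names are illustrative; an actual run would submit the SQL through the Athena console or API:

```python
# Sketch: Athena charges per data scanned, so restricting a query to
# specific partitions keeps per-query cost down. Table and column names
# are illustrative; a real run submits the SQL via the Athena API, e.g.
#   boto3.client("athena").start_query_execution(QueryString=sql, ...)

def daily_error_query(table: str, year: int, month: int, day: int) -> str:
    """SELECT limited to one day's partition of a year/month/day table."""
    return (
        f"SELECT status, count(*) AS hits FROM {table} "
        f"WHERE year = {year} AND month = {month} AND day = {day} "
        f"AND status >= 500 GROUP BY status"
    )

sql = daily_error_query("app_logs", 2019, 3, 14)
print(sql)
```

Because the WHERE clause names only partition columns plus one data column, Athena reads just that day’s files rather than the whole table.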

Macie is an AWS service that ingests logs and content from all over AWS and analyzes that data for security risks. From personal identity information in S3 buckets to high-risk IAM Users, Macie is an example of what types of analysis and visualization you can do when you have a data lake.

These are just some examples of how to augment your data in the cloud. S3, by itself, is already a data lake – ‘infinite’, unorganized, and unstructured data storage – and the service is already hooked into numerous other AWS services. The data lake is here to stay, and it is a stepping stone to utilizing the full suite of technologies available now and in the future. Start with S3, add your data files, and use Lambda, Elasticsearch, Athena, and traditional web pages to display the results of those services. No servers, no OS configuration or OS-level security concerns; just development of queries, Lambda functions, API calls, and data presentation – serverless.

Our team is building and managing data lakes and the associated capabilities for multiple organizations and can help yours as well. Reach out to our team for some initial discovery.

The Right Partner is Better than a Crystal Ball

Mistakes can create amazing learning opportunities and have even led to some of the most beneficial discoveries in human history, but they can also have far-reaching, time-consuming, and costly implications.

Luckily, someone else has probably already made the mistakes you’re bound to make, so why not reap the benefits of their errors and subsequent experience?

Know Your Strengths and Limitations

One of my passions in life is building things. Working with my hands to create something new and unique fills me with a sense of accomplishment beyond description. Over the years, I’ve taken on a variety of projects to scratch this itch; from baking to renovating my kitchen (I’m really quite proud of the custom-built cabinets I made from scratch), to customizing various vehicles I’ve owned over the years. Along the way, I’ve built several Jeeps into serious off-road machines.

In the years before YouTube, when I was first developing the skills required to lift and modify a Jeep, I often encountered situations where I wasn’t confident in my knowledge or abilities. I knew that the vehicle I was working on would need to be just as safe and reliable on the freeway as it would be out in the middle of nowhere – places where the results of my efforts would be stress-tested and the consequences of poor workmanship could be catastrophic. Each time I encountered an area where I had limited knowledge or experience, I would do all the research I could, then find someone with the right experience who could coach me through the process (and critique my work along the way).

Fortunately, I had a ready supply of trusted friends to advise me. In my time spent driving these heavily modified vehicles, I did encounter the occasional failure, but thanks to the skills I developed under the direction of these watchful eyes, none of them ever put me at significant risk.

“The wise man learns from the mistakes of others.”

Otto von Bismarck

As enterprises modernize their infrastructure, they should look at IT as a contributor to business innovation rather than a means to an end. Alongside this shift in view, the expertise required to define strategy, architect solutions, and deliver successful outcomes is increasingly difficult to acquire. In this dearth of available talent, many of the enterprises I’ve been dealing with are struggling with the decision between:

  1. Delaying modernization efforts
  2. Plowing ahead and relying on internal resources to get trained up and tackle the challenges
  3. Bringing in a third party to perform the work of modernization

Unfortunately, none of these options is ideal.

Choosing the Best Path

The first option, delaying modernization, limits the enterprise’s ability to deliver products and services to their stakeholders and clients in innovative ways – opening the door for disruptive competitors to supplant them. For dramatic evidence of this, look at the contrasting story of Sears, the first company to offer ‘shop at home’ functionality, and the disruptor that has supplanted it as the go-to home shopping solution. The option of delaying presents a significant risk, but the risk of assigning internal resources to address issues they’re not fully prepared to handle should not be underestimated.

Plowing ahead with a team that’s facing unique challenges for the first time means you’ll be lacking the benefits of experience and hindsight. In my previous posts, I’ve discussed some of the hidden traps encountered along the modernization journey, many of which can only be seen in hindsight. These traps can only be avoided once you’ve had the inevitable misfortune and experience of having fallen into them before. “Fool me once…”

It’s similar to the Isla de Muerta from the Pirates of the Caribbean movies; “…an island that cannot be found – except by those who already know where it is.” Unlike the movie, there’s nothing magical about the pitfalls that litter the path to modernization. Many of the concepts that we have accepted as IT facts for decades are invalidated by modern approaches. So, the logical decision seems simple: outsource the effort to someone who has been there before.

Finding the Right Experience and Approach

Bringing in a partner experienced in the many areas of IT modernization is the safest of the three options, but your mode of engagement with this third party directly determines the benefits your enterprise will enjoy. The best advice I can offer is to look for a provider who views modernization from a business perspective, not merely as a technical effort. Most providers will tout their technical expertise (we at Effectual do too), but the reality is that nearly all competent providers have people with the same certifications. Technical certifications are no longer the differentiator they used to be. When you are interviewing these teams, ask how they plan to interact with your various enterprise teams outside of IT. If they look puzzled or don’t have an answer, you know they are not a business solution provider.

Once you have an idea of who is able to help with your modernization efforts, you need to make a decision regarding the methodology that will best suit your enterprise. One possible route is to completely turn the effort over to the outsourced team. While this is a fairly risk-free approach that leaves you with transformed IT when the project is over, you don’t gain any of the expertise required to manage your environment moving forward. I’ve found that the greatest benefits are realized when an enterprise and their provider partner together on the solution.

Providers as Partners
Partner resources collaborate with enterprise resources to deliver solutions, while also providing training, insight, oversight, and guidance.

In this scenario, the partner team takes the lead on the migration project under the executive direction of the enterprise, just like my friends who would help with my vehicle builds. Partner resources collaborate with enterprise resources to deliver solutions, while also providing training, insight, oversight, and guidance. At the end of the day, the enterprise enjoys a better purpose-built solution and develops the expertise to enhance it as additional business requirements are identified, or existing requirements evolve.

What the Future Holds

As modernization efforts start to take hold and your teams gain confidence, you should not consider the journey complete. This is the point in the revolution where a modernized organization can truly view IT and engineering as the linchpin of its competitive advantage, whether through cloud adoption, big data, artificial intelligence, mobility, or other current technologies. Historically, the interaction between business and IT has been a two-step process: the business conceptualizes features that would benefit some constituency, whether internal or external, then directs IT to build them.

In the new world, where technological capabilities are rapidly evolving and growing, the competitive advantage comes primarily from changing that two-step process. The business first asks IT “What is possible?” and then business teams collaborate with IT to deliver forward-thinking solutions. This is the behavior that enables innovation and disruption within industries. We’ll explore this topic in depth in a future post.

Learning from the Experts

What has made me a successful builder over the years has been my good fortune to have skilled artisans available to guide and coach me through the work as I was learning how to do it. As I learn tips and tricks from experts, I begin to behave like an expert and deliver high quality work. As you look to your modernization efforts, your enterprise can Cloud Confidently™ and see similar growth by bringing in the right partners to help leverage your team’s skills and understanding.

What I Saw at AWS re:Invent That You May Have Missed

It was an interesting re:Invent for me – launching our new company, the constant feeds from AWS and several partners filled with litanies of new feature and product announcements, multiple multi-hour keynote speeches (yes, I did sit through all of them), working to discern hype from reality, fighting through the throngs of people, and trying to convince the 12-year-old child inside me that I don’t need a DeepRacer.

In the midst of all of this, there were a couple of interesting success stories that really validated my recent lines of thought regarding the business ramifications of cloud transformation.

Andy Jassy’s wide-ranging keynote included product and feature announcements related to Database, Storage, Machine Learning, Security, Blockchain, and who can forget Outposts (so you can run the cloud in your closet).

Revolutionizing the Insurance Industry

In his keynote, Andy Jassy introduced and turned the stage over to Dean Del Vecchio, EVP, CIO, and Head of Enterprise Shared Services for Guardian Insurance. Guardian is a great example of a legacy enterprise that has embraced the revolution despite being in a highly regulated industry. With the words, “I’m going to start in a place that may not be expected – I’m going to talk about our workplace strategy”, Mr. Del Vecchio had my full attention. From there he went on to describe Guardian’s multi-year transformation, which needed to take place before they moved their first workload to the cloud. They changed their office environment, modernized their project methodology, trained their staff on new technologies, and ultimately revolutionized their culture. This was not a tale of a headlong sprint to the cloud; it was a thoughtful, self-aware, and measured approach.

Dean Del Vecchio, EVP, CIO and Head of Enterprise Shared Services for Guardian Insurance

Guardian is a great example of a legacy enterprise that has embraced the revolution despite being in a highly regulated industry.

Once cultural change had begun to take root, Guardian stood up AWS environments and ran Proof of Concept workloads for a full year before the first workload was moved. They identified gaps during this time and worked with vendors to develop solutions that enabled Guardian to be bold and confident in their migration. They documented their technological biases, the core being a Cloud First posture (rather than an All-In on cloud directive). Because of their approach and inherent understanding of the revolutionary nature of 21st-century technologies, they were able to take a Production First approach to moving to the cloud. There were undoubtedly significant pain points and localized failures throughout this journey, but having come through the bulk of it now, Guardian sees their adoption of cloud as a competitive advantage. They have revolutionized the way they interact with their customers and are excited to begin making use of forward-looking technologies like AI, AR, and VR in the cloud to further enhance client experience.

Guardian’s journey revolutionized a 158-year-old Fortune 250 enterprise, from top to bottom.

What was most interesting to me was what their journey was not. It was not exclusively an IT effort, a rapid lift and shift migration, or even primarily technical in nature. It was definitely not shortsighted in nature. It did revolutionize a 158-year-old Fortune 250 enterprise, from top to bottom, and they are now well-positioned for another 158 years of success.

Fender Guitars – re:Invented

In his keynote, Werner Vogels started off by talking about minimizing blast radius, one of my favorite topics, and wrapped up by talking about DeepRacer, the toy my inner 12-year-old seems to think I need. In between, amongst some explanations of high-level database software design principles, he introduced Ethan Kaplan, Chief Product Officer of Fender Digital, part of Fender Musical Instruments, one of the premier guitar manufacturers in the world. Mr. Kaplan shared some of the results from research Fender conducted around their client base.

Basically, it boils down to three key points:

  1. 90% of all first-time guitar buyers quit playing within 6 months of the purchase
  2. Those who don’t quit will purchase 8-10 guitars over their lifetime
  3. Guitar players spend roughly 4x the value of their first guitar on lessons

From this data, they realized that there is a significant market for guitar lessons, and that if they can improve on the traditional lesson model, they will likely sell many more guitars.

Fender Digital Took a Cloud First Approach for All New Application Development

Fender Digital Chief Product Officer, Ethan Kaplan, discussed the apps and teaching methodology created to help new guitarists see success. Now shooting 4K video 6 days a week, Fender worked alongside AWS in developing a full video processing pipeline architecture that brings terabytes of new content to their users every day.

This led them in two directions. They created a number of apps to help new players understand their new instrument, and they developed an app and teaching methodology that is more fun for the new player and helps them see musical success sooner. Combined, these convert a higher percentage of new players into lifetime players. The story I saw here is that the adoption of revolutionary 21st-century IT technologies has enabled Fender to transform their business. They have always made great guitars and stringed instruments and will continue to do so, but now their business is more and more about teaching people how to play. Fender is shooting 4K videos on two soundstages six days a week to support this new mission, producing terabytes of content per day. Working alongside AWS to create a video processing pipeline architecture, Fender can now automatically transcode down to cellphones and up to 4K televisions, simultaneously auto-populate their CDN and CMS, and archive raw video to Glacier 24 hours a day, 7 days a week. Before long, the core of their business will be developing and delivering musical training curricula, with the manufacture of instruments becoming a sideline.

Realizing 21st-Century IT

For the past couple of months, I’ve been writing about how cloud and cloud-related technologies are revolutionary in nature, and how your decision to adopt cloud should be viewed as a business effort rather than a strictly technical effort. These two stories from re:Invent keynotes are great examples of similar success that I have seen with clients who have recognized and acted on this principle.

If you are reading this and you’ve already begun a cloud effort that looks like a data center move, it’s not too late to change course. Don’t fall into the trap of thinking that you’ll just Lift and Shift to the cloud now and transform later – this is a prime example of treating cloud as a technical rather than a business solution. I’ve yet to see an enterprise successfully follow through on this approach. If you’re not looking at 21st-century IT as a business change agent and competitive advantage, it won’t be long before one of your competitors is.

When Best Efforts Aren’t Good Enough

“Have you tried rebooting it?”

There was a time, not so long ago, when that was the first question a technician would ask when attempting to resolve an issue with a PC or a PC-derived server. Nor was this limited to servers; IT appliances, network equipment, and other computing devices could all be expected to behave oddly if not regularly rebooted. As enterprise IT departments matured, reboot schedules were developed for equipment as part of routine preventative maintenance. Initially, IT departments developed policies, procedures, and redundant architectures to minimize the impact of regular reboots on clients. Hardware and O/S manufacturers did their part by addressing most of the issues that caused the need for these reboots, and the practice has gradually faded from memory. While the practice of routine reboots is mostly gone, the architectures, metrics, and SLAs remain.

Five Nines (or 99.999%) availability SLAs became the gold standard for infrastructure and are assumed in most environments today. As business applications have become more complex, integrated, and distributed, the availability of the individual systems supporting them has become increasingly critical. Fault tolerance in application development is not trivial, and in application integration efforts it is orders of magnitude more difficult, particularly when the source code is not available to the team performing the integration. These complex systems are fragile and will behave in unpredictable ways if not shut down and restarted in an orderly fashion. If a single server supporting a piece of a large distributed application fails, it can cause system or data corruption that takes significant time to resolve, impacting client access to applications. The fragile nature of applications makes Five Nines architectures very important. Today, applications hosted in data centers rely on infrastructure and operating systems that are rock solid, never failing, and reliable to a Five Nines standard or better.

As we look at cloud, it’s easy to believe that there is an equivalency between a host in your data center and an instance in the cloud. While the specifications look similar, critical differences exist that often get overlooked. For example, instances in the cloud (as well as all other cloud services) carry a significantly lower SLA standard than we are used to; some are even provided on a Best Effort basis. It’s easy to understand why this important difference is missed – the hardware and operating systems we currently place in data centers are designed to meet Five Nines standards, so it is assumed, and nobody asks about it anymore. Cloud-hosted services are designed to resemble the systems we deploy to our data centers, and although the various cloud providers out there are clear and honest about their SLAs, they don’t exactly trumpet from the rooftops the difference between traditionally accepted SLAs and those they offer.

A Best Efforts SLA essentially boils down to your vendor promising to do whatever they are willing to do to make your systems available to you. There is no guarantee of uptime, availability or durability of systems, and if a system goes down, you have little or no legal recourse. Of course, it is in the interest of the vendor and their reputation to restore systems as quickly as possible, but they (not you) determine how the outage will be addressed, and how resources will be applied to resolve issues. For example, if the vendor decides that their most senior technicians should not be redirected from other priorities to address the outage, you’ll have more junior technicians handling the issue, who may potentially take longer to resolve it – a situation which is in your vendor’s self-determined best interest, not yours.

There are several instances where a cloud provider will provide an SLA better than the default of Best Efforts. An example of this is AWS S3, where Amazon is proud of their Eleven Nines of data durability. Don’t be confused by this; it is a promise that your data stored there won’t be lost, not a promise that you’ll be able to access it whenever you want. You can find published SLAs for several AWS services, but none of them exceed Four Nines. This represents effectively 10x the potential outage time over Five Nines and applies only to the services provided by the cloud provider, not the infrastructure you use to connect to them or the applications which run on top of them.

The nature of a cloud service outage is also different from one that happens in a data center. In your data center, catastrophic all-encompassing outages are rare, and your technicians will typically still have access to systems and data while your users do not. They can work on restoring services and on “Plan B” approaches concurrently. When systems fail in the cloud, oftentimes there is no access for technicians, and the work of restoring services cannot begin until the cloud provider has restored access. This typically leads to more application downtime.

Additionally, when systems go down in your data center, your teams can typically provide an ETA for restoration and status updates along the way. Cloud providers are notorious for not offering status updates while systems are down, and in some cases, the systems they use to report failures and provide status updates rely on the failed systems themselves – meaning you’ll get no information regarding the outage until it is resolved. Admittedly, these types of events are rare, but the possibility should still give you pause.

So, you’ve decided to move your systems to the cloud, and now you’re wondering how you are going to deal with the inevitable outages.
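The gap between these SLA tiers is easy to quantify. Here is a quick back-of-the-envelope sketch (the helper name is my own, not any provider's tooling) showing the annual downtime each tier permits:

```python
def downtime_per_year(sla_percent):
    """Maximum downtime per year (in minutes) permitted by an availability SLA."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return minutes_per_year * (1 - sla_percent / 100)

for sla in (99.9, 99.99, 99.999):
    print(f"{sla}% allows {downtime_per_year(sla):.1f} minutes of downtime per year")
```

Five Nines permits roughly 5.3 minutes of downtime per year, while Four Nines permits about 52.6 minutes – the 10x difference in potential outage time.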
There are really only a few options available to you:

  1. Do nothing and hope for the best. For some business applications, this may be the optimal (although most risky) path.
  2. Design your cloud infrastructure the way your data centers have been designed for years. My last two posts explored how expensive this path is, and depending on how you design, it may not offer the availability you desire anyway.
  3. Implement cloud infrastructure automation and develop auto-scaling/healing designs that identify outages as they happen and often respond before your team is even aware of a problem. This option is more cost-effective than the second, but it demands significant upfront capital, and its effectiveness depends on people well-versed in deploying this type of solution – people who are in high demand and hard to find right now.
  4. Rewrite application software to be cloud-native: modular, fault-tolerant applications that are infrastructure-aware and able to self-deploy and re-deploy through CI/CD patterns and embedded infrastructure as code. This is the ideal way to handle the challenge, but for most enterprise applications it would be a herculean effort and a bridge too far.

Over the past several decades, as we’ve made progress in IT toward total availability of services, you’ve come to rely on, take comfort in, and expect your applications and business features to be available all the time. Without proper thought, planning, and an understanding of the revolutionary nature of cloud-hosted infrastructure, that availability is likely to take a step backward. Don’t be like so many others and pay a premium for lower uptime. Be aware that there are hazards out there, and bring in experienced people to help you identify the risks and mitigate them. You’re looking for people who view your moves toward the cloud as a business effort, not merely a technical one.
Understand the challenges that lie ahead, make informed decisions regarding the future of your cloud estate, and above all, Cloud Confidently™!

Don’t Take Availability for Granted
Over the past several decades, as we’ve made progress in IT towards total availability of services, you’ve come to rely on, take comfort in, and expect your applications and business features to be available all the time. Without proper thought, planning, and an understanding of the revolutionary nature of cloud-hosted infrastructure, that availability is likely to take a step backward. Bring in experienced people to help you identify the risks and mitigate them.

Cloud: The Mirage of Massive Cost Savings (Ketchup on the side)

“Why are you moving to the cloud?” is a question I’ve asked more times than I can count. It’s one of the first questions posed to a potential client, for multiple reasons.

The two most important reasons are: first, I want a little insight into how thoughtful and educated this potential client is in relation to cloud, and second, I want to understand what metrics will be used to determine the success or failure of the project we are considering undertaking. Potential clients respond to this question in various ways, but almost always, one of their first answers is about saving money and/or cutting costs. When I hear this response, I ask a couple of follow-up questions to clarify how they plan on accomplishing this ambiguous goal. More often than not, they have no idea how they will recognize cost savings, and many just expect it to be a natural benefit of moving their VMs to the cloud.

This near-universal acceptance of a broad notion with little factual basis reminds me of the story of ketchup. In the mid-19th century, a doctor took the ketchup of the time (which was basically a fermented mushroom sauce or ground-up fish innards) and added tomatoes to it. He made some rather dubious claims regarding the maladies that could be cured by his new ketchup, which were picked up by the press. By the later part of the 19th century, with the help of unscrupulous hucksters along the way, nearly everyone believed that ketchup cured all ills. While ketchup does have some definite health benefits (it’s rich in Vitamin C and antioxidants) and is a very tasty condiment, a cure-all it is most definitely not.

The truth is, simple cloud migration, even when instances are right-sized and Reserved Instances (RIs) are purchased, is unlikely to produce significant cost efficiency for infrastructure that isn’t properly architected to take advantage of cloud services. In my last post I shared an example of two different cloud deployment strategies for a sample application. The five-year total operating costs were roughly $350k for one strategy and $14k for the other. The difference: to realize the operational cost savings of $336k over five years, an enterprise would need to spend roughly $100k and several months of effort upfront. Enterprises are wary of the upfront costs – 125% of the expensive model’s projected first-year operating costs, or 571% of its first-quarter operating costs – so they make a short-term financial decision to proceed with the $350k option. More often than not, this decision is made in the IT department, based on its limited budget visibility, not at the executive level where greater budgetary visibility and enterprise strategy reside.

Another dirty little secret about cloud cost management that doesn’t often percolate up to executive levels is that the flexibility of cloud allows your IT teams to immediately spin up services that generate significant cost with little or no financial oversight until the invoice comes due. For example, common compute instances at AWS can cost from $6–$10 per hour, with specialized services available through the marketplace costing several times that. An otherwise well-meaning IT employee (with no purchasing authority) could spin up a single $8/hour resource with no oversight, which by the time the bill has been received could total $6k–$7k in charges. While this situation would likely be recognized and addressed when the invoice was reviewed, dozens and dozens of smaller instances could take years of invoice cycles to clear up and, over time, have a much greater but less obvious impact. I have been involved in several remediation efforts for clients’ IT departments that were spending $500k+ annually on unaccounted-for cloud services.
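To see how quickly a single forgotten resource adds up, consider this back-of-the-envelope sketch (the rate and helper name are illustrative, taken from the $8/hour example above, not from any actual pricing API):

```python
HOURLY_RATE = 8.00   # the hypothetical $8/hour resource from the example
HOURS_PER_DAY = 24

def accrued_cost(days_running, hourly_rate=HOURLY_RATE):
    """Cost of a resource left running continuously for the given number of days."""
    return days_running * HOURS_PER_DAY * hourly_rate

# A monthly invoice typically arrives and gets reviewed 30-35 days after launch.
for days in (30, 35):
    print(f"After {days} days: ${accrued_cost(days):,.0f}")
```

Thirty days of continuous runtime comes to $5,760, and thirty-five days to $6,720 – squarely in the $6k–$7k range before anyone with purchasing authority ever sees a line item.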

Cross-Functional Governance

The most successful way I’ve seen enterprises address this new requirement is to create a cross-functional governance committee that includes representation from finance, IT and core business units, with the charter of managing cloud related costs.

Contrast this with the IT Provisioning Model, where costs were governed prior to the time of purchase. When an IT department needed additional infrastructure, a Capital Expenditure process was in place that required finance approval. Budgets and expenditures were relatively easy to manage, and without proper authority, individual purchasing power was limited. In the revolutionary world of 21st-Century IT, we need revolutionary methods of governance. The most successful way I’ve seen enterprises address this new requirement is to create a cross-functional governance committee that includes representation from finance, IT, and core business units, with the charter of managing cloud-related costs. In enterprises that have Cloud Steering or Governance Committees or a Cloud Center of Excellence, this cross-functional group works under their direction. My good friends at Cloudability, who have developed what is probably the most comprehensive cloud cost reporting and management toolset in the industry, refer to this committee as the Cloud Financial Office (CFO – I believe the pun is intended).

This committee evaluates the needs of the business, the reporting and cost management/containment requirements of finance, and the operational/support requirements of IT to determine the best approach for meeting the needs of all three stakeholders. It develops strategy, policies, and procedures for IT, finance, and the business that lead to a deployed cloud infrastructure that is manageable from a cost perspective. As I mentioned above, there are tools that support this mission, but without the insights of the entire committee to interpret and act on the data, you will not realize the value of the tools or succeed in being cost efficient with the capital you spend on cloud-based infrastructure. Tools are not a silver bullet.

Just as there’s a nugget of truth underlying the health benefits of ketchup, when thoughtfully planned, considered, and executed, transforming your infrastructure to the cloud can deliver significant IT cost reductions as well as several other powerful benefits. On the other hand, just as drinking a bottle of ketchup a day won’t cure or prevent any maladies, ignoring the revolutionary nature of the cloud and how your enterprise must adapt in order to Cloud Confidently™ won’t lead to any promised savings. It will likely result in higher costs for fewer benefits than you enjoy now. As you approach cloud adoption, remember: not everyone making claims of free and instant IT savings has your best interests at heart. Many of them, much like the 19th-century ketchup hucksters, benefit handsomely as you overspend in the cloud. It’s the 21st Century – don’t drink the ketchup, Cloud Confidently™!

A Tale of Two Models: Provisioning vs. Capacity

A couple of weeks ago, I wrote about current IT trends being ‘revolutionary’ as opposed to ‘evolutionary’ in nature.

Today, I want to expand on that concept and share one of the planning models that makes cloud systems in particular, and automated infrastructure in general, more cost-effective and efficient. When talking to clients, I refer to this as “The Provisioning vs. Capacity Model”. First, let’s look at the Provisioning Model, which, with some adaptation, has underpinned infrastructure decisions for the last five decades of IT planning. The basic formula is fairly complex, but looks something like this:

((((CurrentPeakApplicationRequirements * GrowthFactor) * HardwareLifespan) + FudgeFactor) * HighAvailability) * DisasterRecovery

Let’s look at a practical example of what this means. As an IT leader asked to host a new application, I would work with the app vendor and/or developers to understand the compute, storage, and networking configurations they recommend per process/user. Let’s say we determine that a current processor core can support 10 concurrent users and that a single user creates roughly 800K of data per day.

I would then work with the business to identify the number of users we expect to begin with, their estimate for peak concurrent users and what expected annual growth will be. Ultimately, we project that we will start with 20 users who may all be using the system at the same time. Within the first year, they anticipate scaling to 250 users, but only 25% of them will be expected to be using the system concurrently. By year five (our projected hardware lifespan) they are projecting to have 800 users, 300 of whom may be using the system at any given time. I can now calculate the hardware requirements of this application:  

Year | Users | Storage (GB) | Concurrent Users | Cores
1 | 250 | 73 | 63 | 7
5 | 800 | 234 | 300 | 30

Being an experienced IT leader, I ‘know’ that these numbers are wrong, so I’m going to pad them. Since the storage is inconsequential in size (I’ll likely use some of my heavily over-provisioned SAN), from here on out I’ll focus on compute. The numbers tell me that I’ll need 2 servers, each with 4 quad-core processors, for a total of 32 cores. Out of caution I would probably increase that to 3 servers. Configuring memory would follow a similar pattern. Because the application is mission-critical, it’ll be deployed in a Highly Available (HA) configuration, so I’ll need a total of six servers in case there is a failure in the first three. This application will also require infrastructure in our DR site, so we’ll replicate those six servers there, for a total order of twelve servers. In summary, on day one, this business would have a dozen servers in place to support 20 users.
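Under the assumptions stated above (10 concurrent users per core, 4 quad-core processors per server, one padding server, and doubling for both HA and DR), the day-one order can be sketched as follows (helper names are mine, for illustration only):

```python
import math

USERS_PER_CORE = 10      # from the vendor sizing guidance above
CORES_PER_SERVER = 16    # 4 quad-core processors per server

def provisioned_servers(peak_concurrent_users, pad_servers=1,
                        ha_factor=2, dr_factor=2):
    """Day-one server order under the Provisioning Model."""
    cores_needed = math.ceil(peak_concurrent_users / USERS_PER_CORE)      # 30 cores
    base_servers = math.ceil(cores_needed / CORES_PER_SERVER) + pad_servers  # 2 + 1 pad
    return base_servers * ha_factor * dr_factor  # double for HA, double again for DR

print(provisioned_servers(300))  # -> 12 servers to support 20 day-one users
```

Three hundred projected concurrent users at year five drive an order of twelve servers, matching the dozen-server outcome above.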

The Provisioning Model can lead to overkill

Under the provisioning model a Highly Available solution with sufficient Disaster Recovery infrastructure could result in a large server deployment to support a very small number of users.

I know what you’re thinking: “This is insanity, if my IT people are doing this, they are robbing me blind!” No, they aren’t robbing you blind, they are following a “Provisioning” model of IT planning. The reason they plan this way is simple: it usually takes months from the time that an infrastructure need is identified to the time that it is deployed in production. It looks something like this in most enterprises:

  • 1-2 weeks – Identify a need and validate requirements
  • 1 week – Solicit quotes from 3 approved vendors (if the solution comes from a non-approved vendor, add 3 months to a year for vendor approval)
  • 2-3 weeks – Generate a Capital Request with documented Business Justification
  • 2 weeks – Submit Capital Request to Finance for approval
  • 2-3 weeks – Request a PO from purchasing & submit to vendor
  • 2-3 weeks – Wait for vendor to deliver hardware & Corporate receiving to move equipment to configuration lab
  • 3-4 weeks – Manually configure solution (install O/S & applications, request network ports, firewall configurations, etc.)
  • 2 weeks – Install and Burn-In

The total turnaround time here is 15-20 weeks. Based on the cost, time, pain and labor it takes to provision new infrastructure, we want to do it right and be prepared for the future, and there is no quick fix if we aren’t. Using a provisioning model, the ultimate cost in deploying a solution is not in the hardware being deployed, but rather in the process of deploying it.
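Summing the stage estimates in the list above confirms that range (the figures are taken directly from the list; the variable names are mine):

```python
# (low, high) duration in weeks for each procurement stage listed above
stages = [
    (1, 2),  # identify need, validate requirements
    (1, 1),  # solicit quotes from approved vendors
    (2, 3),  # capital request with business justification
    (2, 2),  # finance approval
    (2, 3),  # PO from purchasing, submit to vendor
    (2, 3),  # vendor delivery and corporate receiving
    (3, 4),  # manual configuration
    (2, 2),  # install and burn-in
]

low = sum(lo for lo, _ in stages)
high = sum(hi for _, hi in stages)
print(f"Total turnaround: {low}-{high} weeks")  # -> 15-20 weeks
```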

The upshot of all this is that most of your IT infrastructure is sitting idle or nearly idle most, if not all, of the time. As we assess infrastructure, it is not uncommon for us to see utilization numbers below 10%. Over the past 15 years, as configuration management, CI/CD, virtualization, and containerization technologies have been adopted by IT, the math above has changed, but because those technologies are evolutionary in nature, the planning process hasn’t. In the Provisioning Model, we are always planning for and paying for capacity that we will need in the future, not what we need today.

Enter Cloud Computing, Infrastructure Automation, Infrastructure as Code (IaC), and AI. Combined, these technologies have ushered in a revolutionary way to plan for IT needs. IaaS and PaaS platforms provide nearly limitless compute and storage capability with few geographic limitations. Infrastructure Automation and IaC allow us to securely and flawlessly deploy massive server farms in minutes. AI and Machine Learning can be leveraged to autonomously monitor utilization patterns, identify trends, and predictively trigger scaling activities to ensure sufficient compute power is delivered “Just in Time” to meet demand, then scaled back as demand wanes. In cases where IaaS and PaaS providers experience localized outages, the same combination of IaC and AI can deploy your infrastructure in an unaffected region, likely before most of your user base or IT is even aware that an outage has occurred. Software updates and patches can be deployed without requiring system outages. The possibilities and opportunities are truly mind-boggling.

Taking advantage of these capabilities requires a complete change in the way our IT teams think about planning for and supporting the applications our users consume. As I mentioned above, the incremental hardware cost of over-provisioning in the data center is inconsequential when compared with the often unaccounted-for cost of deploying that hardware.
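The scale-out/scale-in decision at the heart of this “Just in Time” approach can be sketched in a few lines. This is a minimal, hypothetical illustration – a real implementation (AWS Auto Scaling, for example) adds cooldown periods, health checks, and predictive models:

```python
import math

def desired_capacity(current_instances, avg_utilization,
                     target_utilization=0.90, floor=1):
    """Instances needed so that average utilization trends toward the target."""
    needed = math.ceil(current_instances * avg_utilization / target_utilization)
    return max(floor, needed)

print(desired_capacity(10, 0.95))  # running hot -> scale out to 11
print(desired_capacity(10, 0.45))  # demand waned -> scale in to 5
```

The fleet grows when utilization runs above the target and shrinks as demand wanes, which is exactly the behavior the Provisioning Model can never deliver.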
In forward-looking IT, where IaaS and PaaS are billed monthly on a cost-per-deployed-capacity basis and infrastructure can be deployed nearly instantly, we need to abandon the Provisioning Model and adopt the Capacity Model. Before I proceed, you need to understand that all three pillars – IaaS/PaaS, Infrastructure Automation, and AI – must be in place to effectively take advantage of the cost savings and efficiency of the Capacity Model while still delivering secure, reliable services to your users. Merely moving your servers to the cloud (often referred to as “Lift and Shift”) and optimizing them for utilization may provide some initial cost savings, but at significant risk to the security, availability, and reliability of your services.

3 Pillars of the Capacity Model

IaaS/PaaS, Infrastructure Automation, and AI must all be in place to effectively take advantage of the cost savings and efficiency of the Capacity Model.

Following the Capacity Planning model, we try to align deployed infrastructure to utilization requirements as closely as we can, hour by hour. You may have noticed that in my Provisioning example above, I was primarily planning for the required capacity at the end of the lifespan of the infrastructure supporting the application, and building to a standard that no system would ever exceed 35-40% utilization. In the new Capacity model, I want every one of my services running as close to 90% utilization as possible, ideally with only enough headroom to absorb an increase in utilization for as long as it takes to spin up a new resource (typically only a few minutes). As demand wanes, I want to intelligently terminate services as they become idle.

I use the word “intelligently” here for a reason: many of these resources are billed by the hour, so if I automatically spin up and terminate a resource in 15 minutes, I am billed for a full hour; if I do it three times in a single hour, I’m billed for three hours.

Let’s look at a sample cost differential between the Provisioning and Capacity models in the cloud. For this exercise, I’m using the standard rack rates for AWS infrastructure, without applying any of the available discounting mechanisms, and keeping the calculations simple to illustrate the point.
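That billing-granularity point is worth making concrete. A hedged sketch of the arithmetic, assuming simple per-started-hour billing:

```python
import math

# Sketch of per-hour billing granularity: each run is billed as whole
# started hours, so three separate 15-minute runs cost three hours.

def billed_hours(run_minutes):
    return sum(math.ceil(m / 60) for m in run_minutes)

print(billed_hours([15, 15, 15]))  # three separate runs -> 3 billed hours
print(billed_hours([45]))          # one consolidated run -> 1 billed hour
```

This is why scale-in decisions need to be intelligent: consolidating short-lived work into fewer, longer-running resources can cut the bill even when the total compute time is identical.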

It’s also important to remember that Revolution is neither free nor easy; developing and refining the technologies to support these potential savings for this new application will cost $50k-$100k over the five years.

Provisioning Model – 5 Year Costs:

Year | Instance | Cost/Hour | Qty | Hours/Month | Annual Cost

Total Cost: $352,512.00

Capacity Model – 5 Year Costs:

Year | Instance | Cost/Hour | Qty | Hours/Month | Annual Cost

Total Cost: $14,551.32

In the model above, for simplicity, I only adjusted the compute requirements on a yearly basis; in reality, with the ability to dynamically adjust both instance size and quantity hourly based on demand, actual spend would likely be closer to $8k over five years. It’s also important to remember that Revolution is neither free nor easy; developing and refining the technologies to support these potential savings for this new application will cost $50k-$100k over the five years, depending on the application requirements. At the end of the day – or at the end of five years – following the Capacity model may cost well less than half what the Provisioning model would, while delivering much higher security, reliability, and availability of applications at a significantly lower support cost.

To wrap up this very long post: yes, it is true that massive cost savings can be realized through 21st Century IT Transformation, but it will require a Revolution in the way you think about supporting your business applications. Without people experienced in these very new technologies, you’re not likely to be happy with the outcome. Finally, if you encounter anyone who leads the charge to the cloud with words like “Lift and Shift,” please don’t hesitate to laugh in their face. If you don’t, you may end up spending $350,000+ for what could otherwise cost you $8,000.
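Since the detailed table rows aren’t reproduced here, the shape of the math can be sketched with hypothetical numbers (the $1.00/hour rate and instance counts below are placeholders of my own, not actual AWS pricing):

```python
# Hedged illustration of Provisioning vs Capacity cost math.
# The rate and fleet sizes are hypothetical, not AWS rack rates.

HOURS_PER_MONTH = 730

def provisioning_cost(rate_per_hour, qty, years):
    # Provisioning model: a peak-sized fleet runs 24x7 for the full lifespan.
    return rate_per_hour * qty * HOURS_PER_MONTH * 12 * years

def capacity_cost(rate_per_hour, avg_instances_by_year):
    # Capacity model: pay only for the average fleet actually running each year.
    return sum(rate_per_hour * avg * HOURS_PER_MONTH * 12
               for avg in avg_instances_by_year)

# Hypothetical example: 8 peak-sized instances provisioned up front,
# versus an average of 0.5-1.5 instances actually needed per year.
print(provisioning_cost(1.00, 8, 5))                      # 350,400
print(capacity_cost(1.00, [0.5, 0.75, 1.0, 1.25, 1.5]))  # 43,800
```

Even in this toy version, the capacity bill tracks actual demand rather than a projected end-of-life peak, which is where the order-of-magnitude gap between the totals above comes from.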

Cloud, All In or All Out?


I recently spoke with a good friend of mine who is a Finance SVP with a publicly traded North American manufacturer.

He was very excited to tell me that his executive team had been strategizing over the past couple of quarters and was getting ready to publicly announce they would be moving all IT services to the cloud, 100% complete by Q4 2019. As our conversation progressed, I asked him why they had made this decision, and he offered several reasons, some more valid than others. Ultimately, with some prior knowledge of how large, diverse, and (in several significant areas) outdated their technology estate was, I asked what their IT teams thought of this initiative. I’m pretty sure my jaw actually dropped at his reply: “Outside of the CIO’s office, nobody knows yet.”

It’s not just all or nothing

There are a whole host of issues with the direction this executive team was headed, but for this particular blog I want to focus on one particularly poor decision that I see play out over and over with potential clients: they are either All In, or All Out, on cloud adoption.

I would argue that the vast majority who take either of those two positions have a fundamental misunderstanding of what “Cloud” is. Since the term “Cloud” has been co-opted by nearly every vendor to mean almost anything, this misunderstanding is not surprising. For the purposes of this blog, I’ll be referencing the key components of cloud computing: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). At its core, each of these offerings supports the delivery of business application features to users.

At the end of the day, delivering business application features to users as effectively and efficiently as possible is, and should be, a primary concern of every executive. Executives hire very smart and talented teams of Architects, Analysts, Product/Program/Project Managers, Engineers, and Administrators to accomplish this delivery. As these teams work with their respective vendors to understand the nature of the various applications they must operate, they design, build, and configure systems to support them. To support diverse applications, these teams need diverse tools, and Cloud (IaaS, PaaS, and SaaS) is only one of them. As powerful as the cloud may be, it is not always ideal, or in some cases even suitable, for every situation or application.

All In, or All Out on cloud adoption?

I would argue that the vast majority who take either of those two positions have a fundamental misunderstanding of what “Cloud” is.

Imagine for a moment that you are hiring the best contractor in your area to build you a home. You’ve worked with them to determine the ideal design, select the desired finishes, and come up with a budget and timeline. At what point would you think it was in your best interest to dictate to this contractor, the best in the area, what tools they may or may not use to deliver your finished home? Wouldn’t it be better to let them use the best tool for each individual job that needs to be done? If so, why do we as executives think it is in our best interest to sit in board rooms and determine what tools our IT teams may or may not use, without understanding the nature of the applications they need to operate? The simple answer: it is not.

Rather than limiting the tools their teams can make use of, executives, as the primary visionaries and strategists of the enterprise, should develop guidelines for their teams. These guidelines help teams identify appropriate tools and strategies, ultimately aligning them with the overriding executive vision. In our practice, we break these guidelines into two primary sections: Outcome and Bias statements. Outcome statements generally speak to requirements related to the availability, reliability, durability, and usability of applications, while Bias statements are a prioritized list of preferences for how the application is delivered. This construct provides for executive oversight while also empowering teams to understand what is right and do it.

“Outside of the CIO’s office, nobody knows yet.”

There are many enterprises that have gone all in on cloud adoption and many that have avoided it altogether. In all my experience, I have yet to encounter an enterprise that has gone All In on the cloud without making significant compromises or undergoing supernatural gymnastics to get everything in (except for businesses that were born in the cloud). Likewise, I have yet to work with a business that has completely opted out of the cloud that couldn’t benefit from having some of its systems residing there.

As you can imagine, the jaw-dropping statement “Outside of the CIO’s office, nobody knows yet” was not the end of our conversation. We discussed the nature of cloud services, and he invited me to consult with several of his peers and superiors within the organization, where I was able to provide a little of my perspective and insight into the path they were preparing to undertake. The verdict is still out on what their plan for cloud adoption will be, but I have not seen them make a public announcement regarding 100% cloud adoption by EOY 2019.

IT Revolution, not Evolution


In my previous blog, I stated:

21st Century IT is a revolution – not an evolution of what we have been doing for decades. The skills required to transform through this revolution are different than those required to operate the existing state, which are different still from those required to operate the new state.

A fundamental misunderstanding of this concept underlies almost every troubled or failing IT transformation project. It took me a few years of assisting enterprises in their cloud migrations to fully understand the ramifications of the difference between evolution and revolution as it applies to IT initiatives and how they impact business.

First, let’s consider an earlier technological revolution

The combined impact of the personal computer, the GUI, and desktop publishing was revolutionary and transformative for the enterprise. Prior to this revolution, enterprises had specially trained computer operators to handle I/O functions for the mainframes, and steno pools with typewriters for document creation. As a result of this revolution, there was a fundamental change in the way business was done: employees, managers, and executives alike were able, and quickly required, to generate their own documents, manage their own calendars, and perform their own data I/O. Within a short time, everything changed. Successfully navigating this change required different IT staff with completely different skills. It ushered in and set the tone for the next 3-4 decades of IT practices.

Within a short time, everything changed

Successfully navigating this change required different IT staff with completely different skills. It ushered in and set the tone for the next 3-4 decades of IT practices.

It’s interesting to contrast this with the evolution of virtualization that took hold at the turn of the century. While virtualization had a significant impact on IT departments and how compute power was provisioned in the data center, it did not significantly change what IT staff did or how they did it. The skill sets required after a move to virtualization were mostly the same as those required before it, and the virtualization of data centers was primarily performed by existing IT staff. The impacts of this transformation were barely felt, if recognized at all, by those outside of IT.

I’ve spoken with countless enterprise leaders who view the transformation to 21st Century IT as nothing more than a data center migration – something normal to the ongoing operations of an IT department. While this can technically work, it’s unlikely to deliver the ideal outcomes promised and sought after. An IT estate moved in this manner will most likely cost more over the long term while negatively impacting security, availability, and performance. The great news is that it doesn’t have to be this way!

If you embrace this transformation as a revolution, impacting all aspects of how your enterprise does business, you’ll be taking the first important step.

A few years ago, my team and I were brought into a large financial services company. They were looking to contain IT costs and, as a result, were investigating the cloud as a way to accomplish that goal – but this is not the start of the story. Over the previous several decades, this enterprise had become one of a couple of “800lb Gorillas” in their particular vertical. They had thousands of employees, all the major customers, massive amounts of data, and an annual IT spend nearing nine digits. A few years prior to our involvement, a couple of start-up companies with few staff, no customers, no data, and extremely limited IT budgets entered their vertical and started disrupting it. Initially, these start-ups were ignored by the enterprise; then they were mocked; and ultimately, as they began to take market share away, they were feared. The enterprise started playing defense, leading to the cost-cutting exercise we were brought in to assist with.

If you embrace the transformation to 21st Century IT as a revolution, impacting all aspects of how your enterprise does business, you’ll be taking the first important step.

As we worked with this client and helped them understand the revolutionary nature of the cloud and the wide-ranging impacts it could have on the way they do business, they began to reevaluate their posture toward the insurgent companies. With the transformation to new technologies came a culture change. These changes positively impacted customer interactions and the speed with which our client was able to respond to feature requests. Eventually, our “800lb Gorilla” client became the disruptive innovator in their vertical. Today, aside from the name on the building and the vertical they serve, they don’t look much like they did when we first met them. The way they do business has fundamentally changed across their entire enterprise.

Your enterprise may or may not face similar challenges, and you may not need or want such sweeping change; regardless, understanding that your transformation is revolutionary, not evolutionary, will position you well for success. Don’t be surprised if embracing the revolution helps address some of the issues you are facing.

Just be aware that it isn’t easy or free — revolution never is.

Digital Transformation Journey of a Global Restaurant Chain


A year-long cloud transformation away from the global network of physical data centers the company used to run its systems and applications.

One of the world’s largest restaurant chains offers its stores and franchisees technology services as diverse as POS systems, store management systems, data analysis and predictive analytics, digital advertising, e-commerce, and customer engagement platforms.

Their IT organization uses a small core staff with limited resources, supported by a network of service providers and partners who fulfill key IT roles. Traditionally, the company used a global network of physical data centers to run its applications.

The company engaged with members of our team on key transformation initiatives:

  • Need to build environments faster and with more flexibility than what traditional infrastructure in a data center could provide
  • Need to decouple scale up / out decisions from development cycles to speed the development process (get new apps and update apps faster)
  • Address local needs while maintaining global standards
  • Reduce CapEx to increase financial flexibility and agility
  • Lack of internal expertise in implementing and managing a global, enterprise public Cloud environment
  • Need to tie costs to specific internal projects and teams while benefiting from economies of scale

Following the completion of this engagement a prominent financial analyst commented: “The company is establishing a first-mover advantage with digital that can drive sustainable share gains in late 2017 and beyond.”

The Results

  • Reduced time to deploy infrastructure from 60-120 days to minutes
  • Implemented Cloud automation for some workloads to enable same day deployments
  • Enabled global deployment of resources in AWS regions close to end customers / users to increase performance and decrease latency
  • 20-50% cost reduction on next generation deployments for customer engagement and data warehouse projects vs. traditional models
  • Enabled internal bill back of resources in AWS to specific projects and teams


ERP Disaster Recovery Solution on AWS


A longtime leader in golf equipment and apparel needed to find an alternative disaster recovery solution for a new ERP system.

Finding itself with a number of legacy IT systems, the company was looking to upgrade its infrastructure in several areas. With the planned elimination of a corporate disaster recovery support platform approaching, IT management saw an opportunity to investigate alternative solutions for their DR requirements.

“We wanted to upgrade our disaster-recovery capabilities in order to mitigate the chance of data loss in our mission-critical, enterprise resource planning or ERP system,” said the Director of Infrastructure and Services.

“We were looking at the concept of continuous data protection in both our onsite production and DR environments,” he added. The company also wanted to incorporate newer technology, which would allow for quickly scaling memory size, CPU and disk space – without having to purchase incremental hardware.

While they were using nightly backup and data replication for disaster recovery, the company envisioned a solution with a lower recovery point objective (RPO) through continuous replication. They required a best-of-breed disaster recovery environment to match the 99.99 percent uptime of their new Oracle ERP solution.
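As a rough, generic illustration of what these targets imply (back-of-the-envelope math, not the company’s actual figures):

```python
# Generic RPO and uptime arithmetic, for illustration only.

def worst_case_rpo_minutes(replication_interval_minutes):
    # Worst case: a failure strikes just before the next backup/replication.
    return replication_interval_minutes

def allowed_downtime_minutes_per_year(uptime_pct):
    # Downtime budget implied by an availability percentage.
    return (1 - uptime_pct / 100) * 365 * 24 * 60

print(worst_case_rpo_minutes(24 * 60))                     # nightly backup: up to 1440 min
print(round(allowed_downtime_minutes_per_year(99.99), 1))  # 99.99%: ~52.6 min/year
```

A nightly backup leaves up to a full day of data at risk, while a 99.99 percent uptime target allows less than an hour of downtime per year; continuous replication is what closes that gap.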

The planned elimination of a legacy DR platform provided an opportunity to modernize.

“From the very beginning, we were talking about instances and hourly costs. This was an entirely different approach from the colocation options we explored earlier.”

The Solution

To achieve their vision of a scalable DR environment, the company needed to look beyond colocation. Our experts helped the company focus on finding a suitable Disaster Recovery as a Service (DRaaS) cloud solution.

“For a long time, we did not think our requirements would work with Amazon. We required private networking and multiple nodes replicated synchronously, which seemed to defy implementation at a public cloud provider,” said the Director of Infrastructure and Services.

To facilitate disaster recovery of its ERP database, the company decided on an Oracle Limited disaster recovery optimized solution. “We learned that the Oracle Limited solution was available to us at no cost when in sleep or standby state.”

AWS is an authorized cloud platform for Oracle — one of a very small number of approved cloud vendors.

It was this flexibility and willingness to share its operations expertise that attracted the company to Effectual’s team. “From the very beginning, we were talking about instances and hourly costs. This was an entirely different approach from the colocation options we explored earlier.”

An economical cloud-based, disaster-recovery environment offering the potential to do more with less.

The company deployed the architecture for its disaster recovery platform on Amazon Web Services. “We can even move between various Amazon data centers if needed for changing protection requirements – without incurring any data transfer charges.”

The Effectual team was able to build a custom, and cost-effective, DR environment harnessing the power of AWS. The company had a highly specific use case for the deployment of cloud resources for disaster recovery in an AWS environment. It was an ideal opportunity for Effectual to architect and secure an optimized solution at scale.

How to Tell Ahead of Time If Your IT Transformation Project is Going to Fail


“21st Century IT Transformation.” Yes, I know. It’s a mouthful. Someone smarter than me will probably coin something more concise. And catchier. The phrase refers to the adoption and combination of technologies such as Continuous Integration/Continuous Delivery (CI/CD), Infrastructure as Code, and Artificial Intelligence (AI); methodologies such as Agile and DevOps; and service models such as cloud hosting. Combined, these areas help IT organizations meet business requirements and deliver business value.

Learning from Failures

I’ve been working in the IT Ops space for more than 30 years. The last 10 of those years have been spent helping clients of all sizes understand and ultimately make the transformation to 21st century IT models and approaches. While I have a great winning percentage overall, I am experienced enough that my scorecard also reveals a few failed projects over the decade – that is, efforts that produced neither the desired business outcomes nor the desired technical outcomes.

Success requires the support of key leadership

There is a fundamental misunderstanding at the executive level of the potential impacts of new concepts such as DevOps, and CI/CD, and principles such as “Fail fast, fix fast.”

During my journey, I’ve been able to learn from my own mistakes as well as from those of others – I’ve observed many projects without being involved. And while every project is unique in its own way, I’ve recognized a few hallmarks that almost always foretell failure. In this week’s blog, I’ll highlight several of the most prevalent warning signs at a high level; in subsequent posts over the coming months, I’ll go into progressively more depth.

I’ll write primarily from a business perspective in this series because most transformation projects that will ultimately fail can be identified before the first engineer is assigned. The failure stems from a fundamental misunderstanding – at the executive level – of the potential impacts of new concepts such as Cloud Adoption, DevOps, and CI/CD, and principles such as “Fail fast, fix fast.”

A quick note before I go into this week’s list: your particular cloud initiative is not necessarily doomed to failure because one or two of the factors described below apply. That being said, the odds can quickly tip in favor of failure as more and more of these issues appear.

Your particular cloud initiative is not necessarily doomed to failure because one or two of the factors described below apply.

The List of Dreaded Pitfalls

1. You did not start your transformation with an agnostic application or business feature-based assessment of your current IT estate.

An assessment, more so than any other item in this list, will – if skipped or improperly performed – lead to budget/timeline overruns, failed deployments, and ultimately unhappy internal/external clients. A properly performed assessment should answer, at a minimum, the following questions:

  • What are the business requirements of my IT estate?
  • Why should I transform my IT/What do I want out of it?
  • What is my current application/business feature inventory?
  • For each business feature, what is (or are) the:
    • Actual infrastructure requirements
    • Currently deployed infrastructure
    • Licensing requirements
    • Cost of operation per month
    • Business Continuity/Disaster Recovery posture
    • Actual cost (in lost productivity/revenue) of availability per hour and workday
    • Governance model
    • Security/compliance requirements
    • Scalability requirements
    • Associated development, Quality Assurance, Configuration and sandbox environments
    • Integrated applications, and
    • Ideal post-transformation destination (e.g., SaaS, cloud (AWS/Azure/GCP), physical/virtual infrastructure, or other).
  • Based on the inventory above, what functionality or performance-related issues need to be proven out through Proof of Concept efforts before final decisions are made?
  • What is the appropriate high-level budget/timeline required to complete this work?

It’s easy to get lulled into the complacent thought that because you’ve been operating your infrastructure and applications for a long time, you know them very well. In practice, however, the knowledge required to operate is not the same as the knowledge required to transform.
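One way to keep such an assessment honest is to capture the per-feature inventory as structured data rather than prose. A sketch of what one record might look like (the field names are my own, not a standard schema):

```python
from dataclasses import dataclass, field

# Hypothetical record for one business feature in the assessment inventory.
@dataclass
class FeatureInventory:
    name: str
    deployed_infrastructure: list = field(default_factory=list)
    licensing: str = ""
    monthly_operating_cost: float = 0.0
    dr_posture: str = ""
    downtime_cost_per_hour: float = 0.0
    security_requirements: list = field(default_factory=list)
    integrated_applications: list = field(default_factory=list)
    target_destination: str = ""  # e.g. SaaS, AWS/Azure/GCP, physical/virtual

order_entry = FeatureInventory(
    name="Order entry",
    monthly_operating_cost=12_000.0,
    downtime_cost_per_hour=5_000.0,
    target_destination="AWS",
)
print(order_entry.name, order_entry.target_destination)
```

Forcing every feature through the same record makes gaps obvious: any field left at its default is a question the assessment hasn’t answered yet.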

This leads to the next red flag:

2. You consider the transformation effort to be an evolution of your previous IT models, practices and tools.

21st century IT is a revolution – not an evolution of what we have been doing for decades. The skills required to transform through this revolution are different than those required to operate the existing state, which are different still from those required to operate the new state. Many efforts have failed because the executives responsible did not understand that a cloud migration does not and should not resemble a data center migration – regardless of what the various migration tool and cloud hosting partners will tell you.

3. Without first performing an assessment to evaluate the skills or effort required, you and your senior IT staff dictated the timeline, budget and technology decisions.

This seems so absolutely ridiculous that it can’t possibly be true. Could you imagine going to a heart surgeon and demanding a transplant, without understanding the impacts or even if the transplant was needed? Of course not. But somehow enterprises do this every day with the beating heart of their businesses – a.k.a. IT – without giving it a second thought.

4. Prior to starting your transformation, you didn’t have a complete understanding of the financial and operational models behind 21st century IT in the enterprise.

Yes, you understand OpEx vs CapEx, and are maybe even able to make a solid “Net Present Value of Cash” argument regarding your future IT directions. But have you considered Provisioning vs. Capacity planning models? Do you understand the cost and value related to Infrastructure as Code? Can you articulate the risks inherent in Best Efforts Availability as opposed to Five Nines? How will you manage costs in a world where a single button click can result in thousands of dollars in Monthly Recurring Costs?

5. You view the transformation as a strictly technical effort.

This is a big one. To make IT more efficient through transformation, you have to address two areas: the How and the What. Assigning the work to your IT organization addresses the How; involving business resources drives the definition of the What. Without both groups working together, operational efficiencies won’t be realized, budgets will be blown, and opportunities will be lost.

If some of the issues listed above are factors in your current transformation effort, it’s never too late to resolve them. In the coming months, I’ll expand on the thoughts above and share some anonymous war stories while providing pointers on how to avoid pitfalls in the business of 21st century IT transformation.

21st century IT is a revolution – not an evolution

The skills required to transform through this revolution are different than those required to operate the existing state, which are different still from those required to operate the new state.

On-Site Team Supports Bureau of Prisons AWS Infrastructure


Waldorf, MD – November 13th, 2017 – JHC Technology, an Effectual Company, is pleased to be increasing its support of the US Department of Justice’s Bureau of Prisons.

The Bureau of Prisons was established in 1930 to provide more progressive and humane care for federal inmates, to professionalize the prison service, and to ensure consistent and centralized administration of federal prisons.

JHC Technology, an Effectual Company, will place members of its professional services team on-site with the Bureau of Prisons to support the ongoing delivery of AWS cloud infrastructure. The company’s team of expert developers has in-depth experience supporting public sector organizations in their modernization initiatives and holds multiple AWS Cloud certifications. The engagement includes supporting a robust DevOps environment including application development, security, migration, and other professional services.

“JHC Technology has a long history of implementing innovative solutions at the Department of Justice,” said JHC Technology CEO Craig Atkinson. “To support the work of the Bureau of Prisons – first by delivering infrastructure to the team and now to provide skilled personnel on site to help drive innovation and secure cloud adoption – is the next logical step in our partnership with the agency. We look forward to advancing the Bureau’s public sector mission.”

About Effectual

Effectual is a modern, cloud-first managed and professional services company that works with commercial enterprises and the public sector to mitigate risk and enable IT modernization. Its deeply experienced and passionate team of problem solvers applies proven methodologies to business challenges across Amazon Web Services and VMware Cloud on AWS. Effectual is a member of the Cloud Security Alliance and the PCI Security Standards Council.

Social Media App Migration to AWS


Transforming an existing legacy environment and building it natively on AWS.

A leading social media company’s apps explore the humanness of people instead of simply quantifying how good they are or judging how professional their work may be. Users can upload their work directly to the platform, using it to edit photos with enhancements – including white-balancing, filters, and journaling – readily available. One user documented his mother’s fight with cancer, displaying the emotions he and his family felt during a very trying time.

In 2015, the company acquired a platform that creates tangible photo books, prints, and gifts from digital photos. The acquired platform ran in an environment the company didn’t believe was the right strategic technology for the long term. The combined companies have more than 30 million monthly active users consuming 5 billion images across their platforms, so reliable infrastructure users can depend on is essential. The company wanted to work with experts to help them transform their existing legacy environment and build it cloud-natively on AWS.

30 million monthly active users consuming 5 billion images

Reliable infrastructure users can depend on is essential. The company wanted to work with experts to help them transform their existing non-native environment and build it cloud-natively on AWS.

The company’s DevOps team is very capable – they had significant experience with Chef, using it to configure their servers and developer machines – but the transformation to AWS required resources the team just didn’t have.

The transformation to AWS required resources the company’s team just didn’t have.


The Solution

Since the migration to AWS, the company has seen great benefits: “Operationally, AWS is faster and more configurable than what was being used,” said the VP of Engineering at the company. “The new environment is faster, more reliable, and cost efficient. All three are pretty important things!”

Most importantly, this solution had a big impact on the stability of the company’s platform and, by extension, its brand. Previously, the platform had experienced random reboots on several occasions. The successful migration to AWS gave them confidence going into the holiday season, their most important quarter of the year. Confidence in the reliability and performance of their platform allowed the company to focus on maintaining a great experience for their user community.

“The experts had both the attitude and the aptitude, and I feel as comfortable as I possibly could in having AWS as the long-term foundation for our infrastructure.”

Bring in the Experts

With the support of our experts, the company was able to seamlessly migrate from its existing environment to AWS. From the start, the company was impressed with our experts’ knowledge and expertise. “They inspired a lot of confidence, and their team clearly had the technological expertise,” said the VP of Engineering. “We knew they could get the job done.”

Our experts leveraged their extensive experience with Vagrant and Packer to build a strong foundation for the company’s cloud-native environment. This enabled the company to emulate the user experience of their members. The insights they gained helped them improve their product, streamline their operations, and deliver an optimized user experience.

“Technology expertise is the number one thing we look for when hiring an outside firm,” said the VP of Engineering. “The experts had both the attitude and the aptitude, and I feel as comfortable as I possibly could in having AWS as the long-term foundation for our infrastructure.”