READ TIME: 4 minutes, 50 seconds
tl;dr: If you have resources in a public cloud, you’re likely leveraging the elasticity the cloud provides to make sure you don’t pay more than you should. That’s the beauty of AWS, Azure and GCP. The flexibility these cloud platforms offer translates into monetary value for your enterprise IT group. Unlike the capex hog that is your data center, in the cloud you only pay for the workloads – active sources – you run, when you run them.
CTA: Our Prisms Service Processor helps enterprise IT with security and monitoring in the cloud. Learn more. Sign up for a private preview today.
In Part One of this series, we introduced and defined cloud packet brokers.
In this post, we explain what an “active source” is in a public cloud environment and why active sources matter (it’s a cost issue!). When architecting a cloud network with solutions like Nubeva Prisms, active sources become an important concept to understand.
An active source might be a virtual machine (VM) or service. Some cloud architects might call active sources “cloud workloads”, “compute workloads” or “hosts.” At Nubeva, active sources are simply the workloads running in the cloud that cloud platform providers view as billable. The more active sources you have, the more cloud fees you rack up each month. The impact of active sources in cloud becomes clear as enterprise IT groups migrate more and more to the cloud.
You’re already familiar with the high cost of maintaining a data center, right? In the data center, the major challenge with production deployments is minimizing downtime – ideally down to zero. For that, a standard blue-green deployment model is usually the best option.
Using the blue-green model in the data center, you can create and maintain two identical application stacks – one that is always live (green) and a second that is standby (blue). When you need to deploy a new build to production, you can do it on a blue stack for testing. Once it’s working, the blue stack goes live and the green becomes the standby stack – and you always have the ability to do a quick rollback if an issue arises unexpectedly.
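The blue/green swap described above can be sketched in a few lines of code. This is a minimal simulation of the concept, not a real deployment tool – all class and build names here are illustrative:

```python
# Minimal blue/green swap simulation (illustrative names, not a real
# deployment API). Two identical stacks exist; "live" points at the one
# serving traffic, and new builds always land on the standby stack.

class BlueGreen:
    def __init__(self):
        self.live, self.standby = "green", "blue"
        self.staged = None

    def deploy(self, build):
        # Stage the new build on the standby stack for testing.
        self.staged = (self.standby, build)
        return self.staged

    def swap(self):
        # Promote standby to live; the old live stack becomes the
        # instant rollback target.
        self.live, self.standby = self.standby, self.live
        return self.live

env = BlueGreen()
env.deploy("v2.0")   # v2.0 is tested on the blue stack
env.swap()           # blue goes live; green is now the quick rollback
print(env.live)      # blue
```

The key property is that the swap is a pointer flip, not a rebuild – which is exactly why rollback is fast, and exactly why the data center has to keep both stacks fully provisioned at all times.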
You do this in the data center because you have all the resources available to you and they’re paid for – processing capability, memory, storage – these are capex assets at your fingertips.
The downside? Would you believe that on any given day, nearly half of your enterprise data center resources sit idle under a blue/green devops process? Most data center resources are only maxed out during peak periods, like the retail industry’s holiday buying season or tax season, when the financial services industry gets crushed 24/7. During those peak usage periods, costs go up and up and up.
So now, the public cloud starts to look pretty inexpensive when compared to running those duplicated resources in the data center.
Cloud flexibility is, indeed, the true value in migrating apps and resources to cloud platforms. When you apply blue/green devops in the cloud, for example, the enterprise IT budget only pays for cloud resources and monitoring when those workloads are active. In the traditional data center model, the budget is getting pinched for monitoring and maintaining the unused (blue) standby sources. Ouch!
Moving to the cloud makes financial sense, because here your enterprise IT department will only get billed for the active workloads in place that are being monitored. In the cloud, you’ll have both active and inactive sources – resources that you add and provision in an elastic, dynamic group. You’ll have workloads that are in process, but you won’t be incurring cloud monitoring fees on these workloads until they’re active. It’s the new blue/green model of devops applied in the cloud.
Consider an online banking app for example. When you’re ready to push a version of code to production, there are many steps along the way that are fundamentally different in the cloud vs. a traditional data center. Using the blue/green infrastructure model, the blue side for pre-production testing is physically the same number of machines, storage and networking as the green side. In the cloud, you scale to exactly what you need for testing – this alone significantly alters the active workloads you pay for and provision.
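To make that difference concrete, here is a back-of-envelope comparison. All of the numbers (instance count, hourly rate) are hypothetical, chosen only to illustrate the shape of the savings when the blue stack is scaled to the test workload instead of mirroring production:

```python
# Back-of-envelope standby-stack cost comparison (all numbers are
# hypothetical). In the data center, blue mirrors green 1:1; in the
# cloud, blue is scaled down to just what pre-production testing needs.

INSTANCE_HOURLY = 0.10   # assumed per-instance hourly rate, in dollars
PROD_INSTANCES = 40      # size of the live (green) stack in both models

dc_blue_instances = PROD_INSTANCES   # data center: full duplicate stack
cloud_blue_instances = 4             # cloud: scaled to the test load

hours = 24 * 30                      # one month of standby time
dc_blue_cost = dc_blue_instances * INSTANCE_HOURLY * hours
cloud_blue_cost = cloud_blue_instances * INSTANCE_HOURLY * hours

print(dc_blue_cost)     # 2880.0 for the idle duplicate stack
print(cloud_blue_cost)  # 288.0 for the right-sized test stack
```

Under these assumed numbers the standby side costs an order of magnitude less in the cloud – and the cloud blue stack can be torn down entirely between test cycles, which the sketch doesn’t even account for.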
The next step is determining the amount of networking and monitoring tools. In a traditional data center, you typically have duplicate WAFs, load balancers, firewalls, IDS/IPS, and so on. Not only do you pay for inactive workloads, you also pay to monitor those inactive workloads. But in the cloud, you run only green workloads through that monitoring path and reduce tool infrastructure costs.
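The "monitor only the green path" idea boils down to a simple filter: only workloads in an active state get attached to the monitoring path, so standby workloads incur no per-source monitoring cost. A tiny sketch (workload names and states are illustrative):

```python
# Sketch: attach monitoring only to active (green) workloads, so
# standby (blue) workloads incur no per-source monitoring fees.
# Workload names and state labels here are illustrative.

workloads = [
    {"name": "web-green", "state": "active"},
    {"name": "web-blue",  "state": "standby"},
    {"name": "db-green",  "state": "active"},
    {"name": "db-blue",   "state": "standby"},
]

# Only active sources go through the monitoring path.
monitored = [w["name"] for w in workloads if w["state"] == "active"]
print(monitored)   # ['web-green', 'db-green']
```

When a blue stack is promoted, its state flips to active and it picks up monitoring automatically – no duplicate tool infrastructure sitting warm the whole time.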
In the cloud, it’s easy to scale without wasting precious budget running and monitoring two identical workloads all the time. You automatically recoup up to 60% of what’s spent in the data center on monitoring two identical stacks. In the cloud, you’ll have a “hot” workload as an active source and a “warm” standby workload that is inactive, where you can develop and test long before you have to think about making the switch and monitoring.
The elastic nature of the cloud environment has many advantages. One of the biggest is that you only pay for what you use, so running your active and inactive sources in the cloud is certain to cost much less than in the traditional data center.
Of course, once in the cloud, users still need to monitor the data streaming to and from the active sources. Users are still responsible for ensuring they are secure in the cloud. New SaaS-based technologies enable cloud packet data to be not only acquired but also processed and distributed to the appropriate teams and tools. This, in turn, allows SecDevOps to analyze the data and keep these active sources in the cloud running smoothly.
In today’s cloud environment, security and monitoring have never been more important. That’s why our Nubeva Prisms solution just makes sense. Prisms simplifies packet monitoring in the cloud. It makes it easy for anyone to pre-configure Prisms sources, templates, AMIs, connections and elastic packet processing rules to keep information flowing to your tools for every active (green) source. And it guarantees that every standby (blue) source is ready to feed your tools when you flip that switch.
The result is never having a gap in cloud visibility or security – even during the most demanding elastic events.