TL;DR: IT teams love to hate on agents. All too often, agents introduce complexity, overhead, cost and unanticipated consequences. It's not that third-party code per se is a problem – the rise of open source and shared, forked git libraries proves that point. So it's time to modernize, update and consolidate agent design. Here are three ways we can start creating next-generation packet acquisition agents in the public cloud.
In the world of enterprise software nothing seems to be reviled more than “agents.” It seems every security company has some sort of agent that is required for the security magic to happen. Organizations will often deploy these agents without in-depth testing. Usually things are fine. Sometimes they’re not. More systems often mean more agents, more complexity, more potential points of failure or conflict.
Current enterprise opinion seems to be shifting toward the idea that agents are evil at worst and a necessary evil at best.
At the same time, enterprises are embracing DevOps principles throughout their technology silos. Tons of third-party libraries are in use, and open source software underpins many networking and security solutions today; it is common to find systems such as Zeek (formerly known as Bro), Kibana, or Elasticsearch. It is interesting that, even as enterprises complain about agents, they are using more third-party code than ever.
It is clear that third-party code is not the core complaint. Instead, it's the overhead, complexity and resource drain that deploying third-party code often creates. With security and monitoring agents, that load and complexity quickly become unwieldy. One system running many agents, each replicating packet data, event logs or other telemetry to waiting resources on the other end, can gradually wear down your systems, eat up bandwidth and steal CPU cycles that your application needs for peak performance.
Poor agent design – or relying on legacy agents that started life in the data center – is a cure worse than the disease.
- Agents that require a human in every step of the loop for deployment fail to take advantage of the automation and orchestration inherent to the modern public cloud. Modern agents should be self-updating and able to automatically dissolve when their useful life ends.
- Agents that require manual configuration impose a massive resource and capacity burden on the teams that simply want to monitor and secure their systems.
- Agents that cannot manage throughput at public cloud scale are the most guilty of living past their usefulness. SOC, NOC and DevOps teams suffer the most, because they are left trying to solve problems while critical data is missing.
So at Nubeva, we created a next-generation agent that doesn't suck. We call it our Prisms sensor.
We wanted to ensure our sensor had none of those legacy issues and lived up to the 21st-century ideal of how code is used in the public cloud. Three key principles are baked into its core: simplicity, automation, and performance.
- Simplicity. The Nubeva Prisms sensor is simple and small: a single binary executable packaged into a Docker container. That makes it simple to install and upgrade with no other overhead, and because it needs no external execution context (containerized or not), it is far more nimble and interoperable than the alternatives. Everything is configured and set up with a single command, which can be orchestrated or automated in myriad ways: a customized AMI, an Ansible deployment, or any existing orchestration solution. There are no configuration files to edit, and nothing ever requires a reboot. The sensor can be picked up and deployed whenever and however you need without impacting the CPU, memory or bandwidth of the monitored environment. Small, fully self-contained and portable – that is how you build a truly next-generation agent.
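To make the "single container, single command" idea concrete, a deployment of a containerized single-binary sensor might look like the sketch below. The image name, container name, and token variable are hypothetical placeholders, not Nubeva's documented interface; only the standard Docker flags are real.

```shell
# Illustrative only: "example/packet-sensor" and SENSOR_TOKEN are hypothetical,
# not Nubeva's actual image or configuration interface.
# --net host lets the container see the host's network interfaces;
# NET_ADMIN/NET_RAW grant the privileges packet capture typically needs.
docker run -d --name packet-sensor \
  --restart unless-stopped \
  --net host --cap-add NET_ADMIN --cap-add NET_RAW \
  -e SENSOR_TOKEN="<your-project-token>" \
  example/packet-sensor:latest
```

Because the whole deployment is one command, it drops naturally into user-data scripts, a baked AMI, or an Ansible task, with no configuration files to template out.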
- Automation. The second legacy problem Nubeva has solved is maintenance of the agent code itself. In the past, customers had to manually upgrade or remove their agents; if there was a configuration change, they had to edit a special file and reboot the machine (or, if they were lucky, stop and start a service). At Nubeva, we think that is silly. It's just not the way the cloud works, and systems requiring manual upgrades and intervention are stuck in the past. Our Prisms sensors automatically upgrade themselves whenever new code is available. Should a customer choose, it is also possible to remotely uninstall Prisms sensors and have them dissolve automatically. The key is to make the solution as "automagical" as possible.
- Performance. Finally, the last leg of the agent tripod needing a serious update is performance. Existing cloud visibility solutions that use agents struggle to mirror traffic at rates above 1 Gbps, and one solution even had trouble sustaining 100 Mbps. Nubeva Prisms sensors are currently tested up to 3.5 Gbps of mirrored traffic on top of 3.5 Gbps of original traffic, for a total of 7 Gbps at the network interface.
So it is true: Nubeva Prisms sensors excel far beyond traditional agents. That's why we don't call them agents.
Nubeva Prisms sensors can replace your existing legacy packet mirroring agents in the public cloud without starving the systems that require that traffic. The Prisms sensor can replicate traffic to any number of destinations with a reachable IP address, which means you no longer need one agent for each system that requires packet traffic. Instead, a single agent collects all packet traffic and then replicates it – filtered and processed however you like – to any number of destinations. This simplifies the packet acquisition architecture while increasing the ROI on the tools, teams, and processes you already have in place.
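The fan-out pattern described above – capture once, replicate to every consumer – can be sketched in a few lines of Python. This is an illustration of the pattern only, not Nubeva's implementation: loopback UDP sockets stand in for real mirrored packet streams, and the function and variable names are invented for the example.

```python
import socket

def replicate(payload: bytes, destinations) -> None:
    """Send one captured payload to every (host, port) destination."""
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for dest in destinations:
            sender.sendto(payload, dest)
    finally:
        sender.close()

# Demo on loopback: two stand-in "tools" waiting for mirrored traffic.
receivers = []
for _ in range(2):
    r = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    r.bind(("127.0.0.1", 0))  # let the OS pick a free port
    r.settimeout(2)
    receivers.append(r)

destinations = [r.getsockname() for r in receivers]
replicate(b"sample-packet", destinations)

# Each destination receives its own identical copy of the payload.
results = [r.recvfrom(2048)[0] for r in receivers]
```

One capture point feeding N destinations is what eliminates the one-agent-per-tool sprawl: adding a new analysis tool means adding a destination address, not installing another agent.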