Apr 12, 2022 by Jason McIntosh
Before we begin, we would like to assure you that Armory Enterprise and OSS Spinnaker, under the standard deployment paradigms, are NOT vulnerable to this issue. To be exposed, your services have to be deployed to an application server as a WAR, and not even all application servers are impacted (though I'd suspect MOST are, due to how app servers operate). If you did a custom deployment, that's on your teams to investigate further, but the supported installation on Kubernetes uses Spring Boot in embedded mode, which is NOT vulnerable to this particular vulnerability. That said, let's dive into what happened and the background.
Like many others in the security world, we heard early rumblings that there might be a major vulnerability in Spring's core libraries. Through some back channels we were already hearing, "Hey, there may be a major vulnerability and it looks bad." As a result, we started early initial investigations. There were signs that it was in a very commonly used style of code, but not a lot of details, and therefore we did not immediately trigger our incident response. We DID, however, provide an early notification to our security and engineering teams: "There may be a real RCE to fix, and early analysis shows this could be bad, but don't trigger the alarms YET." The translations weren't great, but the initial signs increasingly said "there's something coming." These early rumblings soon turned into a thunderous roar: "Spring may have a bad RCE and it's everywhere." This triggered a full-on security investigation on the Armory side.
As mentioned above, Armory had started watching this vulnerability and its status VERY early on the 29th, even before it was on Pivotal's website or in bug alerts, because we'd been monitoring traffic along the lines of "hey, have you seen this POC code… ". We, like many, were initially confused, as there had already been announcements about an RCE in Spring Cloud Function (which turned out to be unrelated, and which we'd already identified as not impacting Spinnaker). It was after more details emerged on the 30th, and in doing our own tracing, that we realized this was POTENTIALLY much more of an issue. It was also at this time that we confirmed it was NOT tied to the Spring Cloud Function vulnerability but to something in the core libraries.
I'm not going to go through all of our internal analysis or side-channel conversations (though there are a few public conversations going on in #sig-security in the OSS Slack), but Armory had identified very early on the 30th that the class loader was both a potential point of failure and a potential safeguard. It was ALSO around this time that more details were exposed publicly, making it a lot easier for our investigations to confirm where the vulnerability existed. I'll link to a few sources, but please do NOT consider this a definitive list. Specifically:
The references above are interesting reads. There was a lot of confusion going around the community on this point; some of these references helped with understanding the issue, while others added to the confusion. Like a lot of "hey, this is early" scenarios, the details were sparse or incomplete, and people got confused quickly. There are some great details in there, but at the end of the day it comes down to this: this is complex code at times, and nasty things can happen even if you're careful. I'm not sure anything could have easily caught this one, because of where and how it operates. Still, ANYTHING in Java that does dynamic class handling, reflection, or the like should ALWAYS be treated as suspect and used with caution. But that's how a lot of AOP-style machinery operates, and it provides so much power and flexibility that you can't avoid it entirely for most uses. Attacks like these are GOING to be discovered and are GOING to impact your systems. As a rule, you can't prevent bugs in the system, BUT there are things you CAN do to enable quick recovery.
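To make the "class loader as a potential failure" point concrete, here is a minimal, self-contained sketch of the underlying hazard: a naive bean-property-path resolver (written from scratch for illustration, NOT Spring's actual binding code) that walks getters by reflection. Because every object has a public getClass() getter, an attacker-controlled path like class.module.classLoader escapes the bean entirely and reaches a ClassLoader on Java 9+:

```java
import java.lang.reflect.Method;

public class PropertyPathDemo {
    public static class Greeting {
        private String message = "hello";
        public String getMessage() { return message; }
        public void setMessage(String m) { this.message = m; }
    }

    // A naive "bean property path" resolver, similar in spirit to what
    // data-binding frameworks do when they walk nested properties.
    static Object resolve(Object root, String path) throws Exception {
        Object current = root;
        for (String part : path.split("\\.")) {
            String getter = "get" + Character.toUpperCase(part.charAt(0)) + part.substring(1);
            Method m = current.getClass().getMethod(getter);
            current = m.invoke(current);
        }
        return current;
    }

    public static void main(String[] args) throws Exception {
        // An attacker-supplied property path walks off the bean, through its
        // Class object and Module, and lands on a live ClassLoader (Java 9+):
        Object leaked = resolve(new Greeting(), "class.module.classLoader");
        System.out.println(leaked instanceof ClassLoader); // prints "true" on Java 9+
    }
}
```

This is why "the class loader" was both the failure and the safeguard: the fix is not in the reflection machinery itself but in refusing to traverse property paths that reach Class or ClassLoader objects.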
What's the KEY thing that helps with these issues? Make sure you have the teams, tools, and support to address them rapidly. For example, have a continuous delivery system that lets you inject patches into all your code with a single PR, kinda like Armory Enterprise Spinnaker. Continuous Delivery is a KEY capability for updating fast, fixing fast, and addressing these kinds of situations. Having a solid team to assist in fixing and troubleshooting is ALSO key (Armory is hiring!). And lastly, build a community to help with these fixes: in this scenario, the OSS community had a PR with fixes the evening full details were known (thanks to https://github.com/jervi). We didn't end up needing the fixes, but we were in a position where we could have rapidly produced patches to address the issues.
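For reference, the widely circulated Spring Framework workaround at the time (before the patched framework releases shipped) took roughly this shape: a global @ControllerAdvice that denies data binding to class-loader property paths. This is an illustrative config-style sketch of that published workaround, not the exact Spinnaker community patch:

```java
import org.springframework.web.bind.WebDataBinder;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.InitBinder;

// Applies to every controller in the application: refuse to bind any request
// parameter whose property path could traverse Class or ClassLoader objects.
@ControllerAdvice
public class BinderControllerAdvice {

    @InitBinder
    public void setDisallowedFields(WebDataBinder dataBinder) {
        String[] denylist = {"class.*", "Class.*", "*.class.*", "*.Class.*"};
        dataBinder.setDisallowedFields(denylist);
    }
}
```

The point of the example is the delivery story above: a fix this small is trivial to roll out everywhere IF your pipeline can push a one-PR patch to every service.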