Skuid User Demo: Deploying to Kubernetes with Spinnaker
In an interview and demo, Ethan Rogers, a Technical Operations engineer at Skuid, shows us how Skuid uses Spinnaker to deploy to their Kubernetes clusters. Skuid is a cloud UX platform that deploys its application on Kubernetes and has found value in using Spinnaker to save time on manual steps and scripting. Ethan demonstrates a simple application deployment to two Kubernetes clusters using only Spinnaker and notes how approachable it should be to any developer who is already familiar with Kubernetes.
Ethan covered the following:
- The events leading up to Skuid adopting Spinnaker
- The pain points that Skuid was experiencing prior to adoption of Spinnaker
- Demoing a deployment to Kubernetes in the Spinnaker UI, including the creation of the Load Balancer and selecting the port
- Cluster management of Kubernetes with Spinnaker
- Ethan’s opinion on using Helm to manage Kubernetes (he doesn’t like it)
- “I cannot imagine having to script a Canary Deployment per se with Helm”
- Showing how Spinnaker automatically produces the YAML, Secrets, etc.
- How Spinnaker automatically creates ImagePullSecrets
- Creation of a Pipeline’s stages within Spinnaker’s UI
- Discussion of Continuous Deployment and Continuous Delivery
- “I feel very passionately that Spinnaker fills the hole that is missing in the Kubernetes community as far as deploying actual applications multiple times per day.”
- Deployment strategies made easier with Spinnaker
- Inserting a stage for manual judgment in the pipeline
- Dynamically injecting information about the pipeline
- Single-click rollback and rolling forward mechanisms of Spinnaker
- “All this is handled for you. You don’t have to write any code to do this.”
- The pains he used to experience if he were to deploy this same demo without Spinnaker
- Time and manual steps saved with Spinnaker
- The trade-off of using Spinnaker (For Skuid, a full transition cannot be done yet)
- “For any customer-facing application I can’t see a reason that you wouldn’t use Spinnaker – it just makes things so much easier to do with Kubernetes.”
- Tracking audit logs with Spinnaker
Here’s a transcript of the talk:
Colin On air. All right, […00:03]. This is Armory and this is [inaudible] for Armory.
Isaac […00:11] Isaac. I’m the CTO for Armory.
Colin And we’re here with Ethan from… I can’t pronounce it.
Ethan It’s […00:22] like the CMO.
Colin […00:25] it was. I thought it was like [inaudible] or [inaudible] here we are completely. I thought it could be “squid.” But then they want to use the […00:36] Skuid, and he has agreed to show us the demo of how Skuid has been using Spinnaker to deploy to their Kubernetes clusters. And is there anything you want to say before we jump into it?
Ethan No, we can just jump right in if you guys want to do that. Or I could give you a little bit of background.
Colin Yeah, a little bit of background for any of the people that would be watching this. And can you tell us what it is, what you’re deploying, what the app is, any important things we should probably know or that the DevOps community would like to know beforehand, so they have a freedom of reference as to what you’re doing?
Ethan Totally, yeah. Our application, we’re a cloud UX platform. So we’re deploying our application on Kubernetes. We’re 100 percent Kubernetes-based. So everything runs in containers. We have a couple of services that we deploy that make up that application. It’s not a whole bunch, but it is enough to where we need some kind of repeatable pattern for deployment. Previously, we were using Jenkins and just a bunch of bash scripts that called kubectl to do this. And when database migrations would fail, we wouldn’t really have a good recourse to figure out what was wrong. We started seeing deployments that would go out, but the Jenkins deployment would fail with no rhyme or reason. So we started looking for a better way to do it. And instead of writing our own, we decided let’s invest some time, figure out what this Spinnaker thing is all about. And that actually ended up being a really good decision. So we didn’t have to write it ourselves. So yeah, that’s kind of where […02:35].
Colin In terms of the previous pain points you were having, you were just using kubectl to deploy. Do you guys just have one Kubernetes cluster? Or do you guys have like a staging and production cluster?
Ethan Yeah. Actually, we have […02:54] production across multiple regions and one testing cluster that serves as a staging and test cluster across different namespaces. So we’ve got staging environment isolated by namespace and a test environment isolated by namespace, continuous deployments to test and […03:15] pushes to our staging namespace just to verify that everything kind of works before we go […03:20].
Colin And all of those clusters are managed through Spinnaker and are able to kind of deploy to each particular region.
Ethan Yup. One of the things that we love about Spinnaker is from a single point of view, we can get all the information about what’s running across all of our clusters. Before doing that, it was a pain. We had no real, good way of seeing that besides kubectl get pod, then switch context to a different cluster and kubectl get pod again. Okay, those are different.
Colin Yeah, okay. Yeah, that’s good to know. I think we run into a lot of people who […04:07] considering Spinnaker. They’re using Kubernetes. And I think there are a lot of people who are still kind of curious about Spinnaker but are always wondering why not just use kubectl apply. And what I think you’re saying is there are a few reasons. One is […04:25] your deployments cross multiple regions, multiple clusters, multiple namespaces. The other one is Jenkins and custom scripts are a bit brittle. And they tend to break. There’s no recourse for those. In particular, when you’re trying to do workflow management across migrations of a database cluster to an actual deployment on Kubernetes, there’s not a lot of flexibility there.
Ethan Yup, totally. Bash scripts are very brittle. Yeah.
Colin We’re very familiar. So that will be a little bit of a background there. Before we dive further, for clarification for the audience, are you currently using OSS Spinnaker?
Ethan Yes, we’re currently using OSS Spinnaker. We got into it pre-Halyard, so we don’t have a Halyard-managed deployment. Everything is managed via the Helm chart right now for us.
Colin All right, anything else?
Ethan I think is there anything else that the audience would like […05:38].
Colin [inaudible] the audience can’t even ask questions
Ethan The audience can’t even ask questions. I wish there was like a […05:44] would be so helpful.
Colin Could you use screen sharing now and show us the demo […05:52] show us how Skuid is doing?
Ethan Can you guys see Spinnaker right now? Pretty good?
Colin […06:07], yup.
Ethan Okay, sweet. All right. So for this quick, little demo, I’ve got a GKE cluster spun up. So this is Spinnaker talking to that cluster. Right now we can see that we don’t have any server groups. We don’t have any load balancers created. So before I go ahead and deploy, I’m going to actually create a load balancer. For those who might not know, a load balancer is the Spinnaker equivalent of a service in your Kubernetes cluster. So down here we’re going to actually change this type to LoadBalancer, which for anyone who’s familiar with Kubernetes, they’ll know exactly what that is. So what we’re going to do is go ahead and expose the port that our app is going to run on. We’ll call this “external” so we know that it is an external-facing service. And we should be good there. So we’ll go ahead and create that. So what Spinnaker is going to do here is actually call our Kubernetes API and create a Kubernetes Service that is of type LoadBalancer. And if we jump over here into our terminal, we can see that we’ve created armory-demo-external. And we’re waiting on that external IP. So while we’re doing that, we’ll just jump over here to our clusters.
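For readers following along, the Service that Spinnaker creates at this step would look roughly like the manifest below. The name matches the demo; the port values and the selector label are assumptions for illustration, not Spinnaker’s verbatim output:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: armory-demo-external      # app-stack-detail naming, as in the demo
spec:
  type: LoadBalancer              # what a Spinnaker "load balancer" maps to
  ports:
    - port: 80
      targetPort: 8000            # the port the demo app listens on
  selector:
    load-balancer-armory-demo-external: "true"   # label Spinnaker manages on Pods
```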
Colin Before you move on, are you running this all on AWS or GCP? Which platform is the underlying cloud provider?
Ethan For this demo, it’s actually Google Cloud Platform with just a simple, quick Kubernetes cluster. At Skuid we run everything on AWS. We manage all of our own Kubernetes clusters with Terraform.
Ethan Yup, okay. So we’re going to create a server group. A server group in Spinnaker, in the Kubernetes world, is just a collection of Pods. By default they are just Replica Sets. You can check this deployment box to get a Kubernetes Deployment. There are a few rough edges with that still. But for the purpose of this demo, we’ll stick with Replica Sets. So what I’m going to do here is just deploy a simple, little application called hello-skuid. It’s a […08:50] Web server that just renders a template. So we’ll create our container. We’re defining a Pod spec here in the Kubernetes world. We’re actually going to expose it, find our load balancer. This will add the appropriate labels to our Pods so that they are exposed by the load balancer.
Colin Can we go back up to the Deployment object for Kubernetes? I know a lot of folks are always curious about this. Can you […09:22] check on that box? Can you explain to the audience a little bit about why you would want to or not want to use this inside of Spinnaker and what you get out of it, or how Spinnaker treats pure Replica Sets?
Ethan Totally. For those of you who are familiar with Kubernetes and the Deployment object, the Deployment object handles some Pod rotation for us. So you can see here we’ve got a rolling update or a recreate. And in this case, that would just kind of hand those update semantics over to Kubernetes. So we don’t have to worry about letting Spinnaker handle that. If we were to just deploy a Replica Set, however, Spinnaker takes a little bit more control. A Replica Set just guarantees that a certain number of Pods stay online. So if you’re deploying a Replica Set, Spinnaker will actually handle pulling your Pods out of the load balancer. The Replica Set will keep those online, obviously, deploy the new server group or Replica Set, and bring those into the load balancer when they’re ready. There are two types of deployment strategies that you can use with Replica Sets in Spinnaker. There’s Red/Black and Highlander. If you try to use those with a Deployment, however, things are going to get a little weird. It doesn’t quite work right because Kubernetes under the hood is trying to do what it knows it’s supposed to do with the Deployment, and then Spinnaker is going to be trying to do things on top of that. It just kind of gets messy. How is that?
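For reference, the update semantics being handed over to Kubernetes here live in the Deployment spec itself; a minimal sketch (the values are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 2
  strategy:
    type: RollingUpdate     # or Recreate; Kubernetes owns the Pod rotation
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
```

With a plain ReplicaSet there is no strategy stanza, which is what leaves room for Spinnaker’s own strategies (Red/Black, Highlander) to manage the rotation instead.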
Colin That’s good. I guess one more question is: is there a time that you would want to use the Deployment object versus letting Spinnaker handle it? […11:22] suggest letting Kubernetes do its thing?
Ethan I think if you’re using Spinnaker and you want to let Spinnaker manage most of those things, then that would probably be the appropriate time to use the Replica Set. If you’re fine with just rolling update or recreate, then that would probably be the best thing. If you really need rollbacks, then it would definitely be recommended that you use Replica Sets, because rollbacks are completely busted if you use the Deployment.
Colin [That’s right – 12:02].
Ethan Yeah, that’s probably a good time to use Replica Sets versus Deployments.
Colin Okay. That’s really important to know, I think, for people who are just curious about the differences and what they gain and what they lose. So […12:16].
Ethan No problem. So like I said, we’ll put this server group behind the load balancer, and we’re going to deploy two replicas. We’ll go ahead and map our port. So down here in this container section, this is actually your container spec. So in your typical Kubernetes manifest for a Deployment or Replica Set, you have a Pod spec, which is made up of a bunch of container configurations. So here, port 8000 is the port our app runs on. You can configure all these settings just like you could in Kubernetes YAML. You’ve got image pull policy. I’m just going to do Always in case my image tag isn’t up-to-date on those nodes. Volume mounts, I’ll actually point this out specifically. Volume mounts and volume sources, Spinnaker calls them something different than most people are used to in Kubernetes. A volume source would be your volumes, which are usually at the bottom of your spec, where you define what ConfigMaps or Secrets that you’re going to […13:33]. And you can define those there. And then in your container, the volume mount is where you would actually point at the volume source to mount that into the container. I usually see a lot of people being confused about that on Slack. So I just wanted to call that out specifically. And hopefully, that’s helpful for somebody.
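The mapping Ethan describes can be seen in a Pod spec fragment: Spinnaker’s “volume sources” correspond to the volumes list, and its “volume mounts” to the per-container volumeMounts that reference them by name (the ConfigMap and paths here are hypothetical):

```yaml
spec:
  volumes:                         # Spinnaker: volume sources
    - name: app-config
      configMap:
        name: hello-skuid-config   # hypothetical ConfigMap name
  containers:
    - name: hello-skuid
      image: registry.example.com/hello-skuid:1.0.0
      volumeMounts:                # Spinnaker: volume mounts
        - name: app-config         # must match the volume source's name
          mountPath: /etc/hello-skuid
```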
All right, looks like we’re good. This is a really simple application. So we’ll go ahead and create that.
Colin […14:12]. How are you managing things like pipelines and… [inaudible].
Ethan Remove the mount [laughs], removing the source. That should work.
Colin How are you managing pipelines and the creation of resources within Spinnaker? Are you using the UI? Has that been working out well for you? Or are you doing something more automated, more […14:47] kind of thing?
Ethan So up until this point, we’ve pretty much just been using the UI. At some point, we’d like to get into using the API to do some kind of automated stuff. Unfortunately, there’s not a whole lot of information. If you have authorization or authentication enabled, there’s not a lot of documentation on how to actually get access to the API through Gate, which is the API gateway for Spinnaker. If you need to get like a token or an X.509 cert, I know that there are people who’ve done that, but it’s just not publicly available information at this point.
Colin […15:28] we can chat about it. I can certainly help you through it. […15:33] the community is working through more documentation on that. There are a few authentication options that are simple. We could just set up simple, basic […15:43]. The X.509 stuff is there as well, and we can chat about […15:49]. So you guys [inaudible]. And […15:54] the UI just kind of works for you guys?
Ethan Oh, yeah. The UI has been great. One of the main reasons that we went with Spinnaker is the team I’m on is technical operations, which would be DevOps in some organizations. And what we really wanted was to mask enough of the implementation detail of Kubernetes to hand it to our engineers, because Kubernetes is a complex system. There are a lot of details. What we found with Spinnaker is it exposes just enough, and it uses the right terminology that a lot of people are used to seeing already. We don’t have to explain all of Kubernetes. We can explain: you need to run your application, here’s how you do it. If you need details, really, the only main detail that you need to know is in-cluster DNS and how that works, Pod-to-Pod communication over services. And then the rest is just very familiar kind of language. So it’s been pretty nice.
Colin Yeah. I think we see that as well too. A lot of the companies we work with are large organizations who are running on AWS, OpenStack, and then trying to move a lot of stuff over to Kubernetes. And they see Spinnaker as a path to getting people over to Kubernetes without having them learn all of the specifics, exactly what you said. We see a lot of people using Spinnaker to accomplish that goal. So it’s good.
Ethan Cool. So now that we’ve got our server group up, we’re actually going to pull up the public IP address of that. I’ll tear this down and this won’t exist after this demo. But we’ll just go ahead and click that. And if everything worked as expected, […18:00].
Ethan Right. […18:10] points. We have an IP address on that and […18:17]. Okay, so what you would see if it worked is our application exposed to the world. It would be a logo, the Skuid logo. Let’s just go ahead and update. I promise that it will change. So we’ll create a pipeline. And this is really where Spinnaker shines. So you’ve got, from this cluster tab, information as to what is running. You’ve got your image tag. You’ve got the health of all of your Kubernetes Pods right here. We can see more information about all of this stuff, and it’s really great. You don’t have to kubectl get pod -o yaml to get this information. It’s all right here in front of you.
Colin […19:18] YAML.
Ethan Yeah. […19:21]. And then we can see what our YAML is actually, the generated YAML, which you could go apply manually if you wanted to. I don’t know why you would. Oh, I will point this out.
Ethan Right. It’s there if you need it. One thing that I think is super cool about this is if you give Clouddriver, which is the subsystem that interacts with your cloud providers, credentials to communicate with your Docker registry so that it can cache the list of images, it’ll go ahead and create those secrets for you. So you don’t have to worry about creating ImagePullSecrets for your images. If you can see them from the UI, you can deploy them. When we started using Spinnaker, that’s something that I was like, “Hey, how do you do this?” And […20:26] was like, we just create them for you. So it was super convenient.
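The secrets Clouddriver generates here are the standard Kubernetes registry pull-secret wiring; hand-rolled, it would look roughly like this (the secret name and the credential payload are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-docker-registry                      # placeholder name
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6IHsuLi59fQ==   # base64 registry credentials (placeholder)
---
# Referenced from the Pod spec so the kubelet can pull private images:
spec:
  imagePullSecrets:
    - name: my-docker-registry
```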
So as far as the other part of Spinnaker, this is where it really shines: pipelines. I like to refer to the pieces that you use to build pipelines as deployment primitives, kind of how Kubernetes has deployment primitives, Replica Sets, […20:54] Services and all of that type of stuff. We have bits and pieces that we can put together to build a pipeline, whereas if you were doing it in Jenkins, it would just be a whole bunch of bash scripts, maybe some Groovy, some Pipeline Groovy that you’d have to commit to a repo, push, run it, something broke, you don’t have direct access to that change very easily. So what we’ll do here is we’ll create a pipeline. Call it production. And we will get it created. So the best part about deployments is we can do CD, continuous deployment. And that can be triggered off a lot of things like a Git repo commit or Docker registry or another job. So we’re just going to use Docker registry for this. My name, that image will do […21:58], so any tag that’s pushed to this registry will be used. For this demo, I’m not actually going to rebuild an image and push it up. But this is a little trick that I learned. A lot of people have push-button deployments where they get […22:17] push a button, select which build is going to go out, and that’s going to go out. Actually, using the Docker registry trigger, if you just disable it, you’re not going to get that continuous deployment, but you’re still going to get that continuous delivery. Whereas if you’re pushing an image to your registry on every build, you have it right there, ready to go when you need it. So for our production environments and our staging environment, that’s what we do. We have a Docker registry trigger disabled so that we can just pick what image we want and then ship it out. That works really well for us. So we’ll just add a stage. Each stage is one of those deployment primitives that I talked about. We have a lot of things.
One that’s really cool for container-based providers (which would be Kubernetes for the world, and Titus for anyone who’s at Netflix) is not the Jenkins run job but the Run Job stage. So this might look familiar. It’s very similar to a server group. What you’re doing is configuring a Pod to run some arbitrary command. This is something that Skuid has actually done a lot of work on recently. When we started using Spinnaker, this was kind of… It was there but it wasn’t really fleshed out. So we’ve actually added a lot of stuff here. So hopefully, the stuff that’s useful for us is also useful for others. So you can do things like add a service account name, specify your container name, and all of this other stuff. So this keeps you from having to call out to Jenkins. You can just run a bare Pod in Kubernetes and it’ll run. I just wanted to point that out because that’s a container-platform-specific stage. So we’re going to remove that since we’re not going to use it. And we’ll add a deploy stage. So a deploy stage will let us specify a server group. Since we already have one, we can just go ahead and copy that. And we’ll use that as a template. Now, something like this would be familiar to someone who’s maybe using Helm to deploy to Kubernetes. You might have in your Helm chart the deployment templated out. And then for your image, you might have […24:51] Docker image or something like that. Where this is really great is you don’t have to store that chart in a repo and install that chart in every cluster that you’re going to deploy to. You just have a really nice, simple UI to do that.
Colin As you’re touching on Helm, are there any other differences between why you would want to use this over something like Helm?
Ethan Totally. I could not imagine having to script a canary deployment, per se, with Helm. Actually, one of the main things driving us toward Spinnaker was we wanted to do these complex deployment scenarios. But we didn’t want to write the bash or some other programming language to orchestrate that.
Colin Even with Helm, it would still require you to write a significant amount of bash. Is that right?
Ethan Right. Yeah, you would still have to have some automated way of updating your chart version, pulling your values from wherever you store those, updating those, installing the chart. Managing your replica counts is kind of a nightmare. And at this point in the game, Spinnaker is offering way more than anyone else, really. So while Helm may be good for managing stuff that’s not deployed as frequently, you need a tool like Spinnaker that gives you a solution to a problem that’s been solved at companies way bigger than probably most of the people using Spinnaker. So I feel very passionately that Spinnaker fills the hole that’s missing in the Kubernetes community as far as deploying actual applications multiple times per day.
Ethan Sorry, I kind of got up on a bit of a soapbox there.
Colin […27:09]. We love to hear that.
Ethan Yeah. So rolling right along, we’re going to use the Red/Black strategy to deploy this new version. We will keep it behind the same load balancer. We’ll keep our capacity at 2. But the difference here is we’ll actually change this image from 1.0.0 to the one that we defined in our trigger, [hello-skuid:* – 27:35]. Again, that means any tag that we push to our registry is going to get picked up. So maybe I can figure out why that didn’t actually work. […27:49], that’s right.
Colin When you created the ELB, was it […27:56] 8000?
Ethan I will go back and check that right after this. […28:05]. We changed our image. So we’re good. So we’ll go ahead and add that step. So what Red/Black is going to do… I’m actually going to cheat a little bit and look at the tooltip on that. So what Red/Black is going to do is it’s going to actually leave all of your previous Pods online, just pulled out of the load balancer. And how Spinnaker does this is it actually just modifies the Pods that are running and pulls the load balancer label off of those. So if traffic hits that load balancer, it can’t find a Pod in that Replica Set to serve that traffic. And that’s actually also a really good reason why you might want to use Replica Sets over Deployments: a Deployment is going to take over, and it’s not going to leave Pods online. It’s just going to start new Pods and destroy the old ones. And where that would be an issue is if you have a pretty beefy Docker image (which most of the time you won’t, but there are times that you will), you might have to pull gigs of data down onto a new node when that Pod is scheduled. And that’s just going to take a lot longer. If you have some bug that’s really critically affecting your users, that could cost you a nine on your SLA. So just having Pods running and ready to serve traffic when you need them is a huge benefit. So we’ll use Red/Black. We’ll call that done. We’ll save it. So this is obviously a very simple example. What we might do also is a manual judgment. Manual judgment says: I’m going to pause. If everything looks good, then I’m going to do something else. If it doesn’t, we can conditionally do things based on the judgment there. We can say that everything looks good. It’s just going to ask us a question. We’re going to say yes or no. I’m not going to worry about doing no here. But after we say yes, we will destroy the previous version. Actually, let’s not do that. I want to demo rollbacks and that’ll kind of hurt.
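Conceptually, the disable step described above is just a label flip on the running Pods: the Replica Set keeps them alive, but they stop matching the Service selector. A before/after sketch (the label name is an assumption based on the demo’s load balancer name):

```yaml
# Enabled: the Pod matches the Service selector and receives traffic
metadata:
  labels:
    load-balancer-armory-demo-external: "true"
---
# Disabled: Spinnaker flips the label; the Service stops routing here,
# but the Pod stays running, warm and ready for a one-click rollback
metadata:
  labels:
    load-balancer-armory-demo-external: "false"
```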
So now that we’ve got a pipeline done, it’s really simple. I’ll go back over it one more time. We defined a Docker trigger on our registry. We are going to deploy the new version that we say to deploy. And then we’re just going to say, “Does everything look good? Sure.” And what we’ll do is start a manual execution. And we’ll say 2.0 and run it. So what took me five minutes, without going into further detail, would take who knows, an hour, two hours, to get just right with some scripting language in Jenkins. I mean, who can remember how to set a parameter off the top of their head in Jenkins, really? But here, it’s really simple in Spinnaker.
Ethan Yeah, totally. And hopefully, my cache has refreshed fast enough for us to see this. Okay, so we can see here that there are two sets of Pods running, four total: 2.0 behind a load balancer and 1.0 behind a load balancer. We’ll jump back over here real quick to this view. And you can see here that we are deploying it. Once it’s done, we’re actually going to go ahead and disable. Like I said earlier, disable is just going to pull those out of the load balancer, and we’ll jump back to the clusters tab. And we should see the V000 Pods grey out in a second. You can see here we’ve got this update: the server group is disabled. Those Pods will stop serving traffic in a second. And while we’re doing that, I’m just going to double-check. Yeah, […32:58]. I’ll open that in a new tab and see if [inaudible].
Colin […33:13]. So when you go to the… so go to the other tab real quick.
Ethan […33:25] okay. All of this sort of stuff. Well, now that we have it working, we’ll be able to demo rollbacks and everything will be good, cool. You can see here now they’re greyed out. Those Pods aren’t serving traffic anymore. One thing I will do… And this is really Spinnaker information more so than just Kubernetes information. But I think it’s really cool. While that’s running, we’ll go ahead and reconfigure this pipeline. So there’s this thing in Spinnaker called expressions, and that will allow you to dynamically inject information about your pipeline. So if you were triggering based on a Git commit, you could inject the Git commit as an environment variable, for example. What I’ve done is this service that we’re running will display a specific environment variable in the UI. So you can see here that deployment number says “none provided” because we didn’t set the environment variable. But we’re going to do environment variables on our containers. And this is actually exactly what you would expect out of Kubernetes YAML. You’ve got a ConfigMap. You’ve got a Secret. You’ve got a field ref. Actually, one of my co-workers just added support for resource field refs. So you can reference the resources of another container in your Pod and pull that information off. We’re still working on the UI for that, but it’s coming.
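Spinnaker pipeline expressions use `${...}` syntax and are evaluated when the pipeline runs. The environment variable from the demo would be wired up roughly like this in the deploy stage’s container config (the variable name is a guess; `execution.id` is the expression Ethan uses):

```yaml
env:
  - name: DEPLOYMENT_NUMBER      # assumed name; the demo app renders this in its UI
    value: "${execution.id}"     # pipeline expression: the ID of this pipeline run
```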
Colin How many of you guys are actually contributing to the open-source project right now?
Ethan There are two of us. I have done a lot of it. But my co-worker, who is also very passionate about Kubernetes, just started. It’s awesome seeing the love that I have found for Spinnaker kind of spreading out further into our team. It’s really cool.
Colin That’s great.
Ethan So we’ll do an expression here. We’ll say execution.id. What that’s going to do is inject the ID of our pipeline run into this environment variable. And like I said, if everything works as expected, we’ll actually see that show up. So we’ll just click done. We’ll save that. Make sure that […36:18] pipeline’s waiting for manual judgment. So we’ll continue and say, “Yeah, everything looks fine. We just checked it.”
Ethan Yeah. Cool, so we’re done. And in our clusters tab, we’re actually going to be able to see that this worked […36:35].
Colin Awesome. […36:42].
Ethan Say that again.
Colin Click on that server group right there, the one that you’re on.
Colin And then do you […36:50] rollback from here, or do you want to deploy the pipeline one more time?
Ethan I’ll demo the rollback, and then deploy the change with the environment variable, because that’s a […37:02] scenario, right? You forgot some information, so you need to roll back. This is actually one of the reasons we went with Spinnaker. One of the things that we love is single-click rollbacks. Again, orchestrating that with Jenkins or some other type of tool is difficult, to say the least. But with Spinnaker, we’ve got a very simple rollback mechanism. This is the one I want […37:35] and environment variable. And we’ll submit it. Rollbacks are actually just a collection of other actions that you can take on your server groups: […37:52] an enable, a resize, and then a disable. So it’s going to enable the old one that we want. Then it’s going to resize it. So if we resized V001 to five replicas instead of two, obviously, you did that for a reason. So you shouldn’t roll back to an old size. So what that’s going to do is resize it to the current state and then disable the one that you don’t want anymore. And all of this is handled for you. You don’t have to write any code to do this. So we’ll see momentarily from our test tab. We can also watch this because it’s happening asynchronously in the background. We can see what’s going on. I’ll also point out another thing that we really love about Spinnaker: this section over here for a user. If you have authentication turned on with GitHub or Google […38:58], the person who actually performed the action, their username will show up right there. It’s kind of a simple auditing mechanism.
Colin Yeah. We find that when […39:14] are associated with actions, they’re a little bit more careful with what they’re doing.
Ethan [laughs] Yeah. Nobody […39:19] destroy a server group and get caught.
Colin Things get messed up […39:25] when there’s accountability.
Ethan Yeah. So it looks like we’re pretty much rolled back. V001, which was missing the environment variable, is disabled and shouldn’t be serving traffic anymore. We’ll roll back to 1.0. Come on, yes, all right. So the only thing that changed there was the blue background. I know it’s really simple, but we can see that we’ve rolled back from something with a blue background to something with a white background. And now that we’ve already got the deployment with the environment variable changed, we’ll just redeploy 2.0. If you were rolling forward because you had an issue, this might be 3.0. And you can just skip right over 2. And this is just going to do exactly what it did beforehand, just going to Red/Black the Replica Sets in place, switch traffic over via the Service and the labels, and we should see our change roll out in a few minutes.
Colin […40:41] see the server group is created.
Colin So […40:45] running, could you go into… you told me beforehand that you can’t show how you used to do this. So could you describe how you would […40:54] before Spinnaker.
Ethan Yeah. So before Spinnaker, we were using Jenkins. We still use Jenkins and Jenkins pipelines for our builds. But the problem that we ran into – and I’m sure there are a lot of people feeling this pain point – is how do you make those changes? It was a hundred lines of Bash and a couple hundred lines of Jenkins Pipeline Groovy to make this all work. If we needed to add a new cluster, that was a commit in the repo. Maybe you mistyped an account ID. That’s another commit. When you’ve just deployed to us-west-1 or us-west-2 and you’re trying to deploy to eu-central-1 or something like that, and you mistype it, then you have to go and redeploy us-west-2 when you just wanted to redeploy eu-central-1. So it was really just a lot of code, a lot of headaches. To make the updates, we had to keep a JSON template and then do some naïve find-and-replaces in that template to update the image. Like I said, when you get into more complex deployment scenarios, that gets difficult. Orchestrating that with code is difficult, and especially doing that across projects. The more services you bring on, people are going to be copying and pasting code all over the place. They’re going to change it a little bit, but maybe they want that change everywhere. And it kind of got to be a mess. So rather than keep going down that road, rather than having to teach people Jenkins Pipeline DSL on top of Kubernetes, on top of everything else that they have to think about on a daily basis, we went with Spinnaker because it just makes the process a lot easier.
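The pre-Spinnaker workflow described here, a JSON template plus naïve find-and-replace to inject the image tag, might have looked roughly like this. This is a reconstruction for illustration, not Skuid's actual scripts; the registry name and placeholder are made up:

```python
# Rough reconstruction of the pre-Spinnaker approach described above: keep a
# JSON deployment template and do a naive string replace to set the image tag.
# Registry and placeholder names are invented for illustration.
import json

TEMPLATE = """{
  "kind": "Deployment",
  "spec": {
    "template": {
      "spec": {
        "containers": [{"name": "app", "image": "registry.example.com/app:__TAG__"}]
      }
    }
  }
}"""

def render(template, tag):
    # A plain string replace: works until the placeholder drifts, the
    # template grows, or every project copy-pastes and tweaks its own copy.
    return json.loads(template.replace("__TAG__", tag))

manifest = render(TEMPLATE, "2.0")
print(manifest["spec"]["template"]["spec"]["containers"][0]["image"])
# registry.example.com/app:2.0
```

The fragility is the point of the anecdote: every new cluster, region, or deployment strategy means more of this glue code, which Spinnaker replaces with built-in orchestration.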
Colin That’s awesome. […43:20]. I think we’ve already covered a lot of the questions we had. Is there a trade-off to using Spinnaker? Is there something that you had to give up because you prioritized what Spinnaker could give you in return?
Ethan Yeah. One thing that we really want to be able to do is move all deployment-related tasks into Spinnaker. Right now, for a lot of infrastructure stuff, for example, we use DaemonSets, which don’t have support in Spinnaker. So as a technical operations team, we kind of have to live in two different worlds. We can’t quite transition everything to Spinnaker, so we have to be mindful about the things that we’re doing.
Another thing that we have run into is that there’s no native support for managing ConfigMaps and Secrets in Spinnaker, so we have to manage those via Helm charts. So I think what we’ve had to give up in order to use Spinnaker is the full transition: not being able to fully adopt it in everything that we do, because the support just isn’t there yet.
Colin That’s good to know. Is there a reason why you would not want to use Spinnaker for Kubernetes deployments?
Ethan If you can get away with using Deployments and ReplicaSets, I do not see a reason not to use Spinnaker for Kubernetes deployments. Like I said a second ago, for DaemonSets and some of the less app-focused things, like Prometheus monitoring and stuff like that, which aren’t directly customer-facing but might be internal-facing, you can’t use Spinnaker. But for any customer-facing application, I can’t see a reason that you wouldn’t use Spinnaker. It just makes things so much easier to do with Kubernetes.
Colin Sounds good. […45:51] deployments [inaudible] there. [inaudible] should be there.
Ethan Yes, it is. […45:58], provided that I don’t have any caching issues… it looks like I might have some caching issues. […46:11] Firefox. This actually worked the other day in Firefox when it was having caching problems in Chrome. Hey, there it is.
Colin It’s nice.
Colin […46:26] nice. And that’s good. You take that ID [inaudible] to Spinnaker where that […46:36].
Ethan Sure. Back to Chrome and Spinnaker. We’ve got a pipeline run here. This is our run. I’ll go ahead and click Continue. But you can see […46:51]. Say that again?
Colin I was going to say we see a lot of folks wanting to come to Spinnaker for the ability to […47:01] things like this, and just being able to know, okay, how did this version actually get into production. […47:07] pipeline. Isaac was the one who kicked it off at this time. And these are the […47:14]. So being able to track that down when things go wrong makes people feel comfortable, knowing that Spinnaker gives you a ton of information to debug things, even from a compliance perspective. We see a lot of folks who work at large companies who have to do SOC 2 compliance and […47:34] compliance, and this helps them achieve those goals.
Ethan Yup, totally.
Colin All right. I think we’ve taken up a lot of your time here. I think the demo was awesome, as was everything you’ve shown us. Before we get off, is there anything you want to say about Spinnaker and Kubernetes? Or do you think you’ve said it all?
Ethan I would just reiterate that Spinnaker is what’s missing right now from the Kubernetes community. There’s a lot of focus around the operator. There are a lot of great tools for the operator and the cluster manager. But what Spinnaker brings to the table is a focus on the engineer and the developer who doesn’t necessarily use Kubernetes on a day-to-day basis but still has the responsibility of owning their application. From the operator’s perspective, you may not want to give developers direct access to your Kubernetes cluster, so doing that through a tool like Spinnaker, which is so rich and gives so much detail, is a really great way to do it.
Colin That’s good enough. Thanks, […48:48]. Thank you. Thank you for [inaudible]. Yeah, I appreciate the time. [inaudible].