Choosing Between AWS, GCP, and Azure
May 30, 2017 by Armory
Cloud providers will specialize over time; the big players such as Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure will look for ways to differentiate themselves from the competition. The past two years have seen GCP and Azure roll out a slew of changes and services meant to close the lead AWS has held for years, and the end result is a competitive ecosystem of differentiated services. This means that in the near future, companies will want to run workloads on the cloud platform best suited to each application’s needs, and a company’s deployment landscape may rely on multiple cloud providers.
We think that moving to the cloud will become the second great digital migration (the first was when companies went digital by cutting reliance on paper), and we have previously covered why businesses should move to the cloud. For the companies that are already in the cloud, we’ve also covered why companies should keep multi-cloud deployments in mind.
To continue helping companies navigate this new cloud environment, we want to look at what the next five years may bring: a gradual specialization among cloud providers as they attempt to differentiate and offer competitive services. For companies looking to adopt the cloud for its competitive edge, it will become imperative to understand what differentiates these cloud services. We fully expect these providers to begin offering capabilities tailored to the needs of different kinds of applications.
The motivation to diversify your cloud providers won’t be an external push from the providers themselves but an internal one from your deployment teams. Cloud providers would love to tell you that their service is all you need, but we’ve found that certain workloads or pricing structures only make sense in specific environments. As your teams evaluate services and realize that some applications have specific environment requirements for efficiency, or that some pricing structures don’t fit the workloads your organization runs, we believe you will find yourself negotiating contracts with multiple providers.
The choice of a cloud provider won’t become an “either/or” question, but “I have contracts with all these providers, now which one should I run this application on?”
So what will each cloud provider be great for? Answering that question takes homework, especially given how quickly these platforms are innovating.
As you start to determine the optimal mix of cloud-services for your organization’s needs, here are some insights to consider:
- “Google will soon launch a cloud computing service that provides exclusive access to a new kind of artificial-intelligence chip designed by its own engineers.” from “Google Rattles the Tech World With a New AI Chip for All”
- “Technologies pioneered by Google, like Big Query, Big Table, and Hadoop, are naturally fully supported. Google’s Nearline offers archiving as cheap as Glacier, but with virtually no latency on recovery.” from CloudAcademy
- GCP charges for instances by the minute, rounding up, with a minimum floor of 10 minutes (so if you used 4 minutes, you would be billed for 10; if you used 13.5 minutes, you would be billed for 14). Google recently announced sustained-use pricing for compute services, a simpler and more flexible alternative to AWS’s reserved instances. Sustained-use pricing automatically discounts the on-demand baseline hourly rate as a particular instance is used for a larger percentage of the month, which means no upfront commitment is required to receive discounts.
- Google allows the widest customization of machine offerings, giving the most granular control over user-defined machine configurations. Technically this means GCP offers the greatest variety of virtual machines, though one should consider the operational complications of running what is essentially an unbounded number of machine types.
- “Google has the smallest footprint of the three providers, with four regions, comprised of 3-4 “zones” (data centers) each. Other data centers provide regional support against zonal failures and act as redundancy only. Google makes up for its geographical shortcomings with its global network infrastructure, which provides high-speed, low-latency connectivity between its data centers, both on a regional and interregional level (compared to public Internet connectivity with Amazon and Microsoft), as well as a large number of its own PoPs deployed in over 30 countries.” from Cloudyn
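The per-minute rounding rule and the sustained-use discount described above can be sketched in a few lines of Python. The billing function encodes exactly the two examples from the text; the discount tiers and rates in `sustained_use_cost` are purely illustrative assumptions, not Google’s actual schedule:

```python
import math

def gcp_billed_minutes(minutes_used: float) -> int:
    """GCP bills per minute, rounded up, with a 10-minute minimum floor."""
    return max(10, math.ceil(minutes_used))

def sustained_use_cost(hours_in_month: float, base_hourly_rate: float) -> float:
    """Hypothetical sustained-use discount: the larger the fraction of the
    month an instance runs, the lower its effective rate. Tier boundaries
    and discount factors here are illustrative, not real GCP rates."""
    total_hours_in_month = 730.0
    usage_fraction = min(1.0, hours_in_month / total_hours_in_month)
    if usage_fraction <= 0.25:
        discount = 1.0   # no discount for light usage
    elif usage_fraction <= 0.50:
        discount = 0.9
    elif usage_fraction <= 0.75:
        discount = 0.8
    else:
        discount = 0.7   # deepest discount for near-constant usage
    return hours_in_month * base_hourly_rate * discount

# The two billing examples from the text:
print(gcp_billed_minutes(4))     # -> 10 (minimum floor applies)
print(gcp_billed_minutes(13.5))  # -> 14 (rounded up to the next minute)
```

The key point the sketch illustrates is that the discount kicks in automatically as usage accumulates; unlike reserved instances, there is no upfront commitment to model.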
Summary of Takeaways for GCP:
At the time of this writing, GCP has the fewest data centers of the three. Latency should not be a problem thanks to the impressive global network infrastructure Google controls, but your workload may require a data center in a specific region. GCP seems to offer the best fit for customers who value flexibility of service or need Google technologies such as BigQuery, Bigtable, or its artificial-intelligence and machine-learning systems. Finally, GCP is extremely cost-efficient if you anticipate sustained usage but prefer not to commit to an upfront cost.
- Azure charges for on-demand instances by rounding up the number of minutes used. Azure also offers short-term commitments with discounts.
- Note that Microsoft gives the best deals and options to companies that are already in an Enterprise Agreement with them. (Cloudyn)
- Microsoft is quickly catching up to Amazon in terms of global data center regions – it was the first of the three to open a region in India, in 2015.
- Smaller server configuration variety than AWS, but much more flexibility with server sizing and better per-VM performance.
- Data storage is in their Blobs, “Similar to Amazon, it’s offered in four different SLA levels: Locally redundant storage (LRS) where redundant copies of the data are stored within the same data center; zone redundant storage (ZRS), where redundant copies are stored in different data centers within the same region; and geographically redundant storage (GRS) which performs LRS on two distant data centers, for the highest level of durability and availability.” (Cloudyn)
Summary of Takeaways for Azure:
Optimization for general purposes seems to be the key here – Azure is performance- and network-optimized, giving you a less specific selection but hoping to lure you in with a strong foundation of raw computing power. Combined with Microsoft’s rapid scaling of data centers around the globe, Azure seems great for any workload that wants low latency, a selection of global data centers to choose from, and non-specific VM settings. The pricing model is simple but hardly flexible: you commit or pay as you go, and discounts are reserved for those who pay upfront.
- AWS charges customers by rounding up the number of hours used, so the minimum use is one hour. AWS instances can be purchased using any one of three models:
- on demand – customers pay for what they use without any upfront cost
- reserved – customers reserve instances for 1 or 3 years with an upfront cost that is based on the utilization
- spot – customers bid for the extra capacity available
- Spot bidding is useful for extreme cost savings on workloads that can be interrupted and terminated at any time
- spot blocks (fixed duration instances) won’t be interrupted mid-workload, but the instance will be terminated when time is up
- AWS has the most regions across the globe
- EC2 currently allows the largest variety of server configurations. When using AWS, EC2 is often a better deployment target than a self-managed Kubernetes cluster because, being built by Amazon, it integrates seamlessly with the rest of the platform.
- AWS integrates seamlessly with other Amazon services and has been around longer than the other two platforms, which means many cloud-deployment sub-services and tools are built to integrate with AWS first.
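The practical effect of AWS’s hour-granularity billing (versus per-minute billing with a minimum floor, as GCP uses) is easiest to see with a short workload. This is a minimal sketch; the $0.10/hour rate and the 65-minute job are assumed numbers for illustration, not real prices:

```python
import math

def aws_billed_hours(hours_used: float) -> int:
    """Classic EC2 billing: usage is rounded up to the next full hour,
    with a one-hour minimum."""
    return max(1, math.ceil(hours_used))

def per_minute_billed_minutes(minutes_used: float, minimum: int = 10) -> int:
    """Per-minute billing with a minimum floor, as described for GCP."""
    return max(minimum, math.ceil(minutes_used))

# A hypothetical 65-minute batch job at an assumed $0.10/hour rate:
rate_per_hour = 0.10
aws_cost = aws_billed_hours(65 / 60) * rate_per_hour           # billed as 2 full hours
gcp_cost = per_minute_billed_minutes(65) * rate_per_hour / 60  # billed as 65 minutes
print(f"hourly rounding: ${aws_cost:.4f}, per-minute: ${gcp_cost:.4f}")
```

For short or bursty workloads the rounding granularity alone can nearly double the bill, which is exactly the kind of pricing-structure mismatch that pushes teams toward a second provider.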
Summary of Takeaways for AWS:
The complex pricing models can be overwhelming; AWS makes you choose between convenience, flexibility, and cost-efficiency, forcing you to plan accordingly (in contrast to GCP, which lets you dive in without planning). However, as the market leader with a large suite of VM families and a veritable arsenal of services in the cloud-deployment space, Amazon still has a competitive edge. The biggest item of note is the number of regions AWS operates globally.
Other Important Things to Note:
- Depending on your business and location, you may be impacted by privacy laws that influence your decision of region
- If you are under a federal contract, you may have additional restrictions to be federal compliant. Make sure you meet FedRAMP standards.
- Transferring data across regions costs different amounts depending on the provider, but it should be done regardless: keep backups in different regions in case a data center is destroyed by a natural disaster.
- Data center costs differ by region – for instance, AWS’s US East (N. Virginia) region is typically among its cheapest. Weigh whether you actually need the latency of the server “closest” to your users against the cost of that server’s region, and keep in mind whether a location is susceptible to natural disasters when planning your cross-region backups.
- If latency is the biggest priority, look into the provider’s Edge Networks. For instance, Amazon’s Edge Network is CloudFront.
- These providers do hand out “free trials” to young startups and businesses – simply ask your representative about any free credits or trials available. It is up to you to determine during the free trial whether the service meets your requirements, and be careful not to get locked in to their service!
As you begin to diversify your cloud dependency, your tools will need to change accordingly; deploying to multiple clouds is a different process to oversee, and many existing tools weren’t built for it without arduous additions of glue code. Standardizing your teams’ deployments is a best practice for achieving developer efficiency. There are many deployment tools that make this easier for your engineering teams, but we recommend Spinnaker: a one-size-fits-all deployment tool that orchestrates all deployments on one standardized platform, already used by big companies like Netflix and Waze for their multi-cloud deployments.