When you run a Web site, an application, or a service on GCP, Google keeps track of all of the resources it uses – specifically, how much processing power, data storage, database queries, and network connectivity it consumes. Rather than lease a server or a DNS address by the month (which is what you would do with an ordinary Web site provider), you pay for each of these resources on a per-minute or even per-second basis, with discounts that apply when your services are used heavily by your customers on the Web. From the perspective of corporate parent Alphabet, GCP is a separate business unit, addressing the business need for enterprises and, in some cases, individuals to deploy software that is usable via Web browsers or through Web apps. GCP leases software, along with the resources needed to support that software and the tools with which such software is developed, on a pay-as-you-go basis.
Also: Google Cloud Next: Everything you need to know about the new strategy
Also: What a hybrid cloud is in the ‘multi-cloud era,’ and why you may already have one
What is a cloud platform, really?
You use a cloud platform when you want the services you present to your users, your customers, or your fellow employees to be an application as opposed to a Web site. Maybe you want to help homebuilders estimate the size and structure of the cabinets they need to rebuild a kitchen. Maybe you’re analyzing the performance statistics of athletes trying out for a college sports club, and you need sophisticated analytics to tell the head coaches whose performance could improve. Or you could be scanning hundreds of thousands of pages of archived newspaper copy, and you need to build a searchable index dating back decades. You use a cloud platform such as GCP when you want to build and run an application that can leverage the power of hyperscale data centers in some way: to reach users worldwide, or to borrow sophisticated analytics and AI functions, or to utilize massive data storage, or to take advantage of cost efficiencies. You pay not for the machine but for the resources the machine uses. By “cloud platform” Google means a software system that deploys functions and applications on an as-needed, automated basis. If your business can host its own applications using a portal that works similarly to GCP, that’s a genuine cloud.
What is Google Cloud’s value proposition?
So why even consider GCP? For now, there’s a particular class of enterprise customer – one probably without its own data center assets on-premises or hosted by colocation providers, but still just large enough to have hired its own software developers. This is the class of organization to which GCP is pitching scale, reliability, and brand familiarity as the key ingredients of its competitive value proposition. Most major markets in any healthy economy loathe a tri-opoly. Usually, the safest bet an analyst can make is that the #3 player will be shaken out of contention, and must content itself with providing “alternative” products or services to niche markets. But Google has the one luxury no other #3 player in any market has: It’s the #1 player in a different, virtually one-player market – online advertising. Its cloud services can be allowed to mature and find their audiences, just as though the survival of the company didn’t rest upon them. A former Microsoft CEO once warned Google that his own company made its mark by being tenacious, tenacious, tenacious. But he’s gone now. And Google Cloud has every reason – including all the time it needs – to keep trying.
Basic Google Cloud services
Here are the principal services that GCP offers its customers:
Google Compute Engine
A “unit” of virtual machine resources (memory, storage, processor power, network throughput), assembled to run like a physical server with the same levels of physical resources, is called an instance. Typically, a service provider charges fixed rates for the use of that instance in per-minute increments, as well as for the other resources it may consume. To be more competitive, GCP charges its customers in increments of seconds instead of minutes. It also gives customers the option of dialing in the precise resource buildout they need for their VMs, which is useful for enterprises that still rely upon legacy applications (a nicer way of saying “old programs”) that were tailored to specifically outfitted physical machines.
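The billing-increment difference is easy to see in a sketch. The hourly rate below is hypothetical, not a published GCP price; the point is that usage is metered in one-second increments, with a one-minute minimum per instance:

```python
# Illustrative sketch of per-second billing, as GCE applies it.
# The hourly rate is a placeholder, not a real GCP price.
HYPOTHETICAL_HOURLY_RATE = 0.0475  # USD per hour for an example instance

def per_second_cost(seconds_used: int, hourly_rate: float,
                    minimum_seconds: int = 60) -> float:
    """Bill in one-second increments after a one-minute minimum."""
    billable = max(seconds_used, minimum_seconds)
    return round(billable * hourly_rate / 3600, 6)

# A VM that ran for 90 seconds is billed for exactly 90 seconds,
# not rounded up to a full minute or hour.
print(per_second_cost(90, HYPOTHETICAL_HOURLY_RATE))
```

Under a per-minute model, those 90 seconds would be rounded up to two full minutes; per-second metering charges only what was used.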
Google Cloud Storage
GCP’s Cloud Storage (GCS) is an object storage system, which is to say, its records maintain both the identity and the structure of any class of data given to it. Unlike a typical storage volume’s file system, where each file or document is rendered as a string of digits whose location is registered in a file allocation table, object storage is an all-purpose block that’s leased to consumers like space in a park-and-lock. It can hold entire organized databases, raw video streams, or matrices for machine learning models.
Nearline
Nearline is a way to utilize Google Cloud Storage for backup and archival data – the kind that you wouldn’t necessarily consider a “database” per se. Data stored here is intended to be accessed no more often than about once per month. Google calls this model “cold storage,” and has adapted its pricing model to enable Nearline to be more price-competitive for such low-utilization purposes as system backups.
Also: Google Cloud adds new hybrid file storage partnership with Nasuni
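A quick back-of-envelope sketch shows when cold storage wins. All rates here are hypothetical placeholders, not published GCS prices; the point is the trade of cheaper at-rest storage for a per-GB retrieval fee:

```python
# Hypothetical per-GB monthly rates illustrating why "cold storage"
# classes trade cheaper at-rest pricing for a retrieval fee.
STANDARD_PER_GB = 0.020           # placeholder rate
NEARLINE_PER_GB = 0.010           # placeholder rate
NEARLINE_RETRIEVAL_PER_GB = 0.01  # placeholder retrieval fee

def monthly_cost(gb_stored, gb_retrieved, per_gb, retrieval_per_gb=0.0):
    return gb_stored * per_gb + gb_retrieved * retrieval_per_gb

backup_gb = 1000  # a 1 TB backup set, read back rarely
standard = monthly_cost(backup_gb, 0, STANDARD_PER_GB)
nearline = monthly_cost(backup_gb, 10, NEARLINE_PER_GB,
                        NEARLINE_RETRIEVAL_PER_GB)
print(standard, nearline)
```

With access kept infrequent, the retrieval fee stays small and the lower at-rest rate dominates; frequent access would flip that result.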
Google Cloud workload deployment services
Although GCP does offer virtual machine instances as table stakes for the cloud computing market, this isn’t really where Google has opted to compete. As the progenitor of Kubernetes, GCP concentrates most of its efforts towards providing enterprises with the means of deploying and operating containerized workloads.
Google Kubernetes Engine
A container (still called in some circles a “Docker container,” after the company that made it popular) is a more modern, flexible, adaptable form of virtualization. Rather than re-creating a physical server, it encapsulates just the resources an application needs to run, then hosts that application on the server’s native operating system. Think of the difference between a container and a virtual machine as analogous to that between a single light bulb and a battery-driven flashlight.

GCP’s fully managed, hosted staging environment for containerized applications is now generally known as Google Kubernetes Engine (GKE), having originally been launched as Google Container Engine. A container is designed to be executed on any system or server with the underlying infrastructure required to support it. A Linux container still needs Linux, and a Windows container needs Windows, but beyond that distinction, a container is extremely portable. So long as an organization’s developers can produce applications as complete, portable, self-contained units, GKE is designed to deploy and run them.

The huge difference here – what makes container engines so much more interesting than VM hosts – is that the customer is not purchasing instances. As a result, you don’t need to over-provision processing power or pre-configure resource limits for the underlying host; you simply give GKE the container, and it finds the right socket for it. Container-based services may then be made discoverable – able to be contacted and utilized by other services in the network – by means of a service mesh. For this, GKE recommends an open-source service mesh called Istio: an interesting kind of “phone book” for modern, scalable applications that are distributed as individual components called microservices.
A conventional, contiguous application knows where all of its functions are; a microservices-based application needs to be informed, by something capable of looking up that function and providing an active network address for it. Istio was originally developed as a service mesh by an open-source partnership made up of Google, IBM, and ride-sharing service Lyft.
Also: Service mesh: What it is and why it matters so much now
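To make the “phone book” idea concrete, here is a toy, in-process Python sketch of the lookup role a service mesh plays. A real mesh like Istio also handles load balancing, retries, and security between networked processes; the service names and addresses below are invented:

```python
# A toy "phone book" sketch of service discovery: microservices
# register under a name, and callers look up an active address
# rather than hard-coding one.
class ServiceRegistry:
    def __init__(self):
        self._addresses = {}

    def register(self, service: str, address: str) -> None:
        self._addresses.setdefault(service, []).append(address)

    def resolve(self, service: str) -> str:
        # A real mesh would load-balance across instances;
        # here we simply return the first registered entry.
        addresses = self._addresses.get(service)
        if not addresses:
            raise LookupError(f"no instances of {service!r} registered")
        return addresses[0]

mesh = ServiceRegistry()
mesh.register("cart", "10.0.1.7:8080")
print(mesh.resolve("cart"))
```

The caller asks for “cart,” not for a hard-coded address, so instances can move or multiply without any caller changing.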
Google App Engine
You’ve heard the term “cloud-native development,” which embraces the idea that an application intended to run on a public cloud platform may be designed, tested, and deployed there to begin with. Google App Engine (GAE) is GCP’s service for enabling developers to build applications remotely, using the language of their choice (although Google tends to push Python). In a way, GAE is another way of delivering Kubernetes Engine, except with the container being created on the same platform where it will be deployed. GAE supplies the interpreters and just-in-time compilers needed to run high-level programs written in Python, Ruby, Node.js (server-side JavaScript), and other well-known languages. These runtime components are the very same language engines a developer would use in building a container, so it is entirely possible for a customer to build an application in App Engine using a runtime that Google does not supply. For example, a customer may choose to supply Microsoft’s .NET runtime component, which is needed to run applications in Microsoft’s languages such as C#, Visual Basic, and even F#. In November 2020, Microsoft unified its .NET platform components, effectively merging the open source .NET Core branch with the original .NET branch; soon afterward, Google made provisions to support .NET 5.0 in its Cloud Run service (introduced below).
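As an illustration of the kind of portable, self-contained code such runtimes host, here is a minimal WSGI application in Python; the entrypoint name `app` is a common convention for Python web services, not something every GAE setup mandates:

```python
# A minimal WSGI application: Python web runtimes, including the
# kind App Engine hosts, serve apps through this callable interface.
def app(environ, start_response):
    # environ carries the request; start_response sets status/headers
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a portable Python app\n"]

# To try it locally with the standard library's reference server:
#   from wsgiref.simple_server import make_server
#   make_server("", 8080, app).serve_forever()
```

Because the interface is standard, the same callable runs unmodified on a laptop, in a container, or on a managed platform.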
Cloud Run
This streamlined deployment platform for containerized applications, named after the old “RUN” command on early microcomputers, represents Google’s effort to drive so-called serverless development through automation. It lets organizations that build their own containerized applications (built for Kubernetes orchestration) deploy them to GCP without pre-configuring virtual servers first. The platform determines the infrastructure resources the application will need by examining its manifest (usually its Dockerfile, a plain-text list of build instructions outlining how the container is put together, and how it should be unpacked). Cloud Run is marketed as a fully managed service, meaning its IT management and upkeep are personally handled by GCP personnel. As a result, Google’s pricing model for Cloud Run is its own beast, as will be explained later.
Also: What serverless computing really means, and everything else you need to know
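As a sketch of what such a manifest looks like, here is a minimal, hypothetical Dockerfile for a small Python service; the file names and port are illustrative, not prescribed by Cloud Run:

```dockerfile
# Plain-text build instructions, one directive per line
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
# Serverless container platforms route requests to the port the
# container listens on; 8080 is a common default
CMD ["python", "main.py"]
```

From directives like these, the platform can infer the base image, the build steps, and the process to launch, which is what lets it deploy the container without hand-configured servers.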
Anthos
As Google’s first multi-cloud deployment platform, Anthos covers not only hybrid cloud (which incorporates customers’ IT assets on-premises) but also AWS-based deployments (with Azure still forthcoming), all managed collectively under the auspices of GCP. The idea is to enable the distributed computing system that many enterprise customers are asking for, where they can pick and choose storage systems, VM instance hosts, and container hosts on a market-driven basis, while maintaining control of the gateway. The premise is that Kubernetes clusters are designed to be distributed, and Anthos enables an application that incorporates multiple clusters to divide groups of those clusters among cloud platforms. For now, public cloud-based clusters may be deployed on either or both GCP and AWS, with no surcharge for using some of each. Customers may then enable their own on-premises servers to host portions of Anthos-based applications, for hourly or monthly fees. On-premises Anthos clusters may be installed on bare metal (basic, off-the-shelf servers) or incorporated into existing VMware environments. Thus far, Anthos has been adopted by organizations with highly distributed IT requirements – for instance, those that operate their own branches, ATMs, or kiosks. These customers may need to run applications as close to the customer as possible, avoiding public cloud deployments where they can, to save costs.
Google Cloud database services
BigQuery
Google engineers like to say that their official term for “big data” is “data.” GCP’s tool for applying relational database insights to massive quantities of data is BigQuery. Like Kubernetes, BigQuery was spawned by a tool Google created for its own purposes – specifically, to perform drill-down queries on its Gmail data stores. That tool was called “Dremel,” but for obvious reasons, Google couldn’t use that brand commercially. For its query model, BigQuery uses standard ANSI SQL, the language most often used in relational databases. A typical relational database stores its data in tables, which are divided into records. Elements of data that are related to one another are written together in a single tier, or at least stored in such a way that their retrieval makes it appear that way. That model is reasonably efficient, but it slows down exponentially as data volumes grow linearly. BigQuery takes this storage model and turns it on its ear – or at least where its ear would be, if it had ears. It uses a columnar, non-relational storage model, which you might think would be more difficult to interpret when it comes time to assign relations. As it turns out, columnar data is much easier to compress, which in turn makes it easier to index, reducing the overall time a query consumes for a large volume of data.
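The compression advantage of columnar layouts is easy to demonstrate. This toy Python sketch (it uses zlib and CSV-style text, not BigQuery’s actual storage format) serializes the same records row-wise and column-wise and compresses both:

```python
import zlib

# 1,000 records with two constant fields and one varying field
rows = [("united-states", str(i), "subscription-active")
        for i in range(1000)]

# Row-oriented: one record per line, fields interleaved
row_wise = "\n".join(",".join(r) for r in rows).encode()

# Column-oriented: each column stored contiguously
col_wise = "\n".join(",".join(col) for col in zip(*rows)).encode()

row_size = len(zlib.compress(row_wise))
col_size = len(zlib.compress(col_wise))
print(row_size, col_size)
```

Because identical values end up adjacent, the columnar serialization compresses to fewer bytes than the row-wise one, which is the effect BigQuery exploits at vastly larger scale.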
Cloud Bigtable
Formerly called BigTable, Cloud Bigtable is a highly distributed data system that organizes related data into a multi-dimensional assembly of key/value pairs, based on the large-scale storage system Google created for its own use in storing search indexes. Such an assembly is easier for analytics applications to manage than a very large index for a colossal relational database with multiple tables whose records would have to be joined at query time.
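A toy Python sketch can show the shape of that multi-dimensional key/value assembly. The row key, column name, and timestamps below are invented, and real Bigtable distributes this structure across many servers:

```python
# Toy sketch of Bigtable's data model: each cell is addressed by
# (row key, column family:qualifier, timestamp), with multiple
# timestamped versions kept per cell.
from collections import defaultdict

table = defaultdict(dict)

def write_cell(row_key, column, timestamp, value):
    table[row_key].setdefault(column, {})[timestamp] = value

def read_latest(row_key, column):
    versions = table[row_key][column]
    return versions[max(versions)]  # newest timestamp wins

write_cell("com.example/index", "anchor:refs", 1001, "page-a")
write_cell("com.example/index", "anchor:refs", 1002, "page-b")
print(read_latest("com.example/index", "anchor:refs"))
```

Queries address cells directly by key rather than joining tables, which is why this shape scales for analytics in a way a multi-table join cannot.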
Google Cloud advanced and scientific services
Pub/Sub
Short for “publish-and-subscribe,” Pub/Sub is a mechanism that replaces the message queues used by middleware during the earlier era of client/server applications. For applications that are designed to cooperate without being explicitly connected (“asynchronously”), Pub/Sub serves as a kind of post office for events, so one application can notify others of their progress or about requests they may have.
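A minimal in-process Python sketch can illustrate the pattern; the topic and message shapes are invented, and real Pub/Sub operates across networks with durable, asynchronous delivery:

```python
# Toy publish/subscribe: publishers post to a named topic without
# knowing who is listening; every subscriber gets its own copy.
class Topic:
    def __init__(self, name):
        self.name = name
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, message):
        for callback in self._subscribers:
            callback(message)

received = []
orders = Topic("orders")
orders.subscribe(received.append)
orders.publish({"event": "order-created", "id": 42})
print(received)
```

The publisher and subscriber never reference each other, only the topic, which is the decoupling that lets cooperating services stay unaware of one another.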
Cloud AutoML
Based on recent efforts to automate the process of learning patterns in data without the need to create extra code, Cloud AutoML is a pre-configured service capable of “ingesting” pre-existing data and employing machine learning models on that data to detect patterns.
TensorFlow Enterprise
Deep learning systems require a class of component called an inference engine, which is capable of analyzing data sets and identifying patterns within them. Google distributes the full-scale Enterprise edition of TensorFlow (the machine learning framework it originated, now maintained as a separate open source project), which incorporates such an engine, through Google Cloud. This way, developers can integrate capabilities such as video scanning, fraud detection, and behavioral prediction directly into their containerized applications.
Google Cloud pricing models
Each of GCP’s services consumes fundamental resources of cloud computing: processor power, memory, data storage, and connectivity. Like other cloud service providers, GCP charges its customers for the resources these services consume – so whatever you choose to do with GCP, you pay for the resources it consumes. BigQuery and Bigtable in particular can incur significant data storage expenses. The formulas for determining the actual prices for resource consumption are somewhat complex. There is also a separate pricing model for Cloud Run, GCP’s automated workload deployment mechanism; that model will be explained momentarily.
How much does using Google Cloud typically cost?
For more general usage models, Google offers a pricing calculator whose formulas are updated up-to-the-minute. But to use that calculator, your estimates of the resources you plan to consume need to be surprisingly precise. For example, to obtain a price estimate for Google Kubernetes Engine, you’d need to know the maximum number of compute nodes you’d be scaling out to, how much persistent disk storage your application will require (as opposed to ephemeral storage), and which availability zone you feel would be most efficient for load balancing, among other factors.
Google Compute Engine allows customers to choose a machine instance that may be pre-empted when not in active use. Unlike a pricing scenario where you pay for the instance plus the resources it uses, a GCE customer pays for the instance’s availability, which may then be discounted by 70 percent when resources are not in use. (Uploading a custom disk image to a VM instance, however, does incur a surcharge.)

GCP also allows customers to create custom machine types, which lets individuals select virtual machine buildouts that differ from the pre-defined models. However, Google no longer commits to ensuring discounts for using custom types instead of pre-defined types.

GCP applies so-called sustained use discounts to persistently available workloads, on a roughly linear scale starting with workloads used over 25 percent of all available time during a given month. A workload that runs every minute of a billing period may be discounted as much as 30 percent.

Google will discount certain customers as much as 57 percent for committing up-front to between 1 and 3 years of sustained resource usage.

Enterprise customers anticipating heavy data consumption can sign up for a program called Storage Growth Plan, which entitles them to discounts if they commit to a minimum price per month for 12 months. This is for very heavy data consumers – not small businesses, but enterprises that plan to have GCS host massive data stores.
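The sustained use discount curve described above can be sketched as a simple Python function. The real GCE formula applies discounts tier by tier within the month, so treat this linear ramp as an approximation:

```python
# Simplified sustained use discount: no discount at or below 25%
# monthly utilization, then a roughly linear ramp up to a 30%
# discount at 100% utilization. Approximation only; GCP's actual
# formula discounts usage in tiers.
def sustained_use_discount(utilization: float) -> float:
    if utilization <= 0.25:
        return 0.0
    return round(0.30 * (utilization - 0.25) / 0.75, 4)

print(sustained_use_discount(0.25))  # 0.0
print(sustained_use_discount(1.0))   # 0.3
```

A workload at half-time utilization lands at a 10 percent discount under this ramp, which matches the article’s “roughly linear” description.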
How much does using Cloud Run cost, and why is it different?
GCP’s pricing calculator is capable of projecting Cloud Run costs under its own model. Unlike general usage, Cloud Run utilizes an entirely separate meter, which ticks off how many seconds (not minutes) the platform runs the customer’s application in an instance provisioned with one vCPU and one gigabyte of memory. (Strictly speaking, Google meters a gibibyte, probably because Google loves to give readers new reasons to Google something.) A Cloud Run instance is an independent resource meant solely to run the application package deployed to it, and it pre-empts itself when not in use. For the first 50 hours of vCPU time in a month, it incurs no charge at all. GCP then charges the equivalent of between $0.086 and $0.12 per hour of vCPU, and between $0.009 and $0.013 per hour of memory, depending on where in the world you deploy your workloads. There’s an additional charge of $0.40 per 1 million service requests over the network, after the first 2 million free requests. So Cloud Run is clearly a premium service, possibly incurring 4 times the charges of standard Google Compute Engine service, on account of its being fully managed and free from customer-supplied configuration.
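Using the low-end figures cited above, a back-of-envelope estimate looks like this. The free-tier handling is simplified (in practice memory and vCPU have separate allotments), so treat this strictly as an illustration:

```python
# Rough Cloud Run estimate from the per-hour figures cited above:
# $0.086 per vCPU-hour, $0.009 per GiB-hour of memory, and $0.40
# per million requests beyond the free tier. Rates vary by region.
def cloud_run_estimate(vcpu_hours, gib_hours, million_requests):
    free_hours = 50.0            # the 50 free hours cited above;
    free_million_requests = 2.0  # simplification: applied to both meters
    cpu = max(vcpu_hours - free_hours, 0) * 0.086
    mem = max(gib_hours - free_hours, 0) * 0.009
    req = max(million_requests - free_million_requests, 0) * 0.40
    return round(cpu + mem + req, 2)

print(cloud_run_estimate(250, 250, 5))
```

At 250 hours of runtime and 5 million requests, the estimate is about $20 for the month, with nearly all of it coming from vCPU time.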
How much does using Anthos cost?
The Anthos pricing model is, once again, altogether different. It’s based on the understanding that its users require server clusters, as opposed to more granular resources such as compute and storage time. So it charges each subscriber for each virtual CPU on an hourly or monthly basis: at the time of this writing, $0.012 per vCPU per hour, or $9 per vCPU per month. Management of on-premises equipment incurs a premium of $0.10 per vCPU per hour, or $75 per month. Google then offers customers the option of committing to an extended term for a discount of 30 percent.
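The arithmetic of those quoted figures can be sketched as follows; this assumes the $9 and $75 monthly per-vCPU rates apply uniformly across a subscription, which simplifies the published terms:

```python
# Anthos cost arithmetic from the figures quoted above:
# $9/month per cloud vCPU, $75/month per managed on-premises vCPU,
# and a 30% discount for an extended-term commitment.
def anthos_monthly(cloud_vcpus, onprem_vcpus, committed=False):
    cost = cloud_vcpus * 9.0 + onprem_vcpus * 75.0
    return round(cost * 0.70, 2) if committed else round(cost, 2)

print(anthos_monthly(100, 20))                  # pay-as-you-go
print(anthos_monthly(100, 20, committed=True))  # with the 30% cut
```

Note how a modest on-premises footprint dominates the bill: 20 managed on-premises vCPUs cost more here than 100 cloud vCPUs.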
How does Google Cloud fare against competitors?
If cloud providers were retailers, Google Cloud would be Ikea: It sells itself to you first based on its overall experience. It tries to make you feel comfortable and at ease. It offers a unique and surprisingly diverse collection of the functional and the odd, the lowball and the premium, side-by-side in perfect harmony. And it openly acknowledges it isn’t the only game in town.
What are Google Cloud’s competitive strengths?
Automating the deployment of modern applications. An app is made of many moving parts, which is why some developers prefer to build their apps in the cloud to begin with (“cloud-native”). Google is the originator of Kubernetes, an orchestrator for applications composed of many components. Early on, Google took a proactive approach to automating the deployment of these multifaceted apps to the cloud: for example, opening itself to Kubo, an automation platform created to help developers using Cloud Foundry deploy their applications from dev platforms to the cloud.

Creative cost control. Rather than being the low-cost leader, Google’s strategy with GCP is to enable cost competitiveness in certain “sweet spot” scenarios. For example, Google offers a lifecycle manager for its object data storage, which enables the offloading or deletion of objects that haven’t been used in 30 days or more.

Friendlier hand-holding for first-time users. A cloud services platform can be an overwhelming concept for a newcomer to digest. Just as it wasn’t obvious to many consumers what the purpose of a microcomputer actually was, a public cloud is a new and foreign beast for folks who are accustomed to seeing and touching the machine they’re using. GCP offers step-by-step examples of doing many of the most common tasks – for example, spinning up a Linux-based virtual machine, which is like claiming and setting up your own, brand new computer out of thin air.
Also: Top cloud providers in 2021: AWS, Microsoft Azure, and Google Cloud, hybrid, SaaS players
Also: Google Cloud vs. AWS: Two vastly different profit pictures
Google Cloud vs. Microsoft Azure
Azure’s original service (when it was “Windows Azure”) was as a cloud-based deployment platform for applications written in any of Microsoft’s .NET languages. As such, Azure organically built out its service portfolio based on its tight relationship with software developers; an accurate picture of the core Azure customer could be summarized by the phrase “the Visual Studio user.” GCP, by contrast, built its business model around one of the core functions Google created for its own purposes: distributed software orchestration. It doesn’t help you or your organization build software so much as it helps you deploy it. As the creator of Kubernetes, Google’s strength is in getting software to the point where it can be distributed globally. It solved the problem of distributing updates to its search engine and e-mail service, then scaled down that solution to a form usable by a small business. Any business that knows what distributed software is, let alone what it wants to do with the stuff, is already pretty tech-savvy. But that’s not really the market Google would prefer to cater to, so it makes the effort to render this technology more approachable – which, at one scale, is not unlike instructing home gardeners in how to make better use of nuclear reactors. This ends up being the key differentiator between Azure and GCP: Google has made further strides (so far) in adapting its services to people who may not be fully versed in the subject matter yet. You may be able to get a handle on BigQuery or Cloud Storage more readily.
Related articles
Elsewhere
How Google Cloud Run Combines Serverless with Containers by Janakiram MSV, The New Stack
A tale of two cloud providers: Google Cloud and AWS numbers reveal a balancing act inside each firm by Stuart Lauchlan, Diginomica
What’s the best cloud storage for you? by Steven J. Vaughan-Nichols, Cloud: The Report