
10 GCP Best Practices You Should Know

As the fastest growing major cloud provider, Google Cloud Platform (GCP) is shaping the cloud adoption choices of many individual users and enterprises. More and more new users are attracted by the convenience and features the platform offers. With rapid adoption, however, come concerns about security and related risks. Accordingly, users should keep an eye on the top GCP best practices, which can help them meet their business objectives with fewer security concerns.

If you already use GCP or plan to adopt it for your business, make sure to follow the best practices of Google Cloud Platform. Adopting GCP best practices helps you not only tackle cloud security issues but also improve many other areas, including reducing spend, ensuring continuous delivery, managing storage, and much more.


List of Must-Known GCP Best Practices

The list of GCP best practices is given below, in no particular order. Some of these practices address multiple issues faced by GCP customers, while others target specific ones. Let's read on!

1. Optimizing Persistent Disk Performance

If you are looking into Google Cloud storage best practices, you can't neglect the optimization of persistent disks. Here is the scenario with minimal tech jargon: when a Compute Engine virtual machine is launched in GCP, a persistent disk is attached to act as local storage for the application. When the VM is deleted, the disk may remain behind, no longer attached to any instance. Even though such disks are not being used, GCP continues to charge the full price for them, which can quietly drain your cloud budget.

So, removing unattached persistent disks is one of the Google Cloud storage best practices that can shave a good amount off your monthly bill. With Compute Engine, it is a hassle-free task, described below.

Step 1: Open the list of disks under Compute Engine in the Google Cloud console.

Step 2: Find the disks that are not attached to any instance.

Step 3: Get the label key/value of each unattached disk.

Step 4: Finally, execute the “delete” command on the selected disk.
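
To make this concrete, here is a minimal sketch of the same check using the google-cloud-compute Python client. The project ID and zone are placeholders you would replace with your own, and the delete call is left commented out so nothing is removed by accident.

from google.cloud import compute_v1

project = "my-project"      # hypothetical project ID
zone = "us-central1-a"      # hypothetical zone

disks_client = compute_v1.DisksClient()

# A disk's "users" field lists the instances it is attached to;
# an empty list means the disk is unattached but still billed.
for disk in disks_client.list(project=project, zone=zone):
    if not disk.users:
        print(f"Unattached disk: {disk.name} ({disk.size_gb} GB)")
        # Check labels and backups first, then delete:
        # disks_client.delete(project=project, zone=zone, disk=disk.name)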

That's one of the GCP best practices that matters a great deal to an individual customer: unattached disks keep costing money even though the VM they served is gone. Check your GCP projects regularly for unattached disks to avoid unwanted expenses.

2. Ensuring Continuous Delivery

When thinking about the best practices for continuous delivery on GCP, there are four main practices to follow:

  • Operational integration – managing the software development process flow, which can move back and forth between stages.
  • Automation – bringing consistency to your continuous delivery pipeline.
  • Effective deployment strategies – formulating how releases are rolled out.
  • Immutable infrastructure – creating infrastructure components from a clear set of specifications and never modifying them in place.

Together, these four practices make up the best practices for continuous delivery on GCP.

3. Firewall Rules

Sometimes you need to configure VPC firewall rules in Google Cloud Platform to allow specific network access only to the hosts that have a legitimate requirement for it. Even though this kind of fine-grained configuration isn't practical in every situation, it can be crucial when considering Google Cloud security best practices.

Instances can carry text attributes called "network tags". It is recommended to leverage these tags when applying firewall rules, and they can also be used to apply routes to logically related instances. Using tags saves a lot of effort compared to working with raw IP addresses.
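
As an illustration, the following sketch uses the google-cloud-compute Python client to create an ingress rule that allows SSH only to instances carrying a hypothetical "bastion" network tag; the project, network, and source range are placeholder values.

from google.cloud import compute_v1

project = "my-project"                       # hypothetical project ID

firewall = compute_v1.Firewall()
firewall.name = "allow-ssh-to-bastion"
firewall.network = "global/networks/default"
firewall.direction = "INGRESS"
firewall.source_ranges = ["203.0.113.0/24"]  # hypothetical trusted range
firewall.target_tags = ["bastion"]           # rule applies only to tagged instances

allowed = compute_v1.Allowed()
allowed.I_p_protocol = "tcp"
allowed.ports = ["22"]
firewall.allowed = [allowed]

compute_v1.FirewallsClient().insert(project=project, firewall_resource=firewall)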

4. VPC Flow Logs

VPC Flow Logs is a feature that captures information about the traffic moving to and from VM network interfaces in a VPC. If you enable flow logs for the subnets hosting active instances, you can easily troubleshoot why specific traffic is not reaching an instance. Flow logs also help you analyze network expenses in detail and find ways to optimize them. Enabling VPC Flow Logs is one of the GCP best practices for cloud security because it lets you monitor the traffic reaching your instances.

You can view these flow logs in Stackdriver Logging and export them to any destination supported by Stackdriver Logging, for example BigQuery or Cloud Pub/Sub.
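
As a hedged sketch, the following uses the google-cloud-compute Python client to turn on flow logs for an existing subnet; the project, region, and subnet names are placeholders.

from google.cloud import compute_v1

project, region, subnet_name = "my-project", "us-central1", "my-subnet"  # placeholders

client = compute_v1.SubnetworksClient()

# Fetch the current subnet definition (it carries the fingerprint patch() needs),
# flip the flow-logs flag, and patch it back.
subnet = client.get(project=project, region=region, subnetwork=subnet_name)
subnet.enable_flow_logs = True
client.patch(project=project, region=region, subnetwork=subnet_name,
             subnetwork_resource=subnet)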

5. Logging and Versioning of Cloud Storage Buckets

When considering Google Cloud security best practices, logging and versioning of Cloud Storage buckets deserve a place of their own. Both features should be enabled for buckets that contain important data. Access and change logs for storage buckets are very helpful when investigating security incidents. Versioning lets you keep multiple variants of an object in the same bucket, so you can maintain and retrieve earlier versions of stored objects. With versioning enabled, objects can be recovered after both application failures and accidental user actions.

Although object versioning increases storage costs, this can be partially offset by applying object lifecycle management to older versions. Either way, these practices belong on any list of GCP best practices for ensuring security and version control of your GCP infrastructure.
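
Here is a minimal sketch with the google-cloud-storage Python client (assuming a reasonably recent version of the library and hypothetical bucket names) that enables access logging and versioning and adds a lifecycle rule to trim old versions.

from google.cloud import storage

bucket_name = "my-app-data"      # hypothetical bucket holding important data
log_bucket = "my-access-logs"    # hypothetical bucket that receives access logs

client = storage.Client()
bucket = client.get_bucket(bucket_name)

bucket.enable_logging(log_bucket)    # write access/storage logs to log_bucket
bucket.versioning_enabled = True     # keep noncurrent versions of objects

# Lifecycle management: delete a noncurrent version once 5 newer versions exist
bucket.add_lifecycle_delete_rule(number_of_newer_versions=5)

bucket.patch()                       # apply all three changes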


6. Stackdriver Logging and Monitoring

Properly configuring Stackdriver logging and monitoring is one of the Google Cloud Platform best practices for tracking the uptime and performance of GCP projects and their resources. As soon as you enable Stackdriver logging, make sure monitoring alerts are configured so you get real-time alerts on issues affecting your resources. When a configured event triggers an alert condition, Stackdriver creates an incident in the monitoring console, and if notifications are configured correctly, it sends them to the designated point of contact as well as to third-party services.

Also keep in mind that the Stackdriver retention period is limited to 30 days, so to keep logs for a longer period, export sinks must be configured correctly. Among the GCP best practices in this list, this is the one that provides real-time insight from very large volumes of system log files.
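
For the export side, here is a small sketch with the google-cloud-logging Python client that creates a sink sending Compute Engine log entries to a BigQuery dataset; the project, sink, and dataset names are placeholders.

from google.cloud import logging

project = "my-project"                                   # hypothetical project
dataset = f"bigquery.googleapis.com/projects/{project}/datasets/vm_logs"  # hypothetical dataset

client = logging.Client(project=project)
sink = client.sink(
    "vm-logs-to-bigquery",                               # sink name
    filter_='resource.type="gce_instance"',              # only Compute Engine entries
    destination=dataset,
)

if not sink.exists():
    sink.create()  # remember to grant the sink's writer identity access to the dataset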

7. Zombie Instances

Zombie instances are infrastructure components running in a cloud environment that are seldom or never used. For example, there may be Compute Engine virtual machines that were used in the past but are no longer needed, left running after they served their purpose or protected by flags such as 'deletionProtection'. Zombie assets can also result from failed Compute Engine VMs, idle load balancers, and so on.

Whatever the cause of these zombie assets, you are charged for them as long as they remain active. Terminating such assets is therefore another of the best practices on GCP, just like the previous ones. But make sure to back up each asset first so it can be recovered later if needed.
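
As a starting point for hunting zombies, here is a small sketch using the google-cloud-compute Python client that walks every zone of a hypothetical project and flags stopped instances together with their deletion-protection setting.

from google.cloud import compute_v1

project = "my-project"   # hypothetical project ID

client = compute_v1.InstancesClient()

# aggregated_list iterates every zone; each entry is (zone, scoped list of instances)
for zone, scoped in client.aggregated_list(project=project):
    for instance in scoped.instances:
        if instance.status == "TERMINATED":   # stopped, but disks and IPs still bill
            print(f"{zone}: {instance.name} is stopped "
                  f"(deletion protection: {instance.deletion_protection})")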

8. Committed & Sustained Use Discounts

For stable workloads, Google Cloud Platform offers committed use discounts: you purchase a specific amount of compute and memory for a commitment of up to 3 years with no upfront payment, and you can save up to 57% off the normal price. Availing these discounts is one of the GCP best practices, as they apply to standard, high-CPU, high-memory, and custom machine types as well as sole-tenant node groups. When a commitment expires, Compute Engine virtual machines are charged at the normal price again. Also note that once purchased, committed use discounts cannot be cancelled.

Suppose you want discounts over a long period but haven't opted for committed use discounts; there is still a way out. GCP offers "sustained use discounts", which apply automatically when you run eligible resources for a large part of a billing month. Since these discounts cover many resources, such as sole-tenant nodes, GPU devices, and custom machine types, taking advantage of them is another best practice on GCP.

9. Limiting the Use of Cloud Identity and Access Management (IAM) Primitive Roles

As per the top GCP best practices, it is recommended to grant predefined roles to identities whenever possible, since they provide more granular access than primitive roles. The use of primitive roles should be limited to a few cases such as those below (a sketch of granting a predefined role follows the list):

  • When a project is run by a small team and fine-grained access control is not needed
  • When a member needs to be able to change a project's permissions
  • When broad permissions need to be granted across a project
  • When the platform doesn't provide a predefined role that includes the desired permissions.
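
As an illustration of granting a predefined role instead of a primitive one, here is a hedged sketch using the google-cloud-resource-manager Python client; the project, user, and role are placeholder values, and the read-modify-write pattern shown is a simplification that ignores concurrent policy edits.

from google.cloud import resourcemanager_v3

project_resource = "projects/my-project"     # hypothetical project
member = "user:alice@example.com"            # hypothetical identity
role = "roles/storage.objectViewer"          # predefined role, not a primitive one

client = resourcemanager_v3.ProjectsClient()

# Read-modify-write: fetch the current policy, add a binding, write it back.
policy = client.get_iam_policy(request={"resource": project_resource})
policy.bindings.add(role=role, members=[member])
client.set_iam_policy(request={"resource": project_resource, "policy": policy})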

10. Delete Persistent Disk Snapshots

Persistent disk snapshots are created to back up disks against data loss. However, they can cost you a lot of money if they are not monitored properly. Effective management of these snapshots is another of the GCP best practices that can help you work effortlessly. Set a standard in your organization for how many snapshots should be retained per Compute Engine virtual machine; in most cases, recovery is done from the most recent snapshot, so older ones can usually be removed.
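
To illustrate, here is a sketch with the google-cloud-compute Python client that lists snapshots older than a hypothetical 30-day retention standard; the project ID is a placeholder and the delete call is left commented out.

from datetime import datetime, timedelta, timezone
from google.cloud import compute_v1

project = "my-project"    # hypothetical project ID
keep_days = 30            # hypothetical organisation-wide retention standard

client = compute_v1.SnapshotsClient()
cutoff = datetime.now(timezone.utc) - timedelta(days=keep_days)

for snapshot in client.list(project=project):
    created = datetime.fromisoformat(snapshot.creation_timestamp)  # RFC 3339 timestamp
    if created < cutoff:
        print(f"{snapshot.name} created {created:%Y-%m-%d} exceeds the retention standard")
        # client.delete(project=project, snapshot=snapshot.name)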


Final Words

Apart from the practices listed here, GCP offers many other options, such as containers, which can help reduce startup time as well as memory overhead. In any case, we've discussed most of the GCP best practices you can follow to improve the performance of your GCP infrastructure and reduce cost, and most of them cost nothing to adopt while also cutting down the time spent on operations. There is no rule that you must follow all of these practices in your Google Cloud environment, but doing so will definitely bring notable improvements to your GCP infrastructure.

Achieving a Google Cloud certification validates your expertise and helps you implement GCP best practices. At Whizlabs, we aim to help you with your certification preparation, and we provide practice test series for the Google Cloud Certified Professional Cloud Architect and Google Cloud Certified Professional Data Engineer certification exams.

Become a Google Cloud certified professional now!

About Girdharee Saran

Girdharee Saran has 13 years of experience transforming the way e-learning and SaaS start-ups approach digital marketing. He has consistently delivered tangible results and, having worked in content marketing and SEO for a considerable time, is well versed in his craft. With a deep interest in content and growth marketing, his urge to learn more is perpetual. His current role as VP Marketing at Whizlabs covers, among other things, SEO, conversion optimisation, marketing automation, link building, and result-driven content strategy.
