{"id":80185,"date":"2021-12-03T01:38:15","date_gmt":"2021-12-03T07:08:15","guid":{"rendered":"https:\/\/www.whizlabs.com\/blog\/?p=80185"},"modified":"2023-11-30T21:16:24","modified_gmt":"2023-12-01T02:46:24","slug":"gcp-professional-cloud-architect-free-questions","status":"publish","type":"post","link":"https:\/\/www.whizlabs.com\/blog\/gcp-professional-cloud-architect-free-questions\/","title":{"rendered":"25 Free Practice Questions &#8211; GCP Certified Professional Cloud Architect"},"content":{"rendered":"<p><span style=\"font-weight: 400;\">A Google Cloud Certified Professional Cloud Architect enables organisations to leverage Google Cloud technologies. They possess a thorough understanding of Google Cloud and its architecture, and are capable of designing and developing robust, secure, and scalable solutions to drive business objectives.<\/span><\/p>\n<h3>What does a professional cloud architect do?<\/h3>\n<p>A Google Cloud Professional Cloud Architect understands the cloud environment and Google technologies, enabling companies to make the most of Google Cloud services. 
The role of the cloud architect is as follows:<\/p>\n<ul>\n<li aria-level=\"1\">Designs cloud solutions according to the client&#8217;s needs<\/li>\n<li aria-level=\"1\">Implements the cloud solutions once they are designed<\/li>\n<li aria-level=\"1\">Develops secure, scalable, and reliable cloud solutions<\/li>\n<li aria-level=\"1\">Manages multi-tiered distributed applications that span multi-cloud and hybrid-cloud environments<\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">The <a href=\"https:\/\/www.whizlabs.com\/google-cloud-certified-professional-cloud-architect\/\">Google Certified Professional Cloud Architect certification<\/a> assesses your ability to:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Design and plan a cloud solution architecture<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Manage and provision the cloud solution infrastructure<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Design for security and compliance<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Analyze and optimize technical and business processes<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Manage implementations of cloud architecture<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Ensure solution and operations reliability<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">It is highly recommended to go through practice exams and practice questions to familiarise yourself with the real exam pattern. 
Whizlabs offers a great set of practice questions for this certification exam.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Please find below a sample set that should help you understand the exam pattern.<\/span><\/p>\n<h4><b>Q No. 1\u00a0<\/b><em><span style=\"font-weight: 400;\">For this question, refer to the<\/span><a href=\"https:\/\/services.google.com\/fh\/files\/blogs\/master_case_study_terramearth.pdf\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">TerramEarth<\/span><\/a><span style=\"font-weight: 400;\">\u00a0case study.\u00a0<\/span><\/em><\/h4>\n<h4><em>TerramEarth receives daily data in the cloud using network interconnects with private on-premises data centers. A subset of the data is transmitted and processed in real time, and the rest daily, when the vehicles return to home base.\u00a0You have been asked to prepare a complete solution for the ingestion and management of this data, which must be both fully stored and aggregated for analytics with BigQuery.<\/em><\/h4>\n<p><span style=\"font-weight: 400;\">Which of the following actions do you think form the best solution (pick 2)?<\/span><\/p>\n<p><strong>A.<\/strong> Real-time data is streamed to BigQuery, and each day a job performs all the required aggregate processing.<\/p>\n<p><strong>B.<\/strong> Real-time data is sent via Pub\/Sub and processed by Dataflow, which stores data in Cloud Storage and computes the aggregates for BigQuery.<\/p>\n<p><strong>C.<\/strong> 
The daily sensor data is uploaded to Cloud Storage with parallel composite uploads, and at the end a Cloud Storage trigger activates a Dataflow procedure.<\/p>\n<p><span style=\"font-weight: 400;\"><strong>D.<\/strong><\/span><span style=\"font-weight: 400;\"> The daily sensor data is loaded quickly with BigQuery Data Transfer Service and processed on demand via a job.<\/span><\/p>\n<p><b>Correct answer: B, C<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Pub\/Sub is the solution recommended by Google because it provides flexibility and security. Flexibility because, being loosely coupled with a publish\/subscribe mechanism, it allows you to modify or add functionality without altering the application code.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Security because it guarantees reliable, many-to-many, asynchronous messaging with at-least-once delivery.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Uploading to both Cloud Storage and BigQuery is important because you want to store the data both in its entirety and in aggregate form.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Parallel composite uploads are recommended because the daily files are of considerable size (200 to 500 megabytes).<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Using Dataflow allows you to manage processing in real time and to use the same procedures for daily batches.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">A is wrong because it stores data only in BigQuery and does not provide real-time processing, when the requirements are to have both global and aggregated data.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">D is wrong because, here too, data is stored only in BigQuery, and because BigQuery Data Transfer Service ingests from cloud sources, not from on-premises archives. 
It also does not address how the data is decompressed and processed.<\/span><\/li>\n<\/ul>\n<h4><b>Q No. 2\u00a0<\/b><span style=\"font-weight: 400;\"><em>To ensure that your application will handle the load even if an entire zone fails, what should you do? Select all correct options.<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\"><strong>A.<\/strong><\/span><span style=\"font-weight: 400;\"> Don&#8217;t select the &#8220;Multizone&#8221; option when creating your managed instance group.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><strong>B.<\/strong><\/span><span style=\"font-weight: 400;\"> Spread your managed instance group over two zones and overprovision by 100%. (for two zones)<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><strong>C.<\/strong><\/span><span style=\"font-weight: 400;\"> Create a regional unmanaged instance group and spread your instances across multiple zones.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"><strong>D.<\/strong><\/span><span style=\"font-weight: 400;\"> Overprovision your regional managed instance group by at least 50%. (for three zones)<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Correct answers: B and D<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Feedback<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B is correct: if one zone fails, you still have 100% of the desired capacity in the other zone.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C is incorrect because an unmanaged instance group won&#8217;t autoscale, so it won&#8217;t be able to handle the full load.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D is correct since you have at least 150% of the desired capacity in total, spread over three zones with 50% in each. 
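<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The arithmetic behind these overprovisioning figures can be sketched as follows (a minimal illustration in Python; the function name and structure are our own, not part of any GCP API):<\/span><\/p>

```python
# Overprovisioning needed so that a regional managed instance group still
# serves 100% of the desired capacity after losing one whole zone.
# With instances spread evenly over N zones, the surviving N - 1 zones must
# jointly hold 100%, so total provisioned capacity = N / (N - 1) of desired.

def overprovision_percent(zones: int) -> float:
    """Extra capacity (as a percentage of desired) to survive one zone failure."""
    if zones < 2:
        raise ValueError("need at least two zones to survive a zone failure")
    return (zones / (zones - 1) - 1) * 100

print(overprovision_percent(2))  # 100.0 -> option B: overprovision by 100%
print(overprovision_percent(3))  # 50.0  -> option D: overprovision by at least 50%
```

<p><span style=\"font-weight: 400;\">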
You\u2019ll still have 100% of the desired capacity across the remaining two zones if any single zone fails.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">If you are creating a regional managed instance group in a region with at least three zones, Google recommends overprovisioning your instance group by at least 50%.<\/span><\/p>\n<h4><b>Q No. 3\u00a0<\/b><em><span style=\"font-weight: 400;\">For this question, refer to the<\/span><a href=\"https:\/\/cloud.google.com\/certification\/guides\/professional-cloud-architect\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">EHR Healthcare<\/span><\/a><span style=\"font-weight: 400;\"> case study.<\/span><\/em><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>The case study explains that EHR hosts several legacy file-based and API integrations with on-site insurance providers, which are expected to be replaced in the coming years. Hence, there is no plan to upgrade or move these systems now.<\/em><\/span><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>But EHR wants to use these APIs from its applications in Google Cloud while they remain on-premises and private, exposing them securely.<\/em><\/span><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>In other words, EHR wants to protect these APIs and the data they process and connect them only to its VPC environment in Google Cloud, with its systems in a protected DMZ that is not accessible from the Internet. Providers will be able to access the integrations only through applications and with all possible precautions.<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">Which technique allows you to fulfill these requirements?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A. Gated Egress and VPC Service Controls<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Cloud Endpoints<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Cloud VPN<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. 
Cloud Composer<\/span><\/p>\n<p><b>Correct Answer: A<\/b><\/p>\n<p><span style=\"font-weight: 400;\">A gated egress topology lets APIs in on-premises environments be available only to processes inside Google Cloud, without direct public internet access.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Applications in Google Cloud communicate with APIs in on-premises environments only via private IP addresses and can then be exposed to the public via an Application Load Balancer, protected by VPC Service Controls.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">VPC Service Controls create additional security for cloud applications:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Isolate services and data<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Monitor for data theft and accidental data loss<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Restrict access to authorized IPs, client context, and device parameters<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>B is wrong <\/b><span style=\"font-weight: 400;\">because Cloud Endpoints is an API gateway that could create the required application facade, but it does not support on-premises endpoints.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>C is wrong <\/b><span style=\"font-weight: 400;\">because Cloud VPN is just a way to connect the local network to a VPC.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>D is wrong <\/b><span style=\"font-weight: 400;\">because Cloud Composer is a workflow management service.<\/span><\/li>\n<\/ul>\n<h4><b>Q No. 
4\u00a0<\/b><em><span style=\"font-weight: 400;\">For this question, refer to the<\/span><a href=\"http:\/\/cloud.google.com\/certification\/guides\/professional-cloud-architect\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">Helicopter Racing League<\/span><\/a><span style=\"font-weight: 400;\"> (HRL) case study.<\/span><\/em><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>Helicopter Racing League (HRL) wants to migrate their existing cloud service to a new platform with solutions that allow them to use and analyze video of the races both in real time and recorded, for broadcasting, on-demand archive, forecasts, and deeper insights.<\/em><\/span><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>There is the need to migrate the recorded videos from another provider without service interruption.<\/em><\/span><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>The idea is to switch the video service to GCP immediately while migrating selected content.<\/em><\/span><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>Users cannot directly access the content wherever it is stored, but only through a correct and secure procedure specially set up for this purpose.<\/em><\/span><\/h4>\n<h4><span style=\"font-weight: 400;\">Which of the following strategies do you think could be feasible for serving the content and migrating the videos with minimal effort (pick 3)?<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">A. Use Cloud CDN with an internet network endpoint group<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Use a Cloud Function that can fetch the video from the correct source<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Use Apigee<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. Use Cloud Storage Transfer Service<\/span><\/p>\n<p><span style=\"font-weight: 400;\">E. Use Cloud Storage streaming service<\/span><\/p>\n<p><span style=\"font-weight: 400;\">F. 
Use Google Transfer Appliance<\/span><\/p>\n<p><b>Correct Answers: A, C and D<\/b><\/p>\n<p><span style=\"font-weight: 400;\">(A) Cloud CDN can serve content from external backends (on-premises or in another cloud).<\/span><\/p>\n<p><span style=\"font-weight: 400;\">External backends are also called custom origins; their endpoints are grouped into NEGs (network endpoint groups).<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The content URL is thus masked, and the origin must be accessible only through the CDN service.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">(D) For the video migration, since the videos are stored with another cloud provider, the best service is Cloud Storage Transfer Service, because it is designed to perform large-scale online data transfers between on-premises, multi-cloud, and Cloud Storage at tens of Gbps. Easy and fast.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">(C) Apigee is GCP&#8217;s most powerful API management product, and it is capable of managing application services in GCP, on-premises, or in a multi-cloud environment.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>B is wrong<\/b><span style=\"font-weight: 400;\"> because it is complicated, would not work well at scale, and requires writing custom code.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>E is wrong <\/b><span style=\"font-weight: 400;\">because the Cloud Storage streaming service acquires streaming data without having to archive the file first. It is used when you need to upload data from a process on the fly.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>F. 
is wrong <\/b><span style=\"font-weight: 400;\">because Google Transfer Appliance is for transferring large amounts of locally stored data, where shipping a physical storage device is faster than using telecommunication lines.<\/span><\/li>\n<\/ul>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-80188\" src=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Google-Transfer-Application-300x169.png\" alt=\"Google Transfer Application\" width=\"600\" height=\"338\" srcset=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Google-Transfer-Application-300x169.png 300w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Google-Transfer-Application.png 512w\" sizes=\"(max-width: 600px) 100vw, 600px\" \/><\/p>\n<h4><b>Q No. 5\u00a0<\/b><span style=\"font-weight: 400;\">A digital media company has recently moved its infrastructure from on-premises to Google Cloud, and it has several instances behind a global HTTPS load balancer. A few days ago, the application and infrastructure were subjected to DDoS attacks, so the company is looking for a service that would provide a defense mechanism against DDoS attacks. Please select the relevant service.<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">A. Cloud Armor<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Cloud Identity-Aware Proxy<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. GCP Firewalls<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. 
IAM policies<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Answer: Option A is the CORRECT choice because Cloud Armor delivers defense at scale against infrastructure and application Distributed Denial of Service (DDoS) attacks using Google\u2019s global infrastructure and security systems.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Option B is INCORRECT because Cloud Identity-Aware Proxy lets you establish a central authorization layer for applications accessed over HTTPS, so you can use an application-level access control model instead of relying on network-level firewalls.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Option C is INCORRECT because GCP firewall rules don\u2019t apply to HTTP(S) load balancers, while Cloud Armor is delivered at the edge of Google\u2019s network, helping to block attacks close to their source.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Option D is INCORRECT because IAM policies don\u2019t help in mitigating DDoS attacks.<\/span><\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-80189\" src=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Cloud-Armor-300x145.jpg\" alt=\"Cloud Armor\" width=\"611\" height=\"295\" srcset=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Cloud-Armor-300x145.jpg 300w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Cloud-Armor.jpg 512w\" sizes=\"(max-width: 611px) 100vw, 611px\" \/><\/p>\n<h4><b>Q No. 6\u00a0<\/b><span style=\"font-weight: 400;\"><em>You work in an international company and manage many GCP Compute Engine instances using the SSH and RDP protocols.<\/em><\/span><span style=\"font-weight: 400;\"><em> Management, for security reasons, requires that VMs cannot have public IP addresses. 
So you are actually no longer able to manage these VMs.<\/em><\/span><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>How can you manage access and operations on these systems in a simple and secure way, while respecting the company rules?<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">A. Bastion hosts<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. NAT instances<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. IAP&#8217;s TCP forwarding<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. Security Command Center<\/span><\/p>\n<p><b>Correct Answer: C<\/b><\/p>\n<p><span style=\"font-weight: 400;\">IAP (Identity-Aware Proxy) TCP forwarding is a service that lets you use SSH and RDP on your GCP VMs from the public internet, wrapping traffic in HTTPS and validating user access with IAM.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Inside GCP, a proxy server with a listener translates the communication and lets you operate safely without publicly exposing your GCP resources.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>A is wrong <\/b><span style=\"font-weight: 400;\">because a bastion host needs a public IP, so it is not feasible.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>B is wrong <\/b><span style=\"font-weight: 400;\">because a NAT instance needs a public IP, too. In addition, it is aimed at outgoing connectivity to the internet and blocks inbound traffic, thus preventing exactly what we need.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>D is wrong <\/b><span style=\"font-weight: 400;\">because Security Command Center is a security reporting service that offers monitoring for vulnerabilities and threats.<\/span><\/li>\n<\/ul>\n<h4><b>Q No. 
7\u00a0<\/b><em><span style=\"font-weight: 400;\">For this question, refer to the<\/span><a href=\"http:\/\/cloud.google.com\/certification\/guides\/professional-cloud-architect\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">Helicopter Racing League<\/span><\/a><span style=\"font-weight: 400;\"> (HRL) case study.<\/span><\/em><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>Helicopter Racing League (HRL) wants to create and update predictions on the results of the championships, using data collected during the races.\u00a0<\/em><\/span><span style=\"font-weight: 400;\"><em>HRL wants to create long-term forecasts with data from video collected both at capture time (first processing) and during streaming for users.\u00a0<\/em><\/span><span style=\"font-weight: 400;\"><em>HRL also wants to exploit existing video content that is stored in object storage with their existing cloud provider.<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">On the advice of the cloud architects, they decided to use the following strategies:<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A. Creating experimental forecast models with minimal code in the powerful GCP environment, also using the data already collected<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Building the ability and culture to develop highly customized models that are continuously improved with the data gradually collected; they plan to try multiple open-source frameworks<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Integrating teamwork and creating\/optimizing MLOps processes<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. Serving the models in an optimized environment<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Which of the following GCP services do you think are the best given these requirements?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A. 
Video Intelligence<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. TensorFlow Enterprise and Kubeflow for the customized models<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. BigQuery ML<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. Vertex AI<\/span><\/p>\n<p><span style=\"font-weight: 400;\">E. Kubernetes and TensorFlow Extended<\/span><\/p>\n<p><b>Correct Answer: D<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Several of the options cover parts of the requirements, but the best solution is Vertex AI:<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Vertex AI is a platform that integrates multiple ML tools and lets you improve MLOps pipelines aimed at model maintenance and improvement.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Vertex AI includes AutoML Video, which can create experimental forecast models with minimal or no code, even with external data. Data is usually imported into Cloud Storage so as to obtain minimal latency.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Vertex AI can build and deploy models developed with many open-source frameworks and supports continuous modeling and retraining using TensorFlow Extended and Kubeflow Pipelines.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In addition, it offers services for feature engineering, hyperparameter tuning, model serving, and model understanding.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>A is wrong<\/b><span style=\"font-weight: 400;\"> because the GCP Video Intelligence API is composed of pre-trained machine learning models for the recognition of items, places, and actions. 
It lacks the personalized features that HRL needs.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>B is wrong <\/b><span style=\"font-weight: 400;\">because TensorFlow Enterprise and Kubeflow cover only the requirements for highly customized models and MLOps.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>C is wrong <\/b><span style=\"font-weight: 400;\">because BigQuery ML requires you to transform the data, and it can integrate customized models but not develop them.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>E is wrong <\/b><span style=\"font-weight: 400;\">because Kubernetes and TensorFlow can develop and serve customized models, but they are not the right tools for easy experimentation.<\/span><\/li>\n<\/ul>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-80190\" src=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Helicopter-Racing-League-300x169.png\" alt=\"Helicopter Racing League\" width=\"565\" height=\"318\" srcset=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Helicopter-Racing-League-300x169.png 300w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Helicopter-Racing-League.png 512w\" sizes=\"(max-width: 565px) 100vw, 565px\" \/><\/p>\n<h4><b>Q No. 
8\u00a0<\/b><em><span style=\"font-weight: 400;\"><a href=\"http:\/\/cloud.google.com\/certification\/guides\/professional-cloud-architect\" target=\"_blank\" rel=\"noopener\">Helicopter Racing League<\/a><\/span> <span style=\"font-weight: 400;\">(HRL) offers premium content and, among its business requirements, wants:<\/span><\/em><\/h4>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\">\n<h4><span style=\"font-weight: 400;\"><em>To increase the number of concurrent viewers, and<\/em><\/span><\/h4>\n<\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\">\n<h4><span style=\"font-weight: 400;\"><em>To create a merchandising revenue stream<\/em><\/span><\/h4>\n<\/li>\n<\/ul>\n<h4><span style=\"font-weight: 400;\"><em>So, they want to offer subscriptions for their own and partner services and manage monetization, pay-as-you-use management, flat-use control, and rate limiting: all the functionality that can assure a managed revenue stream in the simplest way.<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">Which is the best GCP service to achieve that?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A. Cloud Endpoints<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Apigee<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Cloud Tasks<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. Cloud Billing<\/span><\/p>\n<p><span style=\"font-weight: 400;\">E. 
API Gateway<\/span><\/p>\n<p><b>Correct Answer: B<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Apigee is the top GCP product for API management.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It offers all the functionality requested: monetization, traffic control, throttling, security, and hybrid (third-party) integration.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">GCP offers three different products for API management: Apigee, Cloud Endpoints (GCP-only), and API Gateway (for serverless workloads).<\/span><\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-80191\" src=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/API-League-300x156.png\" alt=\"API League\" width=\"538\" height=\"280\" srcset=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/API-League-300x156.png 300w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/API-League.png 512w\" sizes=\"(max-width: 538px) 100vw, 538px\" \/><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>A is wrong <\/b><span style=\"font-weight: 400;\">because Cloud Endpoints is an API product too, but it doesn&#8217;t support monetization or hybrid integration<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>C is wrong <\/b><span style=\"font-weight: 400;\">because Cloud Tasks is a managed service for distributed task queues, not API management<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>D is wrong<\/b><span style=\"font-weight: 400;\"> because Cloud Billing is for GCP service accounting, billing, and reporting, not for end-user services<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>E is wrong<\/b><span style=\"font-weight: 400;\"> because API Gateway is an API product too, but it doesn&#8217;t support monetization or hybrid integration<\/span><\/li>\n<\/ul>\n<h4><b>Q No. 
9\u00a0<\/b><em><span style=\"font-weight: 400;\">For this question, refer to the<\/span><a href=\"https:\/\/services.google.com\/fh\/files\/blogs\/master_case_study_terramearth.pdf\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">TerramEarth<\/span><\/a><span style=\"font-weight: 400;\"> case study.\u00a0<\/span><span style=\"font-weight: 400;\">TerramEarth needs to migrate legacy monolithic applications into containerized RESTful microservices.\u00a0<\/span><span style=\"font-weight: 400;\">The development team is experimenting with the use of packaged procedures with containers in a completely serverless environment, using Cloud Run.\u00a0<\/span><span style=\"font-weight: 400;\">Before migrating the existing code into production, it was decided to perform a lift and shift of the monolithic application and to develop the required new features with serverless microservices.<\/span><\/em><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>So, they want to carry out a gradual migration, activating the new microservice functionalities while maintaining the monolithic application for all the other activities.\u00a0<\/em><\/span><span style=\"font-weight: 400;\"><em>The problem now is how to integrate the legacy monolithic application with the new microservices to obtain a consistent interface and simple management.<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">Which of the following techniques can be used (pick 3)?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A. Use an HTTP(S) Load Balancer<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Develop a proxy inside the monolithic application for integration<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Use Cloud Endpoints\/Apigee<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. Use serverless NEGs for integration<\/span><\/p>\n<p><span style=\"font-weight: 400;\">E. 
Use the App Engine flexible environment<\/span><\/p>\n<p><b>Correct answers: A, C and D<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The first solution (A+D) uses HTTP(S) Load Balancing and NEGs.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Network endpoint groups (NEGs) let you define serverless backend endpoints for external HTTP(S) Load Balancing. Serverless NEGs are attached as backends behind target proxies, and forwarding is performed with URL maps. In this way, you can integrate seamlessly with the legacy application.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">An alternative solution is API management, which creates a facade and integrates different applications. GCP has three API management solutions: Cloud Endpoints, Apigee, and API Gateway. API Gateway is only for serverless backends.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>B is wrong <\/b><span style=\"font-weight: 400;\">because developing a proxy inside the monolithic application for integration means continually updating the old app, with possible service interruptions and useless toil.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>E is wrong <\/b><span style=\"font-weight: 400;\">because the App Engine flexible environment manages containers but cannot integrate the legacy monolithic application with the new functions.<\/span><\/li>\n<\/ul>\n<h4><b>Q No. 10\u00a0<\/b><span style=\"font-weight: 400;\"><em>Your company has reserved a monthly budget for your project. You want to be informed automatically of your project spend so that you can take action when you approach the limit. What should you do?<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">A. Link a credit card with a monthly limit equal to your budget.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. 
Create a budget alert for desired percentages such as 50%, 90%, and 100% of your total monthly budget.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. In App Engine Settings, set a daily budget at the rate of 1\/30 of your monthly budget.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. In the GCP Console, configure billing export to BigQuery. Create a saved view that queries your total spend.<\/span><\/p>\n<p><b>Correct answer B<\/b><\/p>\n<p><b>Feedback<\/b><\/p>\n<p><span style=\"font-weight: 400;\">A is not correct because a credit card limit will not alert you as you approach the budget.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B is correct because a budget alert will warn you when you reach the limits set.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C is not correct because those budgets apply only to App Engine, not to other GCP resources. Furthermore, this makes subsequent requests fail, rather than alerting you in time so you can mitigate appropriately.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D is not correct because if you exceed the budget, you will still be billed for it. Furthermore, GCP does not alert you when you hit that limit.<\/span><\/p>\n<h4><b>Q No. 11\u00a0<\/b><em><span style=\"font-weight: 400;\">For this question, refer to the<\/span><a href=\"https:\/\/services.google.com\/fh\/files\/blogs\/master_case_study_mountkirk_games.pdf\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">Mountkirk Games<\/span><\/a><span style=\"font-weight: 400;\"> case study.<\/span><\/em><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>Mountkirk Games uses Kubernetes and Google Kubernetes Engine. 
For management, it is important to use an open, cloud-native platform without vendor lock-in.\u00a0<\/em><\/span><span style=\"font-weight: 400;\"><em>They also need to use advanced APIs of GCP services, and want to do so securely and efficiently, using standard methodologies and Google-recommended practices.<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">Which of the following solutions would you recommend?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A. API keys<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Service Accounts<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Workload identity<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. Workload identity federation<\/span><\/p>\n<p><b>Correct Answer: C<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The preferred way to access services in a secured and authorized way is with Kubernetes service accounts, which are not the same as GCP service accounts.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">With Workload Identity, you can configure a Kubernetes service account so that workloads will automatically authenticate as the corresponding Google service account when accessing GCP APIs.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Moreover, Workload Identity is the recommended way for applications in GKE to securely access GCP APIs because it lets you manage identities and authorization in a standard, secure and easy way.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>A is wrong <\/b><span style=\"font-weight: 400;\">because API keys offer minimal security and no authorization, just identification.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>B is wrong <\/b><span style=\"font-weight: 400;\">because GCP Service Accounts are GCP proprietary. 
Kubernetes is open and works with Kubernetes service accounts.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>D is wrong <\/b><span style=\"font-weight: 400;\">because Workload identity federation is useful when you have an external identity provider such as Amazon Web Services (AWS), Azure Active Directory (AD), or an OIDC-compatible provider.<\/span><\/li>\n<\/ul>\n<h4><em><b>Q No. 12\u00a0<\/b><span style=\"font-weight: 400;\">When creating firewall rules, what forms of segmentation can narrow which resources the rule is applied to? (Choose all that apply)<\/span><\/em><\/h4>\n<p><span style=\"font-weight: 400;\">A. Network range in source filters<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Zone<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Region<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. Network tags<\/span><\/p>\n<p><b>Correct Answer A and D<\/b><\/p>\n<p><b>Explanation<\/b><\/p>\n<p><span style=\"font-weight: 400;\">You can restrict network access on the firewall by network tags and network ranges\/subnets.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Here is the console screenshot showing the options when you create firewall rules<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8211; network tags and network ranges\/subnets are highlighted<\/span><\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-80192\" src=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Firewall-League-158x300.png\" alt=\"Firewall League\" width=\"361\" height=\"686\" srcset=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Firewall-League-158x300.png 158w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Firewall-League-221x420.png 221w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Firewall-League.png 270w\" sizes=\"(max-width: 361px) 100vw, 361px\" \/><\/p>\n<h4><em><b>Q No. 
13 <\/b><span style=\"font-weight: 400;\"><a href=\"http:\/\/cloud.google.com\/certification\/guides\/professional-cloud-architect\" target=\"_blank\" rel=\"noopener\">Helicopter Racing League<\/a><\/span> <span style=\"font-weight: 400;\">(HRL) wants to migrate their existing cloud service to the GCP platform with solutions that allow them to use and analyze video of the races both in real-time and recorded for broadcasting, on-demand archive, forecasts, and deeper insights.<\/span><\/em><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>While filming a race, how can you manage both live playbacks of the video and live annotations so that they are immediately accessible to users without coding (pick 2)?<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">A. Use HTTP protocol<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Use Video Intelligence API Streaming API<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Use DataFlow<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. Use HLS protocol<\/span><\/p>\n<p><span style=\"font-weight: 400;\">E. 
Use Pub\/Sub<\/span><\/p>\n<p><b>Correct Answers: B and D<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>D is correct<\/b><span style=\"font-weight: 400;\"> because HTTP Live Streaming is a technology from Apple for sending live and on\u2010demand audio and video to a broad range of devices.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">It supports both live broadcasts and prerecorded content, from storage and CDN.<\/span><\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-80193\" src=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Dataflow-300x137.png\" alt=\"Dataflow\" width=\"661\" height=\"302\" srcset=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Dataflow-300x137.png 300w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Dataflow.png 512w\" sizes=\"(max-width: 661px) 100vw, 661px\" \/><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>B is correct <\/b><span style=\"font-weight: 400;\">because the Video Intelligence API Streaming API is capable of analyzing and extracting important metadata from live media, using the AIStreamer ingestion library.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>A is wrong <\/b><span style=\"font-weight: 400;\">because the HTTP protocol alone cannot manage live streaming video.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>C is wrong<\/b><span style=\"font-weight: 400;\"> because Dataflow manages streaming data pipelines but cannot derive metadata from binary data, unless you use customized code.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>E is wrong<\/b><span style=\"font-weight: 400;\"> because Pub\/Sub could ingest metadata, but it cannot analyze videos or extract labels and other info from them.<\/span><\/li>\n<\/ul>\n<h4><b>Q No. 14\u00a0<\/b><span style=\"font-weight: 400;\"><em>What is the best practice for separating responsibilities and access for 
production and development environments?<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">A. Separate project for each environment, each team only has access to their project.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Separate project for each environment, both teams have access to both projects.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Both environments use the same project, but different VPCs.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. Both environments use the same project, just note which resources are in use by which group.<\/span><\/p>\n<p><b>Correct Answer A<\/b><\/p>\n<p><b>Explanation<\/b><\/p>\n<p><span style=\"font-weight: 400;\">A (Correct answer) &#8211; Separate project for each environment, each team only has access to their project.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">For least privilege and separation of duties, the best practice is to separate the two environments into different projects: the development and production teams get their own accounts, and each team is assigned only to its own projects.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The best practices:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">You should not use the same account for both development and production environments, regardless of how you create projects inside that account. Use a different account for each environment, associated with a different group of users. Use projects to isolate user access to resources, not to manage users.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Using a shared VPC allows each team to individually manage their own application resources, while enabling the applications to communicate with each other securely over RFC 1918 address space. 
So VPCs isolate resources, but not user\/service accounts.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">B, C, and D are incorrect<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Answer B uses the same account for both development and production environments, attempting to isolate user access with different projects.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Answer C uses the same account for both environments with the same project, attempting to isolate user access with network separation.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Answer D uses the same account for both environments with the same project, attempting to isolate user access with user groups at the resource level.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">You may grant roles to groups of users to set policies at the organization level, the project level, or (in some cases) the resource level (e.g., the existing Cloud Storage and BigQuery ACL systems, as well as Pub\/Sub topics).<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The best practice: Set policies at the Organization level and at the Project level rather than at the resource level. This is because as new resources get added, you may want them to automatically inherit policies from their parent resource. For example, as new Virtual Machines get added to the project through auto scaling, they automatically inherit the policy on the project.<\/span><\/p>\n<h4><b>Q No. 15\u00a0<\/b><span style=\"font-weight: 400;\"><em>What is the command for creating a storage bucket that has once per month access and is named &#8216;archive_bucket&#8217;?<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">A. gsutil rm -coldline gs:\/\/archive_bucket<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. gsutil mb -c coldline gs:\/\/archive_bucket<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. 
gsutil mb -c nearline gs:\/\/archive_bucket<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. gsutil mb gs:\/\/archive_bucket<\/span><\/p>\n<p><b>Correct answer C<\/b><\/p>\n<p><span style=\"font-weight: 400;\">mb makes the bucket. Nearline buckets are for data accessed about once per month. Coldline buckets are for data accessed at most once per 90 days and would incur additional charges for more frequent access.<\/span><\/p>\n<p><b>Further Explanation<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Synopsis<\/span><\/p>\n<p><span style=\"font-weight: 400;\">gsutil mb [-c class] [-l location] [-p proj_id] url&#8230;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">If you don&#8217;t specify a -c option, the bucket is created with the default storage class Standard Storage, which is equivalent to Multi-Regional Storage or Regional Storage, depending on whether the bucket was created in a multi-regional location or regional location, respectively.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">If you don&#8217;t specify a -l option, the bucket is created in the default location (US). The -l option can be any multi-regional or regional location.<\/span><\/p>\n<h4><b>Q No. 16\u00a0<\/b><span style=\"font-weight: 400;\"><em>You need to deploy an update to an application in Google App Engine. The update is risky, but it can only be tested in a live environment. What is the best way to introduce the update to minimize risk?<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">A. Deploy a new version of the application but use traffic splitting to only direct a small number of users to the new version.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Deploy the application temporarily and be prepared to pull it back if needed.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Warn users that a new app version may have issues and provide a way to contact you if there are problems.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. 
Create a new project with the new app version, then redirect users to the new version.<\/span><\/p>\n<p><b>Correct Answer A<\/b><\/p>\n<p><b>Explanation<\/b><\/p>\n<p><span style=\"font-weight: 400;\">A (Correct Answer) &#8211; Deploying a new version without assigning it as the default version will not create downtime for the application. Using traffic splitting allows for easily redirecting a small amount of traffic to the new version and can also be quickly reverted without application downtime.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B &#8211; Deploy the application temporarily and be prepared to pull it back if needed. Deploying the application\u2019s new version as the default requires moving all traffic to the new version. This could impact all users and disable the service while the new version is live.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C &#8211; Warn users that a new app version may have issues and provide a way to contact you if there are problems. We wouldn\u2019t recommend this practice.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D &#8211; Create a new project with the new app version, then redirect users to the new version.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Deploying a second project requires data synchronization and an external traffic-splitting solution to direct traffic to the new application. While this is possible, these manual steps are not required with Google App Engine.<\/span><\/p>\n<h4><b>Q No. 
17\u00a0<\/b><span style=\"font-weight: 400;\">Your team is developing a new application that is about to go into production.\u00a0<\/span><span style=\"font-weight: 400;\">During testing, it emerges that developer code allows user input to be used to modify the application and execute commands. <\/span><span style=\"font-weight: 400;\">This finding has thrown everyone into despair and has generated the fear that there are other problems of this type in the system.<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">Which of the following services may help you?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A. Cloud Armor<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Web Security Scanner<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Security Command Center<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. Shielded GKE nodes<\/span><\/p>\n<p><b>Correct Answer: B<\/b><\/p>\n<p><span style=\"font-weight: 400;\">What you need is a service that examines your code and finds out if something is vulnerable or insecure. 
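<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The flaw described above is classic command injection: user input reaching a shell command. As a minimal illustrative sketch (the function names are invented for this example, not taken from the question), compare interpolating input into a shell string with passing an argument list:<\/span><\/p>

```python
def unsafe_cmd(filename):
    # Vulnerable pattern: user input is pasted into a shell string,
    # so metacharacters like ";" can smuggle in extra commands.
    return f"ls -l {filename}"

def safe_cmd(filename):
    # Safer pattern: an argument list (run without shell=True)
    # keeps the whole input as one inert argument.
    return ["ls", "-l", filename]

crafted = "notes.txt; rm -rf /tmp/x"
print(unsafe_cmd(crafted))  # the injected "; rm -rf /tmp/x" survives in the string
print(safe_cmd(crafted))    # the input stays a single list element
```

<p><span style=\"font-weight: 400;\">A vulnerability scanner probes the running application for exactly this kind of behavior.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">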
Web Security Scanner does exactly this:\u00a0 it performs managed and custom web vulnerability scanning.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It scans for published findings such as OWASP, the CIS GCP Foundation benchmark, and PCI-DSS.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>A is wrong <\/b><span style=\"font-weight: 400;\">because Cloud Armor is a network security service, with WAF rules and defenses against DDoS and application attacks.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>C is wrong <\/b><span style=\"font-weight: 400;\">because the Security Command Center suite contains Web Security Scanner along with many other services, so it is broader than what is needed here.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>D is wrong <\/b><span style=\"font-weight: 400;\">because Shielded GKE nodes are special, secured VMs.<\/span><\/li>\n<\/ul>\n<h4><b>Q No. 18\u00a0<\/b><span style=\"font-weight: 400;\"><em>Your company&#8217;s development teams use service accounts, as required by internal rules.<\/em><\/span><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>They just forget to delete the service accounts that are no longer used.\u00a0<\/em><\/span><span style=\"font-weight: 400;\"><em>A coordinator noticed the problem and ordered them to clean up.\u00a0<\/em><\/span><span style=\"font-weight: 400;\"><em>Now your team is faced with a huge, boring, and potentially dangerous job and has asked you for help.<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">What advice can you give them (pick 2)?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A. Service account insights<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Cloud Audit Logs<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Activity Analyzer<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. 
Flow logs<\/span><\/p>\n<p><b>Correct Answers: A and C<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The best ways to find out about service account usage are:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Service account insights<\/b><span style=\"font-weight: 400;\">, which lists service accounts not used in the past 90 days, and<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Activity Analyzer<\/b><span style=\"font-weight: 400;\">, which reports each service account\u2019s most recent usage.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Together they cover complementary aspects.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>B is wrong<\/b><span style=\"font-weight: 400;\"> because Cloud Audit Logs contain audit trails, that is, user activity and service modifications in GCP.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>D is wrong<\/b><span style=\"font-weight: 400;\"> because Flow logs contain only network information to and from VM instances.<\/span><\/li>\n<\/ul>\n<h4><b>Q No. 19\u00a0<\/b><em><span style=\"font-weight: 400;\">For this question, refer to the<\/span><a href=\"https:\/\/services.google.com\/fh\/files\/blogs\/master_case_study_mountkirk_games.pdf\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">Mountkirk Games<\/span><\/a><span style=\"font-weight: 400;\"> case study.<\/span><\/em><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>Mountkirk Games is building a new multiplayer game that they expect to be very popular.\u00a0<\/em><\/span><span style=\"font-weight: 400;\"><em>They want to be able to improve every aspect of the game and the infrastructure. 
To do this, they plan to create a system for telemetry analysis.\u00a0<\/em><\/span><span style=\"font-weight: 400;\"><em>They want to minimize effort and maximize flexibility and ease of maintenance.<\/em><\/span><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>They also want to be able to perform real-time analyses.<\/em><\/span><\/h4>\n<h4><span style=\"font-weight: 400;\">Which of the following services may help to fulfill these requirements?<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">A. Pub\/Sub and Bigtable<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Kubeflow<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Pub\/Sub, Dataflow and BigQuery<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. Pub\/Sub and Cloud Spanner<\/span><\/p>\n<p><b>Correct Answer: C<\/b><\/p>\n<p><span style=\"font-weight: 400;\">Pub\/Sub ingests and stores these messages, whether from the user devices or the Game Server.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Dataflow can transform the data into a schema-based form and process it in real time.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">BigQuery will perform the analytics.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>A is wrong<\/b><span style=\"font-weight: 400;\"> because Bigtable is not a service for real-time analytics.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>B is wrong<\/b><span style=\"font-weight: 400;\"> because Kubeflow is used for Machine Learning pipelines.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>D is wrong<\/b><span style=\"font-weight: 400;\"> because Cloud Spanner is a global SQL Database and not an analytics tool.<\/span><\/li>\n<\/ul>\n<h4><b>Q No. 
20\u00a0<\/b><span style=\"font-weight: 400;\"><em>You work for a multinational company and are migrating an Oracle database to a multi-region Spanner cluster.\u00a0<\/em><\/span><span style=\"font-weight: 400;\"><em>You have to plan the migration activities, and the DBAs have told you that the migration will be almost immediate because no non-standard ISO\/IEC features or stored procedures are used.<\/em><\/span><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>But you know that there is one element that will necessarily require some maintenance work.<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">Which element is it?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A. You need to drop the secondary indexes<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. You have to change most of the primary keys<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. You need to manage table partitions<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. You have to change the schema design of many tables<\/span><\/p>\n<p><b>Correct Answer: B<\/b><\/p>\n<p><span style=\"font-weight: 400;\">With traditional SQL databases, it is advisable to use numerical primary keys in sequence. 
Oracle DB, for example, has an object type that creates progressive values, called a sequence.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Instead, it is important that distributed databases do not use progressive keys, because the tables are split among the nodes in primary key order and therefore all the inserts would take place only at one point, degrading performance.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This problem is called <\/span><b>hotspotting<\/b><span style=\"font-weight: 400;\">.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>A is wrong <\/b><span style=\"font-weight: 400;\">because Spanner handles secondary indexes.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>C is wrong<\/b><span style=\"font-weight: 400;\"> because Spanner automatically manages the distribution of data in the clusters.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>D is wrong <\/b><span style=\"font-weight: 400;\">because we already know that the structuring of the tables follows the standards.<\/span><\/li>\n<\/ul>\n<h4><b>Q No. 21\u00a0<\/b><span style=\"font-weight: 400;\"><em>You need to take streaming data from thousands of Internet of Things (IoT) devices, ingest it, run it through a processing pipeline, and store it for analysis. You want to run SQL queries against your data for analysis. What services in which order should you use for this task?<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">A. Cloud Dataflow, Cloud Pub\/Sub, BigQuery<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Cloud Pub\/Sub, Cloud Dataflow, Cloud Dataproc<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Cloud Pub\/Sub, Cloud Dataflow, BigQuery<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. 
App Engine, Cloud Dataflow, BigQuery<\/span><\/p>\n<p><b>Correct Answer C<\/b><\/p>\n<p><b>Explanation<\/b><\/p>\n<p><span style=\"font-weight: 400;\">C (Correct answer) &#8211; Cloud Pub\/Sub, Cloud Dataflow, BigQuery<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Cloud Pub\/Sub is a simple, reliable, scalable foundation for stream analytics and event-driven computing systems. As part of Google Cloud\u2019s stream analytics solution, the service ingests event streams and delivers them to Cloud Dataflow for processing and BigQuery for analysis as a data warehousing solution. Relying on the Cloud Pub\/Sub service for delivery of event data frees you to focus on transforming your business and data systems with applications such as:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">Real-time personalization in gaming<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Fast reporting, targeting and optimization in advertising and media<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Processing device data for healthcare, manufacturing, oil and gas, and logistics<\/span><\/li>\n<li><span style=\"font-weight: 400;\">Syndicating market-related data streams for financial services<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Also, use Cloud Dataflow as a convenient integration point to bring predictive analytics to fraud detection, real-time personalization and similar use cases by adding TensorFlow-based Cloud Machine Learning models and APIs to your data processing pipelines.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">BigQuery provides a flexible, powerful foundation for Machine Learning and Artificial Intelligence. 
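<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As a concrete sketch of the pipeline shape (field names are illustrative and no GCP calls are made here), the Dataflow stage essentially turns a raw Pub\/Sub payload into a flat, typed row that BigQuery can then query with SQL:<\/span><\/p>

```python
import json

def to_bq_row(payload):
    # What a Dataflow transform step conceptually does: parse a raw
    # Pub/Sub telemetry payload into a flat, typed row for BigQuery.
    event = json.loads(payload)
    return {
        "device_id": event["device"],
        "metric": event["metric"],
        "value": float(event["value"]),
    }

# A raw message as it might arrive from an IoT device via Pub/Sub.
msg = json.dumps({"device": "sensor-42", "metric": "temp", "value": "21.5"}).encode()
print(to_bq_row(msg))  # {'device_id': 'sensor-42', 'metric': 'temp', 'value': 21.5}
```

<p><span style=\"font-weight: 400;\">In the real pipeline, a function body like this would run inside an Apache Beam (Dataflow) transform, with BigQuery as the sink.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">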
BigQuery provides integration with CloudML Engine and TensorFlow to train powerful models on structured data. Moreover, BigQuery\u2019s ability to transform and analyze data helps you get your data in shape for Machine Learning.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Other solutions may work one way or another, but only the combination of these 3 components integrates well in data ingestion, collection, real-time analysis, and data mining in a highly durable, elastic, and parallel manner.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A \u2013 Wrong order. You don\u2019t normally ingest IoT data directly into Dataflow.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B \u2013 Dataproc is the GCP version of Apache Hadoop\/Spark. Although it has the SQL-like Hive, it does not provide a SQL interface as sophisticated as BigQuery\u2019s.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D \u2013 App Engine is a compute resource. It is not designed to ingest IoT data the way Pub\/Sub is. Also, it\u2019s a rare use case for App Engine to ingest data into Dataflow directly.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The two pictures below illustrate the typical roles played by Dataflow and Pub\/Sub.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Dataflow<\/span><\/p>\n<p><b><img decoding=\"async\" class=\"aligncenter wp-image-80194\" src=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Dataflow-1-300x137.png\" alt=\"Dataflow\" width=\"414\" height=\"189\" srcset=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Dataflow-1-300x137.png 300w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Dataflow-1.png 512w\" sizes=\"(max-width: 414px) 100vw, 414px\" \/>\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">PubSub<\/span><\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-80195\" src=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pub-Sub-300x196.png\" alt=\"Pub Sub\" width=\"509\" height=\"333\" srcset=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pub-Sub-300x196.png 300w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pub-Sub.png 512w\" sizes=\"(max-width: 509px) 100vw, 509px\" \/><\/p>\n<h4><b>Q No. 
22\u00a0<\/b><span style=\"font-weight: 400;\"><em>You work in a multinational company that is migrating to Google Cloud.\u00a0<\/em><\/span><span style=\"font-weight: 400;\"><em>The head office has the largest data center and manages a connection network to offices in various countries around the world.<\/em><\/span><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>Each country has its own projects to manage the specific procedures of each location, but management wants to create an integrated organization while maintaining the independence of the projects for the various branches.<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">How do you plan to organize networking?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A. Peered VPC<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Cloud Interconnect<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Shared VPC<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. Cloud VPN and Cloud Router<\/span><\/p>\n<p><b>Correct Answer: C<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The headquarters office manages the global network, so the networking specialists mainly work there.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Shared VPC lets you create a single, global VPC managed by a central project (the host project).<\/span><\/p>\n<p><span style=\"font-weight: 400;\">All the other projects (service projects) maintain their independence, but they don\u2019t have the burden of network management.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">So you can balance control of policies at the network level with the freedom to manage application projects.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>A is wrong <\/b><span style=\"font-weight: 400;\">because with VPC peering there is 
no organization hierarchy.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>B is wrong <\/b><span style=\"font-weight: 400;\">because Cloud Interconnect is for connecting on-premises networks to GCP.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>D is wrong<\/b><span style=\"font-weight: 400;\"> because Cloud VPN and Cloud Router are used for connectivity between the cloud and on-premises networks.<\/span><\/li>\n<\/ul>\n<h4><b>Q No. 23\u00a0<\/b><span style=\"font-weight: 400;\">You work as an architect in a company that develops statistical studies on big data and produces reports for its customers.\u00a0<\/span><span style=\"font-weight: 400;\">Analysts often allocate VMs to process data with ad hoc development procedures.<\/span><\/h4>\n<h4><span style=\"font-weight: 400;\">You have been called by the administrative department because they have been billed for a very large number of Compute Engine instances, which you also consider excessive in relation to operational needs.<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">How can you check, without inspecting them one by one, which of these systems may have been accidentally left active by junior technicians?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A. Use the Recommender CLI Command<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Use Cloud Billing Reports<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Use Idle Systems Report in GCP Console<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. 
Use Security Command Center Reports<\/span><\/p>\n<p><b>Correct Answer: A<\/b><\/p>\n<p><span style=\"font-weight: 400;\">This command:<\/span><\/p>\n<p><b>gcloud recommender recommendations list<\/b><\/p>\n<p><b><i>--recommender=google.compute.instance.IdleResourceRecommender<\/i><\/b><\/p>\n<p><span style=\"font-weight: 400;\">lists all the idle VMs, based on Cloud Monitoring metrics from the previous 14 days.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">There is no equivalent in the Console.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>B is wrong <\/b><span style=\"font-weight: 400;\">because Cloud Billing Reports don\u2019t give details about activities.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>C is wrong <\/b><span style=\"font-weight: 400;\">because there is no Idle Systems Report in the GCP Console.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>D is wrong<\/b><span style=\"font-weight: 400;\"> because the Security Command Center is used for security threats, not for ordinary technical operations.<\/span><\/li>\n<\/ul>\n<h4><em><b>Q No. 24\u00a0<\/b><span style=\"font-weight: 400;\">You are now working for an international company that has many Kubernetes projects on various Cloud platforms. 
These projects involve mainly microservices web applications and are executed either in GCP or in AWS.\u00a0<\/span><span style=\"font-weight: 400;\">They have many inter-relationships and involve many teams across development, staging, and production environments.\u00a0<\/span><\/em><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>Your new task is to find the best way to organize these systems.<\/em><\/span><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>You need a solution for gaining control over application organization and networking: monitoring functionality, performance, and security in a complex environment.<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">Which of the following services may help you?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A. Traffic Director<\/span><\/p>\n<p><span style=\"font-weight: 400;\">B. Istio on GKE<\/span><\/p>\n<p><span style=\"font-weight: 400;\">C. Apigee<\/span><\/p>\n<p><span style=\"font-weight: 400;\">D. 
App Engine Flexible Environment<\/span><\/p>\n<p><b>Correct Answer: A<\/b><\/p>\n<p><span style=\"font-weight: 400;\">What you need is service management with capabilities of real-time monitoring, security, and telemetry data collection in a multi-cloud microservices environment.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Such products are called service meshes.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The most popular product in this category is Istio, which collects traffic flows and telemetry data between microservices and enforces security, with the help of proxies that operate without changes to application code.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Traffic Director can help build a global service mesh because it is a fully managed service mesh control plane.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">With Traffic Director, you can manage on-premises and multi-cloud destinations, too.<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>B is incorrect <\/b><span style=\"font-weight: 400;\">because Istio on Google Kubernetes Engine is a tool for GKE that offers automated installation and management of the Istio service mesh, so it works only inside GCP.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>C is incorrect <\/b><span style=\"font-weight: 400;\">because Apigee is a powerful tool for API management, also suitable for on-premises and multi-cloud environments. But API management is about managing application APIs, while a service mesh manages service-to-service communication, security, service levels, and control. 
They are similar services with different scopes.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>D is incorrect <\/b><span style=\"font-weight: 400;\">because App Engine Flexible Environment is a PaaS for microservices applications within Google Cloud.<\/span><\/li>\n<\/ul>\n<h4><img decoding=\"async\" class=\"aligncenter wp-image-80196\" src=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Service-Mesh-300x97.png\" alt=\"Service Mesh\" width=\"671\" height=\"217\" srcset=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Service-Mesh-300x97.png 300w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Service-Mesh.png 512w\" sizes=\"(max-width: 671px) 100vw, 671px\" \/><\/h4>\n<h4><em><b>Q No. 25\u00a0Case Study<\/b><a href=\"https:\/\/cloud.google.com\/certification\/guides\/cloud-architect\/casestudy-terramearth-rev2\/\" target=\"_blank\" rel=\"noopener\"> <b>TerramEarth<\/b><\/a><b>\u00a0 2<\/b><\/em><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>In order to speed up transmission, TerramEarth deployed 5G devices in their vehicles with the goal of reducing unplanned vehicle downtime to a minimum.<\/em><\/span><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>But a set of older vehicles will still be using the old technology for a while.<\/em><\/span><\/h4>\n<h4><span style=\"font-weight: 400;\"><em>So, on these vehicles, data is stored locally and can be accessed for analysis only when a vehicle is serviced. 
In this case, data is downloaded via a maintenance port.<\/em><\/span><\/h4>\n<p><span style=\"font-weight: 400;\">You need to integrate this old procedure with the new one by building a workflow in the simplest way.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Which of the following tools would you choose?<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Cloud Composer<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Cloud Interconnect<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">App Engine<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Cloud Build<\/span><\/li>\n<\/ol>\n<p><b>Correct Answer: A<\/b><\/p>\n<p><b>A is correct<\/b><span style=\"font-weight: 400;\">.<\/span><a href=\"https:\/\/cloud.google.com\/composer\/docs\/\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">Cloud Composer<\/span><\/a><span style=\"font-weight: 400;\"> is a fully managed workflow service that can author, schedule, and monitor pipelines that span across clouds and on-premises data centers.<\/span><\/p>\n<p><b>B is wrong<\/b><span style=\"font-weight: 400;\">. Cloud Interconnect gives fast (10\/100 Gbps) connections to your Google VPC. It is too expensive to connect the field offices in this way.<\/span><\/p>\n<p><b>C is wrong<\/b><span style=\"font-weight: 400;\">. App Engine is a PaaS, so you would have to write a program for this task. 
It is not simple at all.<\/span><\/p>\n<p><b>D is wrong<\/b><span style=\"font-weight: 400;\">.<\/span><a href=\"https:\/\/cloud.google.com\/cloud-build\/\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">Cloud Build<\/span><\/a><span style=\"font-weight: 400;\"> is a service that builds your code of any kind on GCP for deployment.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A Cloud Composer task, when started with automated commands, uses<\/span><a href=\"https:\/\/cloud.google.com\/iap\/\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">Cloud Identity-Aware Proxy<\/span><\/a><span style=\"font-weight: 400;\"> for security, controls processing, and manages storage with a<\/span><a href=\"https:\/\/cloud.google.com\/storage\" target=\"_blank\" rel=\"noopener\"> <span style=\"font-weight: 400;\">Cloud Storage<\/span><\/a><span style=\"font-weight: 400;\"> bucket.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In this way, it is possible to automate all the processes in a simple, standard, and safe way.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Once the files are correctly stored, a triggered procedure can start the new, integrated workflow.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">For more details, please refer to the URLs below:<\/span><\/p>\n<h4>Reference<\/h4>\n<p><span style=\"font-weight: 400;\"><a href=\"https:\/\/cloud.google.com\/composer\/\" target=\"_blank\" rel=\"noopener\">https:\/\/cloud.google.com\/composer\/<\/a><\/span><\/p>\n<p><span style=\"font-weight: 400;\"><a href=\"https:\/\/cloud.google.com\/composer\/docs\/concepts\/cloud-storage\" target=\"_blank\" rel=\"noopener\">https:\/\/cloud.google.com\/composer\/docs\/concepts\/cloud-storage<\/a><\/span><\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A Google Cloud Certified Professional Cloud Architect enables organisations to leverage Google Cloud Technologies.They possess a thorough understanding of Google Cloud, its 
architecture and are capable of designing and developing robust, secure and scalable dynamic solutions to drive business objectives. What does a professional cloud architect do? Google cloud professional cloud architect understands the cloud environment and google technology and enables companies to make use of google cloud services. The role of the cloud architect is as follows: He designs cloud solutions according to the client&#8217;s needs. Once the solution is designed he implements the cloud solutions Develop secure, scalable, [&hellip;]<\/p>\n","protected":false},"author":220,"featured_media":80210,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_uag_custom_page_level_css":"","site-sidebar-layout":"default","site-content-layout":"default","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"default","adv-header-id-meta":"","stick-header-meta":"default","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"set","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[10,12],"tags":[4869],"class_list":["post-80185","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cloud-computing-certifications","category-google-cloud","tag-google-certified-professional-cloud-architect-certification"],"uagb_featured_image_src":{"full":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pro-Cloud-Architect.jpg",560,315,false],"thumbnail":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pro-Cloud-Architect-150x150.jpg",150,150,true],"medium":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pro-Cloud-Architect-300x169.jpg",300,169,true],"medium_large":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pro-Cloud-Architect.jpg",560,315,false],"large":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pro-Cloud-Architect.jpg",560,315,false],"1536x1536":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pro-Cloud-Architect.jpg",560,315,false],"2048x2048":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pro-Cloud-Architect.jpg",560,315,false],"profile_24":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pro-Cloud-Architect.jpg",24,14,false],"profile_48":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pro-Cloud-Architect.jpg",48,27,false],"profile_96":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pro-Cloud-Architect.jpg",96,54,false],"profile_150":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pro-Cloud-Architect.jpg",150,84,false],"profile_300":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pro-Cloud-Architect.jpg",300,169,false],"tptn_thumbnail":["https:\/\/www.whizlabs.com\/blog\/w
p-content\/uploads\/2021\/11\/Pro-Cloud-Architect-250x250.jpg",250,250,true],"web-stories-poster-portrait":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pro-Cloud-Architect.jpg",560,315,false],"web-stories-publisher-logo":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pro-Cloud-Architect.jpg",96,54,false],"web-stories-thumbnail":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2021\/11\/Pro-Cloud-Architect.jpg",150,84,false]},"uagb_author_info":{"display_name":"Aditi Malhotra","author_link":"https:\/\/www.whizlabs.com\/blog\/author\/aditi\/"},"uagb_comment_info":224,"uagb_excerpt":"A Google Cloud Certified Professional Cloud Architect enables organisations to leverage Google Cloud Technologies.They possess a thorough understanding of Google Cloud, its architecture and are capable of designing and developing robust, secure and scalable dynamic solutions to drive business objectives. What does a professional cloud architect do? Google cloud professional cloud architect understands the 
cloud&hellip;","_links":{"self":[{"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/posts\/80185","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/users\/220"}],"replies":[{"embeddable":true,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/comments?post=80185"}],"version-history":[{"count":12,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/posts\/80185\/revisions"}],"predecessor-version":[{"id":92350,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/posts\/80185\/revisions\/92350"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/media\/80210"}],"wp:attachment":[{"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/media?parent=80185"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/categories?post=80185"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/tags?post=80185"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}