Lesson 3 of 5 in Appendix · Other Clouds & Platforms

Google Cloud Platform

Intermediate · ~11 min read
Recommended prerequisite: #81 Microsoft Azure

The 30-Second Pitch

Google Cloud Platform (GCP) is a suite of cloud computing services that runs on the same infrastructure Google uses internally for its end-user products. It solves the problem of building, deploying, and scaling applications and services without managing physical hardware. A team would pick GCP for its deep integration with data analytics, machine learning, and AI services (like Vertex AI), its global, high-performance network, and its strong offerings in container orchestration (GKE) and serverless computing. For an AI-focused role, GCP's pre-trained models, MLOps tools, and unified data platform make it a compelling choice, especially when paired with its open-source integrations.

How It Actually Works

GCP's architecture is built on a global infrastructure of regions and zones. A region is a specific geographical location (e.g., us-central1), and each region contains 3 or more zones (e.g., us-central1-a), which are isolated failure domains. This design allows for high availability and low-latency deployments.
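
You can see this naming scheme directly in the Compute API. A minimal sketch using the Python client (the project ID is illustrative):

python
# List the zones that make up one region (project ID is illustrative)
from google.cloud import compute_v1

client = compute_v1.ZonesClient()
for zone in client.list(project="my-project"):
    if zone.name.startswith("us-central1"):
        print(zone.name, zone.status)  # e.g., us-central1-a UP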

The core mental model revolves around projects. Every GCP resource belongs to a project, which acts as the organizing entity for billing, APIs, permissions, and quotas. Identity and Access Management (IAM) binds users, service accounts, and groups to roles for fine-grained resource control.
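
As a concrete example, granting a service account read access to a single bucket is just a role binding on that resource. A minimal sketch with the Cloud Storage client (all names are illustrative):

python
# Bind a granular role to a service account on one bucket
from google.cloud import storage

client = storage.Client(project="my-project")
bucket = client.bucket("my-data-bucket")

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectViewer",  # least privilege: read-only
    "members": {"serviceAccount:etl@my-project.iam.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)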

Data flows through Google's private fiber network, which connects its data centers globally. This is key to services like Cloud Storage (unified object storage), BigQuery (serverless data warehouse), and Cloud Spanner (horizontally scalable relational database). For compute, you have a spectrum:

  • IaaS: Compute Engine (VMs)
  • CaaS: Google Kubernetes Engine (GKE) for managed Kubernetes
  • FaaS/Serverless: Cloud Run (containers), Cloud Functions (event-driven functions)
  • PaaS: App Engine (fully managed app platform)

For AI/ML, Vertex AI is the unified platform. It provides a pipeline to take data from BigQuery or Cloud Storage, train a model (using AutoML, custom containers, or pre-trained APIs like Vision or Natural Language), deploy it to an endpoint, and manage it with MLOps tools like Vertex AI Pipelines and Model Monitoring.
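
In code, that lifecycle maps onto the Vertex AI SDK. A hedged sketch of custom training plus deployment; the prebuilt container URIs and resource names are illustrative:

python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1",
                staging_bucket="gs://my-project-staging")

# Run a local training script on managed infrastructure
job = aiplatform.CustomTrainingJob(
    display_name="demo-train",
    script_path="train.py",
    container_uri="us-docker.pkg.dev/vertex-ai/training/sklearn-cpu.1-0:latest",
    model_serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)
model = job.run(replica_count=1, machine_type="n1-standard-4")

# Deploy the trained model to an endpoint for online prediction
endpoint = model.deploy(machine_type="n1-standard-4")
print(endpoint.resource_name)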

[User/Service] --> [Global Load Balancer] --> [GKE Cluster / Cloud Run / App Engine]
        |                  |
        v                  v
  [Cloud IAP / IAM]    [Cloud Storage / Firestore]
        |                  |
        v                  v
  [BigQuery / Pub/Sub] --> [Vertex AI (Training / Prediction)]
        |                  |
        v                  v
  [Cloud Logging & Monitoring] --> [Operations Suite (Alerting, Dashboards)]

Patterns You Should Know

1. Serverless AI Inference Pipeline with Cloud Functions & Vertex AI

This pattern is ideal for event-driven model inference, like processing uploaded images or reacting to new data in a database.

python
# cloud_function_main.py
import base64

import functions_framework
from google.cloud import aiplatform, storage
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value

# Clients are created lazily so they don't slow down cold starts
storage_client = None
prediction_client = None

def get_clients():
    global storage_client, prediction_client
    if storage_client is None:
        storage_client = storage.Client()
    if prediction_client is None:
        # Prediction requests go to a regional API endpoint
        prediction_client = aiplatform.gapic.PredictionServiceClient(
            client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
        )
    return storage_client, prediction_client

@functions_framework.cloud_event
def process_new_file(cloud_event):
    """Triggered by a new file uploaded to Cloud Storage."""
    data = cloud_event.data
    bucket_name = data["bucket"]
    file_name = data["name"]

    storage_client, prediction_client = get_clients()

    # 1. Read the file from Cloud Storage
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(file_name)
    file_content = blob.download_as_bytes()

    # 2. Prepare the payload for the Vertex AI endpoint
    # (assuming an image classification endpoint; fill in your own IDs)
    endpoint = "projects/{project}/locations/{region}/endpoints/{endpoint_id}"
    instance = json_format.ParseDict(
        {"image_bytes": {"b64": base64.b64encode(file_content).decode("utf-8")}},
        Value(),
    )

    # 3. Call the deployed model
    response = prediction_client.predict(endpoint=endpoint, instances=[instance])

    # 4. Write predictions to Firestore or Pub/Sub
    predictions = response.predictions
    # ... write logic ...

    print(f"Processed {file_name}, prediction: {predictions[0]}")

2. Managed Batch Processing with Cloud Run Jobs & BigQuery

For scheduled or on-demand batch data processing and model training, Cloud Run Jobs offers a simpler alternative to managing a full Kubernetes CronJob.

yaml
# cloudrun-job.yaml
apiVersion: run.googleapis.com/v1
kind: Job
metadata:
  name: nightly-feature-engineering
  annotations:
    run.googleapis.com/launch-stage: BETA
spec:
  template:
    spec:
      containers:
      - image: gcr.io/my-project/feature-engineer:latest
        env:
        - name: GOOGLE_CLOUD_PROJECT
          value: my-project
        - name: DATASET_ID
          value: production
        resources:
          limits:
            cpu: "2"
            memory: "4Gi"
      maxRetries: 3
      timeoutSeconds: 3600 # 1 hour

python
# feature_engineering.py (runs inside the container)
from google.cloud import bigquery
import pandas as pd
from sklearn.preprocessing import StandardScaler

client = bigquery.Client()

# Read raw data
query = """
    SELECT user_id, session_duration, clicks, purchase_amount
    FROM `my-project.production.raw_events`
    WHERE DATE(timestamp) = CURRENT_DATE() - 1
"""
df = client.query(query).to_dataframe()

# Perform feature engineering
scaler = StandardScaler()
df[['normalized_duration', 'normalized_clicks']] = scaler.fit_transform(df[['session_duration', 'clicks']])

# Write features back to BigQuery for model training
table_id = "my-project.production.user_features_daily"
job = client.load_table_from_dataframe(df, table_id)
job.result()
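
To run the job outside its schedule (for example, from an orchestration script), the Cloud Run Admin API has a Python client. A minimal sketch, assuming the job above is already deployed:

python
# Trigger a deployed Cloud Run Job and wait for it to finish
from google.cloud import run_v2

client = run_v2.JobsClient()
job_name = "projects/my-project/locations/us-central1/jobs/nightly-feature-engineering"

operation = client.run_job(name=job_name)  # long-running operation
execution = operation.result()             # blocks until the execution completes
print(f"Finished: {execution.name}")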

3. Hybrid Cloud Pattern with GKE & Private Service Connect

Given the role's context with AWS and Azure, a common pattern is connecting GCP services to external clouds or on-prem networks securely.

bash
# 1. Create a VPC network in GCP for isolation
gcloud compute networks create hybrid-vpc --subnet-mode=custom

# 2. Create a subnet for GKE
gcloud compute networks subnets create gke-subnet \
    --network=hybrid-vpc \
    --range=10.0.0.0/20 \
    --region=us-central1 \
    --enable-private-ip-google-access

# 3. Create a GKE cluster with private control plane and nodes
gcloud container clusters create my-hybrid-cluster \
    --region=us-central1 \
    --network=hybrid-vpc \
    --subnetwork=gke-subnet \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.0/28 \
    --no-enable-basic-auth \
    --enable-master-authorized-networks \
    --master-authorized-networks=<AWS_VPC_CIDR>,<AZURE_VNET_CIDR> \
    --enable-dataplane-v2

# 4. Create a Private Service Connect endpoint for a Cloud SQL instance.
# This allows your AWS/Azure applications to reach Cloud SQL via a private IP.
# (Configuration is done in the Cloud Console or via Terraform)

4. CI/CD for ML Models with Cloud Build & Vertex AI Pipelines

A robust MLOps pipeline that rebuilds and redeploys a model when new training code is pushed.

yaml
# cloudbuild.yaml
steps:
# 1. Build the training container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/trainer:latest', '-f', 'Dockerfile.train', '.']
  id: 'build-train-image'

# 2. Push the image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/trainer:latest']
  id: 'push-train-image'
  waitFor: ['build-train-image']

# 3. Compile and run the Vertex AI Pipeline
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      pip install -r requirements.txt  # should include kfp and google-cloud-aiplatform
      python compile_pipeline.py --output pipeline.json
      # Submit the compiled pipeline with the Vertex AI SDK
      # (there is no gcloud command that runs Vertex AI Pipelines)
      python - <<'EOF'
      from google.cloud import aiplatform
      aiplatform.init(project="$PROJECT_ID", location="us-central1")
      aiplatform.PipelineJob(
          display_name="training-pipeline",
          template_path="pipeline.json",
          pipeline_root="gs://$PROJECT_ID-pipeline-root",
      ).submit()
      EOF
  env:
    - 'GOOGLE_CLOUD_PROJECT=$PROJECT_ID'
  waitFor: ['push-train-image']
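
For completeness, compile_pipeline.py could look like the sketch below, using the Kubeflow Pipelines SDK (kfp v2) that Vertex AI Pipelines executes. The component body is a placeholder:

python
# compile_pipeline.py
import argparse

from kfp import compiler, dsl

@dsl.component(base_image="python:3.10")
def train(epochs: int) -> str:
    # Placeholder training step; a real component would emit model artifacts
    return f"trained for {epochs} epochs"

@dsl.pipeline(name="training-pipeline")
def training_pipeline(epochs: int = 10):
    train(epochs=epochs)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--output", default="pipeline.json")
    args = parser.parse_args()
    compiler.Compiler().compile(training_pipeline, package_path=args.output)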

What Interviewers Actually Ask

Q: Explain how GCP's global load balancing works and why its network performance is often considered an advantage.

A: GCP's Global Load Balancer is a single anycast IP that fronts all backend services (in GKE, Compute Engine, etc.). User traffic is routed to the nearest Google Front End (GFE) point of presence via BGP anycast, which then uses Google's private fiber backbone to reach your backend, even if it's in another region. This reduces last-mile latency and provides built-in DDoS protection. The advantage comes from Google's massive, owned global network, which offers lower latency and higher throughput between regions compared to stitching together public internet links.

Q: When would you choose Cloud Run over Google Kubernetes Engine (GKE), and vice versa?

A: Choose Cloud Run when you want the fastest path to deploy a containerized application without managing infrastructure (nodes, scaling, clusters). It's perfect for stateless HTTP services, event-driven processing, and workloads with variable traffic. Choose GKE when you need fine-grained control over the Kubernetes API, need to run stateful workloads with custom storage classes, require specific node configurations (GPUs, local SSDs), or are running a complex microservices architecture that uses advanced K8s features (operators, service meshes like Anthos Service Mesh, or custom networking).

Q: You have a Vertex AI model endpoint that is suddenly returning high latency. What's your debugging process?

A: First, check Cloud Monitoring for the endpoint's metrics: request latency, QPS, and error rates. Correlate with deployment logs in Cloud Logging. Check if traffic has spiked, triggering scaling. If it's a custom container model, examine the container logs for errors or slow dependencies. Use Cloud Profiler to see if there's a CPU/memory bottleneck in your prediction code. Also, verify the underlying machine type (e.g., n1-standard-4 vs. n1-highcpu-8) and consider scaling vertically or horizontally. Finally, check network dependencies (e.g., is it calling a slow external API or database?).
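
A sketch of the first step, pulling the endpoint's recent latency series from Cloud Monitoring with the Python client (the metric type shown is illustrative; check Metrics Explorer for the exact name in your project):

python
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    start_time={"seconds": now - 3600},  # last hour
    end_time={"seconds": now},
)
series = client.list_time_series(
    name="projects/my-project",
    filter='metric.type = "aiplatform.googleapis.com/prediction/online/prediction_latencies"',
    interval=interval,
    view=monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
)
for ts in series:
    for point in ts.points:
        print(point.interval.end_time, point.value)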

Q: How does BigQuery achieve its speed on massive datasets without requiring you to manage indexes or partitions upfront?

A: BigQuery is a serverless, columnar data warehouse that uses Colossus for distributed storage and Dremel for query execution. Data is stored in a columnar format (Capacitor) and automatically sharded across storage, which allows queries to scan only the necessary columns. It uses a massively parallel processing (MPP) architecture, dynamically allocating thousands of "slots" (compute units) to each query. The separation of storage and compute, combined with this architecture, enables fast scans and aggregations without manual index tuning. For even better performance, you can use partitioning and clustering.
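
Partitioning and clustering are declared on the table rather than maintained as separate indexes. A minimal sketch with the BigQuery Python client (table and field names are illustrative):

python
from google.cloud import bigquery

client = bigquery.Client()
table = bigquery.Table(
    "my-project.analytics.events",
    schema=[
        bigquery.SchemaField("user_id", "STRING"),
        bigquery.SchemaField("event_type", "STRING"),
        bigquery.SchemaField("event_ts", "TIMESTAMP"),
    ],
)
table.time_partitioning = bigquery.TimePartitioning(field="event_ts")
table.clustering_fields = ["user_id", "event_type"]
client.create_table(table)  # queries filtered on event_ts now scan fewer bytes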

Q: How would you design a system on GCP to ingest, process, and serve real-time analytics from IoT device data?

A: Devices would publish telemetry to Cloud Pub/Sub topics for reliable, scalable ingestion. A Dataflow (Apache Beam) streaming pipeline would consume from Pub/Sub to perform real-time transformations, aggregation (e.g., rolling averages), and anomaly detection, writing results to BigQuery for analytics and Cloud Bigtable for low-latency key-value lookups (e.g., latest device state). An API fronted by Cloud Endpoints or Cloud Run would serve processed data to dashboards. Cloud IoT Core has been retired, so device registry and MQTT connectivity would be handled by a partner IoT platform or a self-managed MQTT bridge publishing into Pub/Sub.
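
The ingestion edge of that design is only a few lines of Python on the gateway side; a minimal sketch (project and topic names are illustrative):

python
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "device-telemetry")

payload = json.dumps({"device_id": "sensor-42", "temp_c": 21.7}).encode("utf-8")
future = publisher.publish(topic_path, payload, device_id="sensor-42")
print(f"Published {future.result()}")  # blocks until the server acks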

Q: When would you NOT recommend using GCP?

A: I would hesitate to recommend GCP if the organization is deeply entrenched in another ecosystem (e.g., heavily invested in AWS's specific managed services like DynamoDB or Azure's tight integration with Microsoft products like Active Directory). Also, if the primary requirement is a specific SaaS offering only available on another cloud, or if there's a regulatory need for a region where GCP doesn't have a presence. For very small, simple projects, the complexity might outweigh the benefits compared to simpler PaaS offerings.

Q: Explain the difference between IAM Roles, IAM Conditions, and Service Accounts.

A: IAM Roles are collections of permissions (e.g., roles/storage.objectViewer). They are assigned to members (users, groups, or service accounts). Service Accounts are special accounts used by applications and VMs, not people. They are identities that hold roles. IAM Conditions are optional, attribute-based rules you attach to a role binding to grant access only under specific circumstances (e.g., request.time < timestamp('2023-12-31T00:00:00Z') or resource.name.startsWith('projects/_/buckets/my-bucket/objects/confidential-')). They enable fine-grained, context-aware access control.
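
A minimal sketch of a conditional binding on a bucket, using the Cloud Storage client (names and the expiry date are illustrative; conditions require policy version 3):

python
from google.cloud import storage

client = storage.Client(project="my-project")
bucket = client.bucket("confidential-bucket")

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.version = 3  # conditions are only supported on v3 policies
policy.bindings.append({
    "role": "roles/storage.objectViewer",
    "members": {"serviceAccount:auditor@my-project.iam.gserviceaccount.com"},
    "condition": {
        "title": "expires-end-of-2023",
        "expression": 'request.time < timestamp("2023-12-31T00:00:00Z")',
    },
})
bucket.set_iam_policy(policy)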

Q: How does Cloud Spanner provide both horizontal scalability and strong consistency, which is traditionally difficult?

A: Cloud Spanner uses a combination of a globally distributed, synchronous replication protocol (Paxos-based) and a distributed, partitioned, shared-nothing architecture. Data is sharded into "splits" and distributed across many nodes globally. Each split is replicated synchronously across zones/regions by its own Paxos group, and TrueTime, a globally synchronized clock API that exposes bounded uncertainty, lets Spanner assign globally ordered commit timestamps. This allows it to offer external consistency (linearizability) for transactions across the entire database, while still scaling horizontally by adding more nodes to handle more splits and traffic.
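
From the application side, that machinery is invisible: you just run transactions. A minimal sketch with the Spanner Python client (instance, database, and schema are illustrative):

python
from google.cloud import spanner

client = spanner.Client(project="my-project")
database = client.instance("my-instance").database("orders-db")

def debit(transaction):
    row = transaction.execute_sql(
        "SELECT balance FROM accounts WHERE id = @id",
        params={"id": "acct-1"},
        param_types={"id": spanner.param_types.STRING},
    ).one()
    transaction.update(
        "accounts", columns=["id", "balance"], values=[["acct-1", row[0] - 10]]
    )

database.run_in_transaction(debit)  # externally consistent commit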

How It Connects to This Role's Stack

As a Senior Full Stack AI Engineer, you'll use GCP as a powerful orchestration and execution layer within a potentially multi-cloud environment.

  • With LlamaIndex: You can use GCP as the retrieval backend. Store your vector embeddings in Matching Engine (Vertex AI's managed vector similarity search), your documents in Cloud Storage, and your metadata in Firestore. Run your LlamaIndex pipelines on Vertex AI Workbench (managed Jupyter) or schedule them with Cloud Composer (Airflow); see the sketch after this list.
  • With Node.js: Deploy your Node.js APIs and backend services on Cloud Run for effortless scaling or on GKE for more control. Use Cloud Endpoints or API Gateway to manage, secure, and monitor these APIs. Integrate with GCP services using the Node.js client libraries.
  • In a Multi-Cloud (AWS/Azure) Context: GCP can be the dedicated AI/ML hub. Use Anthos or GKE with multi-cluster services to manage workloads across clouds consistently. Leverage BigQuery Omni to query data residing in AWS S3 or Azure Blob Storage directly, centralizing analytics. Use Cloud Interconnect or Partner Interconnect for secure, high-bandwidth private connections between clouds.
  • With CI/CD & Microservices: Cloud Build can orchestrate builds, run tests, and deploy to GKE or Cloud Run. Artifact Registry stores your Docker images. Cloud Deploy provides managed delivery pipelines. For microservices, use Anthos Service Mesh (based on Istio) for observability, security, and traffic management across GKE clusters, even on other clouds.
  • With Docker & Kubernetes: GKE is the first-party managed Kubernetes service. It offers auto-pilot mode for a hands-off experience or standard mode for full control. It integrates seamlessly with GCP's networking, IAM, and logging/monitoring stack. Cloud Run is the fully managed container platform that abstracts away Kubernetes entirely.
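
As a concrete example of the LlamaIndex bullet above, here is a hedged sketch that stages documents from Cloud Storage and indexes them locally; the bucket and prefix are illustrative, and a production setup would point the index at Vertex AI's managed vector search rather than in-memory storage:

python
import pathlib
from google.cloud import storage
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Stage source documents from a bucket into a local directory
staging = pathlib.Path("/tmp/docs")
staging.mkdir(parents=True, exist_ok=True)
client = storage.Client()
for blob in client.list_blobs("my-docs-bucket", prefix="knowledge-base/"):
    if not blob.name.endswith("/"):  # skip folder placeholders
        blob.download_to_filename(str(staging / pathlib.Path(blob.name).name))

# Build and query an index (uses the default embedding model, which
# needs its own credentials configured)
documents = SimpleDirectoryReader(str(staging)).load_data()
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("What does the runbook say about GKE?"))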

Red Flags to Avoid

  • "I just use the Console." Senior engineers are expected to use Infrastructure as Code (Terraform, Deployment Manager) and the CLI (gcloud) or SDKs for reproducibility.
  • Confusing GCP services with their AWS/Azure analogs without understanding nuances. Don't just say "BigQuery is like Redshift." Explain it's serverless, uses a columnar format, and separates storage and compute.
  • Not understanding IAM and the principle of least privilege. Saying you'd give a service account roles/owner is a major red flag. Discuss specific, granular roles and service accounts per workload.
  • Overlooking cost management. Not mentioning tools like Recommender, Quotas, Budgets, and Billing Alerts suggests a lack of production experience.
  • Treating Vertex AI as a black box. You should be able to discuss the components: Datasets, Training Pipelines, Models, Endpoints, and Feature Store, and when to use AutoML vs. custom training.
  • Ignoring networking fundamentals. Not being able to discuss VPCs, subnets, firewall rules, Private Google Access, or VPC Service Controls shows a gap in building secure, production-grade systems.
  • Saying "GCP is just cheaper." The decision is rarely just about list price. Focus on the value: developer productivity, managed services, AI/ML capabilities, and network performance. Be prepared to discuss committed use discounts (CUDs) and sustained use discounts.