GCP Associate Cloud Engineer Exam Training (Storage and Databases)

––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Section 1: Cloud Storage (25 Questions)
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Question: A video-processing pipeline writes hourly footage to Cloud Storage and deletes it after 30 days. Which storage class should you choose?
Options:
A. Standard
B. Nearline
C. Coldline
D. Archive
Answer: B
Explanation: Nearline is optimized for data accessed less than once per 30 days and has a 30-day minimum storage duration, matching the 30-day retention window. Standard is for frequent access; Coldline targets quarterly access; Archive targets yearly access.

Question: (Select two) You need to prevent accidental bucket deletion and object overwrite. Which bucket features do you enable?
Options:
A. Versioning
B. Lifecycle Management
C. Retention Policy
D. Uniform bucket-level access
Answer: A, C
Explanation: Versioning preserves old object versions on overwrite; a Retention Policy forbids deletion/overwrite before its expiration. Lifecycle only schedules deletions; uniform access is about IAM.

Question: A global website serves images to users worldwide with low latency. Which Cloud Storage feature should you integrate?
Options:
A. Regional buckets
B. Dual-region buckets
C. Multi-region buckets
D. VPC Service Controls
Answer: C
Explanation: Multi-region buckets replicate your data across multiple regions automatically for the lowest latency to a global audience. Regional and dual-region buckets are limited to one or two regions.

Question: Scenario: You need to ingest 10 TB of data into Cloud Storage but have limited Internet bandwidth. What's the fastest way?
Options:
A. gsutil rsync
B. Storage Transfer Service
C. Transfer Appliance
D. Streaming uploads via API
Answer: C
Explanation: Transfer Appliance is a physical storage device that Google ships to you; you load it on-premises and ship it back, bypassing bandwidth caps.

Question: You want to share a PDF with external auditors for 7 days without making it public. How?
Options:
A. Public bucket + link
B. Signed URL (7-day)
C. Add auditors to bucket IAM
D. VPC-SC perimeter
Answer: B
Explanation: A signed URL grants time-limited access without altering IAM. A public bucket link is permanent; adding auditors to bucket IAM requires Google identities.
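
Example (a minimal sketch): generating a 7-day signed URL with gsutil, assuming a service-account key file named key.json and an illustrative bucket/object path:

  # Generate a V4 signed URL valid for 7 days (the V4 maximum)
  gsutil signurl -d 7d key.json gs://audit-bucket/report.pdf
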
Question: Which CORS config allows your https://app.example.com frontend to GET objects with custom headers?
Options:
A. origin "https://app.example.com", method GET, responseHeader ["Content-Type","x-goog-meta-"], maxAge 3600
B. origin "*", method "*", responseHeader ["*"], maxAge 0
C. origin "https://app.example.com", method GET/POST, responseHeader ["*"], maxAge 3600
D. default CORS
Answer: A
Explanation: Option A explicitly whitelists your origin, the GET method, and the needed headers. B is over-permissive; C lacks header specificity; the default configuration blocks cross-origin requests.
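
Example (a hedged sketch): the option-A policy as a gsutil CORS file; the bucket name and the x-goog-meta-custom header are placeholders:

  # cors.json: allow GETs from the app origin with the needed headers
  [
    {
      "origin": ["https://app.example.com"],
      "method": ["GET"],
      "responseHeader": ["Content-Type", "x-goog-meta-custom"],
      "maxAgeSeconds": 3600
    }
  ]

  # Apply the policy to the bucket
  gsutil cors set cors.json gs://my-asset-bucket
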
Question: Scenario: Your app needs >5,000 writes/sec to Cloud Storage. How do you avoid hotspots?
Options:
A. Prefix object names with a high-cardinality hash
B. Sequential IDs
C. Single bucket only
D. Use Nearline
Answer: A
Explanation: High-cardinality prefixes spread writes across shards. Sequential names create hotspots; storage class is irrelevant.

Question: Which permission is required to create a new bucket?
Options:
A. storage.buckets.create
B. storage.objects.create
C. storage.buckets.get
D. storage.objects.delete
Answer: A
Explanation: storage.buckets.create allows bucket creation. objects.create covers objects only.

Question: (Select two) Lock objects so they cannot be deleted or overwritten for 1 year. Which to configure?
Options:
A. Bucket Retention Policy
B. Event-based hold
C. Lifecycle Management
D. CSEK
Answer: A, B
Explanation: A retention policy enforces immutability; an event-based hold prevents deletion until removed. Lifecycle schedules deletions; CSEK is encryption-only.

Question: Scenario: You rotate HMAC keys for legacy S3-compatible clients. What steps avoid downtime?
Options:
A. Create new key, roll out, delete old
B. Delete old, then create new
C. Switch to OAuth2
D. Use signed URLs instead
Answer: A
Explanation: Creating and distributing a new key first avoids interruption. Deleting first breaks clients.

Question: You see high egress costs from a bucket. Which log shows per-object access patterns?
Options:
A. Usage logs
B. Audit logs
C. VPC flow logs
D. Requester Pays
Answer: A
Explanation: Usage logs record object-level request counts and egress. Audit logs cover control-plane, not data-plane.

Question: Which feature auto-replicates objects from one bucket to another in a different region?
Options:
A. Object Replication
B. Versioning
C. Dual-region bucket
D. Lifecycle rule
Answer: A
Explanation: Object Replication copies new and updated objects from one bucket to another. A dual-region bucket replicates within a single bucket across two regions rather than to a second bucket.

Question: Block data exfiltration from a bucket entirely. Which do you configure?
Options:
A. VPC Service Controls perimeter
B. Private ACL only
C. Firewall rule
D. IAM deny on storage.objects.get
Answer: A
Explanation: VPC-SC prevents access outside the defined service perimeter. ACL/IAM alone can be bypassed if misconfigured.

Question: Scenario: Your analytics job must see newly uploaded objects instantly. Does GCS support read-after-write for new objects?
Options:
A. Yes, strong global consistency
B. Only in multi-region
C. No, eventual consistency
D. Only after object rewrite
Answer: A
Explanation: GCS provides strong global read-after-write consistency for all buckets.

Question: (Select two) Encrypt data with your own keys. Which methods?
Options:
A. CSEK
B. CMEK
C. Google-managed
D. Customer passphrase
Answer: A, B
Explanation: Customer-supplied (CSEK) or customer-managed via KMS (CMEK) allow full key control. Google-managed is default; passphrase isn't supported.

Question: You must remove a 365-day retention policy. What must you do first?
Options:
A. Unlock the retention policy
B. Wait 365 days
C. Delete all objects
D. Lower policy to 0
Answer: A
Explanation: An unlocked retention policy must be explicitly removed by an authorized user before objects can be deleted early; note that once a policy is locked it cannot be removed or shortened.

Question: Host a static website at example.com on Cloud Storage. What's the sequence?
Options:
A. Create bucket "example.com", upload, enable website config, set DNS CNAME to c.storage.googleapis.com
B. Create "www.example.com", enable CDN
C. Use App Engine
D. Front bucket with LB
Answer: A
Explanation: Buckets must match your domain; website config serves index; DNS CNAME points to storage domain.

Question: Audit every GET/PUT on a bucket. Which log type?
Options:
A. Data Access logs
B. Admin Activity logs
C. System Event logs
D. Access Transparency logs
Answer: A
Explanation: Data Access (Read/Write) logs record object-level API calls. Admin logs record control-plane operations only.

Question: (Select two) Auto-delete old versions and transition storage classes. Which lifecycle rules?
Options:
A. Age > 30 AND StorageClass=NEARLINE → Delete
B. Prefix=old/* → Delete
C. Age > 90 → SetStorageClass=COLDLINE
D. Abort incomplete uploads
Answer: A, C
Explanation: You can combine age and storage-class conditions for deletion or transition. Prefix-based deletion works but doesn't handle storage-class transitions.
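
Example (a minimal sketch, with illustrative names): both rules expressed as a lifecycle configuration applied with gsutil:

  # lifecycle.json: delete 30-day-old NEARLINE objects, move 90-day-old objects to COLDLINE
  {
    "rule": [
      {"action": {"type": "Delete"},
       "condition": {"age": 30, "matchesStorageClass": ["NEARLINE"]}},
      {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
       "condition": {"age": 90}}
    ]
  }

  gsutil lifecycle set lifecycle.json gs://my-bucket
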
Question: You upload via parallel composite uploads and end up with many components. How do you merge them into one object?
Options:
A. Rewrite (compose) API
B. gsutil cat
C. Download & reupload
D. Append flag
Answer: A
Explanation: The compose endpoint merges up to 32 components server-side. gsutil uses the same API; the other options are manual or unsupported.
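
Example (for reference; object names are placeholders): a server-side compose call with gsutil:

  # Merge up to 32 component objects into a single object without re-uploading
  gsutil compose gs://my-bucket/part-1 gs://my-bucket/part-2 gs://my-bucket/merged
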
Question: Serve two versions of static assets (v1/v2) under a single bucket with separate cache lifetimes. How?
Options:
A. Use prefixes /v1/,/v2/ and set Cache-Control per prefix
B. Two buckets
C. Uniform bucket lifecycle
D. Versioning
Answer: A
Explanation: Prefixes combined with per-object metadata (Cache-Control) achieve the separation. Multiple buckets add management overhead.

Question: Preview an IAM policy change on a bucket before applying. Which tool?
Options:
A. IAM Policy Analyzer
B. gsutil iam get
C. Cloud Shell editor
D. Policy Troubleshooter
Answer: A
Explanation: IAM Policy Analyzer lets you simulate and review impacts of proposed policy changes. Troubleshooter tests existing access.

Question: Prevent any project from creating buckets named "prod-*" except in your team. How?
Options:
A. Org Policy “allowedBucketNames” with regex
B. Bucket IAM deny
C. VPC-SC
D. Folder-level IAM
Answer: A
Explanation: An Organization Policy constraint on resource name patterns enforces allowed names.

Question: (Select two) Serve private files to an App Engine app without public access. Which patterns work?
Options:
A. Grant App Engine SA storage.objects.get IAM
B. Signed URLs in code
C. VPC-SC only
D. Signed Policy Documents
Answer: A, B
Explanation: Granting the app's service account read access or using signed URLs allows secure access. VPC-SC limits network scope; policy documents suit form-based uploads.

Question: Efficiently list millions of objects under a prefix for analytics. How do you optimize?
Options:
A. Use prefix + delimiter in listings
B. Full unfiltered listing
C. Multi-region bucket
D. Switch to BigQuery
Answer: A
Explanation: Prefix and delimiter parameters narrow the scope and speed up listing. A full listing is slow; bucket location and analytics engines are unrelated to listing efficiency.
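
Example (a minimal sketch; bucket and prefix are placeholders): a delimited listing that returns only the immediate children of a prefix instead of scanning every object:

  # The trailing slash acts as the delimiter boundary in both tools
  gcloud storage ls gs://my-bucket/logs/2024/
  gsutil ls gs://my-bucket/logs/2024/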
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Section 2: Filestore (25 Questions)
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Question: Which Filestore tier offers the highest performance?
Options:
A. Basic HDD
B. Basic SSD
C. Enterprise
D. Standard
Answer: C
Explanation: Enterprise tier delivers the highest throughput and lowest latency for demanding workloads.

Question: Scenario: You need shared POSIX storage for 50 GKE Linux pods. Which should you mount?
Options:
A. Filestore
B. PersistentDisk
C. Cloud Storage FUSE
D. Cloud SQL
Answer: A
Explanation: Filestore provides NFSv3 POSIX semantics; PD is a per-VM block device; FUSE has limited semantics; Cloud SQL is a database.

Question: Which IP address do you use to mount Filestore on a Compute Engine VM in the same region?
Options:
A. Filestore instance’s internal IP
B. External IP
C. DNS name filestore.googleapis.com
D. None—use gcloud mount command
Answer: A
Explanation: Filestore presents an internal IP endpoint in your VPC that you mount via NFS.
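
Example (a minimal sketch, assuming a Debian-based VM, an illustrative instance IP of 10.0.0.2, and a share named share1):

  sudo apt-get install -y nfs-common          # NFS client
  sudo mkdir -p /mnt/filestore
  sudo mount -t nfs 10.0.0.2:/share1 /mnt/filestore
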
Question: (Select two) Which Filestore quotas must you check before creating an instance?
Options:
A. Instances per region
B. Total capacity per region
C. IOPS per zone
D. Snapshot count
Answer: A, B
Explanation: You have limits on the number of instances and the total storage capacity in a region. IOPS and snapshot counts are not the quotas to check here.

Question: Scenario: You need daily backups of Filestore data. Which to use?
Options:
A. Filestore snapshots via gcloud
B. Disk snapshots
C. gsutil rsync to bucket
D. Database export
Answer: A
Explanation: Filestore offers built-in snapshots of file shares. Disk snapshots are for PD only.

Question: Which VPC network feature must be enabled for Filestore?
Options:
A. Private Google Access
B. Internal IP routing
C. External IP
D. Cloud NAT
Answer: A
Explanation: Private Google Access allows VMs without external IPs to reach Filestore's service endpoint. Internal routing is standard; NAT is for egress.

Question: You need sub-20 ms latency for an HPC cluster. Which Filestore tier?
Options:
A. Basic HDD
B. Basic SSD
C. Enterprise
D. None—use Cloud Storage
Answer: C
Explanation: Enterprise tier provides <1 ms typical latency; Basic SSD is ~2–3 ms; HDD is much higher.

Question: Scenario: You must resize an existing Filestore share from 4 TiB to 10 TiB. How?
Options:
A. gcloud filestore instances update --tier=X --file-share capacity=10TiB
B. Create new instance and migrate data
C. Filestore auto-scales
D. Use disk grow on VM
Answer: A
Explanation: The update command lets you increase capacity in supported tiers. There is no need to recreate the instance.
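
Example (a hedged sketch; instance name, zone, and share name are placeholders):

  # Grow the share from 4 TiB to 10 TiB in place
  gcloud filestore instances update my-filestore \
      --zone=us-central1-b \
      --file-share=name=share1,capacity=10TB
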
Question: Which protocol version does Filestore expose?
Options:
A. NFSv3
B. NFSv4.1
C. SMB 3.0
D. iSCSI
Answer: A
Explanation: Filestore currently supports NFSv3 only.

Question: (Select two) To secure Filestore traffic, implement:
Options:
A. VPC firewall rule allowing only NFS port from your VMs
B. Certificate-based authentication
C. Use private IPs only
D. IAM on file paths
Answer: A, C
Explanation: Filestore uses the VPC; control access via firewall rules and private IPs. NFSv3 has no built-in auth; path-level IAM is unsupported.

Question: Scenario: You are connecting to Filestore from a GKE Autopilot cluster. What's required?
Options:
A. VPC connector + firewall rule + mount in Pod spec
B. PersistentVolume backed by PD
C. Cloud Storage FUSE
D. NFS Client on VM only
Answer: A
Explanation: Autopilot needs a Serverless VPC Access connector to reach Filestore, a firewall rule allowing port 2049, and a volume mount in the Pod spec.

Question: You need cross-zone HA for your shared file store. How do you achieve it?
Options:
A. Dual-region Filestore (not available)
B. Replicate data at the application layer to another instance in another zone
C. Use regional PD
D. Use Cloud Storage
Answer: B
Explanation: Filestore is zonal. For cross-zone HA you must replicate at the application level or fail over manually.

Question: Which metric indicates the percentage of NFS requests served?
Options:
A. filestore.googleapis.com/operations/success_count
B. CPU utilization
C. memory usage
D. network throughput
Answer: A
Explanation: The operations success_count metric divided by total_count gives success rate.

Question: (Select two) Which operations can you perform on a Filestore snapshot?
Options:
A. Create new instance from snapshot
B. Delete snapshot
C. Modify snapshot capacity
D. Change tier of snapshot
Answer: A, B
Explanation: You can snapshot and delete, and instantiate new shares from a snapshot. Capacity and tier are properties of instances only.

Question: To enforce that only specific service accounts can mount a share, you set:
Options:
A. IAM on the Filestore instance
B. VPC Service Controls perimeter
C. Bucket ACL
D. Firewall allow all
Answer: A
Explanation: Filestore supports IAM roles on the instance; only identities granted a Filestore client role can mount the share.

Question: Scenario: Your file share sees heavy metadata operations. Which tier gives the highest metadata IOPS?
Options:
A. Basic HDD
B. Basic SSD
C. Enterprise
D. Balanced PD
Answer: C
Explanation: Enterprise tier is optimized for both metadata operations and throughput. Basic SSD focuses mainly on throughput.

Question: Which tool provides a web console view of Filestore usage and metrics?
Options:
A. Cloud Monitoring console
B. Cloud Storage browser
C. Filestore UI in GKE
D. Cloud Shell
Answer: A
Explanation: Monitoring lets you chart Filestore metrics like throughput, latency, and capacity.

Question: (Select two) How can you move data from one Filestore instance to another with minimal downtime?
Options:
A. rsync over NFS while mounting both shares
B. gcloud filestore migrate command
C. Filestore live-replication addon
D. Use Storage Transfer Service
Answer: A, D
Explanation: rsync over NFS or Storage Transfer Service (mounting the share on an intermediate Compute Engine VM) can sync data. There's no built-in live replication.

Question: What happens if you attach a Windows client to a Filestore share?
Options:
A. It fails—NFSv3 unsupported by Windows by default
B. Works natively
C. Uses SMB internally
D. Auto-migrates to NFSv4
Answer: A
Explanation: Windows does not support NFSv3 without extra services; Filestore is NFSv3 only.

Question: Filestore throughput scales with capacity in which tiers?
Options:
A. Basic SSD and Enterprise
B. Basic HDD only
C. Basic SSD only
D. All tiers
Answer: A
Explanation: Both Basic SSD and Enterprise scale IOPS/throughput linearly with size; HDD is fixed.

Question: Scenario: A read-heavy application uses Filestore. You want to reduce network cost. What helps?
Options:
A. Collocate clients in same zone
B. Use Cloud CDN in front of NFS
C. Switch to HDD tier
D. Use Cloud Storage
Answer: A
Explanation: Keeping clients and Filestore in the same zone/region avoids inter-zone egress. NFS cannot sit behind a CDN; changing tiers lowers performance, not network cost.

Question: Which Filestore API method lists all instances in a project's region?
Options:
A. projects.locations.instances.list
B. projects.zones.instances.get
C. filestore.instances.list
D. compute.instances.list
Answer: A
Explanation: The Filestore REST API is under projects.locations.instances; zones/compute APIs are for VMs.

Question: (Select two) Which network policies must be in place for Filestore?
Options:
A. Allow TCP/2049 from client subnet
B. Allow UDP/2049
C. Allow ICMP
D. Deny all egress
Answer: A, C
Explanation: NFSv3 uses TCP/2049; ICMP may be needed for path MTU discovery. UDP isn't used by Filestore; egress must still allow return traffic.

Question: How do you delete a Filestore instance and reclaim capacity?
Options:
A. gcloud filestore instances delete <name>
B. rm -rf /mnt/filestore
C. Delete VM mounting it
D. Storage Transfer delete
Answer: A
Explanation: Use gcloud filestore instances delete or the console. Deleting the VM that mounts the share doesn't remove it.

Question: You need a cross-region DR copy of your Filestore data nightly. Which workflow?
Options:
A. rsync from Filestore share to a bucket + restore to second instance
B. Snapshot replication built into Filestore
C. dual-region Filestore
D. use multi-region PD
Answer: A
Explanation: Filestore has no built-in cross-region replication; using rsync to and from a bucket is the typical pattern.
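
Example (a sketch of the nightly pattern, run from an intermediate VM that mounts both shares; paths and the bucket name are placeholders):

  # Stage the primary share's contents to a bucket...
  gsutil -m rsync -r /mnt/filestore gs://dr-filestore-staging
  # ...then hydrate the standby instance's share in the DR region
  gsutil -m rsync -r gs://dr-filestore-staging /mnt/filestore-dr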
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Section 3: Persistent Disks (25 Questions)
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Question: You need high-throughput block storage for a database that reads/writes sequentially. Which disk type is best?
Options:
A. Standard HDD Persistent Disk
B. Balanced PD
C. SSD Persistent Disk
D. Local SSD
Answer: C
Explanation: SSD PD delivers the highest network-attached throughput and low latency, ideal for database workloads. Standard HDD PD is optimized for large sequential throughput but has higher latency; Balanced PD sits between; Local SSD is only for ephemeral scratch on the VM host.

Question: Your cost-sensitive batch jobs tolerate higher latency and you want to minimize storage cost. Which PD type do you choose?
Options:
A. Standard HDD PD
B. Balanced PD
C. SSD PD
D. Local SSD
Answer: A
Explanation: Standard HDD PD is the most cost-effective for less latency-sensitive workloads. Balanced and SSD PD cost more; Local SSD is expensive and ephemeral.

Question: You deploy stateless, short-lived VMs that need very fast temporary scratch space. Which disk?
Options:
A. Local SSD
B. SSD PD
C. Regional PD
D. Standard HDD PD
Answer: A
Explanation: Local SSDs attach directly to the host, offering sub-millisecond latency, which is ideal for scratch. They do not persist across VM stops. SSD/Standard PD are network-attached and persistent.

Question: You require persistent boot disks replicated across two zones for high availability. Which disk type?
Options:
A. Zonal SSD PD
B. Regional SSD PD
C. Local SSD
D. Balanced PD
Answer: B
Explanation: Regional PD synchronously replicates data across two zones, protecting against zonal failures. Zonal PDs reside in one zone only.

Question: Scenario: You provision a 500 GiB SSD PD for your VM, then realize you need 1 TiB. What can you do without downtime?
Options:
A. Resize the PD to 1 TiB while attached, then extend the filesystem inside the VM
B. Must stop the VM, resize, then restart
C. Create new larger disk and copy data
D. Use local SSD to expand capacity
Answer: A
Explanation: GCP allows online resizing of PDs; after increasing the disk size you must grow the filesystem (e.g. with resize2fs). No VM restart is needed.
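
Example (a minimal sketch; disk name, zone, and device path are placeholders):

  gcloud compute disks resize my-data-disk --zone=us-central1-a --size=1TB
  # Inside the VM, grow the filesystem (ext4 shown; xfs would use xfs_growfs)
  sudo resize2fs /dev/sdb
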
Question: Which command snapshots a running persistent disk without stopping the VM?
Options:
A. gcloud compute disks snapshot
B. gcloud compute disks create
C. gcloud compute snapshots delete
D. gcloud compute instances stop
Answer: A
Explanation: The snapshot command captures an online point-in-time snapshot. No need to stop the VM.

Question: (Select two) You must use your own encryption keys for PDs. Which options are available?
Options:
A. Customer-managed encryption keys (CMEK) with Cloud KMS
B. Customer-supplied encryption keys (CSEK)
C. Google-managed keys only
D. No encryption at rest
Answer: A, B
Explanation: GCP supports CMEK (KMS keys you control) or CSEK (you supply raw key material). Google-managed is default; "no encryption" is not allowed.

Question: You want maximum IOPS and throughput from a PD. What should you do?
Options:
A. Increase the disk size
B. Switch to HDD
C. Enable autosnapshot
D. Use more VMs
Answer: A
Explanation: PD performance scales with provisioned size: larger PDs yield higher IOPS and throughput. Switching to HDD lowers performance; snapshots and extra VMs don't raise per-disk limits.

Question: Scenario: A single VM requires two PDs, one for the OS and one for application data. How do you attach them?
Options:
A. Define two PD disks in the VM’s instance configuration and mount inside the OS
B. Create one disk and partition it into two
C. Use a single large disk with directories
D. Mount a network drive instead
Answer: A
Explanation: You can attach multiple PDs to a VM, each appearing as /dev/sdb, /dev/sdc, etc., and format/mount them separately.

Question: You need a boot disk image for your VM fleet. Which do you use?
Options:
A. Custom Image created from a source VM
B. Snapshot of the boot disk
C. Local SSD image
D. Instance Template only
Answer: A
Explanation: A custom image bundles OS and config; you can launch new VMs from it. Snapshots are disk-only and require extra steps; instance templates reference images but aren't images themselves.

Question: (Select two) You want to automate daily snapshots of your PDs and delete snapshots older than 7 days. Which do you use?
Options:
A. Cloud Scheduler + Cloud Functions invoking snapshot API
B. snapshot schedule feature in PD
C. Lifecycle Management (Storage)
D. Deployment Manager
Answer: A, B
Explanation: GCP now supports snapshot schedules on PDs; alternatively, you can schedule Cloud Functions via Cloud Scheduler. Storage lifecycle is for buckets; DM is infra-as-code only.

Question: You try to delete a PD that's still attached to a running VM. What happens?
Options:
A. Deletion fails until the disk is detached
B. Disk is detached and deleted
C. VM is stopped and disk deleted
D. Data is preserved in snapshot
Answer: A
Explanation: GCP prevents deletion of in-use PDs. You must first detach the disk (or delete the VM) before deleting the PD.

Question: Scenario: A PD snapshot restore must be performed in a different zone. Which step is required?
Options:
A. Create a new disk from the snapshot and specify the target zone
B. Copy snapshot to a new project
C. Move the snapshot itself to that zone
D. Use local SSD instead
Answer: A
Explanation: Snapshots are global within a project. You can restore them as new disks in any zone by specifying the zone in the create command.

Question: (Select two) Which PD quotas should you monitor to avoid provisioning failures?
Options:
A. Total PD capacity (TiB) per region
B. Number of PD instances per zone
C. Number of snapshots per project
D. Number of local SSD partitions
Answer: A, B
Explanation: You have limits on total PD TiB and disk count per zone. Snapshot and local SSD quotas are separate.

Question: Your enterprise requires quarterly GDPR data deletion. How do you destroy PD data completely?
Options:
A. Delete the PD; overwriting is not guaranteed
B. Zero-out the disk or use shred before snapshot
C. Create a snapshot and delete PD only
D. Set retention policy
Answer: B
Explanation: To ensure data eradication, overwrite the disk (e.g. dd with zeros, or shred) before deletion. Deletion alone doesn't guarantee wiping; a snapshot shares the data; retention policies don't apply to PDs.

Question: You need read-only shared access to a PD across multiple VMs. Which option?
Options:
A. Mount each instance’s disk as RO by cloning snapshot to each VM
B. Use multi-writer PD
C. Attach the same PD in read-only mode on multiple VMs
D. Use Filestore instead
Answer: C
Explanation: A non-boot persistent disk can be attached in read-only mode to multiple VMs at once. Cloning a snapshot to each VM creates independent copies rather than shared storage; multi-writer PD is limited to specific workloads; Filestore is the alternative when you need a writable shared filesystem.

Question: Scenario: A database VM boot disk is encrypted with a CMEK. A developer tries to snapshot it but gets a permissions error. What IAM role is missing?
Options:
A. roles/cloudkms.cryptoKeyEncrypterDecrypter
B. roles/compute.admin
C. roles/storage.admin
D. roles/iam.securityAdmin
Answer: A
Explanation: When using CMEK, the caller needs the KMS cryptoKeyEncrypterDecrypter role on the key to encrypt the snapshot metadata.

Question: Which filesystem is recommended for maximum performance on PDs?
Options:
A. ext4 with journaling disabled
B. xfs
C. ntfs
D. FAT32
Answer: B
Explanation: xfs typically performs better under heavy parallel I/O on Linux. ext4 works well but its journaling can add overhead. NTFS/FAT32 are Windows filesystems.

Question: (Select two) You provision a 64 GiB HDD PD and see only ~63 GiB available in the OS. Why?
Options:
A. Binary GiB vs decimal GB conversion
B. Hidden OS partition
C. Google rounds down 1 GiB
D. Filesystem overhead
Answer: A, D
Explanation: 64 GiB is 64 × 2^30 bytes (≈68.7 GB decimal), so tools reporting in decimal GB show a different figure than GiB. Filesystems also reserve some overhead for metadata.

Question: Scenario: You need to migrate a VM's boot disk and attached data disk to a new machine type in another zone. What's the minimal-downtime approach?
Options:
A. Snapshot both disks, create new disks in target zone, launch VM
B. Stop VM, detach disks, copy them over the network
C. Use live-migrate across zones
D. Use local SSDs with shared disk
Answer: A
Explanation: Snapshots are global; you can snapshot, create new disks in the target zone, and start a replacement VM. There's no live cross-zone migration.

Question: You want PD snapshots to be encrypted with customer-supplied keys. Which command flag do you use?
Options:
A. --csek-key-file
B. --kms-key
C. --encryption-key
D. --no-encrypt
Answer: A
Explanation: --csek-key-file supplies your own raw AES-256 key for CSEK. --kms-key is for KMS (CMEK).

Question: (Select two) To monitor PD usage and health over time you configure:
Options:
A. Cloud Monitoring dashboards for metrics like disk_throughput
B. Cloud Logging for system logs
C. Stackdriver Trace
D. VPC Flow Logs
Answer: A, B
Explanation: Monitoring captures performance metrics; Logging can capture health events. Trace is for application latency; Flow Logs cover network only.

Question: Your VM has autosnapshot schedules enabled on a PD. You then disable schedules. What happens to existing snapshots?
Options:
A. Existing snapshots remain; no new ones created
B. All snapshots are deleted
C. PD auto-deletes after retention
D. VM stops
Answer: A
Explanation: Disabling the schedule stops new snapshots; existing snapshots are retained until you manually delete them or hit retention policies.

Question: Which use case is NOT appropriate for local SSD?
Options:
A. Caching or scratch data
B. Critical data requiring durability across VM restarts
C. Parallel analytics shuffle data
D. Temporary swap partitions
Answer: B
Explanation: Local SSD is ephemeral and lost on VM preemption or host maintenance, making it unsuitable for durable data. The other uses are fine.

Question: (Select two) Scenario: You're charged unexpectedly high PD snapshot storage costs. What can you do to reduce snapshot spend?
Options:
A. Delete unneeded snapshots
B. Enable incremental snapshots (default)
C. Export snapshots to Cloud Storage and delete PD snapshots
D. Set snapshot lifecycle to auto-delete after a retention period
Answer: A, D
Explanation: Deleting unneeded snapshots frees storage, and a snapshot schedule's retention policy auto-deletes old snapshots after the period you set. PD snapshots are already incremental by default, so option B changes nothing, and exporting snapshots is not a direct cost saver.
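
Example (a hedged sketch of a daily snapshot schedule with 7-day retention; all names and locations are placeholders):

  gcloud compute resource-policies create snapshot-schedule daily-snaps \
      --region=us-central1 \
      --daily-schedule --start-time=04:00 \
      --max-retention-days=7

  # Attach the schedule to an existing disk
  gcloud compute disks add-resource-policies my-data-disk \
      --zone=us-central1-a --resource-policies=daily-snaps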
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Section 4: Cloud SQL (25 Questions)
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Question: You need a MySQL instance optimized for CPU-intensive stored procedures. Which instance tier is best?
Options:
A. db-n1-standard-2
B. db-n2-highmem-4
C. db-n2-highcpu-8
D. db-f1-micro
Answer: C
Explanation: The highcpu-8 tier provides more vCPUs with minimal RAM, best for CPU-bound workloads. highmem is for memory-heavy, standard is balanced, f1-micro is too small.

Question: (Select two) To enable high-availability failover for your Cloud SQL for PostgreSQL instance, you must:
Options:
A. Enable automated backups
B. Enable binary logging
C. Create a failover replica in a second zone
D. Enable point-in-time recovery
Answer: A, C
Explanation: Automated backups are required for HA; you must deploy a failover replica (secondary) in a second zone. Binary logging is MySQL-specific; PITR is built on backups but not required to enable HA.

Question: Scenario: Your on-premises app must connect privately to Cloud SQL (MySQL) without public IPs. Which setup is correct?
Options:
A. Configure a Serverless VPC Connector and Cloud NAT
B. Enable Private IP in the Cloud SQL instance and peer your on-prem VPC via VPN or Interconnect
C. Grant allUsers the Cloud SQL Invoker role
D. Whitelist on-prem public IP in authorized networks
Answer: B
Explanation: Private IP with VPC peering (VPN/Interconnect) ensures private connectivity. Serverless VPC Connector is for serverless services; public-access and authorized networks expose a public endpoint; IAM Invoker is for Cloud Run.

Question: You require point-in-time recovery (PITR) to any timestamp in the last 7 days for MySQL. Which two settings must be enabled?
Options:
A. Automated backups
B. Binary logging
C. High availability
D. User-managed encryption keys
Answer: A, B
Explanation: PITR relies on base backups (automated backups) and binary logs to replay transactions. HA and CMEK are orthogonal.

Question: Scenario: A major maintenance upgrade is scheduled next week. Which setting lets you control when it occurs?
Options:
A. Maintenance window (day/time)
B. Instance restart protection
C. Off-peak priority flag
D. Maintenance exclusion policy
Answer: A
Explanation: Cloud SQL lets you define a weekly maintenance window. There is no off-peak flag or exclusion policy beyond this.

Question: To restrict client connections to only 10.0.1.0/24 over the public IP, you configure:
Options:
A. Authorized networks with CIDR 10.0.1.0/24
B. VPC Service Controls
C. IAM “sql.instances.connect” to that subnet
D. SSL certificates
Answer: A
Explanation: Authorized networks control which IPv4 CIDRs can connect to the public endpoint. VPC-SC secures against data exfiltration; IAM is identity-based; SSL encrypts traffic but does not restrict IPs.

Question: Scenario: You need storage to auto-grow as data grows with no downtime. Which option?
Options:
A. Enable automatic storage increase
B. Monitor and manually increase disk
C. Use local SSD
D. Set disk size to maximum upfront
Answer: A
Explanation: Automatic storage increase lets Cloud SQL expand the disk up to quota without downtime. Manual increases risk missing growth; local SSD is ephemeral; setting the maximum upfront wastes cost.

Question: (Select two) To perform a minimal-downtime migration of an on-prem MySQL database to Cloud SQL, you can use:
Options:
A. Cloud SQL Database Migration Service (DMS)
B. gcloud sql import sql
C. Filestore NFS transfer
D. Native MySQL replication to a Cloud SQL read replica
Answer: A, D
Explanation: DMS supports continuous replication and cutover; MySQL replication to a Cloud SQL replica also enables near-zero downtime. Import is a one-time load; Filestore is file storage.

Question: You want to offload read traffic from your primary Cloud SQL for PostgreSQL instance. Which feature do you configure?
Options:
A. Failover replica
B. Read replica
C. High availability
D. Point-in-time recovery
Answer: B
Explanation: Read replicas serve read-only queries. Failover replicas are for HA, not read scale; HA standbys cannot serve client connections; PITR restores data.

Question: (Select two) Which encryption options are supported for Cloud SQL data at rest?
Options:
A. Google-managed keys only
B. Customer-supplied encryption keys (CSEK)
C. Customer-managed encryption keys (CMEK) in Cloud KMS
D. No encryption at rest
Answer: A, C
Explanation: Cloud SQL supports Google-managed and customer-managed KMS keys (CMEK). It does not support CSEK or unencrypted data at rest.

Question: Scenario: You need to upgrade your Cloud SQL MySQL 5.7 instance to 8.0. What's the recommended approach?
Options:
A. In-place upgrade via gcloud console
B. Create a backup, create a new 8.0 instance, restore backup
C. Export data to CSV and import
D. Use Filestore snapshots
Answer: B
Explanation: Cloud SQL does not support in-place major version upgrades. You must restore a backup or import into a new instance with the target version.

Question: To change database flags (e.g., max_connections), you must:
Options:
A. Edit the instance’s flags in the console and restart the instance
B. Modify flags via SQL statements
C. Recreate the instance
D. Use gcloud compute instances set-metadata
Answer: A
Explanation: You can modify flags in the Cloud SQL instance settings; many require an instance restart. You cannot change flags via SQL.

Question: Which service provides built-in dashboards for Cloud SQL CPU, memory, and disk metrics?
Options:
A. Cloud Monitoring
B. Cloud Logging
C. Cloud Trace
D. Cloud Profiler
Answer: A
Explanation: Cloud Monitoring collects and displays performance metrics. Logging collects logs; Trace and Profiler are for latency/profiling.

Question: Scenario: Your Cloud Functions need to connect securely to Cloud SQL (PostgreSQL). Which library or component should you use?
Options:
A. Cloud SQL Auth proxy
B. Direct public IP + SSL certificates
C. Cloud Spanner client
D. VPC peering only
Answer: A
Explanation: The Cloud SQL Auth proxy handles IAM-based auth and encryption for serverless environments. Direct connections can work but require managing SSL and authorized networks; the Spanner client is wrong.

Question: (Select two) Which features ensure your Cloud SQL primary fails over with minimal data loss?
Options:
A. High availability (regional primary+failover setup)
B. Automated backups
C. Binary logging enabled
D. Deny public IP
Answer: A, C
Explanation: An HA setup with synchronous replication to a failover instance, plus binary logging (MySQL), minimizes data loss. Backups are for recovery, not immediate failover; public IP is unrelated.

Question: Which MySQL version is NOT available in Cloud SQL?
Options:
A. 5.6
B. 5.7
C. 8.0
D. 8.1
Answer: D
Explanation: Cloud SQL supports MySQL 5.6, 5.7, and 8.0, but not 8.1.

Question: Scenario: For compliance, you must use a customer-managed key for encryption. Which IAM role must you grant to the Cloud SQL service account on that KMS key?
Options:
A. roles/cloudkms.cryptoKeyEncrypterDecrypter
B. roles/cloudsql.admin
C. roles/cloudkms.viewer
D. roles/owner
Answer: A
Explanation: The cryptoKeyEncrypterDecrypter role allows Cloud SQL to encrypt/decrypt volumes with your CMEK. cloudsql.admin is for SQL management; viewer is read-only.

Question: You accidentally deleted a Cloud SQL instance without backups. Which recovery options remain?
Options:
A. None—data is unrecoverable
B. Point-in-time recovery
C. Recycle bin
D. Undelete API
Answer: A
Explanation: Without automated backups or replicas, deletion is permanent. There's no recycle bin or undelete.

Question: Which command exports a Cloud SQL database to a GCS bucket in SQL format?
Options:
A. gcloud sql export sql INSTANCE gs://BUCKET/file.sql.gz --database=DB
B. gcloud sql dump INSTANCE > file.sql
C. mysqldump
D. gsutil cp
Answer: A
Explanation: The gcloud sql export sql command exports directly to GCS. mysqldump works but leaves you to transfer the dump yourself; the other commands don't perform a SQL-format export.

Question: (Select two) To automate patching of minor versions for Cloud SQL, you configure:
Options:
A. Maintenance window
B. Automatic minor version upgrade
C. Binary logging
D. Geo-redundant backups
Answer: A, B
Explanation: You set a maintenance window and enable automatic minor version upgrades. Binary logging and backups are for data, not patching.

Question: Your SLAs require backups retained for at least 14 days. Which setting accomplishes this?
Options:
A. Backup retention period = 14 days
B. Disable automated backups
C. Automated deletion = 7 days
D. Point-in-time recovery = 7 days
Answer: A
Explanation: You configure the automated backup retention to 14 days. Disabling backups or setting fewer days fails the requirement.

Question: Scenario: The database storage grows unpredictably. Which setting prevents outages due to full disks?
Options:
A. Enable automatic storage increase
B. Set storage to maximum
C. Use local SSD
D. Monitor and manually resize
Answer: A
Explanation: Automatic storage increase handles growth seamlessly; manual resize risks hitting capacity.
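
Example (a minimal sketch; the instance name is a placeholder):

  # Let the instance grow its storage automatically as data grows
  gcloud sql instances patch my-instance --storage-auto-increase
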
Question: Which Google Cloud database would you choose for global, strongly consistent relational data at scale instead of Cloud SQL MySQL/PostgreSQL?
Options:
A. Cloud Spanner
B. Bigtable
C. Firestore
D. Memorystore
Answer: A
Explanation: Spanner is Google’s horizontally scalable, strongly consistent relational database. Bigtable is wide-column NoSQL; Firestore is document-based; Memorystore is in-memory.Question: What is the maximum number of concurrent connections supported by a db-n1-standard-4 instance (MySQL)?
Options:
A. 100
B. 200
C. Based on max_connections flag (default 151)
D. Unlimited
Answer: C
Explanation: MySQL limits connections per the max_connections flag (default 151). It’s not fixed by the instance tier.Question: To simplify connection management from GKE pods to Cloud SQL (PostgreSQL), you deploy:
Options:
A. Cloud SQL Auth proxy as a sidecar container
B. Direct public IP + secret in Pod
C. Redis cache
D. Service Mesh
Answer: A
Explanation: The Auth proxy in sidecar mode handles secure IAM-based connections and credential refresh. Public IP with secrets is less secure; Redis and Service Mesh are unrelated.
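
Example (a hedged sketch using the v2 Cloud SQL Auth Proxy binary; the instance connection name is a placeholder):

  # The proxy authenticates with IAM and forwards a local port to the instance
  ./cloud-sql-proxy --port 5432 my-project:us-central1:my-postgres
  # The app (or the other containers in the Pod) then connects to 127.0.0.1:5432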
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Section 5: Cloud Bigtable (25 Questions)
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Question: You need a NoSQL store for high‐throughput, low‐latency single‐row reads and writes at petabyte scale. Which service do you choose?
Options:
A. Cloud Datastore
B. Cloud Bigtable
C. Cloud Spanner
D. Cloud SQL
Answer: B
Explanation: Bigtable is designed for massive scale, low-latency single-row access. Datastore and SQL are for document/relational workloads; Spanner is relational and more costly per row.

Question: Which consistency model does Bigtable provide for single-row operations?
Options:
A. Eventual consistency
B. Strong consistency
C. Tunable consistency
D. Read‐your‐writes only
Answer: B
Explanation: Bigtable guarantees strongly consistent single-row reads and writes. Multi-row transactional consistency is not provided.

Question: Your workload includes both hot data (recent) and cold data (archival) in the same table. To reduce cost, you want cold tablets on HDD. How do you configure this?
Options:
A. Create a second “cold” cluster with HDD type and use multi‐cluster routing
B. Set column‐family GC policy
C. Use a single SSD cluster and manually tier data
D. Move cold data to Cloud Storage
Answer: A
Explanation: A replicated "cold" cluster with HDD optimizes cost for low-access data. GC policies don't alter storage media; moving data out is a different pattern.

Question: Scenario: You're getting "hotspotting" from sequential timestamps as row keys. How do you mitigate it?
Options:
A. Prefix keys with a hash or reversed timestamp (“salting”)
B. Switch to SQL
C. Add zonal indexing
D. Use a single‐node instance
Answer: A
Explanation: High-cardinality salting spreads writes across tablets, while sequential keys concentrate load on one tablet. SQL is irrelevant; a single-node instance reduces capacity.

Question: You need geographically redundant read availability with automatic failover. Which Bigtable instance type supports this?
Options:
A. Single‐cluster instance
B. Multi‐cluster routed instance
C. Zonal instance
D. Regional instance
Answer: B
Explanation: Multi-cluster routing serves reads from the closest healthy cluster and fails over automatically. Single-cluster zonal or regional instances have lower HA.

Question: Which client library allows Java HBase applications to talk to Cloud Bigtable with minimal changes?
Options:
A. Cloud Bigtable HBase client for Java
B. Cloud Spanner Java client
C. Google‐provided JDBC driver
D. Firestore Java SDK
Answer: A
Explanation: The Bigtable HBase client implements the HBase API so Java apps can migrate with minimal code changes. The Spanner client is for Spanner; JDBC and Firestore clients don't support Bigtable.

Question: (Select two) To conserve storage and automatically delete old data, which features do you configure?
Options:
A. Column‐family Garbage Collection rule with max age
B. TTL on table
C. Lifecycle expiration in Cloud Storage
D. Delete rows manually
Answer: A, D
Explanation: GC rules on column families automatically remove cells older than the max age. You can also delete rows programmatically. There is no table-level TTL; Cloud Storage lifecycle is unrelated.

Question: How do you import bulk CSV data into Bigtable with minimal coding?
Options:
A. Dataflow’s Cloud Bigtable I/O connector
B. BigQuery export
C. gcloud bigtable import csv
D. Filestore mount and rsync
Answer: A
Explanation: Dataflow templates provide CSV-to-Bigtable ingestion at scale. There is no built-in gcloud CSV import; BigQuery export is the reverse direction; Filestore is file storage.

Question: Scenario: You need to spin up a new Bigtable instance in us-west1 with 3 nodes and SSD storage. Which gcloud command?
Options:
A. gcloud bigtable instances create my-instance --cluster=my-cluster --cluster-zone=us-west1-b --cluster-num-nodes=3 --cluster-storage-type=SSD --instance-type=PRODUCTION
B. gcloud spanner instances create
C. gcloud sql instances create
D. gcloud bigtable tables create
Answer: A
Explanation: That command creates a production Bigtable instance with a 3-node SSD cluster. The others are for different services or objects.

Question: Which metric in Cloud Monitoring tracks Bigtable node CPU utilization?
Options:
A. bigtable.googleapis.com/cluster/cpu_load
B. bigtable.googleapis.com/cluster/node_count
C. bigtable.googleapis.com/table/bytes_read
D. bigtable.googleapis.com/table/latency
Answer: A
Explanation: cpu_load shows CPU usage across nodes. node_count is a capacity metric; bytes_read and latency measure different aspects.

Question: To isolate traffic, your Bigtable clusters reside in a VPC. Which network setting must be enabled?
Options:
A. Private Service Connect endpoint for Bigtable
B. Public IP on the cluster
C. Google APIs access via public DNS
D. NAT Gateway
Answer: A
Explanation: Private Service Connect (PSC) allows you to access Bigtable from your VPC without public IPs. Public IP is the opposite; DNS/NAT are not sufficient for private connectivity.

Question: Scenario: You must scale your Bigtable cluster from 3 to 10 nodes to handle increased load. How?
Options:
A. gcloud bigtable clusters update my-cluster --num-nodes=10
B. gcloud compute instances resize
C. autoscaling flag
D. replication factor
Answer: A
Explanation: You can manually update the node count with that command. Compute instance commands don't apply, and the replication factor is a separate concept.

Question: (Select two) Which IAM roles allow full administrative control over Bigtable instances and tables?
Options:
A. roles/bigtable.admin
B. roles/bigtable.dataAdmin
C. roles/bigtable.viewer
D. roles/owner
Answer: A, D
Explanation: bigtable.admin grants full Bigtable management; Owner also covers it. dataAdmin lets you manage data (tables and rows) but not instance config; viewer is read-only.

Question: Which API supports creating and managing backups of Bigtable tables?
Options:
A. bigtableadmin.googleapis.com
B. bigtable.googleapis.com
C. storage.googleapis.com
D. spanner.googleapis.com
Answer: A
Explanation: The Bigtable Admin API (bigtableadmin.googleapis.com) provides backup and instance/table administration. The data API is bigtable.googleapis.com; storage is Cloud Storage; spanner is unrelated.

Question: You need a cross-project, read-only copy of a Bigtable table for analytics. Which approach works?
Options:
A. Create a backup, grant IAM to another project, restore into a new instance
B. Filestore snapshot
C. Dataflow copy
D. Cloud SQL federated query
Answer: A
Explanation: Backups can be restored into another instance in a different project if IAM permits. Filestore is a file share; Dataflow could copy but backups are simpler; SQL federated queries do not support Bigtable.

Question: Which feature lets you route read and write requests to specific clusters in a multi-cluster instance?
Options:
A. App profile
B. GC rule
C. Column family
D. IAM binding
Answer: A
Explanation: App profiles define routing and consistency policies per application (e.g., multi-cluster or single-cluster routing). GC rules apply to data cleanup; column families define schema; IAM is for auth.

Question: (Select two) Which operations are billed per node-hour in Bigtable?
Options:
A. Cluster node count
B. Table backups
C. Storage bytes used
D. Network egress
Answer: A, D
Explanation: You pay for node-hours and for network egress. Backup storage and data storage are billed separately per GiB-month, not as node-hours.

Question: For time-series data, Bigtable row keys often include a reversed timestamp. Why reverse it?
Options:
A. To avoid write hotspotting on newest timestamp
B. To sort rows in ascending time order
C. To compress data better
D. To enable SQL queries
Answer: A
Explanation: Reversing or salting timestamps spreads writes across tablets. It does reverse the sort order, but the main goal is to avoid hotspots.

Question: Scenario: You must enforce that only clients from a certain subnet reach Bigtable via PSC. How do you restrict access?
Options:
A. VPC firewall rule to allow only that subnet to PSC endpoint
B. IAM allow that subnet
C. CORS policy
D. App profile deny
Answer: A
Explanation: Firewall rules control which subnets can connect to the PSC endpoint. IAM is identity-based; CORS is for HTTP; app profiles don't enforce network policy.

Question: You want to monitor latency percentiles (p50, p95) for reads. Which Monitoring view?
Options:
A. Read latency distribution chart in Cloud Monitoring
B. Dashboard in Cloud Storage
C. SQL slow query log
D. Trace
Answer: A
Explanation: Cloud Monitoring provides latency distribution charts for Bigtable reads. Storage, SQL, and tracing tools are unrelated.

Question: (Select two) To export a Bigtable table to Cloud Storage for analytics, you can use:
Options:
A. Dataflow BigtableIO → write to GCS
B. gcloud bigtable export
C. HBase snapshot and distcp
D. Filestore share and gsutil
Answer: A, C
Explanation: Dataflow connectors export to GCS. HBase snapshots can be exported and copied to GCS with distcp. There's no gcloud export command; Filestore is a file share.

Question: Your global Bigtable multi-cluster instance has clusters in us-east1 and europe-west1. A regional disaster knocks out europe-west1. What happens to reads and writes?
Options:
A. Reads and writes continue via us‐east1 cluster (if using multi‐cluster routing)
B. All traffic fails
C. Only writes fail
D. Only reads fail
Answer: A
Explanation: With multi-cluster routing, reads and writes continue against the remaining healthy clusters. Replication between clusters is asynchronous, so writes not yet replicated out of the failed region may be temporarily unavailable.

Question: Which tool helps simulate and validate row key design patterns before ingestion?
Options:
A. Bigtable workload simulator (cbt tool)
B. Dataflow templates
C. Cloud Profiler
D. gcloud compute emulator
Answer: A
Explanation: The cbt command-line tool and the Bigtable emulator can be used to simulate workloads and test key designs locally. Dataflow is for pipelines; Profiler is for code profiling.

Question: Scenario: You must restore a table to a previous state as of 1 hour ago. Which approach?
Options:
A. Use a backup taken at that time and restore to a new instance
B. Perform a point-in-time recovery directly
C. Use Partitioned DML DELETE
D. Roll back transactions
Answer: A
Explanation: Bigtable supports table backups. You must restore a backup to recover historical data; there's no native point-in-time rewind.

Question: (Select two) Which maintenance characteristics apply to Bigtable clusters?
Options:
A. Automatic node software updates with rolling restart
B. Maintenance windows you configure
C. Version upgrades done manually per node
D. Zero downtime SLA for minor updates
Answer: A, D
Explanation: Bigtable handles rolling updates automatically with minimal downtime, and GCP provides an SLA covering minor maintenance. You cannot configure windows or manually upgrade nodes.
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Section 6: Cloud Datastore (25 Questions)
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Question: You need ACID transactions with strong consistency across multiple entities. What must you do?
Options:
A. Put all entities in the same entity group (common ancestor)
B. Use cross-group (XG) transactions
C. Disable eventual indexing
D. Use Datastore Auto ID keys
Answer: A
Explanation: Datastore enforces ACID and strong consistency only within a single entity group (shared ancestor path). Cross-group transactions exist but don't guarantee cross-group strong consistency by default and have limits.

Question: You have a Kind "Order" with properties "status" (string) and "created" (timestamp). You need to query orders where status="shipped" sorted by created desc. Which index is required?
Options:
A. Composite index on status ascending, created descending
B. Single-property index on status
C. Composite index on created descending only
D. No index (Datastore auto-indexes every property)
Answer: A
Explanation: Queries with a filter on one property and a sort on another require a composite index. Single-property indexes cannot support order-by on a different property.

Question: (Select two) You need to prevent entity deletion or modification until after a certain date. Which features help?
Options:
A. Use a Datastore “lock” flag property
B. Application-level checks against a TTL property
C. Retention policy on datastore entities
D. Datastore Admin backup only
Answer: A, B
Explanation: Datastore does not support native retention or holds. You implement immutability via your schema (e.g., a lock flag) or logic checking a timestamp property.

Question: You must paginate through 10,000 entities efficiently. What is the recommended approach?
Options:
A. Use query cursors
B. Use offset/limit repeatedly
C. Retrieve all and slice in memory
D. Create multiple composite indexes
Answer: A
Explanation: Cursors bookmark a query position server-side for efficient pagination. Offset/limit scans from the start each time and is inefficient at large offsets.

Question: Scenario: Two clients write to the same entity concurrently. How do you avoid lost updates?
Options:
A. Use a transaction with entity group
B. Use upsert operations without a transaction
C. Use client-side retry only
D. Use autogenerated IDs
Answer: A
Explanation: Transactions lock the entity group and apply serial updates to prevent concurrent write conflicts. Upsert alone may overwrite changes.

Question: Which consistency model applies to ancestor queries in Datastore?
Options:
A. Strong consistency
B. Eventual consistency
C. Tunable consistency
D. Session consistency
Answer: A
Explanation: Ancestor (entity group) queries are strongly consistent. Queries without an ancestor are only eventually consistent.

Question: (Select two) Which operations count as Datastore writes (and incur write costs)?
Options:
A. New entity insertion
B. Entity deletion
C. Query execution
D. Ancestor query
Answer: A, B
Explanation: Inserts, updates, and deletes count as writes. Queries (ancestor or non-ancestor) count as reads, not writes.

Question: You need to import JSON data into Datastore in bulk. Which tool should you use?
Options:
A. Datastore Admin import/export to GCS
B. gcloud datastore import
C. gsutil cp
D. Dataflow JSON connector
Answer: A
Explanation: Datastore Admin's managed export/import writes to and reads from GCS. There's no "gcloud datastore import" CLI; a Dataflow pipeline can be built, but Datastore Admin is simplest.

Question: (Select two) Scenario: You require point-in-time recovery for your Datastore data. Which solutions are available?
Options:
A. Scheduled exports via Datastore Admin
B. Native PITR feature
C. Subscribe to Change Streams
D. Enable Datastore backups in console
Answer: A, D
Explanation: You schedule exports (backups) via Datastore Admin and configure retention in the console. There's no built-in continuous PITR or Change Streams in Datastore.

Question: Which IAM role allows an app to read and write entities in a Datastore project?
Options:
A. roles/datastore.user
B. roles/datastore.viewer
C. roles/viewer
D. roles/editor
Answer: A
Explanation: datastore.user grants read/write access to entities. datastore.viewer allows only reads; editor is too broad; viewer is read-only project metadata.

Question: Scenario: Your non-ancestor queries are returning stale data. What can you do to get strongly consistent reads?
Options:
A. Add an ancestor filter (use entity group)
B. Increase index build time
C. Use high-replication datastore setting
D. Switch to Firestore Native mode
Answer: A
Explanation: Only ancestor queries against a single entity group are strongly consistent. Non-ancestor queries are always eventually consistent in Datastore mode.

Question: (Select two) Which properties are NOT automatically indexed in Datastore?
Options:
A. Array properties (multi-valued)
B. JSON or unindexed blob fields
C. Excluded properties marked “noindex”
D. Key name and ID
Answer: B, C
Explanation: Properties explicitly marked noindex and unindexed blob/string fields are not indexed. Array properties are automatically turned into multiple index entries; key names/IDs are always indexed.

Question: You need to perform a transactional read-modify-write on two unrelated entity groups. Which API do you use?
Options:
A. XG (cross-group) transactions
B. Standard transactions
C. Deferred writes
D. Multi-region replicator
Answer: A
Explanation: Cross-group (XG) transactions allow transactional operations spanning up to 25 entity groups. Standard transactions are limited to one entity group.

Question: Scenario: You want to simulate Datastore locally for development. Which do you use?
Options:
A. Cloud Datastore emulator
B. Firestore in Native mode
C. Local SQLite
D. Cloud SQL emulator
Answer: A
Explanation: The Datastore emulator mimics the production API locally. Firestore Native is a different mode; SQLite/SQL emulators are unrelated.

Question: Which file do you use to configure composite indexes for Datastore in your app's code repo?
Options:
A. index.yaml
B. app.yaml
C. dispatch.yaml
D. firestore.indexes.json
Answer: A
Explanation: index.yaml defines Datastore composite indexes. app.yaml configures App Engine; dispatch.yaml is for routing; firestore.indexes.json is for Firestore Native.

Question: (Select two) You need to optimize cost by reducing write amplification. Which data-model patterns help?
Options:
A. Denormalize data into single entities
B. Use large entity groups for all entities
C. Batch writes in a transaction
D. Split hot entities across multiple entity groups
Answer: A, D
Explanation: Denormalization reduces cross-entity writes; splitting hot entities avoids write contention. Large entity groups increase transactional cost; batching helps throughput but not write count.

Question: Scenario: You require queryable geospatial data. Which GCP NoSQL service supports built-in geospatial queries?
Options:
A. Firestore Native mode
B. Datastore mode
C. Bigtable
D. Cloud SQL
Answer: A
Explanation: Firestore (Native) supports GeoPoint values and range queries; Datastore mode has no native geospatial support. Bigtable and SQL are not optimized for geodata queries without extensions.

Question: To delete millions of entities of a Kind efficiently, which approach should you use?
Options:
A. Use Datastore Admin bulk delete by Kind
B. Run a query and delete each entity sequentially
C. Drop the entire project
D. Use a batch export + empty import
Answer: A
Explanation: Datastore Admin offers bulk deletion by Kind. Iterative deletes are slow and cost reads/writes. Project deletion is too broad; exports/imports don't delete.

Question: Which consistency guarantee does Datastore Admin export provide when exporting large datasets?
Options:
A. Strong consistency at export start time
B. Eventual consistency across partitions
C. No guarantee
D. Per-namespace consistency only
Answer: A
Explanation: Admin exports create a consistent snapshot at the time the export begins. Any ongoing writes after that aren’t included.Question: Scenario: You want to migrate from Datastore mode to Firestore Native mode. What tool supports this?
Options:
A. Managed import/export via Admin console
B. gcloud datastore migrate-firestore
C. Cloud Dataflow ETL
D. No migration path
Answer: A
Explanation: The Firestore console and Admin API support one-click migration from Datastore mode to Native mode. gcloud CLI has no direct migrate command; Dataflow is manual ETL.Question: (Select two) Which operations incur Datastore read costs?
Options:
A. Entity lookup by key
B. Ancestor query
C. Non-ancestor query
D. Entity update
Answer: A, C
Explanation: Key lookups and all queries (ancestor or not) count as reads, so strictly speaking options A, B, and C all incur read costs; A and C are the intended pair here. Updates are writes.Question: You need to restrict which service accounts can run Datastore queries in your GKE workload. Which IAM role grants query permission?
Options:
A. roles/datastore.viewer
B. roles/datastore.user
C. roles/run.invoker
D. roles/viewer
Answer: A
Explanation: datastore.viewer grants read-only access, including entity lookups and query execution, making it the least-privilege choice here. datastore.user also includes write permissions.Question: Scenario: You must configure an App Engine app to connect to Datastore in a different project. Which gcloud flag helps?
Options:
A. --project
B. --namespace
C. --region
D. --owner
Answer: A
Explanation: You specify the target project with --project in gcloud commands or client library initialization. Namespace is for logical partitioning.
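Example (a minimal Python sketch; "other-project" is a hypothetical project ID, and the calling identity still needs a Datastore IAM role on that project):

    from google.cloud import datastore

    # Equivalent to passing --project=other-project to gcloud commands.
    client = datastore.Client(project="other-project")

Question: Which Datastore mode feature is deprecated in favor of Firestore in Native mode?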
Options:
A. Eventually consistent ancestor-less queries
B. Auto-scaling
C. Realtime listeners
D. Composite indexes
Answer: A
Explanation: Native mode offers strong consistency for all queries, so Datastore mode’s eventually consistent non-ancestor queries are the feature it supersedes. (Realtime listeners were never part of Datastore mode, so they are a missing feature rather than a deprecated one.)Question: To view Datastore index build progress, which console do you use?
Options:
A. Datastore Indexes page in GCP Console
B. Cloud Monitoring Logs explorer
C. BigQuery UI
D. Deployment Manager
Answer: A
Explanation: The Datastore Indexes page shows composite index build status. Logs explorer shows raw logs; BigQuery is for analytics.
–––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Section 8: Firestore (Native Mode) – 25 Questions
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Question: You need a mobile backend that syncs user data in real time, supports offline caching, and scales automatically. Which service fits?
Options:
A. Firestore (Native)
B. Firestore in Datastore mode
C. Cloud SQL
D. Bigtable
Answer: A
Explanation: Firestore Native provides real-time listeners, offline support for mobile/web SDKs, and auto-scales globally. Datastore mode has no real-time or offline SDK; SQL and Bigtable lack real-time mobile sync.Question: Which consistency model does Firestore Native guarantee for document reads and queries?
Options:
A. Strong consistency for all reads and queries
B. Eventual consistency for queries without index
C. Tunable consistency per request
D. Session consistency only
Answer: A
Explanation: Firestore Native provides strong consistency for both document and query reads globally. Datastore mode guarantees strong consistency only for ancestor queries.Question: Scenario: You must query “orders” where amount > 100 and status = “shipped”. Which composite index is required?
Options:
A. (status ASC, amount ASC)
B. (amount ASC, status ASC)
C. single-property index on amount
D. no index—Firestore auto-indexes single properties only
Answer: A
Explanation: Composite index field order follows the query shape: equality-filtered fields come first, then the range-filtered field. For status = “shipped” AND amount > 100, the required index is (status ASC, amount ASC). Merged single-property indexes cannot serve a query that combines an equality filter with a range filter on a different field, so a composite index is mandatory here.
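Example (a minimal Python sketch of the query from the question; without the (status, amount) composite index the call fails with an error that links to the index-creation page):

    from google.cloud import firestore

    db = firestore.Client()

    query = (
        db.collection("orders")
        .where("status", "==", "shipped")   # equality filter first
        .where("amount", ">", 100)          # single range filter last
        # newer client versions prefer where(filter=FieldFilter(...))
    )
    for doc in query.stream():
        print(doc.id, doc.to_dict())

Question: (Select two) Which of the following are true about Firestore Native documents?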
Options:
A. They are limited to 1 MiB each
B. They can contain nested maps and arrays
C. They support client-side transactions
D. They support full SQL JOINs
Answer: B, C
Explanation: Firestore Native documents can contain nested maps and arrays and participate in client-side transactions. Firestore does not support server-side SQL JOINs. (Documents are capped at 1 MiB in both modes, so option A is technically true as well, but B and C are the intended pair.)Question: You need to atomically increment a numeric counter in Firestore without race conditions. Which mechanism do you use?
Options:
A. MapField.update with FieldValue.increment()
B. Read-modify-write without transaction
C. Batch write only
D. Disable offline persistence
Answer: A
Explanation: FieldValue.increment() performs atomic server-side increments. A read-modify-write outside a transaction risks lost updates, and a batch groups writes together but does not itself compute an increment.
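Example (a minimal Python sketch; the "stats/page-views" document path is hypothetical — the mobile/web SDKs expose the same operation as FieldValue.increment()):

    from google.cloud import firestore

    db = firestore.Client()

    # The delta is applied server-side, so concurrent clients cannot
    # overwrite each other's updates.
    db.collection("stats").document("page-views").update(
        {"count": firestore.Increment(1)}
    )

Question: Scenario: Your mobile app goes offline frequently. You want writes queued and applied when back online, with local cache. Which Firestore SDK feature handles this?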
Options:
A. Offline persistence
B. Realtime Database sync
C. Cloud Functions
D. Emulator
Answer: A
Explanation: Firestore Native mobile/web SDKs support offline persistence, queueing writes locally and synchronizing upon reconnection. The Realtime Database is a separate service.Question: Which query limitation applies in Firestore Native?
Options:
A. At most one inequality filter per query
B. No ordering by multiple fields
C. No composite indexes
D. Queries automatically paginate only 10 items
Answer: A
Explanation: Firestore Native allows at most one field with range (inequality) filter per query, and that field must be first in an orderBy if specified. You can order by multiple fields and create composite indexes.Question: (Select two) You want to secure Firestore data access by role. Which do you configure?
Options:
A. Firestore Security Rules
B. IAM permissions only
C. VPC Service Controls
D. Database encryption key
Answer: A, B
Explanation: Firestore Security Rules enforce per-document and per-request access control; IAM roles control who can administer the database but not per-document logic. VPC SC handles network egress; encryption keys protect at rest.Question: Scenario: You must migrate an existing Realtime Database app to Firestore Native. Which tool aids this migration?
Options:
A. Database Migration service (beta)
B. Firestore import/export
C. Manual client-side copy
D. BigQuery transfer
Answer: A
Explanation: Realtime Database to Firestore migration tool handles live data sync. Export/import isn’t supported; manual copy is error-prone; BigQuery isn’t a live migration path.Question: To implement pagination in Firestore Native queries, you should use:
Options:
A. Query cursors (startAfter, startAt)
B. offset + limit
C. limit only
D. composite indexes
Answer: A
Explanation: Cursors are efficient for pagination, avoiding the cost of offset. Offset is supported but still bills reads for all preceding documents.
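Example (a minimal Python sketch of cursor pagination; the "cities" collection and field name are hypothetical):

    from google.cloud import firestore

    db = firestore.Client()

    page = list(
        db.collection("cities").order_by("population").limit(25).stream()
    )

    # Resume after the last document of the previous page; offset would
    # bill a read for every skipped document.
    if page:
        next_page = list(
            db.collection("cities")
            .order_by("population")
            .start_after(page[-1])
            .limit(25)
            .stream()
        )

Question: (Select three) Which of these Firestore features are billing metrics?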
Options:
A. Document reads, writes, deletes
B. Data stored (GiB-months)
C. Network egress
D. Transactions per second
Answer: A, B, C
Explanation: Firestore bills for document operations, storage used, and network egress. Transactions are counted as writes + reads.Question: Scenario: You need to export your Firestore collection daily for analytics in BigQuery. Which service helps automate this?
Options:
A. Scheduled export via Cloud Scheduler + Cloud Functions invoking the export API
B. Firestore built-in scheduled export
C. Pub/Sub streaming export
D. Dataflow with direct connector
Answer: A, D
Explanation: You can script exports with Functions/Scheduler or use Dataflow templates to export to BigQuery. There’s no built-in cron in Firestore.Question: Your app needs hierarchical data modeling (e.g., users/{uid}/orders/{oid}). Which Firestore structure supports this?
Options:
A. Collections and subcollections
B. Arrays inside a document
C. Nested maps only
D. Single flat Kind
Answer: A
Explanation: Subcollections enable hierarchical data. Arrays and maps are for fields, not collections.Question: You need a multi-region location for maximum availability. Which Firestore location do you pick?
Options:
A. nam5 (us multi-region)
B. us-central1 (regional)
C. europe-west2 (regional)
D. asia-east1 (regional)
Answer: A
Explanation: Multi-region locations (nam5, eur3, asia1) replicate across zones and regions for high availability. Regional locations are single region.Question: (Select two) Which operations incur Firestore function invocation cost?
Options:
A. OnCreate trigger
B. OnUpdate trigger
C. Document read in client SDK
D. Document write in client SDK
Answer: A, B
Explanation: Cloud Functions triggers on Firestore events are billed per invocation. Client SDK reads/writes are Firestore operations, not functions.Question: You need to enforce that when a “user” document is deleted, all subcollection “orders” are also removed. Which approach is best?
Options:
A. Cloud Function onDelete trigger to cascade delete
B. Firestore Security Rule
C. Composite delete in SDK
D. FieldValue.delete()
Answer: A
Explanation: Firestore doesn’t cascade deletes natively; you implement them with a server-side function triggered on delete. Security Rules don’t delete data, and a client SDK would have to loop over subcollection documents manually, whereas the trigger runs automatically.Question: (Select two) Scenario: You want offline support on web, but your index count is very large. What risks arise?
Options:
A. Slow initial load due to downloading indexes
B. Exceeding browser storage quota
C. Realtime listeners fail
D. No offline on mobile
Answer: A, B
Explanation: Web SDK’s offline cache stores indexes/documents; large index sets risk slow cache initialization and browser storage limits.Question: Which API feature ensures batched writes across multiple documents atomically?
Options:
A. Write batch
B. Transaction
C. BulkWriter (batch, no atomicity)
D. Composite index
Answer: A
Explanation: A write batch commits up to 500 operations atomically: either every write succeeds or none is applied. Transactions are also atomic and additionally support reads inside the commit, while BulkWriter maximizes throughput without atomicity.
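Example (a minimal Python sketch; document paths are hypothetical):

    from google.cloud import firestore

    db = firestore.Client()

    batch = db.batch()
    batch.set(db.collection("users").document("alice"), {"active": True})
    batch.update(db.collection("users").document("bob"), {"active": False})
    batch.delete(db.collection("users").document("carol"))

    # commit() applies all queued operations atomically: either every
    # write succeeds or none is applied.
    batch.commit()

Question: (Select three) Which SDKs support Firestore offline persistence natively?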
Options:
A. Android
B. iOS
C. Web
D. Admin-Java
Answer: A, B, C
Explanation: Android, iOS, and Web SDKs support offline persistence. Admin SDKs (Java, Node) do not.Question: (Select two) To limit reads to documents where “ownerId” equals the authenticated user, you implement:
Options:
A. Security Rules with request.auth.uid == resource.data.ownerId
B. Query filter only
C. IAM allow only user’s service account
D. Network allow rule
Answer: A, B
Explanation: Security Rules enforce server-side access control; you should also apply a matching client-side query filter, since queries that could return unauthorized documents are rejected outright. IAM is too coarse-grained for per-document logic; network rules are irrelevant here.
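Example (a sketch of a Security Rule enforcing the ownerId check; the "docs" collection name is hypothetical):

    rules_version = '2';
    service cloud.firestore {
      match /databases/{database}/documents {
        match /docs/{docId} {
          allow read: if request.auth != null
                      && request.auth.uid == resource.data.ownerId;
        }
      }
    }

Question: You need to monitor Firestore read latency over time. Which Monitoring metric do you chart?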
Options:
A. firestore.googleapis.com/document/read_latency
B. firestore.googleapis.com/storage/data_usage
C. datastore.googleapis.com/read_ops
D. firestore.googleapis.com/index_build
Answer: A
Explanation: The document read_latency metric shows latency distribution. Data_usage is storage; datastore API is for Datastore mode.Question: Scenario: You want to back up only documents modified in the last hour. Which approach works?
Options:
A. Use a timestamp field and export a filtered subset via Dataflow
B. Use Firestore export with time filter
C. Delete old docs first
D. Use Firestore backup service
Answer: A
Explanation: Firestore export API doesn’t support filters; you can use Dataflow to read a filtered query and write to GCS/BigQuery. There’s no native backup service on a schedule.Question: Which write format allows up to 500 operations per commit?
Options:
A. Batched writes
B. Transaction writes
C. BulkWriter
D. Single document write
Answer: A
Explanation: Batched writes allow up to 500 operations per batch. Transactions are limited to 500 reads/writes. BulkWriter is a new high-throughput write API but not atomic.Question: (Select two) Which Firestore Native features are NOT available in Datastore mode?
Options:
A. Real-time listeners
B. Offline persistence
C. Automatic multi-region replication
D. Entities and Kinds
Answer: A, B
Explanation: Datastore mode lacks real-time listeners and offline persistence. Multi-region replication is available in both modes (via multi-region locations), as are entities/Kinds.Question: You need to export your Firestore data into BigQuery in a partitioned table daily. Which Google service simplifies this?
Options:
A. Managed export via Dataflow template “Firestore to BigQuery”
B. gcloud firestore export
C. Firestore console export
D. Cloud Composer DAGs
Answer: A
Explanation: Dataflow provides a template to continuously or periodically export Firestore collections to partitioned BigQuery tables. gcloud firestore export writes to GCS, not directly to BigQuery; Console export is one-time and manual; Composer is a generic scheduler.
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Section 9: Memorystore (25 Questions)
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
Question: Which managed service offers Redis-compatible in-memory caching on GCP?
Options:
A. Memorystore for Redis
B. Memorystore for Memcached
C. Cloud SQL
D. Cloud Bigtable
Answer: A
Explanation: Memorystore for Redis is the managed Redis service. Memcached is a separate tier; SQL and Bigtable are not in-memory caches.Question: Which tier provides high availability with automatic failover?
Options:
A. Basic
B. Standard HA
C. Enterprise
D. Developer
Answer: B
Explanation: The Standard (HA) tier creates two Redis nodes in different zones and fails over automatically. Basic has a single node.Question: Scenario: You need up to 50,000 ops/sec with sub-millisecond latency. Which Memorystore setting helps scale?
Options:
A. Increase node count (shards) via Redis Cluster mode
B. Increase instance size only
C. Enable persistence
D. Use Basic tier
Answer: A
Explanation: Redis Cluster mode sharding (with multiple nodes) allows you to scale throughput. Instance size (memory) is orthogonal; persistence and basic tier don’t add ops capacity.Question: In Memorystore for Redis, which auth mechanism secures access?
Options:
A. Redis AUTH password
B. IAM roles on Redis commands
C. VPC firewall only
D. SSL/TLS
Answer: A
Explanation: Memorystore for Redis secures access with Redis AUTH (an instance-level auth string). IAM roles govern instance administration, not individual Redis commands; traffic stays on private IPs inside your VPC, and in-transit TLS encryption is an optional feature, not an access-control mechanism.
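Example (a minimal Python sketch using the redis-py client; the private IP and AUTH string are placeholders — retrieve the real values with "gcloud redis instances describe" and "gcloud redis instances get-auth-string"):

    import redis

    # Connect over the instance's private VPC IP with the AUTH string.
    r = redis.Redis(host="10.0.0.3", port=6379, password="AUTH-STRING")
    r.set("greeting", "hello")
    print(r.get("greeting"))

Question: (Select two) Which are true for Memorystore connectivity?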
Options:
A. Private IP within your VPC only
B. Public IP optional
C. Requires VPC peering
D. Service Directory integration
Answer: A, D
Explanation: Memorystore nodes have private IP addresses in your VPC; no public IP. VPC networking is native (no peering); you can optionally register instances in Service Directory.Question: Scenario: You need Redis persistence to survive restarts. Which tier supports RDB snapshots?
Options:
A. Basic
B. Standard HA
C. Both Basic and Standard HA
D. Neither
Answer: C
Explanation: Both tiers support RDB snapshot-based persistence. Persistence is optional on both.Question: (Select two) Which GCP quotas must you watch to avoid hitting limits when creating large Memorystore clusters?
Options:
A. Redis instance count per region
B. Redis memory per project
C. VPC subnet IPs
D. Firewall rules
Answer: A, C
Explanation: There are quotas on Redis instance count and total memory. You also need enough IPs in your subnet. Firewall rules are separate.Question: You want to monitor cache hit rate and latency. Which Monitoring metrics do you chart?
Options:
A. redis.googleapis.com/hit_count & redis.googleapis.com/latency_ms
B. compute.googleapis.com/cpu/utilization
C. storage.googleapis.com/cache_hits
D. bigtable.googleapis.com/read_latency
Answer: A
Explanation: Memorystore exports Redis-specific metrics (hit_count, miss_count, latency_ms) to Cloud Monitoring. Other services have different namespaces.Question: To connect GKE pods to Memorystore, what must be configured?
Options:
A. VPC-native cluster + proper network policies
B. Cloud NAT
C. Private IP only on node
D. Redis proxy sidecar automatically
Answer: A
Explanation: GKE must use VPC-native (alias IP) and have network/firewall rules to allow pods to reach the Redis private IP. No NAT or proxy sidecar required.Question: (Select two) For disaster recovery you want daily backups of Redis. Which approaches work?
Options:
A. Export RDB snapshot to GCS via gcloud redis export
B. Enable automated backups in Memorystore
C. Use Filestore mount
D. Redirect write traffic to a secondary instance
Answer: A, B
Explanation: gcloud redis export and the automated backup feature in Memorystore both export snapshots to GCS. Filestore is separate storage; redirecting traffic is a failover strategy, not a backup.Question: Scenario: Your Redis instance is filling up memory and evicting keys. Which setting helps avoid eviction?
Options:
A. Increase maxmemory policy to noeviction
B. Increase instance size (memory capacity)
C. Switch to Basic tier
D. Change persistence to AOF
Answer: B
Explanation: Adding more memory capacity is the correct remedy. noeviction policy will cause writes to fail when memory is full. Tier change doesn’t increase memory; persistence doesn’t affect memory.Question: Which Redis version family does Memorystore currently support?
Options:
A. 3.2, 4.0, 5.0
B. 6.x only
C. 2.8 only
D. All Redis versions
Answer: A
Explanation: Memorystore supports Redis 3.2, 4.0, and 5.0 (depending on region). It does not support all versions arbitrarily.Question: (Select two) Which commands are disabled on Memorystore for security and stability?
Options:
A. CONFIG
B. KEYS
C. ECHO
D. PING
Answer: A, B
Explanation: CONFIG and KEYS are disabled to prevent users from altering server configuration or scanning the entire keyspace. ECHO and PING are allowed.Question: You need sub-millisecond read latency under heavy load. Which Memorystore design pattern helps?
Options:
A. Use a Standard HA cluster with 3 nodes behind a proxy which shards data
B. Use a Basic tier instance large enough for hot working set
C. Add a caching layer in front of Redis
D. Use Cloud CDN
Answer: B
Explanation: A single large Basic-tier Redis instance holds the hot working set in memory and serves sub-millisecond reads. HA adds failover, not performance; a second cache in front of Redis is redundant; Cloud CDN is for HTTP content.Question: Scenario: You want to migrate an existing on-prem Redis to Memorystore with minimal downtime. Which approach works?
Options:
A. Use redis-cli --rdb to generate dump file, upload to GCS, import via gcloud redis import
B. Use gcloud redis migrate command
C. Export to Filestore and import
D. Use Database Migration Service
Answer: A
Explanation: You produce an RDB snapshot, upload to GCS, and import into Memorystore. There is no dedicated migrate command or DMS support.Question: Which network interface do you use in the Redis client to connect to Memorystore?
Options:
A. Private IP address of the instance
B. Public IP with SSL
C. Cloud NAT external IP
D. Hostname “redis.googleapis.com”
Answer: A
Explanation: Memorystore exposes a private IP in your VPC; clients connect directly over that. There is no public IP or SSL endpoint.Question: (Select two) Which backup retention policies can you configure on Memorystore?
Options:
A. Maximum number of backups to retain
B. Backup window start time
C. Days to retain backups
D. Hourly snapshot frequency
Answer: A, C
Explanation: You can configure retention count and retention days. Backup windows exist, but frequency is daily only; no hourly snapshots.Question: Your application uses Redis Streams heavily. Is this supported on Memorystore?
Options:
A. Yes, all Redis data types are supported
B. No, Streams are disabled
C. Only in Redis 5.0 on Basic tier
D. Streams are read-only
Answer: A
Explanation: Memorystore supports all core Redis data types (Strings, Hashes, Lists, Sets, Sorted Sets, HyperLogLog, Streams) in the supported versions.Question: Scenario: You see “OOM command not allowed” errors in your Redis logs. What does this indicate?
Options:
A. Redis ran out of memory and eviction policy is noeviction
B. Redis process killed
C. Redis persistence failure
D. Client TTL too large
Answer: A
Explanation: The error means the instance is at max memory and the configured eviction policy cannot free space (noeviction, or a volatile-* policy with no keys eligible for eviction).Question: Which gcloud command lists all Memorystore Redis instances in a project?
Options:
A. gcloud redis instances list
B. gcloud redis list
C. gcloud memorystore list
D. gcloud compute instances list
Answer: A
Explanation: The correct command is gcloud redis instances list. The others are invalid or for compute.Question: (Select two) Which metrics should you monitor to detect slowdowns in a Redis instance?
Options:
A. cpu/utilization
B. memory/used_bytes
C. commands/latency_percentile_99
D. disk/write_ops
Answer: A, C
Explanation: CPU usage and 99th-percentile command latency are key indicators of performance issues. Memory usage affects evictions; disk ops apply only if persistence is configured and may not reflect read/write latency.Question: Your cluster is in Basic tier and you require HA. What must you do?
Options:
A. Migrate to Standard HA tier
B. Enable Redis Cluster mode
C. Enable persistence
D. Use VPC peering
Answer: A
Explanation: Only the Standard HA tier provides high availability with automatic failover. Cluster mode is for sharding, not high availability.Question: Scenario: You want to rotate the Redis AUTH password every 30 days without downtime. What’s the minimal-impact approach?
Options:
A. Use AUTH while configuring both old and new password in your client, then remove old
B. Delete the instance and recreate
C. Use VPC firewall rule to block old password
D. Memorystore doesn’t allow rotation
Answer: A
Explanation: Memorystore supports setting a new AUTH string; clients can be configured to try both credentials during the rotation window, after which the old one is removed. Recreating the instance causes downtime.Question: Which data-persistence mode writes RDB snapshots asynchronously?
Options:
A. RDB only
B. AOF only
C. RDB + AOF
D. Persistence is disabled by default
Answer: A
Explanation: Memorystore supports RDB snapshot persistence (in-memory with periodic snapshots). AOF is not supported; combined mode is not available. Persistence must be enabled explicitly.Question: (Select two) Which Redis commands will fail if persistence is disabled on Memorystore?
Options:
A. BGSAVE
B. SAVE
C. CLIENT KILL
D. PING
Answer: A, B
Explanation: BGSAVE and SAVE are snapshot persistence commands and will fail if persistence is disabled. CLIENT KILL and PING are data-plane and will work.