Amazon AWS Certified Advanced Networking — Specialty ANS-C01 Exam Dumps and Practice Test Questions Set6 Q76-90

Visit here for our full Amazon AWS Certified Advanced Networking — Specialty ANS-C01 exam dumps and practice test questions.

Question 76: 

A company needs to implement a solution where EC2 instances in a private subnet can initiate outbound HTTPS connections to external APIs but should not be able to receive any inbound connections from the internet. Which combination of resources is required?

A) Internet Gateway and Public IP addresses on instances

B) NAT Gateway in a public subnet and Internet Gateway

C) Egress-Only Internet Gateway

D) AWS Direct Connect

Answer: B

Explanation:

A NAT Gateway deployed in a public subnet combined with an Internet Gateway provides the correct solution for enabling EC2 instances in private subnets to initiate outbound HTTPS connections while preventing inbound connections from the internet. This architecture implements asymmetric connectivity where outbound traffic initiated from within the VPC is permitted, but unsolicited inbound traffic from the internet is blocked by design. NAT Gateway performs network address translation to enable instances with private IP addresses to communicate with internet destinations while maintaining security by preventing direct inbound connectivity.

The architectural components work together through a coordinated configuration. The NAT Gateway is deployed in a public subnet, which is defined as a subnet with a route table containing a default route to an Internet Gateway. The Internet Gateway itself is attached to the VPC and provides the actual ingress and egress path between the VPC and the public internet. The NAT Gateway is assigned an Elastic IP address, which provides its public identity on the internet. EC2 instances reside in private subnets that have route tables configured with a default route pointing to the NAT Gateway. When an instance needs to make an outbound HTTPS connection to an external API, the traffic flows from the instance to the NAT Gateway, which translates the source private IP address to its own Elastic IP address and forwards the traffic through the Internet Gateway to reach the internet destination.
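
The following is a minimal boto3 sketch of that routing configuration, assuming the VPC, subnets, and route tables already exist; every resource ID below is a hypothetical placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

PUBLIC_SUBNET_ID = "subnet-0public"   # hypothetical public subnet
PUBLIC_RT_ID = "rtb-0public"          # hypothetical public route table
PRIVATE_RT_ID = "rtb-0private"        # hypothetical private route table
IGW_ID = "igw-0example"               # hypothetical, already attached to the VPC

# Public subnet: default route to the Internet Gateway.
ec2.create_route(
    RouteTableId=PUBLIC_RT_ID,
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=IGW_ID,
)

# Allocate an Elastic IP and create the NAT Gateway in the public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId=PUBLIC_SUBNET_ID,
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Private subnet: default route to the NAT Gateway, giving instances with
# private IPs outbound-only internet access.
ec2.create_route(
    RouteTableId=PRIVATE_RT_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```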

Internet Gateway with Public IP addresses on instances (A) would expose instances directly to the internet with bidirectional connectivity, allowing inbound connection attempts and violating the security requirement. An Egress-Only Internet Gateway (C) is specifically designed for IPv6 traffic and does not support IPv4, which the HTTPS connections to external APIs in this scenario would typically use. AWS Direct Connect (D) provides dedicated connectivity between on-premises infrastructure and AWS but does not provide general internet access or NAT functionality. Therefore, the combination of NAT Gateway and Internet Gateway represents the standard, secure solution for outbound-only internet connectivity from private subnets.

Question 77: 

A network engineer needs to design a VPC architecture that supports a three-tier application (web tier, application tier, and database tier) across three Availability Zones with high availability. How many subnets should be created at minimum?

A) 3 subnets, one for each tier

B) 6 subnets, two for each tier across two AZs

C) 9 subnets, one for each tier in each of three AZs

D) 12 subnets, public and private for each tier in each AZ

Answer: C

Explanation:

Creating 9 subnets, with one subnet for each application tier in each of the three Availability Zones, represents the minimum subnet architecture required to support a three-tier application with high availability across three AZs. This design ensures that each application tier can operate independently in each Availability Zone, providing fault isolation and enabling the application to continue functioning even if an entire Availability Zone becomes unavailable. The subnet distribution across AZs is fundamental to achieving the high availability that AWS’s multi-AZ architecture enables.

While the question asks for the minimum number of subnets, the architecture also requires consideration of public versus private subnet designation for each tier. Typically, web tier subnets are configured as public subnets with routes to an Internet Gateway to enable direct internet connectivity for receiving user requests. Application and database tier subnets are configured as private subnets without direct internet routes, with outbound connectivity provided through NAT Gateways if needed. However, this public/private distinction is implemented through route table associations rather than requiring separate physical subnets, so the minimum count remains nine subnets with appropriate routing configuration applied to each.
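
As an illustration of the nine-subnet layout, here is a short boto3 sketch that creates one subnet per tier in each of three AZs; the VPC ID, AZ names, and 10.0.0.0/16 CIDR are hypothetical choices:

```python
import boto3
from ipaddress import ip_network

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0example"  # hypothetical
AZS = ["us-east-1a", "us-east-1b", "us-east-1c"]
TIERS = ["web", "app", "db"]

# Carve nine /24 subnets out of a hypothetical 10.0.0.0/16 VPC CIDR.
cidrs = iter(str(net) for net in ip_network("10.0.0.0/16").subnets(new_prefix=24))

for tier in TIERS:
    for az in AZS:
        subnet = ec2.create_subnet(
            VpcId=VPC_ID,
            AvailabilityZone=az,
            CidrBlock=next(cidrs),
        )
        # Tag each subnet so route table associations per tier are easy.
        ec2.create_tags(
            Resources=[subnet["Subnet"]["SubnetId"]],
            Tags=[{"Key": "Name", "Value": f"{tier}-{az}"}],
        )
```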

Creating only 3 subnets, one for each tier (A), fails to provide multi-AZ high availability and creates single points of failure at the Availability Zone level. Creating 6 subnets across two AZs (B) provides some redundancy but doesn’t meet the specified requirement of deployment across three Availability Zones, leaving the third AZ unused. Creating 12 subnets with separate public and private subnets for each tier in each AZ (D) exceeds the minimum requirement; it is a design some organizations implement for explicit separation of public and private subnets, but the question asks for the minimum, which is nine. Therefore, nine subnets represent the minimum architecture for three-tier, three-AZ high availability.

Question 78: 

A company is migrating their application to AWS and needs to maintain specific source IP addresses when connecting to external partner systems for whitelisting purposes. The application runs on multiple EC2 instances behind a load balancer. How can this requirement be met?

A) Assign Elastic IP addresses to each EC2 instance

B) Use a NAT Gateway and provide the NAT Gateway’s Elastic IP address to partners for whitelisting

C) Configure AWS Global Accelerator with static IP addresses

D) Use the load balancer’s IP address for whitelisting

Answer: B

Explanation:

Using a NAT Gateway and providing the NAT Gateway’s Elastic IP address to partners for whitelisting represents the correct solution for maintaining consistent source IP addresses when EC2 instances behind a load balancer need to make outbound connections to external partner systems. NAT Gateway performs network address translation that replaces the source IP addresses of multiple instances with a single, predictable Elastic IP address, allowing partners to whitelist a known, static IP address rather than trying to manage a changing list of instance IPs. This approach works seamlessly with load-balanced architectures and Auto Scaling groups where instance IPs are dynamic.
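
A small boto3 sketch, assuming the NAT Gateways already exist, that retrieves the Elastic IPs partners would need to whitelist; the VPC ID is a hypothetical placeholder:

```python
import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0example"  # hypothetical

# List the Elastic IPs of the VPC's NAT Gateways. These are the fixed
# source addresses external partners see and can whitelist.
resp = ec2.describe_nat_gateways(
    Filters=[{"Name": "vpc-id", "Values": [VPC_ID]}]
)
for nat in resp["NatGateways"]:
    for addr in nat["NatGatewayAddresses"]:
        print(nat["NatGatewayId"], addr.get("PublicIp"))
```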

Assigning Elastic IP addresses to each EC2 instance (A) would require instances to be in public subnets with direct internet connectivity, which violates security best practices for application tier resources and complicates integration with load balancers. Additionally, in dynamic environments with Auto Scaling, managing Elastic IP association as instances launch and terminate adds significant complexity, and partners would need to constantly update whitelists as the IP list changes. Configuring AWS Global Accelerator (C) provides static IP addresses for inbound traffic to reach AWS resources but doesn’t control the source IP for outbound connections from instances to external systems. Using the load balancer’s IP address (D) doesn’t work because load balancer IP addresses are used for inbound traffic to the instances, not for outbound connections from the instances to external systems, and Application Load Balancers don’t even have static IPs. Therefore, NAT Gateway with its static Elastic IP address provides the practical solution for consistent outbound source IPs.

Question 79: 

A network engineer is troubleshooting connectivity between an on-premises network and an AWS VPC connected via AWS Site-to-Site VPN. The VPN tunnel is established, but EC2 instances cannot reach on-premises servers. What should be verified to resolve this issue?

A) Security groups and network ACLs allow traffic, and route tables in the VPC have routes to the on-premises network through the Virtual Private Gateway

B) The Internet Gateway is properly configured

C) NAT Gateway has routes to the on-premises network

D) VPC Peering connections are established

Answer: A

Explanation:

Verifying that security groups and network ACLs allow traffic, and that route tables in the VPC contain routes to the on-premises network through the Virtual Private Gateway, represents the comprehensive troubleshooting approach for resolving connectivity issues over an established VPN tunnel. When the VPN tunnel status shows as up but application traffic cannot flow, the issue typically resides in either the filtering rules applied by security controls or in missing routing configuration that directs VPC traffic toward the VPN tunnel. Systematically checking both routing and security controls identifies and resolves the most common causes of this type of connectivity problem.

Network ACL verification provides subnet-level filtering checks. Network ACLs are stateless, meaning both the request and response traffic must be explicitly allowed in separate rules. The subnet’s Network ACL must have an outbound rule allowing traffic to the on-premises network and an inbound rule allowing response traffic from the on-premises network. A common misconfiguration is having rules that allow the initial outbound request but forget to allow the return traffic or vice versa. Network ACLs also use rule numbers for priority, and a deny rule with a lower number can override an allow rule with a higher number. Additionally, Network ACLs must account for ephemeral ports used for return traffic, typically requiring rules that allow TCP ports 1024-65535 for responses.
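
As an illustration of the stateless rule pair described above, this boto3 sketch adds an outbound HTTPS rule toward a hypothetical on-premises CIDR plus the matching inbound ephemeral-port rule for return traffic; the NACL ID and CIDR are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

NACL_ID = "acl-0example"        # hypothetical
ONPREM_CIDR = "192.168.0.0/16"  # hypothetical on-premises range

# Outbound: allow HTTPS requests from the subnet to on-premises.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=100,
    Protocol="6",               # TCP
    RuleAction="allow",
    Egress=True,
    CidrBlock=ONPREM_CIDR,
    PortRange={"From": 443, "To": 443},
)

# Inbound: allow return traffic on ephemeral ports. Because NACLs are
# stateless, omitting this rule silently drops the responses.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=100,
    Protocol="6",
    RuleAction="allow",
    Egress=False,
    CidrBlock=ONPREM_CIDR,
    PortRange={"From": 1024, "To": 65535},
)
```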

Internet Gateway configuration (B) is not relevant to VPN connectivity; the Internet Gateway handles traffic between the VPC and the public internet, whereas VPN traffic flows through the Virtual Private Gateway. NAT Gateway routes (C) are similarly not relevant; NAT Gateways are for enabling private subnet instances to reach the internet and don’t play a role in VPN connectivity to on-premises networks. VPC Peering connections (D) are used to connect separate VPCs within AWS and have no relation to connecting VPCs to on-premises networks via VPN. Therefore, systematic verification of routes, security groups, and Network ACLs represents the appropriate troubleshooting methodology for VPN connectivity issues.

Question 80: 

A company needs to implement DNS resolution where on-premises servers can resolve domain names for resources in an AWS VPC, and AWS resources can resolve domain names for on-premises servers. The solution should integrate with existing DNS infrastructure. Which AWS service should be configured?

A) Amazon Route 53 Resolver with inbound and outbound endpoints

B) AWS Directory Service

C) Amazon CloudFront

D) AWS Global Accelerator

Answer: A

Explanation:

Amazon Route 53 Resolver with inbound and outbound endpoints provides the comprehensive solution for bidirectional DNS resolution between on-premises and AWS environments, enabling seamless integration with existing DNS infrastructure. Route 53 Resolver is AWS’s managed DNS service that automatically handles DNS queries within VPCs, and by deploying inbound and outbound endpoints, organizations can extend DNS resolution capabilities to support hybrid cloud architectures where name resolution must work across the boundary between on-premises data centers and AWS.

The solution provides several operational advantages beyond simple DNS resolution. Route 53 Resolver endpoints are fully managed by AWS, eliminating the need to deploy, configure, and maintain custom DNS server infrastructure within VPCs. The endpoints scale automatically based on query volume and provide built-in high availability through multi-AZ deployment. Integration with AWS networking features like security groups allows administrators to control which sources can send DNS queries to inbound endpoints. The forwarding rules support complex conditional forwarding logic, allowing different domains to be forwarded to different resolver IP addresses. The service integrates with AWS Resource Access Manager (RAM) for sharing across multiple AWS accounts in an organization, enabling centralized DNS architecture for multi-account environments.
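
A minimal boto3 sketch of creating an inbound Resolver endpoint across two Availability Zones; the security group and subnet IDs are hypothetical placeholders:

```python
import boto3

r53r = boto3.client("route53resolver")

SG_ID = "sg-0example"     # hypothetical, controls who may query the endpoint
SUBNET_A = "subnet-0a"    # hypothetical subnet in one AZ
SUBNET_B = "subnet-0b"    # hypothetical subnet in a second AZ

# Inbound endpoint: on-premises DNS servers forward AWS-domain queries
# to the IP addresses AWS assigns these interfaces in each subnet.
inbound = r53r.create_resolver_endpoint(
    CreatorRequestId="inbound-demo-1",
    Name="hybrid-inbound",
    SecurityGroupIds=[SG_ID],
    Direction="INBOUND",
    IpAddresses=[{"SubnetId": SUBNET_A}, {"SubnetId": SUBNET_B}],
)
print(inbound["ResolverEndpoint"]["Id"])
```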

AWS Directory Service (B) provides managed Active Directory services and includes some DNS capabilities, but it’s primarily focused on directory services for identity management rather than providing comprehensive hybrid DNS resolution for general-purpose name resolution across environments. Amazon CloudFront (C) is a content delivery network for caching and distributing content, not a DNS resolution service. AWS Global Accelerator (D) provides static IP addresses and improved routing for global applications but doesn’t provide DNS resolution services. Therefore, Route 53 Resolver with inbound and outbound endpoints is the purpose-built solution for hybrid cloud DNS integration.

Question 81: 

A network engineer needs to ensure that all traffic between an Application Load Balancer and target EC2 instances is encrypted. The solution should use end-to-end encryption. How should this be configured?

A) Enable HTTPS listeners on the ALB and configure HTTPS health checks to the targets

B) Use Network Load Balancer instead of Application Load Balancer

C) Enable VPN connections between the ALB and instances

D) Deploy AWS Certificate Manager certificates only on the ALB

Answer: A

Explanation:

Enabling HTTPS listeners on the Application Load Balancer and configuring HTTPS health checks to the targets provides end-to-end encryption between clients, the load balancer, and the backend instances. This configuration ensures that traffic is encrypted in transit at every hop: from the client to the load balancer using TLS, and from the load balancer to the target instances using TLS. End-to-end encryption protects data confidentiality throughout the entire request path and ensures that sensitive information cannot be intercepted or read in plaintext at any point in the infrastructure.

Health check configuration is an important aspect of this setup. When target instances use HTTPS, the health checks must also be configured to use HTTPS protocol. The load balancer periodically sends HTTPS requests to the health check path configured in the target group, such as /health or /status, and evaluates the response to determine instance health. If health checks are left configured for HTTP while the instances only accept HTTPS, health checks will fail and the load balancer will mark all instances as unhealthy, preventing traffic from reaching the application. Properly configured HTTPS health checks ensure the load balancer can accurately assess instance availability while respecting the encrypted communication requirement.
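
A short boto3 sketch of a target group that both forwards traffic and performs health checks over HTTPS, matching the configuration described above; the VPC ID and /health path are hypothetical:

```python
import boto3

elbv2 = boto3.client("elbv2")
VPC_ID = "vpc-0example"  # hypothetical

# Target group that forwards AND health-checks over HTTPS, keeping the
# listener-to-target hop encrypted end to end.
tg = elbv2.create_target_group(
    Name="app-https-targets",
    Protocol="HTTPS",
    Port=443,
    VpcId=VPC_ID,
    HealthCheckProtocol="HTTPS",
    HealthCheckPath="/health",
    TargetType="instance",
)
print(tg["TargetGroups"][0]["TargetGroupArn"])
```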

Using a Network Load Balancer instead of Application Load Balancer (B) doesn’t inherently provide end-to-end encryption; NLB operates at Layer 4 and can pass through encrypted traffic without decrypting it, but this is connection passthrough rather than the managed TLS termination and re-encryption approach that provides end-to-end protection in ALB scenarios. Enabling VPN connections between the ALB and instances (C) is not a standard or supported configuration; VPC internal traffic between load balancers and instances doesn’t use VPN, and the proper approach is HTTPS at the application layer. Deploying certificates only on the ALB (D) provides encryption between clients and the load balancer but not between the load balancer and instances, failing to achieve end-to-end encryption. Therefore, HTTPS listeners with HTTPS target groups provide true end-to-end encryption in Application Load Balancer architectures.

Question 82: 

A company has a VPC with multiple subnets and needs to implement network segmentation to prevent certain subnets from communicating with each other while allowing other subnets to communicate freely. Which AWS feature provides subnet-level network segmentation within a VPC?

A) Security Groups

B) Network Access Control Lists

C) VPC Peering

D) AWS Network Firewall

Answer: B

Explanation:

Network Access Control Lists provide subnet-level network segmentation within a VPC, enabling administrators to control which subnets can communicate with each other through stateless firewall rules applied at subnet boundaries. NACLs operate at the subnet level, evaluating all traffic entering or leaving a subnet regardless of which specific resources are involved, making them the appropriate tool for implementing broad segmentation policies across subnets. This subnet-level control complements instance-level security groups to provide defense-in-depth network security architectures.

Network ACL rule structure includes several parameters that define the traffic to match: a rule number for priority; a protocol such as TCP, UDP, or ICMP; a source or destination IP CIDR block; a port range for TCP or UDP traffic; and an action of allow or deny. A common segmentation pattern places explicit deny rules for unwanted cross-subnet communication at lower rule numbers, followed by broader allow rules at higher numbers for permitted traffic. The final implicit rule in every NACL is a deny-all rule marked with an asterisk (*), which blocks any traffic that doesn’t match earlier rules. This default-deny posture requires administrators to explicitly allow all desired traffic, encouraging a security-focused configuration approach.
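
As a sketch of that deny-before-allow ordering, assuming a hypothetical NACL ID and restricted subnet CIDR:

```python
import boto3

ec2 = boto3.client("ec2")
NACL_ID = "acl-0example"  # hypothetical

# Rule 50: deny traffic to the restricted subnet (lower number wins).
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=50,
    Protocol="-1",             # all protocols
    RuleAction="deny",
    Egress=True,
    CidrBlock="10.0.20.0/24",  # hypothetical restricted subnet
)

# Rule 100: allow all other outbound traffic.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=100,
    Protocol="-1",
    RuleAction="allow",
    Egress=True,
    CidrBlock="0.0.0.0/0",
)
```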

Security Groups (A) operate at the instance level rather than the subnet level, controlling traffic to and from specific network interfaces; while they can reference source CIDR blocks or other security groups, they don’t provide the subnet-wide enforcement that NACLs offer. VPC Peering (C) is used to connect separate VPCs, not to segment subnets within a single VPC. AWS Network Firewall (D) provides advanced inspection capabilities for inter-VPC or inter-subnet traffic but represents a more complex and costly solution than NACLs for basic subnet segmentation within a VPC. Therefore, Network Access Control Lists are the purpose-built feature for subnet-level network segmentation.

Question 83: 

A network engineer is designing a solution for a global application that must maintain user session affinity while distributing traffic across multiple AWS regions. Users should be routed to the nearest healthy region. Which AWS service combination provides this functionality?

A) Amazon CloudFront with Lambda@Edge

B) AWS Global Accelerator with client affinity enabled

C) Amazon Route 53 with geolocation routing

D) Application Load Balancer with session affinity

Answer: B

Explanation:

AWS Global Accelerator with client affinity enabled provides the optimal combination of global traffic distribution to the nearest healthy region and session affinity to maintain user session state. Global Accelerator uses AWS’s global network infrastructure and Anycast IP addresses to route user traffic to the optimal AWS region based on latency, health, and routing policies, while the client affinity feature ensures that subsequent requests from the same user are consistently routed to the same endpoint, preserving session state for applications that require sticky sessions.

Client affinity in Global Accelerator ensures session persistence by binding user sessions to specific endpoints based on the source IP address of the client. When client affinity is enabled, Global Accelerator maintains a mapping between client source IPs and the endpoints those clients have been routed to. Subsequent requests from the same source IP are consistently directed to the same endpoint for the duration of the session or until that endpoint becomes unhealthy. This session stickiness is essential for applications that maintain session state locally on application servers, such as shopping cart information, authentication tokens, or user preferences that are stored in server memory rather than in a shared session store. The combination of global routing and session affinity allows applications to leverage local session storage for performance while still benefiting from multi-region deployment for high availability and global reach.
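
A minimal boto3 sketch of enabling source-IP client affinity on a Global Accelerator listener; the accelerator ARN is a hypothetical placeholder (the Global Accelerator API is served from us-west-2):

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

ACCELERATOR_ARN = (
    "arn:aws:globalaccelerator::123456789012:accelerator/example"  # hypothetical
)

# SOURCE_IP client affinity pins requests from the same client IP to the
# same endpoint, preserving locally stored session state.
listener = ga.create_listener(
    AcceleratorArn=ACCELERATOR_ARN,
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
    Protocol="TCP",
    ClientAffinity="SOURCE_IP",
)
print(listener["Listener"]["ListenerArn"])
```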

Global Accelerator’s health checking continuously monitors application endpoints and automatically routes traffic away from unhealthy regions without breaking user sessions. Each endpoint is health checked at configured intervals, and if an endpoint fails health checks, Global Accelerator immediately stops routing new sessions to that endpoint while existing sessions with client affinity to that endpoint are handled according to failover policies. This intelligent failover ensures both high availability and session preservation to the extent possible during failure scenarios. For applications requiring guaranteed session preservation across regional failures, session data must be stored in shared storage such as Amazon ElastiCache or DynamoDB, but Global Accelerator’s client affinity minimizes the need for shared session storage under normal operating conditions.

Amazon CloudFront with Lambda@Edge (A) is primarily a content delivery network optimized for caching static and dynamic content, and while it can route to different origins, it doesn’t provide the same level of intelligent regional traffic management and health-based failover that Global Accelerator offers. Amazon Route 53 with geolocation routing (C) can route users to different regions based on their geographic location, but it relies on DNS which has longer failover times due to DNS caching and doesn’t provide session affinity at the network level. Application Load Balancer with session affinity (D) provides excellent session stickiness within a single region but cannot distribute traffic across multiple AWS regions as it’s a regional service. Therefore, AWS Global Accelerator with client affinity provides the purpose-built solution for global multi-region applications with session affinity requirements.

Question 84: 

A company needs to monitor and log all DNS queries made by resources in their VPC to detect potential data exfiltration attempts through DNS tunneling. Which feature should be enabled to capture this information?

A) VPC Flow Logs

B) Route 53 Resolver query logging

C) AWS CloudTrail

D) Amazon GuardDuty

Answer: B

Explanation:

Route 53 Resolver query logging provides the capability to capture and log all DNS queries made by resources within a VPC, enabling security teams to detect anomalous DNS activity such as DNS tunneling attempts used for data exfiltration. Query logging records detailed information about every DNS query that Route 53 Resolver processes for the VPC, including the queried domain name, query type, response code, and source IP address. This visibility into DNS activity is essential for security monitoring, threat detection, compliance auditing, and investigating potential security incidents involving DNS-based attacks or data leakage.

Route 53 Resolver query logging operates by capturing DNS query metadata as the Resolver processes queries from resources in the VPC. When enabled, query logging can be configured to send logs to Amazon CloudWatch Logs for real-time monitoring and alerting, to Amazon S3 for long-term storage and analysis, or to Amazon Kinesis Data Firehose for streaming to other analytics platforms or security information and event management systems. The logs are written in JSON format and include fields such as the query_timestamp indicating when the query was made, query_name containing the domain name that was queried, query_type specifying whether it was an A record, AAAA record, TXT record, or other DNS record type, rcode showing the response code, answers providing the resolved IP addresses or other record data, and srcaddr showing the VPC source IP address of the resource that made the query.
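
A short boto3 sketch of enabling query logging to CloudWatch Logs and associating it with a VPC; the log group ARN and VPC ID are hypothetical placeholders:

```python
import boto3

r53r = boto3.client("route53resolver")

VPC_ID = "vpc-0example"  # hypothetical
LOG_GROUP_ARN = (
    "arn:aws:logs:us-east-1:123456789012:log-group:/dns/queries"  # hypothetical
)

# Create the query log config, then associate it with the VPC so every
# Resolver query from that VPC is captured.
cfg = r53r.create_resolver_query_log_config(
    Name="vpc-dns-logging",
    DestinationArn=LOG_GROUP_ARN,
    CreatorRequestId="qlog-demo-1",
)
r53r.associate_resolver_query_log_config(
    ResolverQueryLogConfigId=cfg["ResolverQueryLogConfig"]["Id"],
    ResourceId=VPC_ID,
)
```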

Beyond security use cases, DNS query logs support operational troubleshooting and application dependency mapping. When applications experience connectivity issues, examining DNS query logs helps determine whether the problem is related to DNS resolution failures, misconfigured domain names, or queries being sent to incorrect DNS servers. Query logs can reveal which external services and APIs applications are communicating with, supporting efforts to document application dependencies for compliance, disaster recovery planning, or migration projects. The logs also help identify unnecessary or unexpected DNS traffic that might indicate misconfigurations or legacy dependencies that should be cleaned up.

VPC Flow Logs (A) capture network traffic metadata at the IP packet level but don’t decode or log the application-layer content of DNS queries; Flow Logs would show connections to port 53 but wouldn’t reveal the domain names being queried. AWS CloudTrail (C) logs API calls made to AWS services for management and control plane activity but doesn’t capture DNS queries made by applications running in VPCs. Amazon GuardDuty (D) is a threat detection service that can analyze DNS query patterns to identify malicious activity, but it relies on having access to DNS query data, which comes from enabling Route 53 Resolver query logging as its data source. Therefore, Route 53 Resolver query logging is the foundational feature that must be enabled to capture DNS query information for security monitoring.

Question 85: 

A network engineer is troubleshooting an issue where EC2 instances can reach some external websites but not others. The instances are in a private subnet using a NAT Gateway for internet access. What is the most likely cause?

A) The Security Group is blocking specific destination IP addresses

B) The Network ACL has explicit deny rules for certain IP ranges

C) The Internet Gateway is misconfigured

D) VPC Peering is not established to those websites

Answer: B

Explanation:

The Network ACL having explicit deny rules for certain IP ranges is the most likely cause of selective connectivity where some external websites are reachable while others are not, given that the instances are using NAT Gateway for internet access. Network ACLs operate at the subnet boundary and can contain explicit deny rules that block traffic to specific destination IP addresses or CIDR blocks. When such deny rules are present, they prevent connectivity to the blocked destinations regardless of whether other networking components are correctly configured. Unlike security groups which only support allow rules, Network ACLs support both allow and deny rules, and a deny rule will override allow rules for matching traffic.

A concrete example illustrates this scenario: suppose a Network ACL has rule number 50 that denies outbound traffic to 203.0.113.0/24, and rule number 100 that allows all outbound traffic. If a user attempts to reach a website hosted at 203.0.113.45, the traffic matches rule 50 first due to lower rule number priority, and the connection is denied. Attempts to reach websites on different IP addresses match rule 100 and are allowed. This configuration might have been created intentionally to block access to specific IP ranges associated with malicious sites or prohibited content categories, or it might represent a misconfiguration where deny rules were added without full understanding of their scope. Network ACLs are stateless, so if the deny rule exists on either outbound rules blocking requests or inbound rules blocking responses, the connection fails.
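
As a troubleshooting aid, this boto3 sketch finds the NACL associated with a hypothetical subnet and prints any explicit deny entries, skipping the implicit default deny-all rule (rule number 32767):

```python
import boto3

ec2 = boto3.client("ec2")
SUBNET_ID = "subnet-0private"  # hypothetical

# Locate the NACL associated with the subnet and list explicit denies,
# the likely culprits for selective reachability.
resp = ec2.describe_network_acls(
    Filters=[{"Name": "association.subnet-id", "Values": [SUBNET_ID]}]
)
for acl in resp["NetworkAcls"]:
    for entry in acl["Entries"]:
        if entry["RuleAction"] == "deny" and entry["RuleNumber"] != 32767:
            direction = "outbound" if entry["Egress"] else "inbound"
            print(acl["NetworkAclId"], direction,
                  entry["RuleNumber"], entry.get("CidrBlock"))
```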

Security Groups blocking specific destination IP addresses (A) is less likely because security groups are stateful and typically have default outbound rules allowing all traffic; additionally, security groups are instance-level controls and if they were causing the issue, it would manifest consistently across all instances in the same security group rather than as a subnet-level phenomenon. Moreover, the question states connectivity works for some sites, suggesting the security group is not the blocker. Internet Gateway misconfiguration (C) would affect all internet connectivity rather than selective connectivity to specific sites, and since some websites are reachable, the Internet Gateway is clearly functioning. VPC Peering not being established to those websites (D) is a nonsensical option because VPC Peering is used to connect VPCs within AWS, not to reach external public websites on the internet. Therefore, Network ACL deny rules represent the most probable cause of selective connectivity issues in this scenario.

Question 86: 

A company is deploying a new application that requires extremely low latency between application servers and database servers. Both tiers will run on EC2 instances. Which deployment strategy minimizes network latency?

A) Deploy application and database servers in the same Availability Zone with cluster placement group

B) Deploy servers across multiple Availability Zones for high availability

C) Use Enhanced Networking with ENA for all instances

D) Deploy servers in different regions connected by VPC Peering

Answer: A

Explanation:

Deploying application and database servers in the same Availability Zone using a cluster placement group minimizes network latency by ensuring instances are physically located in close proximity to each other with high-bandwidth, low-latency network connectivity. A cluster placement group is an AWS feature that specifically addresses low-latency requirements by logically grouping instances within a single Availability Zone and attempting to place them in close physical proximity, often within the same rack or adjacent racks in the data center. This collocation minimizes the physical distance network packets must travel and reduces the number of network hops between instances, resulting in the lowest possible latency for inter-instance communication.

The cluster placement group configuration must be planned carefully with awareness of its trade-offs. All instances in a cluster placement group must be in the same Availability Zone, which means the placement group sacrifices multi-AZ high availability for optimal performance. If the Availability Zone experiences an outage, all instances in the cluster placement group are affected simultaneously. Organizations must balance the latency benefits against availability requirements and decide whether the application’s latency sensitivity justifies the reduced fault tolerance. For some applications, the performance gains are essential and the availability trade-off is accepted, potentially with additional mitigation strategies such as deploying backup clusters in other Availability Zones or implementing rapid recovery procedures.

For maximum benefit from cluster placement groups, instances should be launched together at the same time rather than incrementally over time. When AWS provisions instances for a placement group, it reserves capacity in close proximity, but if instances are added later, AWS might not have adjacent capacity available, potentially resulting in insufficient capacity errors or instances being placed suboptimally within the group. Best practices include launching all instances in a single request when possible, using instance types from the same instance family, and starting with homogeneous configurations to maximize AWS’s ability to fulfill the placement requirements.
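
A minimal boto3 sketch of creating a cluster placement group and launching all instances in a single request, per the best practice above; the AMI, instance type, and subnet are hypothetical choices:

```python
import boto3

ec2 = boto3.client("ec2")

# Create the cluster placement group, then launch every instance at once
# so AWS can place them in close physical proximity.
ec2.create_placement_group(GroupName="low-latency-demo", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0example",      # hypothetical AMI
    InstanceType="c5n.9xlarge",  # same instance family improves placement odds
    MinCount=4,
    MaxCount=4,
    SubnetId="subnet-0example",  # hypothetical subnet, single AZ
    Placement={"GroupName": "low-latency-demo"},
)
```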

Deploying servers across multiple Availability Zones (B) increases availability but also increases latency because traffic must traverse inter-AZ network links which, while still very fast, have higher latency than intra-AZ communication due to the physical distance between data centers. Enhanced Networking with ENA (C) is beneficial and should be used for high-performance applications as it provides higher bandwidth and lower latency compared to traditional virtualized networking, but it doesn’t provide the same degree of latency minimization as physically collocating instances in a cluster placement group; additionally, ENA is a feature that can be combined with placement groups for optimal results rather than being an either-or choice. Deploying servers in different regions connected by VPC Peering (D) introduces the highest latency due to the long physical distances between regional data centers and is completely inappropriate for low-latency requirements. Therefore, same-AZ deployment with cluster placement group provides the minimum achievable latency for inter-instance communication.

Question 87: 

A network engineer needs to implement a solution where traffic from multiple VPCs in different AWS accounts is inspected by a centralized security appliance before egressing to the internet. Which architecture should be deployed?

A) Deploy security appliances in each VPC separately

B) Implement AWS Transit Gateway with a centralized inspection VPC containing security appliances, routing internet-bound traffic through the inspection VPC

C) Use VPC Peering to connect all VPCs to a security VPC

D) Deploy AWS Network Firewall in every VPC

Answer: B

Explanation:

Implementing AWS Transit Gateway with a centralized inspection VPC containing security appliances, and routing internet-bound traffic through the inspection VPC, provides the most scalable and manageable architecture for centralized security inspection in multi-VPC, multi-account environments. This design pattern, often referred to as the centralized egress or inspection VPC architecture, consolidates security controls at a single point, simplifies security policy management, reduces operational overhead, and provides consistent security posture across all VPCs and accounts. Transit Gateway serves as the network hub that enables this architecture by providing the routing capabilities necessary to direct traffic flows through the inspection infrastructure.

The architecture consists of several key components working together to implement centralized inspection. Transit Gateway is deployed in a central networking account and shared with application accounts using AWS Resource Access Manager, allowing those accounts to attach their VPCs to the shared Transit Gateway. An inspection VPC is created in the networking account, containing security appliances such as next-generation firewalls, intrusion prevention systems, or web filtering appliances deployed across multiple Availability Zones for high availability. This inspection VPC is attached to the Transit Gateway and also contains NAT Gateways and an Internet Gateway for internet connectivity. Application VPCs in various accounts are attached to the Transit Gateway, creating a hub-and-spoke topology.
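
As a sketch of the routing that steers spoke traffic through the inspection VPC, assuming the Transit Gateway attachments already exist; both IDs below are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

SPOKE_RT_ID = "tgw-rtb-0spokes"                   # hypothetical spoke route table
INSPECTION_ATTACHMENT_ID = "tgw-attach-0inspect"  # hypothetical inspection VPC attachment

# In the spoke route table, send all internet-bound traffic to the
# inspection VPC attachment; after inspection, the appliances forward it
# on to the NAT Gateways and Internet Gateway in that VPC.
ec2.create_transit_gateway_route(
    DestinationCidrBlock="0.0.0.0/0",
    TransitGatewayRouteTableId=SPOKE_RT_ID,
    TransitGatewayAttachmentId=INSPECTION_ATTACHMENT_ID,
)
```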

The centralized architecture provides significant operational and security benefits. Security teams can manage and update security policies in a single location rather than maintaining duplicate configurations across dozens or hundreds of VPCs. Consistent security controls are applied to all traffic, eliminating gaps that might exist in decentralized approaches where each VPC has its own security implementation that could be misconfigured or outdated. Centralized logging and monitoring provide comprehensive visibility into all internet-bound traffic across the organization. Cost optimization is achieved by deploying and licensing security appliances once in the central inspection VPC rather than replicating them in every VPC. The architecture scales efficiently as new VPCs are added; they simply attach to the Transit Gateway and immediately benefit from the centralized security controls without requiring deployment of additional security infrastructure.

Deploying security appliances in each VPC separately (A) creates significant operational complexity, duplicates costs for appliance licensing and compute resources, and risks inconsistent security policies across VPCs as each deployment must be independently managed and updated. Using VPC Peering to connect all VPCs to a security VPC (C) becomes unmanageable at scale due to the mesh of peering connections required and VPC Peering’s limitations on the number of peering connections per VPC; additionally, peering doesn’t provide the advanced routing capabilities needed to elegantly direct traffic through inspection infrastructure. Deploying AWS Network Firewall in every VPC (D) provides distributed inspection but still requires policy management across multiple deployments and incurs costs for firewall endpoints in every VPC; while this approach may be suitable for some use cases, it doesn’t achieve the level of centralization described in the question. Therefore, Transit Gateway with centralized inspection VPC represents the optimal architecture for enterprise-scale centralized security inspection.

Question 88: 

A company needs to provide temporary access to an S3 bucket for external partners without creating AWS IAM users. The solution should allow partners to upload files directly to S3 using their existing credentials. Which approach provides this capability?

A) Generate pre-signed URLs for S3 object uploads

B) Enable public access to the S3 bucket

C) Create IAM users for each partner

D) Use AWS Directory Service federation

Answer: A

Explanation:

Generating pre-signed URLs for S3 object uploads provides secure, temporary access for external partners to upload files directly to S3 without requiring AWS IAM users or exposing the bucket publicly. Pre-signed URLs are time-limited, cryptographically signed URLs that grant temporary permissions to perform specific S3 operations such as uploading objects, downloading objects, or listing bucket contents. This approach enables controlled, temporary access that aligns with security best practices of least privilege and temporary credentials while providing a seamless user experience for external partners.

The security characteristics of pre-signed URLs make them ideal for partner file sharing scenarios. URLs are time-limited, automatically becoming invalid after the specified expiration period, which limits the window of potential misuse if a URL is inadvertently shared or intercepted. Each URL grants access to only the specific operation and object specified during generation; a URL created for uploading logo.png cannot be used to upload other files, download existing files, or list bucket contents unless separate URLs with those permissions are generated. Partners don’t need to handle or store AWS credentials, reducing the risk of credential leakage. The company maintains full control over access by controlling URL generation and can revoke access by ensuring new URLs aren’t generated, while existing URLs naturally expire based on their configured lifetime.

Implementation typically involves building an application or API that authenticates partners using existing identity systems, then generates and distributes pre-signed URLs programmatically. For example, a web application might authenticate a partner, generate a pre-signed URL valid for 15 minutes for uploading a specific file, and return that URL to the partner’s browser. The browser can then directly upload to S3 using the URL without the upload traffic routing through the application server, enabling efficient large file transfers. This direct-to-S3 upload reduces bandwidth costs for the application infrastructure and improves upload performance by eliminating the intermediary.
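
A minimal boto3 sketch of generating a 15-minute upload URL for a single object; the bucket and key names are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# The URL is valid for 15 minutes and permits exactly one operation:
# a PUT of this specific key. The partner uploads directly to S3.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "partner-uploads-demo", "Key": "partner-a/logo.png"},
    ExpiresIn=900,  # seconds
)
print(url)
```

The partner can then upload directly, for example with curl -X PUT --upload-file logo.png "<url>", without ever holding AWS credentials.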

Enabling public access to the S3 bucket (B) would allow anyone on the internet to upload files without authentication, creating severe security risks including unlimited storage costs from abuse, potential liability from illegal content uploads, and violation of compliance requirements. Creating IAM users for each partner (C) contradicts the requirement of not creating IAM users and introduces credential management overhead including secure distribution, rotation, and revocation of AWS credentials. AWS Directory Service federation (D) is designed for federating existing directory services like Active Directory for employee access to AWS resources, not for providing external partners with temporary file upload capabilities. Therefore, pre-signed URLs provide the secure, scalable solution for temporary external partner access to S3.

Question 89: 

A network engineer is designing a hybrid DNS architecture where AWS workloads need to resolve on-premises Active Directory domain names, and on-premises servers need to resolve AWS resource names in Route 53 private hosted zones. The solution must be highly available. Which configuration provides this capability?

A) Deploy DNS servers on EC2 instances in multiple Availability Zones

B) Configure Route 53 Resolver with outbound endpoints to forward to on-premises DNS, and inbound endpoints for on-premises to query, both deployed across multiple AZs

C) Use VPC DNS only without additional configuration

D) Replicate Active Directory to AWS

Answer: B

Explanation:

Configuring Route 53 Resolver with outbound endpoints for forwarding to on-premises DNS and inbound endpoints for on-premises queries, with both deployed across multiple Availability Zones, provides a highly available hybrid DNS architecture that enables bidirectional name resolution. This managed solution leverages AWS’s Route 53 Resolver service to bridge DNS queries between AWS and on-premises environments without requiring custom DNS server infrastructure, while the multi-AZ deployment ensures resilience against Availability Zone failures.

The inbound endpoint configuration enables on-premises servers to resolve AWS resource names, including entries in Route 53 private hosted zones associated with VPCs, AWS service endpoints, and instance hostnames. Inbound endpoints are deployed as elastic network interfaces in selected subnets across multiple Availability Zones. These interfaces are assigned IP addresses from the VPC CIDR range, and these IPs serve as the target DNS servers for on-premises conditional forwarders. On-premises DNS infrastructure is configured with conditional forwarders that direct queries for AWS-related domains to the inbound endpoint IP addresses. For example, conditional forwarders can be created for the domain aws.internal or for specific private hosted zone domains like app.example.internal. When on-premises applications query these domains, the queries are forwarded across the hybrid connectivity to the inbound endpoints, and Route 53 Resolver processes them using its knowledge of VPC DNS records and private hosted zones.

The multi-AZ deployment ensures high availability by eliminating single points of failure at the Availability Zone level. Both inbound and outbound endpoints should be deployed in at least two Availability Zones, with each endpoint consisting of multiple network interfaces distributed across the zones. If an Availability Zone experiences an outage, the endpoints in other zones continue processing DNS queries, maintaining DNS resolution functionality for the hybrid environment. Route 53 Resolver automatically handles failover and load distribution across the available endpoints. This high availability is critical because DNS resolution is a foundational service that must remain operational for applications to function; DNS failures cause widespread application failures even when other infrastructure remains healthy.
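
A sketch of the outbound half of this design: a multi-AZ outbound endpoint plus a forwarding rule for a hypothetical on-premises Active Directory domain; all IDs, IPs, and domain names below are placeholders:

```python
import boto3

r53r = boto3.client("route53resolver")

SG_ID = "sg-0example"                           # hypothetical
SUBNET_A, SUBNET_B = "subnet-0a", "subnet-0b"   # hypothetical, two AZs
ONPREM_DNS = "10.10.0.2"                        # hypothetical on-premises DNS IP

# Outbound endpoint spanning two AZs for high availability.
outbound = r53r.create_resolver_endpoint(
    CreatorRequestId="outbound-demo-1",
    Name="hybrid-outbound",
    SecurityGroupIds=[SG_ID],
    Direction="OUTBOUND",
    IpAddresses=[{"SubnetId": SUBNET_A}, {"SubnetId": SUBNET_B}],
)

# Forward queries for the on-premises AD domain to on-premises DNS.
rule = r53r.create_resolver_rule(
    CreatorRequestId="fwd-rule-demo-1",
    RuleType="FORWARD",
    DomainName="corp.example.com",  # hypothetical AD domain
    TargetIps=[{"Ip": ONPREM_DNS, "Port": 53}],
    ResolverEndpointId=outbound["ResolverEndpoint"]["Id"],
)

# Associate the rule with the VPC so its resources use the forwarder.
r53r.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0example",  # hypothetical
)
```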

Deploying DNS servers on EC2 instances (A) creates operational burden for managing instance lifecycle, software updates, high availability configuration, backup and recovery, and scaling, all of which Route 53 Resolver handles as a managed service. Using VPC DNS only (C) doesn’t provide forwarding to on-premises DNS or inbound query handling from on-premises, failing to meet the hybrid DNS requirements. Replicating Active Directory to AWS (D) might be done for some use cases but doesn’t solve the general hybrid DNS resolution requirement for all domains and services, and it represents a much larger infrastructure decision than DNS configuration alone. Therefore, Route 53 Resolver with inbound and outbound endpoints provides the managed, highly available solution for hybrid DNS architectures.

Question 90: 

A company is experiencing issues with an Application Load Balancer where some backend instances are receiving disproportionately more traffic than others, causing performance degradation. All instances are healthy according to health checks. What is the most likely cause and solution?

A) Cross-zone load balancing is disabled; enable it to distribute traffic evenly across all AZs

B) The instances are different sizes; use identical instance types

C) Security groups are blocking traffic; update inbound rules

D) The Internet Gateway is misconfigured; verify routing

Answer: A

Explanation:

Cross-zone load balancing being disabled is the most likely cause of uneven traffic distribution across backend instances in an Application Load Balancer when all instances are healthy. When cross-zone load balancing is disabled, the load balancer distributes traffic evenly across the Availability Zones where it has nodes, but not evenly across individual instances. This can result in uneven instance load if different Availability Zones have different numbers of registered targets. Enabling cross-zone load balancing resolves this by ensuring the load balancer distributes traffic evenly across all registered healthy targets regardless of which AZ they’re in.

Understanding how Application Load Balancer distributes traffic requires examining its architecture. An ALB is deployed across multiple Availability Zones by specifying subnets in those AZs during load balancer creation. The load balancer service automatically creates a load balancer node in each specified AZ, and these nodes receive traffic from clients and forward it to target instances. Cross-zone load balancing is enabled by default for Application Load Balancers, but it can be switched off at the target group level; when it is disabled, each load balancer node only distributes traffic to targets within its own Availability Zone. Traffic arriving at the load balancer is first distributed evenly across the load balancer nodes in each AZ, then each node distributes its received traffic to targets in that AZ.

The uneven distribution problem manifests when there are unequal numbers of targets across Availability Zones. For example, consider an ALB deployed across three AZs with targets registered as follows: AZ1 has 5 instances, AZ2 has 3 instances, and AZ3 has 2 instances, totaling 10 instances. When cross-zone load balancing is disabled, overall traffic is distributed evenly across the three AZs, with each AZ receiving approximately 33% of total traffic. However, within AZ1, that 33% is distributed across 5 instances (about 6.7% per instance), in AZ2 across 3 instances (about 11% per instance), and in AZ3 across 2 instances (about 16.7% per instance). Instances in AZ3 receive more than double the traffic per instance compared to those in AZ1, causing performance degradation on the more heavily loaded instances.

Enabling cross-zone load balancing changes this behavior so the load balancer distributes traffic evenly across all registered healthy targets regardless of their Availability Zone. In the same example with 10 instances total, each instance receives approximately 10% of traffic, eliminating the imbalance. This even distribution occurs because the load balancer nodes in each AZ can forward traffic to targets in other AZs, not just to targets in their own AZ. While this introduces some additional cross-AZ data transfer, for Application Load Balancers this cross-zone traffic is not charged, and it ensures optimal load distribution and prevents hotspots that could degrade application performance. For Application Load Balancers, cross-zone load balancing is enabled at the load balancer level by default and is typically the recommended configuration for production workloads.
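
As a hedged sketch, assuming the target-group-level cross-zone attribute (load_balancing.cross_zone.enabled) available on newer ALB target groups, this boto3 snippet inspects the current setting and enables it; the target group ARN is a hypothetical placeholder:

```python
import boto3

elbv2 = boto3.client("elbv2")
TG_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/app/abc123"  # hypothetical
)

# Inspect the target group's current attributes, including cross-zone.
attrs = elbv2.describe_target_group_attributes(TargetGroupArn=TG_ARN)
print({a["Key"]: a["Value"] for a in attrs["Attributes"]})

# Enable cross-zone so traffic spreads evenly across all healthy targets
# in every AZ.
elbv2.modify_target_group_attributes(
    TargetGroupArn=TG_ARN,
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)
```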