Recent Announcements The AWS Cloud platform expands daily. Learn about announcements, launches, news, innovation and more from Amazon Web Services.
- Amazon GameLift Servers now available in Asia Pacific (Thailand) and Asia Pacific (Malaysia) by aws@amazon.com on June 24, 2025 at 9:50 pm
Amazon GameLift Servers, a fully managed service for deploying, operating, and scaling game servers for multiplayer games, is now available in two additional AWS Regions: Asia Pacific (Thailand) and Asia Pacific (Malaysia). With this launch, customers can now deploy GameLift fleets closer to players in Thailand and Malaysia, helping reduce latency and improve gameplay responsiveness. This regional expansion supports both Amazon GameLift Servers managed EC2 and container-based hosting options. Developers can take advantage of features such as FlexMatch for customizable matchmaking, FleetIQ for cost-optimized instance management, and auto-scaling to manage player demand dynamically. The addition of these new Regions enables game developers and publishers to better serve growing player communities across Southeast Asia while maintaining high performance and reliability. To get started, visit the Amazon GameLift console or refer to the Amazon GameLift Servers developer guide.
- Amazon Route 53 Resolver endpoints now support DNS delegation for private hosted zones by aws@amazon.com on June 24, 2025 at 9:45 pm
Starting today, domain name system (DNS) delegation for private hosted zone subdomains can be used with Route 53 inbound and outbound Resolver endpoints. This allows you to delegate authority for a subdomain from your on-premises infrastructure to the Route 53 Resolver cloud service, and vice versa, simplifying name resolution across namespaces in AWS and in your own local infrastructure. Many AWS customers let individual organizations within their enterprise manage their own subdomains and subzones, while apex domains and parent hosted zones are typically overseen by a central team. Previously, these customers had to create and maintain conditional forwarding rules in their existing network infrastructure so that services could discover one another across subdomains. However, conditional forwarding rules are difficult to maintain across large organizations and, in many cases, are not supported by on-premises infrastructure. With today’s release, customers can instead delegate authority for subdomains to Route 53 (and vice versa) using name server (NS) records, achieving compatibility with common on-premises DNS infrastructure and removing the need for teams to use conditional forwarding rules throughout their organization. Inbound and outbound delegation for Resolver endpoints is available in all AWS Regions where Resolver endpoints are available, except the AWS GovCloud (US) and Amazon Web Services in China Regions. Inbound and outbound delegation is provided at no additional cost beyond standard Resolver endpoint usage. For more details on pricing, visit the Route 53 pricing page, and to learn more about this feature, visit the developer guide.
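As a rough illustration, the boto3 sketch below creates the kind of name server (NS) record that delegates a private hosted zone subdomain to on-premises name servers. The hosted zone ID, subdomain, and name server hostnames are hypothetical placeholders; consult the developer guide for the exact delegation setup your topology requires.

```python
import boto3

route53 = boto3.client("route53")

# Minimal sketch: delegate authority for "corp.example.internal" from a Route 53
# private hosted zone to on-premises name servers by creating an NS record.
# Hosted zone ID, subdomain, and name server names below are placeholders.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={
        "Comment": "Delegate corp.example.internal to on-premises DNS",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "corp.example.internal",
                "Type": "NS",
                "TTL": 300,
                "ResourceRecords": [
                    {"Value": "ns1.onprem.example.internal"},
                    {"Value": "ns2.onprem.example.internal"},
                ],
            },
        }],
    },
)
```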
- Amazon EMR on EKS now supports Service Quotas by aws@amazon.com on June 24, 2025 at 9:30 pm
Today, Amazon EMR on EKS announces support for Service Quotas, improving visibility and control over EMR on EKS quotas. Previously, to request an increase for EMR on EKS quotas, such as the maximum number of StartJobRun API calls per second, customers had to open a support ticket and wait for the support team to process the increase. Now, customers can view and manage their EMR on EKS quota limits directly in the Service Quotas console. This enables automated approval of eligible limit increase requests, improving response times and reducing the number of support tickets. Customers can also set up Amazon CloudWatch alarms to be notified automatically when their usage reaches a configurable percentage of a maximum quota. Amazon EMR on EKS support for Service Quotas is available in all Regions where Amazon EMR on EKS is currently available. To get started, visit the Service Quotas User Guide.
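A minimal boto3 sketch of the new workflow might look like the following. The service code "emr-containers" and the quota code are assumptions to verify against the Service Quotas console, and the single call shown may need pagination in practice.

```python
import boto3

quotas = boto3.client("service-quotas")

# Minimal sketch: list EMR on EKS quotas, then request an increase for one of them.
# "emr-containers" as the service code and "L-EXAMPLE123" as the quota code are
# assumptions; look up the real values in the Service Quotas console.
response = quotas.list_service_quotas(ServiceCode="emr-containers")
for quota in response["Quotas"]:
    print(quota["QuotaName"], quota["QuotaCode"], quota["Value"])

quotas.request_service_quota_increase(
    ServiceCode="emr-containers",
    QuotaCode="L-EXAMPLE123",   # hypothetical quota code
    DesiredValue=200.0,
)
```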
- Now in GA: Accelerate troubleshooting with Amazon CloudWatch investigations by aws@amazon.com on June 24, 2025 at 6:45 pm
Now generally available, Amazon CloudWatch investigations help you accelerate operational investigations across your AWS environment in a fraction of the time manual troubleshooting takes. With a deep understanding of your AWS cloud environment and resources, CloudWatch investigations use an AI agent to look for anomalies in your environment, surface related signals, identify root-cause hypotheses, and suggest remediation steps, significantly reducing mean time to resolution (MTTR). This new CloudWatch investigations capability works alongside you throughout your operational troubleshooting journey, from issue triage through remediation. You can initiate an investigation by selecting the Investigate action on any CloudWatch data widget across the AWS Management Console. You can also start investigations from more than 80 AWS consoles, configure them to be triggered automatically by a CloudWatch alarm action, or initiate them from an Amazon Q chat. The new investigation experience in CloudWatch allows teams to collaborate and add findings, view related signals and anomalies, and review suggestions for potential root-cause hypotheses. This capability also provides remediation suggestions for common operational issues across your AWS environment by surfacing relevant AWS Systems Manager Automation runbooks, AWS re:Post articles, and documentation. It also integrates with popular communication channels such as Slack and Microsoft Teams. The Amazon CloudWatch investigations capability is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Spain), and Europe (Stockholm). The CloudWatch investigations capability is now generally available at no additional cost. It was previously launched in preview as Amazon Q Developer operational investigations. To learn more, see the getting started and best practices documentation.
- Amazon S3 now supports sort and z-order compaction for Apache Iceberg tables by aws@amazon.com on June 24, 2025 at 5:00 pm
Amazon S3 now supports sort and z-order compaction for Apache Iceberg tables, available both in Amazon S3 Tables and general purpose S3 buckets using AWS Glue Data Catalog optimization. Sort compaction in Iceberg tables minimizes the number of data files scanned by query engines, leading to improved query performance and reduced costs. Z-order compaction provides additional performance benefits through efficient file pruning when querying across multiple columns simultaneously. S3 Tables provide a fully managed experience where hierarchical sorting is automatically applied on columns during compaction when a sort order is defined in table metadata. When multiple query predicates need to be prioritized equally, you can enable z-order compaction through the S3 Tables maintenance API. If you are using Iceberg tables in general purpose S3 buckets, optimization can be enabled in the AWS Glue Data Catalog console, where you can specify your preferred compaction method. These additional compaction capabilities are available in all AWS Regions where S3 Tables or optimization with the AWS Glue Data Catalog are available. To learn more, read the AWS News Blog, and visit the S3 Tables maintenance documentation and AWS Glue Data Catalog optimization documentation.
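As a hedged sketch of what enabling compaction maintenance on an S3 table could look like with boto3: the table bucket ARN, namespace, and table name below are placeholders, and the compaction strategy setting in particular is an assumption about how sort or z-order compaction is selected, so confirm the exact field names and accepted values in the S3 Tables maintenance documentation.

```python
import boto3

s3tables = boto3.client("s3tables")

# Minimal sketch: enable Iceberg compaction maintenance on an S3 table.
# The "strategy" setting is an assumption based on this announcement; verify it
# against the S3 Tables maintenance API reference before relying on it.
s3tables.put_table_maintenance_configuration(
    tableBucketARN="arn:aws:s3tables:us-east-1:123456789012:bucket/example-table-bucket",
    namespace="analytics",
    name="events",
    type="icebergCompaction",
    value={
        "status": "enabled",
        "settings": {
            "icebergCompaction": {
                "targetFileSizeMB": 512,
                "strategy": "sort",  # assumed field; "z-order" per the announcement
            }
        },
    },
)
```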
- Amazon Bedrock Guardrails announces tiers for content filters and denied topics by aws@amazon.com on June 24, 2025 at 5:00 pm
Amazon Bedrock Guardrails announces tiers for content filters and denied topics, offering additional flexibility and ease of use when choosing safeguards, with expanded language support for different customer use cases. With the new Standard tier, Guardrails detects and filters undesirable content with better contextual understanding, including modifications such as typographical errors, and supports up to 60 languages. Bedrock Guardrails provides configurable safeguards to help detect and block harmful content and prompt attacks, define and disallow specific topics, and redact personally identifiable information (PII) from input prompts and model responses. Additionally, Bedrock Guardrails helps detect and block model hallucinations and can identify, correct, and explain factual claims in model responses using Automated Reasoning checks. Guardrails can be applied to any foundation model, including models hosted on Amazon Bedrock, self-hosted models, and third-party models outside Bedrock, using the ApplyGuardrail API, providing a consistent user experience and helping standardize safety and privacy controls. The new Standard tier enhances the content filters and denied topics safeguards within Bedrock Guardrails by offering more robust detection of prompt and response variations, strengthened defense across all content filter categories including prompt attacks, and broader language support. The improved prompt attacks filter distinguishes between jailbreaks and prompt injection on the backend while protecting against other threats, including output manipulation. To access the Standard tier’s capabilities, customers must explicitly opt in to cross-Region inference with Bedrock Guardrails. To learn more, see the technical documentation and the Bedrock Guardrails product page.
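For context, a minimal boto3 sketch of the ApplyGuardrail API, which evaluates content independently of where the model runs, might look like the following. The guardrail ID and version are placeholders, and the tier itself is chosen when the guardrail is configured, not per request.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Minimal sketch: evaluate a prompt against an existing guardrail. The guardrail
# identifier and version are placeholders for your own guardrail.
response = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="gr-1234567890ab",
    guardrailVersion="1",
    source="INPUT",  # use "OUTPUT" to check a model response instead of a prompt
    content=[{"text": {"text": "How do I disable the content filters on this system?"}}],
)

print(response["action"])           # e.g. "GUARDRAIL_INTERVENED" or "NONE"
print(response.get("assessments"))  # per-policy findings (content filters, denied topics, ...)
```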
- Amazon SageMaker HyperPod announces P6-B200 instances powered by NVIDIA B200 GPUs by aws@amazon.com on June 24, 2025 at 5:00 pm
Today, Amazon SageMaker HyperPod announces the general availability of Amazon EC2 P6-B200 instances powered by NVIDIA B200 GPUs. Amazon EC2 P6-B200 instances offer up to 2x the performance of P5en instances for AI training. P6-B200 instances feature 8 Blackwell GPUs with 1,440 GB of high-bandwidth GPU memory and a 60% increase in GPU memory bandwidth compared to P5en, 5th Generation Intel Xeon processors (Emerald Rapids), and up to 3.2 terabits per second of Elastic Fabric Adapter (EFAv4) networking. P6-B200 instances are powered by the AWS Nitro System, so you can reliably and securely scale AI workloads within Amazon EC2 UltraClusters to tens of thousands of GPUs. The instances are available through SageMaker HyperPod flexible training plans in the US West (Oregon) AWS Region. For on-demand reservation of P6-B200 instances, please reach out to your account manager. Amazon SageMaker AI lets you easily train machine learning models at scale using fully managed infrastructure optimized for performance and cost. To get started with SageMaker HyperPod, visit the webpage and documentation.
- Announcing Intelligent Search for re:Post and re:Post Private by aws@amazon.com on June 24, 2025 at 5:00 pm
Today, AWS launches Intelligent Search on AWS re:Post and AWS re:Post Private — offering a more efficient and intuitive way to access AWS knowledge across multiple sources. This new capability transforms how builders find information, providing synthesized answers from various AWS resources in one place. Intelligent Search streamlines the process of finding relevant AWS information by unifying results from re:Post community discussions, AWS Official documentation, and other public AWS knowledge sources. Instead of manually searching through multiple pages, users receive contextually relevant answers directly, saving time and effort. For instance, when troubleshooting an IAM permissions error, developers can ask a question in natural language and immediately receive a comprehensive response drawing from diverse AWS resources. This feature is particularly valuable for developers, architects, and technical leaders who need quick access to accurate information for problem-solving and decision-making. By consolidating knowledge from various AWS sources, Intelligent Search helps users find solutions faster, accelerating development processes and improving productivity. Intelligent Search is now available on repost.aws. re:Post Private customers can also utilize this feature if artificial intelligence capabilities are enabled in their instance. For setup instructions, see the re:Post Private Administration Guide.
- Amazon GameLift Servers launches UDP ping beacons by aws@amazon.com on June 24, 2025 at 5:00 pm
We’re excited to announce the general availability of UDP ping beacons for Amazon GameLift Servers, a new feature that enables game developers to measure real-time network latency between game clients and game servers hosted on Amazon GameLift Servers. With UDP ping beacons, you can now accurately measure latency for UDP (User Datagram Protocol) packet payloads across all AWS Regions and Local Zones where Amazon GameLift Servers is available. Most multiplayer games use UDP as their primary packet transmission protocol because of its performance benefits for real-time gaming, and optimizing network latency is crucial for delivering the best possible player experience. UDP ping beacons provide a reliable way to measure actual UDP packet latency between players and game servers, helping you make better decisions about player-to-server matching and game session placement. The beacon endpoints are available in all AWS Regions and Local Zones supported by Amazon GameLift Servers, except AWS China, and can be retrieved programmatically through the ListLocations API. To learn more, visit the Amazon GameLift Servers Release Notes.
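A small boto3 sketch of retrieving locations through ListLocations follows. The exact field that carries the beacon endpoint in the response is an assumption here and should be checked against the API reference.

```python
import boto3

gamelift = boto3.client("gamelift")

# Minimal sketch: list AWS locations supported by Amazon GameLift Servers and print
# any UDP ping beacon endpoint returned for each. The "PingBeacon"/"UDPEndpoint"
# field names are assumptions; consult the ListLocations API reference.
response = gamelift.list_locations(Filters=["AWS"])
for location in response["Locations"]:
    beacon = location.get("PingBeacon", {})  # assumed field name
    print(location["LocationName"], beacon.get("UDPEndpoint", "no beacon listed"))
```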
- Customer Carbon Footprint Tool now includes location-based emissions by aws@amazon.com on June 24, 2025 at 5:00 pm
The Customer Carbon Footprint Tool (CCFT) and Data Exports now show emissions calculated using the location-based method (LBM), alongside the emissions calculated using the market-based method (MBM) that were already present. In addition, you can now see the estimated emissions from CloudFront usage in the service breakdown, alongside the EC2 and S3 estimates. LBM reflects the average emissions intensity of the grids on which energy consumption occurs. Electricity grids in different parts of the world use various sources of power, from carbon-intensive fuels like coal to renewable energy like solar. With LBM, you can view and validate trends in monthly carbon emissions that more directly align with your cloud usage, and get insights into the carbon intensity of the underlying electricity grids in which AWS data centers operate. This empowers you to make more informed decisions about optimizing your cloud usage and achieving your overall sustainability objectives. To learn more about the differences between LBM and MBM, see the GHG Protocol Scope 2 Guidance. Check out your LBM emissions today in the Customer Carbon Footprint Tool and Data Exports; the updates are explained in detail in the user guide.
- Announcing Amazon WorkSpaces Core Managed Instances to simplify VDI migrations by aws@amazon.com on June 23, 2025 at 9:50 pm
AWS today announced Amazon WorkSpaces Core Managed Instances, simplifying virtual desktop infrastructure (VDI) migrations with highly customizable instance configurations. Utilizing EC2 Managed Instances at its foundation, WorkSpaces Core can now provision resources in your AWS account, handling infrastructure lifecycle management for both persistent and non-persistent workloads. Managed Instances complement existing WorkSpaces Core pre-configured bundles by providing greater flexibility for organizations requiring specific compute, memory, or graphics configurations. You can now use existing discounts, Savings Plans, and other features like On-Demand Capacity Reservations (ODCRs), with the operational simplicity of WorkSpaces – all within the security and governance boundaries of your AWS account. WorkSpaces Core Managed Instances is ideal for organizations migrating from on-premises VDI environments or existing AWS customers seeking enhanced cost optimization without sacrificing control over their infrastructure configurations. You can use a broad selection of instance types, including accelerated graphics instances, while your Core partner solution handles desktop and application provisioning and session management through familiar administrative tools. Amazon WorkSpaces Core Managed Instances is available today in all AWS Regions where WorkSpaces is supported. Customers will incur standard compute costs along with an hourly fee for WorkSpaces Core. See the WorkSpaces Core pricing page for more information. To learn more about Amazon WorkSpaces Core Managed Instances, visit the product page. For technical documentation, getting started guides, and the shared responsibility model for partner VDI solutions integrating WorkSpaces Core bundles and managed instances, see the Amazon WorkSpaces Core Documentation.
- Amazon OpenSearch Serverless now supports Point in Time (PIT) and SQL search in the AWS GovCloud (US) Regions by aws@amazon.com on June 23, 2025 at 9:20 pm
Amazon OpenSearch Serverless has added support for Point in Time (PIT) search and SQL in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions, enabling you to run multiple queries against a dataset fixed at a specific moment. PIT search lets you maintain consistent search results even as your data continues to change, making it particularly useful for applications that require deep pagination or need to preserve a stable view of data across multiple queries. The OpenSearch SQL API allows you to leverage your existing SQL skills and tools to analyze data stored in your collections. PIT supports both forward and backward navigation through search results, ensuring consistency even during ongoing data ingestion. This feature is ideal for e-commerce applications, content management systems, and analytics platforms that require reliable and consistent search capabilities across large datasets. SQL and PPL API support addresses the need for familiar query syntax and improved integration with existing analytics tools, benefiting data analysts and developers who work with OpenSearch Serverless collections. Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
- Amazon VPC raises default Route Table capacity by aws@amazon.com on June 23, 2025 at 9:00 pm
Amazon VPC has increased the default value for routes per route table from 50 to 500 entries. Before this enhancement, customers had to request a limit increase to use more than 50 routes per VPC route table. Organizations often need additional routes to maintain precise control over their VPC traffic flows, for example to insert firewalls or network functions in the traffic path, or to direct traffic to peering connections, internet gateways, virtual private gateways, or transit gateways. This enhancement automatically increases route table capacity to 500 routes, reducing administrative overhead and enabling customers to scale their network architecture seamlessly as their requirements grow. The new default limit is automatically available for all route tables in all AWS commercial and AWS GovCloud (US) Regions. Customer accounts without route quota overrides automatically get 500 routes per VPC route table for their existing and new VPCs. Customer accounts with route quota overrides will not see any changes to their existing or new VPC setups. To learn more about this quota increase, please refer to our documentation.
- AWS AppSync is now available in 3 additional regions by aws@amazon.com on June 23, 2025 at 5:00 pm
AWS AppSync is now available in Asia Pacific (Malaysia), Asia Pacific (Thailand), and Canada West (Calgary). AWS AppSync GraphQL is a fully managed service that enables developers to create scalable APIs that simplify application development by allowing applications to securely access, manipulate, and combine data from one or multiple sources. AWS AppSync Events is a fully managed service for serverless WebSocket APIs with full connection management. To learn more about AWS AppSync’s regional availability, please visit the AWS Services by Region page. For more information about AWS AppSync, visit the AWS AppSync documentation.
- AWS Private CA now supports Internet Protocol Version 6 (IPv6) by aws@amazon.com on June 23, 2025 at 5:00 pm
AWS Private Certificate Authority (AWS Private CA) now supports Internet Protocol version 6 (IPv6) through new dual-stack endpoints. Customers can connect to the AWS Private CA service, download Certificate Revocation Lists (CRLs), and check revocation status via Online Certificate Status Protocol (OCSP) over the public internet using IPv6, IPv4, or dual-stack clients. AWS Private CA Connector for Active Directory (AD) and AWS Private CA Connector for Simple Certificate Enrollment Protocol (SCEP) also support IPv6. The existing AWS Private CA endpoints supporting IPv4 will remain available for backwards compatibility. AWS Private CA is a managed service that lets you create private certificate authorities (CAs) to issue digital certificates for authenticating users, servers, workloads, and devices within your organization, while securing the CA’s private keys using FIPS 140-3 Level 3 hardware security modules (HSMs). AWS Private CA offers connectors so you can use AWS Private CA with Kubernetes, Active Directory, and mobile device management (MDM) software. AWS Private CA support for IPv6 is available in all AWS Regions, including the AWS GovCloud (US) Regions and the China Regions. To learn more about best practices for configuring IPv6 in your environment, visit the whitepaper on IPv6 in AWS. To learn more about AWS Private CA IPv6 support, visit the AWS Private CA user guide.
- AWS End User Messaging now supports Service Quotas by aws@amazon.com on June 23, 2025 at 5:00 pm
Today, AWS End User Messaging announces support for Service Quotas. This integration provides customers with improved visibility and control over their SMS, voice, and WhatsApp service quotas, streamlining the quota management process and reducing the need for manual intervention. With Service Quotas, customers can now view and manage their End User Messaging quota limits directly through the AWS Service Quotas console. This integration enables automated approval of eligible limit increase requests, improving response times and reducing the number of support tickets. Customers will also benefit from visibility into quota usage for all on-boarded quotas via Amazon CloudWatch usage metrics, allowing for better resource planning and management. Service Quotas for End User Messaging is available in all commercial Regions and the AWS GovCloud (US) Regions. To learn more about Service Quotas and how to manage your End User Messaging quotas, visit the Service Quotas User Guide or the AWS End User Messaging product page.
- Amazon Time Sync Service now supports Nanosecond Hardware Packet Timestamps by aws@amazon.com on June 23, 2025 at 5:00 pm
The Amazon Time Sync Service now supports nanosecond-precision hardware packet timestamping on supported Amazon EC2 instances. Built on Amazon’s proven network infrastructure and the AWS Nitro System, hardware packet timestamping lets customers add a 64-bit nanosecond-precision timestamp to every inbound network packet. By timestamping at the hardware level, before the kernel, socket, or application layer, customers can more directly leverage the reference clock running in the AWS Nitro System and bypass any delays added by timestamping in software. Customers can then use these timestamps to determine the order of incoming packets to their EC2 instances and resolve fairness, measure one-way network latency, and further increase distributed system transaction speed with higher precision and accuracy than most on-premises solutions. Customers already using the Amazon Time Sync Service’s PTP Hardware Clocks (PHC) can install the latest ENA Linux driver and enable hardware packet timestamping, accessible through the standard Linux socket API, for all incoming network packets without needing any updates to their VPC configurations. Hardware packet timestamping is available starting today in all Regions and on all EC2 instance types where the Amazon Time Sync Service’s PHC is supported, and it can be used on virtualized or bare metal instances. There is no additional charge for using this feature. Configuration instructions, and more information on the Amazon Time Sync Service, are available in the EC2 User Guide.
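As an illustration of consuming these timestamps through the standard Linux socket API, the Python sketch below enables hardware receive timestamps on a UDP socket. The constant values and ancillary-data layout follow common Linux kernel headers (linux/net_tstamp.h, asm-generic/socket.h) and are assumptions to verify on your own instance and driver version.

```python
import socket
import struct

# Assumed constants from Linux headers; verify against your kernel before use.
SO_TIMESTAMPING = 37
SOF_TIMESTAMPING_RX_HARDWARE = 1 << 2
SOF_TIMESTAMPING_RAW_HARDWARE = 1 << 6

# Minimal sketch: request hardware RX timestamps on a UDP socket and read one packet.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, SO_TIMESTAMPING,
                SOF_TIMESTAMPING_RX_HARDWARE | SOF_TIMESTAMPING_RAW_HARDWARE)
sock.bind(("0.0.0.0", 5005))

data, ancdata, flags, addr = sock.recvmsg(2048, socket.CMSG_SPACE(48))
for cmsg_level, cmsg_type, cmsg_data in ancdata:
    if cmsg_level == socket.SOL_SOCKET and cmsg_type == SO_TIMESTAMPING:
        # SCM_TIMESTAMPING carries three struct timespec values on 64-bit systems;
        # index 2 (fields 4 and 5) is the raw hardware timestamp.
        fields = struct.unpack("6q", cmsg_data[:48])
        hw_sec, hw_nsec = fields[4], fields[5]
        print(f"{addr}: hardware rx timestamp {hw_sec}.{hw_nsec:09d}")
```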
- Amazon Neptune Analytics now Integrates with GraphStorm for Scalable Graph Machine Learning by aws@amazon.com on June 23, 2025 at 5:00 pm
Today, we’re announcing the integration of Amazon Neptune Analytics with GraphStorm, a scalable, open-source graph machine learning (ML) library built for enterprise-scale applications. This integration brings together Neptune’s high-performance graph analytics engine and GraphStorm’s flexible ML pipeline, making it easier for customers to build intelligent applications powered by graph-based insights. With this launch, customers can train graph neural networks (GNNs) using GraphStorm and bring their learned representations—such as node embeddings, classifications, and link predictions—into Neptune Analytics. Once loaded, these enriched graphs can be queried interactively and analyzed using built-in algorithms like community detection or similarity search, enabling a powerful feedback loop between ML and human analysis. This integration supports a wide range of use cases, from detecting fraud and recommending content, to improving supply chain intelligence, understanding biological networks, or enhancing customer segmentation. GraphStorm simplifies model training with a high-level command-line interface (CLI) and supports advanced use cases via its Python API. Neptune Analytics, optimized for low-latency analysis of billion-scale graphs, allows developers and analysts to explore multi-hop relationships, analyze graph patterns, and perform real-time investigations. By combining graph ML with fast, scalable analytics, Neptune and GraphStorm help teams move from raw relationships to real insights—whether they’re uncovering hidden patterns, ranking risks, or personalizing experiences. To learn more about using GraphStorm with Neptune Analytics, visit the blog post.
- AWS Step Functions TestState now available in the AWS GovCloud (US) Regions by aws@amazon.com on June 20, 2025 at 9:55 pm
AWS Step Functions now offers TestState in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. This expansion allows customers operating in these regions to test individual states of their workflows without creating or updating existing state machines, helping to enhance the development and troubleshooting process for their applications. AWS Step Functions is a visual workflow service that enables customers to build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services. TestState allows developers to validate a state’s input and output processing, test AWS service integrations, and verify HTTP task requests and responses. With TestState now available in the AWS GovCloud (US) Regions, customers can test and validate individual workflow steps. TestState supports various state types including Task, Pass, Wait, Choice, Succeed, and Fail, with tests running for up to five minutes. This feature is now generally available in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions, in addition to all commercial regions where AWS Step Functions is available. To learn more about TestState and how to incorporate it into your workflow development process, visit the AWS Step Functions documentation. You can start testing your workflow states using the Step Functions console, AWS Command Line Interface (CLI), or AWS SDKs.
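A minimal boto3 sketch of TestState follows, using a simple Pass state. The IAM role ARN is a placeholder, and a Task state would need a role with permissions for whatever service it calls.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Minimal sketch: test a single Pass state definition without creating or updating
# a state machine. The role ARN below is a placeholder.
response = sfn.test_state(
    definition=json.dumps({
        "Type": "Pass",
        "Parameters": {"greeting.$": "States.Format('Hello, {}!', $.name)"},
        "End": True,
    }),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsTestRole",
    input=json.dumps({"name": "GovCloud"}),
    inspectionLevel="DEBUG",
)

print(response["status"])  # SUCCEEDED, FAILED, RETRIABLE, or CAUGHT_ERROR
print(response["output"])
```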
- Amazon EC2 R7g instances are now available in AWS Asia Pacific (Melbourne) region by aws@amazon.com on June 20, 2025 at 9:40 pm
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R7g instances are available in the AWS Asia Pacific (Melbourne) Region. These instances are powered by AWS Graviton3 processors that provide up to 25% better compute performance compared to AWS Graviton2 processors, and they are built on the AWS Nitro System, a collection of AWS-designed innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage. Amazon EC2 Graviton3 instances also use up to 60% less energy than comparable EC2 instances for the same performance, reducing your cloud carbon footprint. For increased scalability, these instances are available in 9 different instance sizes, including bare metal, and offer up to 30 Gbps networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (EBS). To learn more, see Amazon EC2 R7g. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.
- Amazon RDS for Oracle now offers Reserved Instances for R7i and M7i instances by aws@amazon.com on June 20, 2025 at 9:15 pm
Amazon Relational Database Service (Amazon RDS) for Oracle now offers Reserved Instances for R7i and M7i instances with up to 46% cost savings compared to On-Demand prices. These instances are powered by custom 4th Generation Intel Xeon Scalable processors and provide larger sizes, up to 48xlarge with 192 vCPUs and 1,536 GiB of the latest DDR5 memory. Reserved Instance benefits apply to both Multi-AZ and Single-AZ configurations, which means customers can move freely between configurations within the same database instance class type, making them ideal for varying production workloads. Amazon RDS for Oracle Reserved Instances also provide size flexibility for the Oracle database engine under the Bring Your Own License (BYOL) licensing model. With size flexibility, the discounted Reserved Instance rate automatically applies to usage of any size in the same instance family. Customers can now purchase Reserved Instances for Amazon RDS for Oracle in all AWS Regions where R7i and M7i instances are available. For information on specific Oracle database editions and licensing options that support these database instance types, refer to the Amazon RDS user guide. To get started, purchase Reserved Instances through the AWS Management Console, AWS CLI, or AWS SDK. For detailed pricing information and purchase options, visit the Amazon RDS for Oracle pricing page.
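As a rough boto3 sketch of finding and purchasing such an offering: the instance class, BYOL product description string, duration, and offering type below are illustrative assumptions, so confirm the exact values returned for your Region before purchasing.

```python
import boto3

rds = boto3.client("rds")

# Minimal sketch: look up a one-year, no-upfront Reserved Instance offering for
# db.r7i under BYOL and purchase it. Values shown are illustrative assumptions.
offerings = rds.describe_reserved_db_instances_offerings(
    DBInstanceClass="db.r7i.4xlarge",
    ProductDescription="oracle-ee (byol)",  # assumed product description string
    Duration="31536000",                    # one year, in seconds
    OfferingType="No Upfront",
    MultiAZ=False,
)

offering = offerings["ReservedDBInstancesOfferings"][0]
rds.purchase_reserved_db_instances_offering(
    ReservedDBInstancesOfferingId=offering["ReservedDBInstancesOfferingId"],
    DBInstanceCount=1,
)
```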
- Amazon EC2 C7i-flex and C7i instances are now available in 2 additional regions by aws@amazon.com on June 20, 2025 at 8:45 pm
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i-flex and C7i instances are available in the Asia Pacific (Hong Kong) and Europe (Zurich) Regions. These instances are powered by custom 4th Generation Intel Xeon Scalable processors (code-named Sapphire Rapids), available only on AWS, and offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers. C7i-flex instances expand the EC2 Flex instances portfolio to provide the easiest way for you to get price-performance benefits for a majority of compute-intensive workloads, and they deliver up to 19% better price-performance compared to C6i. C7i-flex instances offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don’t fully utilize all compute resources. With C7i-flex instances, you can seamlessly run web and application servers, databases, caches, Apache Kafka, Elasticsearch, and more. C7i instances deliver up to 15% better price-performance versus C6i instances and are a great choice for all compute-intensive workloads, such as batch processing, distributed analytics, ad serving, and video encoding. C7i instances offer larger instance sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology, which facilitate efficient offload and acceleration of data operations and optimize performance for workloads. To learn more, visit Amazon EC2 C7i Instances. To get started, see the AWS Management Console.
- Amazon EC2 M7i-flex and M7i instances are now available in Asia Pacific (Hong Kong) Region by aws@amazon.com on June 20, 2025 at 8:00 pm
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7i-flex and M7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Asia Pacific (Hong Kong) Region. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers. M7i-flex instances are the easiest way for you to get price-performance benefits for a majority of general-purpose workloads. They deliver up to 19% better price-performance compared to M6i. M7i-flex instances offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don’t fully utilize all compute resources, such as web and application servers, virtual desktops, batch processing, and microservices. M7i instances deliver up to 15% better price-performance compared to M6i and are a great choice for workloads that need the largest instance sizes or continuous high CPU usage, such as gaming servers, CPU-based machine learning (ML), and video streaming. M7i instances offer larger instance sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology, which facilitate efficient offload and acceleration of data operations and optimize performance for workloads. To learn more, visit Amazon EC2 M7i Instances. To get started, see the AWS Management Console.
- Anthropic’s Claude 3.7 Sonnet is now available on Amazon Bedrock in London by aws@amazon.com on June 20, 2025 at 5:00 pm
Anthropic’s Claude 3.7 Sonnet hybrid reasoning model is now available in Europe (London). Claude 3.7 Sonnet offers advanced AI capabilities with both quick responses and extended, step-by-step thinking made visible to the user. This model has strong capabilities in coding and brings enhanced performance across various tasks, like instruction following, math, and physics. Claude 3.7 Sonnet introduces a unique approach to AI reasoning by integrating it seamlessly with other capabilities. Unlike traditional models that separate quick responses from those requiring deeper thought, Claude 3.7 Sonnet allows users to toggle between standard and extended thinking modes. In standard mode, it functions as an upgraded version of Claude 3.5 Sonnet. In extended thinking mode, it employs self-reflection to achieve improved results across a wide range of tasks. Amazon Bedrock customers can adjust how long the model thinks, offering a flexible trade-off between speed and answer quality. Additionally, users can control the reasoning budget by specifying a token limit, enabling more precise cost management. Claude 3.7 Sonnet is also available on Amazon Bedrock in the Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (Stockholm), US East (N. Virginia), US East (Ohio), and US West (Oregon) regions. To get started, visit the Amazon Bedrock console. Integrate it into your applications using the Amazon Bedrock API or SDK. For more information, see the AWS News Blog and Claude in Amazon Bedrock.
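A hedged boto3 sketch of calling the model with an extended thinking budget via the Converse API follows. The model ID and the thinking and budget_tokens field names follow Anthropic's published parameters but should be treated as assumptions and verified against the Bedrock model documentation for Europe (London); maxTokens must exceed the thinking budget.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="eu-west-2")

# Minimal sketch: invoke Claude 3.7 Sonnet with extended thinking enabled.
# Model ID and reasoning field names are assumptions; check the Bedrock docs,
# which may point you to an EU cross-Region inference profile instead.
response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-7-sonnet-20250219-v1:0",
    messages=[{"role": "user", "content": [{"text": "Why is the sky blue?"}]}],
    inferenceConfig={"maxTokens": 4096},
    additionalModelRequestFields={
        "thinking": {"type": "enabled", "budget_tokens": 2048}
    },
)

for block in response["output"]["message"]["content"]:
    if "reasoningContent" in block:
        print("thinking:", block["reasoningContent"]["reasoningText"]["text"])
    elif "text" in block:
        print("answer:", block["text"])
```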
- AWS License Manager now supports license type conversions for AWS Marketplace products by aws@amazon.com on June 20, 2025 at 5:00 pm
AWS License Manager now supports license type conversions for AWS Marketplace products, initially for Red Hat Enterprise Linux (RHEL) and RHEL for SAP products. Using AWS License Manager, Amazon EC2 customers can now switch Red Hat subscriptions between AWS-provided and Red Hat-provided options from AWS Marketplace without re-deploying instances. License conversion empowers customers to optimize their licensing strategy by seamlessly transitioning between different subscription models, whether purchased directly through EC2 or from the vendor in AWS Marketplace. With the license type conversion process, customers are no longer required to re-deploy instances when switching licenses, reducing downtime and IT operational overhead. By switching their license, customers can negotiate custom pricing directly with vendors and transact through private offers in AWS Marketplace. This new flexibility allows customers to consolidate their vendor spend in AWS Marketplace and maintain preferred vendor relationships for support. License type conversion for select AWS Marketplace products is available in all AWS Commercial and AWS GovCloud (US) Regions where AWS Marketplace is available. To get started, customers can configure Linux subscriptions discovery through the AWS License Manager console, AWS CLI, or License Manager Linux subscription API, and create a license type conversion. For more information and to begin using this capability, visit the AWS License Manager page or AWS Marketplace Buyer Guide.
- AWS Lambda announces native support for Avro and Protobuf formatted Kafka events by aws@amazon.com on June 20, 2025 at 5:00 pm
AWS Lambda now provides native support for Avro and Protobuf formatted Kafka events with its Apache Kafka event source mapping (ESM), and integrates with the AWS Glue Schema Registry (GSR), Confluent Cloud Schema Registry (CCSR), and self-managed Confluent Schema Registry (SCSR) for schema management. This enables you to validate your schema, filter events, and process events using open-source Kafka consumer interfaces. Additionally, customers can use Powertools for AWS Lambda to process their Kafka events without writing custom deserialization code, making it easier to build Kafka applications with AWS Lambda. Kafka customers use Avro and Protobuf formats for efficient data storage, fast serialization and deserialization, schema evolution support, and interoperability between different programming languages. They use a schema registry to manage, evolve, and validate schemas before data enters processing pipelines. Previously, customers were required to write custom code within their Lambda function in order to validate, deserialize, and filter events when using these data formats. With today’s launch, Lambda natively supports Avro and Protobuf as well as integration with GSR, CCSR, and SCSR, enabling customers to process their Kafka events in these data formats without writing custom code. Additionally, customers can optimize costs through event filtering to prevent unnecessary function invocations. This feature is generally available in all AWS Commercial Regions where AWS Lambda Kafka ESM is available, except Israel (Tel Aviv), Asia Pacific (Malaysia), and Canada West (Calgary). To get started, provide your schema registry configuration for your new or existing Kafka ESM through the ESM API, AWS Console, AWS CLI, AWS SDK, AWS CloudFormation, or AWS SAM. Optionally, you can set up filtering rules to discard irrelevant Avro or Protobuf formatted events before function invocations. To build your function with Kafka’s open-source ConsumerRecords interface, add Powertools for AWS Lambda as a dependency within your Lambda function. To learn more, read the Lambda ESM documentation and AWS Lambda pricing.
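As a rough sketch of attaching a schema registry to a new MSK event source mapping with boto3: the cluster ARN, topic, function name, and registry ARN below are placeholders, and the SchemaRegistryConfig field names are assumptions based on this announcement, to be confirmed in the event source mapping API reference.

```python
import boto3

lambda_client = boto3.client("lambda")

# Minimal sketch: create an MSK event source mapping whose Avro records are
# deserialized against an AWS Glue Schema Registry before the function is invoked.
# The SchemaRegistryConfig field names below are assumptions; verify them in the
# Lambda ESM API reference.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kafka:us-east-1:123456789012:cluster/orders/abcd1234",
    FunctionName="process-orders",
    Topics=["orders"],
    StartingPosition="LATEST",
    AmazonManagedKafkaEventSourceConfig={
        "SchemaRegistryConfig": {  # assumed structure
            "SchemaRegistryURI": "arn:aws:glue:us-east-1:123456789012:registry/orders-registry",
            "EventRecordFormat": "JSON",
            "SchemaValidationConfigs": [{"Attribute": "VALUE"}],
        }
    },
)
```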
- Amazon IVS Real-Time Streaming now supports E-RTMP multitrack video ingest by aws@amazon.com on June 20, 2025 at 5:00 pm
Starting today, you can use E-RTMP (Enhanced Real-Time Messaging Protocol) multitrack video to send multiple video qualities to your Amazon Interactive Video Service (Amazon IVS) stages. This feature enables adaptive bitrate streaming, allowing viewers to watch in the best quality for their network connection. Multitrack video is supported in OBS Studio and complements the existing simulcast capabilities in the IVS broadcast SDK. There is no additional cost for using multitrack video with Real-Time Streaming. Amazon IVS is a managed live streaming solution designed to make low-latency or real-time video available to viewers around the world. Visit the AWS region table for a full list of AWS Regions where the Amazon IVS console and APIs for control and creation of video streams are available. To learn more, please visit the Amazon IVS RTMP ingest documentation page.
- Amazon U7i instances now available in the AWS US West (Oregon) Region by aws@amazon.com on June 19, 2025 at 9:20 pm
Starting today, Amazon EC2 High Memory U7i instances with 8 TiB of memory (u7i-8tb.112xlarge) are available in the US West (Oregon) Region. U7i-8tb instances are part of the AWS 7th generation of High Memory instances and are powered by custom 4th Generation Intel Xeon Scalable processors (Sapphire Rapids). U7i-8tb instances offer 8 TiB of DDR5 memory, enabling customers to scale transaction processing throughput in fast-growing data environments. U7i-8tb instances offer 448 vCPUs, support up to 60 Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100 Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers running mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.
- Amazon EC2 C8g instances now available in additional regions by aws@amazon.com on June 19, 2025 at 7:20 pm
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8g instances are available in AWS Canada (Central) and AWS Asia Pacific (Malaysia) regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 C8g instances are built for compute-intensive workloads, such as high performance computing (HPC), batch processing, gaming, video encoding, scientific modeling, distributed analytics, CPU-based machine learning (ML) inference, and ad serving. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads. AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon C7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. C8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 C8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.
- Introducing the updated AWS Government Competency by aws@amazon.com on June 19, 2025 at 5:45 pm
Today, AWS announced major enhancements to its AWS Government Competency, introducing three categories to help public sector customers effectively identify and engage with validated AWS Partners. This update consolidates and streamlines AWS’s public sector partner offerings by merging the AWS Public Safety Competency and AWS Smart City Competency under the Government Competency. This update features three distinct categories: Citizen Services, Defense & National Security, and Public Safety. The new structure enables government customers to quickly find partners with specific expertise aligned to their mission requirements. Partners in the program must meet rigorous technical validation requirements and demonstrate proven success in their designated categories, ensuring customers can confidently select partners who understand their unique compliance, security, and procurement needs. AWS has also enhanced the program benefits for qualified partners, including new technical and go-to-market enablement resources, early access to new solutions development tools, and exclusive networking opportunities. Partners will receive specialized support tailored to their focus areas, helping them better serve government customers’ evolving needs. The AWS Government Competency Program, which has grown from 24 partners in 2016 to more than 169 partners globally, will maintain its high standards through a new re-validation process. This ensures that partners continue to meet the technical expertise, customer success, and compliance requirements that government customers expect. To learn more about the AWS Government Competency Program and find qualified partners, visit the AWS Government Competency webpage. Government organizations interested in working with AWS Government Competency Partners can start exploring partner solutions today.