Recent Announcements

The AWS Cloud platform expands daily. Learn about announcements, launches, news, innovation, and more from Amazon Web Services.
- Amazon CloudWatch now supports deletion protection for logs by aws@amazon.com on November 26, 2025 at 3:00 pm
Amazon CloudWatch now lets you configure deletion protection on your CloudWatch log groups, helping customers safeguard critical logging data from accidental or unintended deletion. This feature provides an additional layer of protection for log groups that maintain audit trails, compliance records, and operational logs that must be preserved. Once deletion protection is enabled, a log group cannot be deleted until the protection is explicitly turned off, helping safeguard critical operational, security, and compliance data. This is particularly valuable for preserving audit logs and production application logs needed for troubleshooting and analysis. Log group deletion protection is available in all AWS commercial Regions. You can enable deletion protection during log group creation or on existing log groups using the Amazon CloudWatch console, AWS Command Line Interface (AWS CLI), AWS Cloud Development Kit (AWS CDK), and AWS SDKs. For more information, visit the Amazon CloudWatch Logs User Guide.
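For illustration, a minimal sketch of enabling deletion protection at log-group creation with the AWS SDK for JavaScript v3. The CreateLogGroupCommand call and logGroupName field are real; the deletion-protection parameter name is an assumption not confirmed by this announcement, so verify it against the CloudWatch Logs API reference.

```typescript
// Hypothetical sketch: enable deletion protection when creating a log group.
// CreateLogGroupCommand and logGroupName are real; the deletion-protection
// field name below is an assumption, hence the `as any` cast.
import {
  CloudWatchLogsClient,
  CreateLogGroupCommand,
} from "@aws-sdk/client-cloudwatch-logs";

const logs = new CloudWatchLogsClient({ region: "us-east-1" });

await logs.send(
  new CreateLogGroupCommand({
    logGroupName: "/prod/payments/audit",
    deletionProtectionEnabled: true, // assumed parameter name -- check the API reference
  } as any)
);
```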
- Improved AWS Health event triage by aws@amazon.com on November 26, 2025 at 3:00 pm
AWS Health now includes two new properties in its event schema – actionability and persona – enabling customers to identify the most relevant events. These properties allow organizations to programmatically identify events requiring customer action and direct them to relevant teams. The enhanced event schema is accessible through both the AWS Health API and Health EventBridge communication channels, improving operational efficiency and team coordination. AWS customers receive various operational notifications and scheduled changes, including Planned Lifecycle Events. With the new actionability property, teams can quickly distinguish between events requiring action and those shared for awareness. The persona property streamlines event routing and visibility to specific teams like security and billing, ensuring critical information reaches appropriate stakeholders. These structured properties streamline integration with existing operational tools, allowing teams to effectively identify and remediate affected resources while maintaining appropriate visibility across the organization. This enhancement is available across all AWS Commercial and AWS GovCloud (US) Regions. To learn more about implementing these new properties, see the AWS Health User Guide and the API and EventBridge schema documentation.
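As a hedged sketch, one way to route only actionable Health events to a dedicated target using EventBridge. PutRuleCommand, the aws.health source, and the "AWS Health Event" detail type are the existing EventBridge/Health integration; the placement and allowed values of the new actionability field inside the event detail are assumptions based on this description.

```typescript
// Sketch: create an EventBridge rule that matches only Health events
// requiring customer action. The "actionability" field name and value
// are assumptions from the announcement; confirm in the Health event schema docs.
import { EventBridgeClient, PutRuleCommand } from "@aws-sdk/client-eventbridge";

const events = new EventBridgeClient({ region: "us-east-1" });

await events.send(
  new PutRuleCommand({
    Name: "health-actionable-events",
    EventPattern: JSON.stringify({
      source: ["aws.health"],
      "detail-type": ["AWS Health Event"],
      detail: {
        actionability: ["ACTIONABLE"], // assumed property name and value
      },
    }),
  })
);
```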
- Amazon S3 Block Public Access now supports organization-level enforcement by aws@amazon.com on November 26, 2025 at 3:00 pm
Amazon S3 Block Public Access (BPA) now supports organization-level control through AWS Organizations, allowing you to standardize and enforce S3 public access settings across all accounts in your AWS organization through a single policy configuration. S3 Block Public Access at the organization level uses a single configuration that controls all public access settings across accounts within your organization. When you attach the policy at the root or organizational unit (OU) level of your organization, it propagates to all sub-accounts within that scope, and new member accounts automatically inherit the policy. Alternatively, you can apply the policy to specific accounts for more granular control. To get started, navigate to the AWS Organizations console and use the "Block all public access" checkbox or the JSON editor. Additionally, you can use AWS CloudTrail to audit and track policy attachment and enforcement for member accounts. This feature is available in the AWS Organizations console as well as the AWS CLI/SDKs, in all AWS Regions where AWS Organizations and Amazon S3 are supported, with no additional charges. For more information, visit the AWS Organizations User Guide and the Amazon S3 Block Public Access documentation.
- Amazon Route 53 announces accelerated recovery for managing public DNS records by aws@amazon.com on November 26, 2025 at 2:00 pm
Amazon Route 53 is excited to release the accelerated recovery option for managing DNS records in public hosted zones. Accelerated recovery targets a 60-minute recovery time objective (RTO) for regaining the ability to make changes to your DNS records in Route 53 public hosted zones if AWS services in US East (N. Virginia) become temporarily unavailable. The Route 53 public DNS service API is used by customers today to make changes to DNS records in order to facilitate software deployments, run infrastructure operations, and onboard new users. Customers in banking, financial technology (FinTech), and software-as-a-service (SaaS) in particular need a predictable and short RTO to meet business continuity and disaster recovery objectives. In the past, if AWS services in US East (N. Virginia) became unavailable, customers would not be able to modify or recreate DNS records to point users and internal services to updated endpoints. Now, when you enable the accelerated recovery option on your Route 53 public hosted zone, you can make changes to Route 53 public DNS records (resource record sets) in that hosted zone soon after such an interruption, most often in less than one hour. Accelerated recovery for managing public DNS records is available globally, except in the AWS GovCloud (US) Regions and Amazon Web Services in China. There is no additional charge for using this feature. To learn more about the accelerated recovery option, visit our documentation.
- Amazon Quick Research now includes trusted third-party industry intelligence by aws@amazon.com on November 26, 2025 at 8:00 am
Amazon Quick Suite, the AI-powered workspace helping organizations get answers from their enterprise data and move swiftly from insights to action, enhances Quick Research with access to specialized third-party datasets. Quick Research transforms how business professionals tackle complex business problems by completing weeks of data discovery, analysis, and insight generation in minutes. Today, Quick Research launches its partner ecosystem with industry intelligence providers S&P Global, FactSet, and IDC, with more to come. Users with existing subscriptions can combine these authoritative datasets with all of their business data and real-time web search, accelerating their path to deeper insights and strategic decision-making. Additionally, all users have access to decades of US Patent and Trademark Office data along with millions of PubMed citations and abstracts in biomedical and life sciences literature. Business professionals from any industry can now access and analyze multiple data sources in one unified workspace, eliminating the need to switch between platforms. For example, a financial analyst can evaluate investment opportunities using FactSet’s financial data alongside real-time web search and internal market reports, while energy teams can optimize trading strategies using S&P Global’s commodity data combined with insights from their strategy teams. Similarly, sales and product teams can spot emerging trends faster by leveraging IDC’s industry intelligence with their customer data. By bringing critical data sources together in one place, organizations can move from insight to action with greater speed and confidence. Quick Research’s third-party data integration is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland). To learn more, read our User Guide.
- Amazon Lex now supports LLMs as the primary option for natural language understanding by aws@amazon.com on November 26, 2025 at 8:00 am
Amazon Lex now allows you to use Large Language Models (LLMs) as the primary option to understand customer intent across voice and chat interactions. With this capability, your voice and chat bots can better understand customer requests, handle complex utterances, maintain accuracy despite spelling errors, and extract key information from verbose inputs. When customer intent is unclear, bots can intelligently ask follow-up questions to fulfill requests accurately. For example, when a customer says “I need help with my flight,” the LLM automatically clarifies whether the customer wants to check their flight status, upgrade their flight, or change their flight. This feature is available in all AWS commercial regions where Amazon Connect and Lex operate. To learn more, visit the Amazon Lex documentation or explore the Amazon Connect website to learn how Amazon Connect and Amazon Lex deliver seamless end-customer self-service experiences.
- Introducing AWS Network Firewall Proxy in preview by aws@amazon.com on November 25, 2025 at 7:00 pm
AWS introduces Network Firewall Proxy in public preview. You can use it to exert centralized controls against data exfiltration and malware injection. You can set up your Network Firewall Proxy in explicit mode in just a few clicks and filter the traffic going out from your applications as well as the responses those applications receive. Network Firewall Proxy enables customers to efficiently manage and secure web and inter-network traffic. It protects your organization against attempts to spoof the domain name or the server name indication (SNI) and offers the flexibility to set fine-grained access controls. You can use Network Firewall Proxy to restrict access from your applications to trusted domains or IP addresses, or to block unintended responses from external servers. You can also turn on TLS inspection and set granular filtering controls on HTTP header attributes. Network Firewall Proxy offers comprehensive logs for monitoring your applications; you can enable them and send them to Amazon S3 and Amazon CloudWatch for detailed analysis and auditing. Try out AWS Network Firewall Proxy in your test environment today in the US East (Ohio) Region. Proxy is available for free during the public preview. For more information, check the AWS Network Firewall proxy documentation.
- Manage Amazon SageMaker HyperPod clusters with the new Amazon SageMaker AI MCP Server by aws@amazon.com on November 25, 2025 at 7:00 pm
The Amazon SageMaker AI MCP Server now includes tools that help you set up and manage HyperPod clusters. Amazon SageMaker HyperPod removes the undifferentiated heavy lifting involved in building generative AI models by quickly scaling model development tasks such as training, fine-tuning, or deployment across a cluster of AI accelerators. The SageMaker AI MCP Server now empowers AI coding assistants to provision and operate AI/ML clusters for model training and deployment. MCP servers in AWS provide a standard interface to enhance AI-assisted application development by equipping AI code assistants with real-time, contextual understanding of various AWS services. The SageMaker AI MCP server comes with tools that streamline end-to-end AI/ML cluster operations using the AI assistant of your choice—from initial setup through ongoing management. It enables AI agents to reliably set up HyperPod clusters orchestrated by Amazon EKS or Slurm, complete with prerequisites, powered by CloudFormation templates that optimize networking, storage, and compute resources. Clusters created via this MCP server are fully optimized for high-performance distributed training and inference workloads, leveraging best-practice architectures to maximize throughput and minimize latency at scale. Additionally, it provides comprehensive tools for cluster and node management—including scaling operations, applying software patches, and performing various maintenance tasks. When used in conjunction with the AWS API MCP Server, AWS Knowledge MCP Server, and Amazon EKS MCP Server, you gain complete coverage of all SageMaker HyperPod APIs and can effectively troubleshoot common issues, such as diagnosing why a cluster node became inaccessible. For cluster administrators, these tools streamline daily operations. For data scientists, they enable you to set up AI/ML clusters at scale without requiring infrastructure expertise, allowing you to focus on what matters most—training and deploying models. You can manage your AI/ML clusters through the SageMaker AI MCP server in all Regions where SageMaker HyperPod is available. To get started, visit the AWS MCP Servers documentation.
- Announcing AWS Glue zero-ETL for self-managed Database Sources by aws@amazon.com on November 25, 2025 at 4:00 pm
AWS Glue now supports zero-ETL for self-managed database sources. Using Glue zero-ETL, you can now set up an integration to replicate data from Oracle, SQL Server, MySQL, or PostgreSQL databases located on premises or on Amazon EC2 to Amazon Redshift, with a simple experience that eliminates configuration complexity. AWS Glue zero-ETL for self-managed database sources automatically creates an integration for ongoing replication of data from your on-premises or EC2 databases through a simple, no-code interface. This feature further reduces users' operational burden and saves the weeks of engineering effort needed to design, build, and test data pipelines that ingest data from self-managed databases into Redshift. AWS Glue zero-ETL for self-managed database sources is available in the following AWS Regions: US East (Ohio), Europe (Stockholm), Europe (Ireland), Europe (Frankfurt), Canada West (Calgary), US West (Oregon), and Asia Pacific (Seoul). To get started, sign in to the AWS Management Console. For more information, visit the AWS Glue page or review the AWS Glue zero-ETL documentation.
- AWS Lambda adds support for Node.js 24 by aws@amazon.com on November 25, 2025 at 3:00 pm
AWS Lambda now supports creating serverless applications using Node.js 24. Developers can use Node.js 24 as both a managed runtime and a container base image, and AWS will automatically apply updates to the managed runtime and base image as they become available. Node.js 24 is the latest long-term support release of Node.js and is expected to be supported for security and bug fixes until April 2028. With this release, Lambda has simplified the developer experience, focusing on the modern async/await programming pattern and no longer supports callback-based function handlers. You can use Node.js 24 with Lambda@Edge (in supported Regions), allowing you to customize low-latency content delivered through Amazon CloudFront. Powertools for AWS Lambda (TypeScript), a developer toolkit to implement serverless best practices and increase developer velocity, also supports Node.js 24. You can use the full range of AWS deployment tools, including the Lambda console, AWS CLI, AWS Serverless Application Model (AWS SAM), AWS CDK, and AWS CloudFormation to deploy and manage serverless applications written in Node.js 24. The Node.js 24 runtime is available in all Regions, including the AWS GovCloud (US) Regions and China Regions. For more information, including guidance on upgrading existing Lambda functions, see our blog post. For more information about AWS Lambda, visit our product page.
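Because callback-based handlers are no longer supported on the Node.js 24 runtime, handlers should follow the async/await pattern. A minimal TypeScript handler in that style; the event shape and the S3 call are illustrative.

```typescript
// Minimal async/await Lambda handler for the Node.js 24 runtime.
// The S3 read is illustrative; the async handler signature is the required pattern.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

export const handler = async (event: { bucket: string; key: string }) => {
  const object = await s3.send(
    new GetObjectCommand({ Bucket: event.bucket, Key: event.key })
  );
  const body = await object.Body?.transformToString();
  return { statusCode: 200, length: body?.length ?? 0 };
};
```

Returning the result of an awaited promise (or throwing) replaces the old callback(err, result) convention.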
- Amazon SageMaker AI now supports EAGLE speculative decoding by aws@amazon.com on November 25, 2025 at 8:00 am
Amazon SageMaker AI now supports EAGLE (Extrapolation Algorithm for Greater Language-model Efficiency) speculative decoding to improve large language model inference throughput by up to 2.5x. This capability enables models to predict and validate multiple tokens simultaneously rather than one at a time, improving response times for AI applications. As customers deploy AI applications to production, they need capabilities to serve models with low latency and high throughput to deliver responsive user experiences. Data scientists and ML engineers lack efficient methods to accelerate token generation without sacrificing output quality or requiring complex model re-architecture, making it hard to meet performance expectations under real-world traffic. Teams spend significant time optimizing infrastructure rather than improving their AI applications. With EAGLE speculative decoding, SageMaker AI enables customers to accelerate inference by allowing models to generate and verify multiple tokens in parallel rather than one at a time, maintaining the same output quality while dramatically increasing throughput. SageMaker AI automatically selects between EAGLE 2 and EAGLE 3 based on your model architecture, and provides built-in optimization jobs that use either curated datasets or your own application data to train specialized prediction heads. You can then deploy optimized models through your existing SageMaker AI inference workflow without infrastructure changes, enabling you to deliver faster AI applications with predictable performance. You can use EAGLE speculative decoding in the following AWS Regions: US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Europe (Ireland), Asia Pacific (Singapore), and Europe (Frankfurt). To learn more about EAGLE speculative decoding, visit the AWS News Blog and the SageMaker AI documentation.
- AWS Glue Data Quality now supports rule labeling for enhanced reporting by aws@amazon.com on November 25, 2025 at 8:00 am
Today, AWS announces the general availability of rule labels, a feature of AWS Glue Data Quality that enables you to apply custom key-value pair labels to your data quality rules for improved organization, filtering, and targeted reporting. This enhancement allows you to categorize data quality rules by business context, team ownership, compliance requirements, or any custom taxonomy that fits your data quality and governance needs. Rule labels provide an effective way to organize and analyze data quality results. You can query results by specific labels to identify failing rules within particular categories, count rule outcomes by team or domain, and create focused reports for different stakeholders. For example, you can tag all rules that pertain to the finance team with the label "team=finance" and generate a customized report showcasing quality metrics specific to that team, or label high-priority rules with "criticality=high" to prioritize remediation efforts. Labels are authored as part of DQDL (Data Quality Definition Language). You can query the labels as part of rule outcomes, row-level results, and API responses, making it easy to integrate with your existing monitoring and reporting workflows. AWS Glue Data Quality rule labeling is available in all commercial AWS Regions where AWS Glue Data Quality is available. See the AWS Region Table for more details. To learn more about rule labeling, see the AWS Glue Data Quality documentation.
- AWS Service Quotas now adds support for automatic quota management by aws@amazon.com on November 25, 2025 at 8:00 am
Today, we're excited to announce the general availability of automatic quota management, a new capability in AWS Service Quotas. Service Quotas already lets customers receive notifications when their quota usage approaches their allocated quotas and configure a preferred notification channel, such as email, SMS, or Slack, through the Service Quotas console or API. With this launch, the feature also adjusts AWS service quota values automatically and safely based on a customer's usage, reducing the operational burden of constantly monitoring quota usage and requesting quota increases across multiple AWS services, accounts, and Regions. Customers can now confidently scale their applications on AWS to meet growing customer demand without the risk of unexpected service interruptions due to quota exhaustion. This new capability is available at no additional cost in all AWS commercial Regions. To explore this feature and for details, please visit the Service Quotas console and the AWS Service Quotas documentation.
- Amazon SageMaker AI Inference now supports bidirectional streaming by aws@amazon.com on November 25, 2025 at 8:00 am
Amazon SageMaker AI Inference now supports bidirectional streaming for real-time speech-to-text transcription, enabling continuous speech processing instead of batch input. Models can now receive audio streams and return partial transcripts simultaneously as users speak, enabling you to build voice agents that process speech with minimal latency. As customers build AI voice agents, they need real-time speech transcription to minimize delays between user speech and agent responses. Data scientists and ML engineers lack managed infrastructure for bidirectional streaming, making it necessary to build custom WebSocket implementations and manage streaming protocols. Teams spend weeks developing and maintaining this infrastructure rather than focusing on model accuracy and agent capabilities. With bidirectional streaming on Amazon SageMaker AI Inference, you can deploy speech-to-text models and invoke your endpoint with the new Bidirectional Stream API. The client opens an HTTP/2 connection to the SageMaker AI runtime, and SageMaker AI automatically creates a WebSocket connection to your container, which can process streaming audio frames and return partial transcripts as they are produced. Any container implementing a WebSocket handler following the SageMaker AI contract works automatically, with real-time speech models such as Deepgram running without modifications. This eliminates months of infrastructure development, enabling you to deploy voice agents with continuous transcription while focusing your time on improving model performance. Bidirectional streaming is available in the following AWS Regions: Canada (Central), South America (São Paulo), Africa (Cape Town), Europe (Paris), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Israel (Tel Aviv), Europe (Zurich), Asia Pacific (Tokyo), AWS GovCloud (US-West), AWS GovCloud (US-East), Asia Pacific (Mumbai), Middle East (Bahrain), US West (Oregon), China (Ningxia), US West (N. California), Asia Pacific (Sydney), Europe (London), Asia Pacific (Seoul), US East (N. Virginia), Asia Pacific (Hong Kong), US East (Ohio), China (Beijing), Europe (Stockholm), Europe (Ireland), Middle East (UAE), Asia Pacific (Osaka), Asia Pacific (Melbourne), Europe (Spain), Europe (Frankfurt), Europe (Milan), and Asia Pacific (Singapore). To learn more, visit the AWS News Blog and the SageMaker AI documentation.
- Amazon OpenSearch Service introduces Agentic Search by aws@amazon.com on November 25, 2025 at 8:00 am
Amazon OpenSearch Service launches Agentic Search, transforming how users interact with their data through intelligent, agent-driven search. Agentic Search introduces an intelligent agent-driven system that understands user intent, orchestrates the right set of tools, generates OpenSearch DSL (domain-specific language) queries, and provides transparent summaries of its decision-making process through a simple ‘agentic’ query clause and natural language search terms. Agentic Search automates OpenSearch query planning and execution, eliminating the need for complex search syntax. Users can ask questions in natural language like “Find red cars under $30,000” or “Show last quarter’s sales trends.” The agent interprets intent, applies optimal search strategies, and delivers results while explaining its reasoning process. The feature provides two agent types: conversational agents, which handle complex interactions with the ability to store conversations in memory, and flow agents for efficient query processing. The built-in QueryPlanningTool uses large language models (LLMs) to create DSL queries, making search accessible regardless of technical expertise. Users can manage Agentic Search through APIs or OpenSearch Dashboards to configure and modify agents. Agentic Search’s advanced settings allow you to connect with external MCP servers and use custom search templates. Support for agentic search is available for OpenSearch Service version 3.3 and later in all AWS Commercial and AWS GovCloud (US) Regions where OpenSearch Service is available. See here for a full listing of our Regions. Build agents and run agentic searches using the new Agentic Search use case available in the AI Search Flows plugin. To learn more about Agentic Search, visit the OpenSearch technical documentation.
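A hedged sketch of what an agentic query could look like from the OpenSearch JavaScript client. The 'agentic' clause name comes from this announcement, while the inner fields (query_text, agent_id), the index, and the endpoint are assumptions; consult the OpenSearch agentic search documentation for the actual request shape.

```typescript
// Sketch: issue a natural-language "agentic" query against a product index.
// client.search is the standard OpenSearch client call; the body of the
// agentic clause is assumed from the announcement, not a documented schema.
import { Client } from "@opensearch-project/opensearch";

const client = new Client({ node: "https://my-domain.us-east-1.es.amazonaws.com" });

const response = await client.search({
  index: "products",
  body: {
    query: {
      agentic: {
        query_text: "Find red cars under $30,000", // natural-language search terms
        agent_id: "my-flow-agent-id",               // assumed field name
      },
    },
  },
});
console.log(response.body);
```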
- Amazon Quick Suite introduces scheduling for Quick Flows by aws@amazon.com on November 25, 2025 at 8:00 am
Amazon Quick Flows now supports scheduling, enabling you to automate repetitive workflows without manual intervention. You can now configure Quick Flows to run automatically at specified times or intervals, improving operational efficiency and ensuring critical tasks execute consistently. You can schedule Quick Flows to run daily, weekly, monthly, or on custom intervals. This capability is great for automating routine and administrative tasks such as generating recurring reports from dashboards, summarizing open items assigned to you in external services, or generating daily meeting briefings before you head out to work. You can schedule any flow you have access to—whether you created it or it was shared with you. To schedule a flow, click the scheduling icon and configure your desired date, time, and frequency. Scheduling in Quick Flows is available now in US East (N. Virginia), US West (Oregon), and Europe (Ireland). There are no additional charges for using scheduled execution beyond standard Quick Flows usage. To learn more about configuring scheduled Quick Flows, please visit our documentation.
- AWS Glue Data Quality now supports pre-processing queries by aws@amazon.com on November 25, 2025 at 8:00 am
Today, AWS announces the general availability of preprocessing queries for AWS Glue Data Quality, enabling you to transform your data before running data quality checks through AWS Glue Data Catalog APIs. This feature allows you to create derived columns, filter data based on specific conditions, perform calculations, and validate relationships between columns directly within your data quality evaluation process. Preprocessing queries provide enhanced flexibility for complex data quality scenarios that require data transformation before validation. You can create derived metrics such as total fees calculated from tax and shipping columns, limit the number of columns considered for data quality recommendations, or filter datasets to focus quality checks on specific data subsets. This capability eliminates the need for separate preprocessing steps, streamlining your data quality workflows. AWS Glue Data Quality preprocessing queries are available through the AWS Glue Data Catalog APIs start-data-quality-rule-recommendation-run and start-data-quality-ruleset-evaluation-run, in all commercial AWS Regions where AWS Glue Data Quality is available. To learn more about preprocessing queries, see the Glue Data Quality documentation.
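A hedged sketch of starting a ruleset evaluation run with a preprocessing query. The StartDataQualityRulesetEvaluationRun API and its DataSource, Role, and RulesetNames fields are real; the announcement does not name the preprocessing-query field, so the parameter below is hypothetical.

```typescript
// Sketch: start a Glue Data Quality evaluation run that first transforms the data.
// DataSource, Role, and RulesetNames are real fields; the preprocessing-query
// field name is an assumption, hence the `as any` cast.
import {
  GlueClient,
  StartDataQualityRulesetEvaluationRunCommand,
} from "@aws-sdk/client-glue";

const glue = new GlueClient({ region: "us-east-1" });

await glue.send(
  new StartDataQualityRulesetEvaluationRunCommand({
    DataSource: { GlueTable: { DatabaseName: "sales", TableName: "orders" } },
    Role: "arn:aws:iam::123456789012:role/GlueDataQualityRole",
    RulesetNames: ["orders_quality_ruleset"],
    // Hypothetical parameter; check the Glue Data Catalog API docs for the real name.
    PreprocessingQuery:
      "SELECT *, tax + shipping AS total_fees FROM primary WHERE region = 'EMEA'",
  } as any)
);
```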
- AWS IoT Core now supports IoT thing registry data retrieval from IoT rules by aws@amazon.com on November 24, 2025 at 6:00 pm
AWS IoT Core announces a new capability to dynamically retrieve IoT thing registry data using an IoT rule, enhancing your ability to filter, enrich, and route IoT messages. Using the new get_registry_data() inline rule function, you can access IoT thing registry data, such as device attributes, device type, and group membership, and leverage this information directly in IoT rules. For example, your rule can filter AWS IoT Core connectivity lifecycle events and then retrieve thing attributes (such as whether a device is a "test" or "production" device) to inform routing of lifecycle events to different endpoints for downstream processing. You can also use this feature to enrich or route IoT messages with registry data from other devices. For instance, you can add a sensor's threshold temperature from the IoT thing registry to the messages relayed by its gateway. To get started, connect your devices to AWS IoT Core and store your IoT device data in the IoT thing registry. You can then use IoT rules to retrieve your registry data, as in the sketch below. This capability is available in all AWS Regions where AWS IoT Core is available. For more information, refer to the developer guide and API documentation.
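A hedged sketch of a rule that enriches telemetry with registry data. CreateTopicRuleCommand, the republish action, and the clientid() SQL function are existing AWS IoT APIs; the arguments and attribute path used with get_registry_data() are assumptions based on the description above.

```typescript
// Sketch: create an IoT rule that reads a thing attribute from the registry
// and republishes enriched telemetry. The get_registry_data() usage shown
// (argument and attribute path) is an assumed example, not documented syntax.
import { IoTClient, CreateTopicRuleCommand } from "@aws-sdk/client-iot";

const iot = new IoTClient({ region: "us-east-1" });

await iot.send(
  new CreateTopicRuleCommand({
    ruleName: "enrich_with_registry_data",
    topicRulePayload: {
      sql: `SELECT *, get_registry_data(clientid()).attributes.threshold_temp AS threshold
            FROM 'sensors/+/telemetry'`,
      actions: [
        {
          republish: {
            topic: "enriched/telemetry",
            roleArn: "arn:aws:iam::123456789012:role/iot-republish", // illustrative role
          },
        },
      ],
      ruleDisabled: false,
    },
  })
);
```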
- Amazon EC2 announces interruptible Capacity Reservations by aws@amazon.com on November 24, 2025 at 6:00 pm
Today, Amazon EC2 announces interruptible Capacity Reservations to help you better utilize your reserved capacity and save costs. On-Demand Capacity Reservations (ODCRs) help you reserve compute capacity in a specific Availability Zone for any duration. When ODCRs are not in use, you can now make them temporarily available as interruptible ODCRs, enabling other workloads within your organization to utilize them while preserving your ability to reclaim the capacity for critical operations. By repurposing unused capacity as interruptible ODCRs, workloads suited to flexible, fault-tolerant operations, such as batch processing, data analysis, and machine learning training, can benefit from temporarily available capacity. Reservation owners can reclaim their capacity at any time, while consumers of interruptible ODCRs receive an interruption notice before termination to allow for graceful shutdown or checkpointing. Interruptible ODCRs are now available at no additional cost to all Capacity Reservations customers. Refer to the AWS Capabilities by Region website for the feature's regional availability. CloudFormation support is coming soon. For more details, please refer to the Capacity Reservations user guide.
- Amazon CloudFront announces support for mutual TLS authentication by aws@amazon.com on November 24, 2025 at 6:00 pm
Amazon CloudFront announces support for mutual TLS authentication (mTLS), a security protocol that requires both the server and client to authenticate each other using X.509 certificates, enabling customers to validate client identities at CloudFront's edge locations. Customers can now ensure only clients presenting trusted certificates can access their distributions, helping protect against unauthorized access and security threats. Previously, customers had to spend ongoing effort implementing and maintaining their own client access management solutions, leading to undifferentiated heavy lifting. Now with support for mutual TLS, customers can easily validate client identities at the AWS edge before connections are established with their application servers or APIs. Example use cases include B2B secure API integrations for enterprises and client authentication for IoT. For B2B API security, enterprises can authenticate API requests from trusted third parties and partners using mutual TLS. For IoT use cases, enterprises can validate that devices are authorized to receive proprietary content such as firmware updates. Customers can leverage their existing third-party certificate authorities or AWS Private Certificate Authority to sign the X.509 certificates. With mutual TLS, customers get the performance and scale benefits of CloudFront for workloads that require client authentication. Mutual TLS authentication is available to all CloudFront customers at no additional cost. Customers can configure mutual TLS with CloudFront using the AWS Management Console, CLI, SDK, CDK, and CloudFormation. For detailed implementation guidance and best practices, visit the CloudFront Mutual TLS (viewer) documentation.
- OpenSearch Service Enhances Log Analytics with New PPL Experience by aws@amazon.com on November 24, 2025 at 6:00 pm
Today, AWS announces enhanced log analytics capabilities in Amazon OpenSearch Service, making Piped Processing Language (PPL) and natural language the default experience in OpenSearch UI’s Observability workspace. This update combines proven pipeline syntax with simplified workflows to deliver an intuitive observability experience, helping customers analyze growing data volumes while controlling costs. The new experience includes 35+ new commands for deep analysis, faceted exploration, and natural language querying to help customers gain deeper insights across infrastructure, security, and business metrics. With this enhancement, customers can streamline their log analytics workflows using familiar pipeline syntax while leveraging advanced analytics capabilities. The solution includes enterprise-grade query capabilities, supporting advanced event correlation using natural language that help teams uncover meaningful patterns faster. Users can seamlessly move from query to visualization within a single interface, reducing mean time to detect and resolve issues. Admins can quickly stand up an end-to-end OpenTelemetry solution using OpenSearch’s Get Started workflow in the AWS console. The unified workflow includes out-of-the-box OpenSearch Ingestion pipelines for OpenTelemetry data, making it easier for teams to get started quickly. Amazon OpenSearch UI is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Paris), Europe (Stockholm), Europe (Milan), Europe (Spain), Europe (Zurich), South America (São Paulo), and Canada (Central). To learn more about the new OpenSearch log analytics experience, visit the OpenSearch Service observability documentation and start using these enhanced capabilities today in OpenSearch UI.
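For context, PPL queries can also be issued programmatically through the OpenSearch _plugins/_ppl endpoint, which predates this launch; a short sketch with illustrative index and field names follows.

```typescript
// Sketch: run a PPL query over application logs via the SQL/PPL plugin endpoint.
// The index and field names are illustrative; the piped syntax mirrors what the
// Observability workspace now uses as its default experience.
import { Client } from "@opensearch-project/opensearch";

const client = new Client({ node: "https://my-domain.us-east-1.es.amazonaws.com" });

const result = await client.transport.request({
  method: "POST",
  path: "/_plugins/_ppl",
  body: {
    query:
      "source = app_logs | where status >= 500 | stats count() as errors by service | sort - errors",
  },
});
console.log(result.body);
```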
- Claude Opus 4.5 now available in Amazon Bedrock by aws@amazon.com on November 24, 2025 at 3:00 pm
Customers can now use Claude Opus 4.5 in Amazon Bedrock, a fully managed service that offers a choice of high-performing foundation models from leading AI companies. Opus 4.5 is Anthropic's newest model, setting new standards across coding, agentic workflows, computer use, and office tasks while making Opus-level intelligence accessible at one-third the cost. Opus 4.5 excels at professional software engineering tasks, achieving state-of-the-art performance on SWE-bench. The model handles ambiguity, reasons about tradeoffs, and can figure out fixes for bugs that require reasoning across multiple systems. It can help transform multi-day team development projects into hours-long tasks with improved multilingual coding capabilities. This generation of Claude spans the full development lifecycle: Opus 4.5 for production code and lead agents, Sonnet 4.5 for rapid iteration and scaled user experiences, and Haiku 4.5 for sub-agents and free-tier products. Beyond coding, the model powers agents that produce documents, spreadsheets, and presentations with consistency, professional polish, and domain awareness, making it ideal for finance and other precision-critical verticals. As Anthropic's best vision model yet, it unlocks workflows that depend on complex visual interpretation and multi-step navigation. Through the Amazon Bedrock API, Opus 4.5 introduces two new capabilities: tool search and tool use examples. Together, these updates enable Claude to navigate large tool libraries and accurately execute complex tasks. A new effort parameter, available in beta, lets you control how much effort Claude allocates across thinking, tool calls, and responses to balance performance with latency and cost. Claude Opus 4.5 is now available in Amazon Bedrock via global cross-Region inference in multiple locations. For the full list of available Regions, refer to the documentation. To get started with the model in Amazon Bedrock, read the launch blog or visit the Amazon Bedrock console.
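A minimal sketch of invoking the model through the Bedrock Converse API. ConverseCommand is the standard Bedrock Runtime call; the modelId shown is a placeholder, so take the actual Claude Opus 4.5 model or inference-profile ID from the Amazon Bedrock console or documentation.

```typescript
// Sketch: call Claude Opus 4.5 via the Bedrock Converse API.
// The modelId below is a placeholder; look up the real Opus 4.5
// cross-Region inference profile ID in the Bedrock console.
import {
  BedrockRuntimeClient,
  ConverseCommand,
} from "@aws-sdk/client-bedrock-runtime";

const bedrock = new BedrockRuntimeClient({ region: "us-east-1" });

const response = await bedrock.send(
  new ConverseCommand({
    modelId: "global.anthropic.claude-opus-4-5-placeholder-v1:0", // placeholder ID
    messages: [
      {
        role: "user",
        content: [{ text: "Summarize the tradeoffs of event sourcing." }],
      },
    ],
    inferenceConfig: { maxTokens: 1024 },
  })
);
console.log(JSON.stringify(response.output, null, 2));
```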
- Amazon MSK Replicator is now available in five additional AWS Regions by aws@amazon.com on November 24, 2025 at 3:00 pm
You can now use Amazon MSK Replicator to replicate streaming data across Amazon Managed Streaming for Apache Kafka (Amazon MSK) clusters in five additional AWS Regions: Asia Pacific (Thailand), Mexico (Central), Asia Pacific (Taipei), Canada West (Calgary), and Europe (Spain). MSK Replicator is a feature of Amazon MSK that enables you to reliably replicate data across Amazon MSK clusters in the same or different AWS Regions in a few clicks. With MSK Replicator, you can easily build regionally resilient streaming applications for increased availability and business continuity. MSK Replicator provides automatic asynchronous replication across MSK clusters, eliminating the need to write custom code, manage infrastructure, or set up cross-Region networking. MSK Replicator automatically scales the underlying resources so that you can replicate data on demand without having to monitor or scale capacity. MSK Replicator also replicates the necessary Kafka metadata, including topic configurations, access control lists (ACLs), and consumer group offsets. If an unexpected event occurs in a Region, you can fail over to the other AWS Region and seamlessly resume processing. You can get started with MSK Replicator from the Amazon MSK console or the AWS CLI. To learn more, visit the MSK Replicator product page, pricing page, and documentation.
- AWS Lambda announces enhanced error handling capabilities for Kafka event processing by aws@amazon.com on November 24, 2025 at 3:00 pm
AWS Lambda launches enhanced error handling capabilities for Amazon Managed Streaming for Apache Kafka (MSK) and self-managed Apache Kafka (SMK) event sources. These capabilities allow customers to build custom retry configurations, optimize retries of failed messages, and send failed events to a Kafka topic as an on-failure destination, enabling customers to build resilient Kafka workloads with robust error handling strategies. Customers use Kafka event source mappings (ESMs) with their Lambda functions to build mission-critical Kafka applications. Kafka ESMs offer robust error handling of failed events by retrying events with exponential backoff and retaining failed events in on-failure destinations such as Amazon SQS, Amazon S3, and Amazon SNS. However, customers need customized error handling to meet stringent business and performance requirements. With this launch, developers can exercise precise control over failed event processing and leverage Kafka topics as an additional on-failure destination when using Provisioned mode for Kafka ESMs. Customers can now define specific retry limits and time boundaries for retries, automatically discarding failed records beyond these limits to a customer-specified destination. They can also set automatic retries of failed records in the batch and enhance their function code to report individual failed messages, optimizing the retry process. This feature is available in all AWS Commercial Regions where AWS Lambda's Provisioned mode for Kafka ESM is available. To enable these capabilities, provide configuration parameters for your Kafka ESM through the ESM API, AWS Management Console, or AWS CLI, as in the sketch below. To learn more, read the Lambda ESM documentation and AWS Lambda pricing.
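A hedged sketch of configuring retries and an on-failure destination on an existing Kafka event source mapping. UpdateEventSourceMappingCommand and the retry/destination fields shown exist in the Lambda API for stream event sources; whether these exact fields (rather than new Kafka-specific parameters) carry the new Kafka behavior is an assumption, so confirm field names in the Lambda ESM documentation.

```typescript
// Sketch: set retry limits and an on-failure destination for a Kafka ESM.
// The UUID is a placeholder; whether MaximumRetryAttempts / MaximumRecordAgeInSeconds
// apply to Kafka ESMs as shown is assumed from this announcement.
import {
  LambdaClient,
  UpdateEventSourceMappingCommand,
} from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({ region: "us-east-1" });

await lambda.send(
  new UpdateEventSourceMappingCommand({
    UUID: "<event-source-mapping-uuid>",
    MaximumRetryAttempts: 5,          // assumed to apply to Kafka ESMs
    MaximumRecordAgeInSeconds: 3600,  // assumed to apply to Kafka ESMs
    DestinationConfig: {
      OnFailure: {
        // SQS shown here; per this launch a Kafka topic can also serve as an
        // on-failure destination (ARN format not shown in the announcement).
        Destination: "arn:aws:sqs:us-east-1:123456789012:kafka-dlq",
      },
    },
  })
);
```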
- Amazon Quick Suite Embedded Chat is now available by aws@amazon.com on November 24, 2025 at 3:00 pm
Today, AWS announces the general availability of Amazon Quick Suite Embedded Chat, enabling you to embed Quick Suite's conversational AI, which combines structured data and unstructured knowledge in a single conversation, directly into your applications, eliminating the need to build conversational interfaces, orchestration logic, or data access layers from scratch. Quick Suite Embedded Chat solves a fundamental problem: users want answers where they work, not in another tool. Whether in a CRM, support console, or analytics portal, they need instant, contextual responses. Most conversational tools excel at either structured data or documents, analytics or knowledge bases, answering questions or performing actions—rarely all of the above. Quick Suite closes this gap. Now, users can reference a KPI, pull details from a file, check customer feedback, and trigger actions in one continuous conversation without leaving the embedded chat. Embedded Chat brings this unified experience into your applications with simple integration, either through 1-click embedding or through API-based iframes for registered users with your existing authentication. You can connect your Agentic Chat to your data through connectors that search SharePoint and websites, send Slack messages, or create Jira tasks, and you can customize the agent with your brand colors, communication style, and personalized greetings. Security always stays under your control: you choose what the agent accesses and explicitly scope all actions. Quick Suite Embedded Chat is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland), and availability will expand to additional AWS Regions over the coming months. There is no additional cost for Quick Suite Embedded Chat; existing Quick Suite pricing is available here. To learn more, see the Embedding Amazon Quick Suite launch blog. To get started with Amazon Quick Suite, visit the Amazon Quick Suite product page.
- Amazon Connect flow modules now support custom inputs, outputs, and version management by aws@amazon.com on November 24, 2025 at 3:00 pm
Amazon Connect flow modules now support custom inputs, outputs, and branches, along with version and alias management. With this launch, you can define flexible parameters for your reusable flow modules to match your specific business logic. For example, you can create an authentication module that accepts a phone number and PIN as inputs, then returns the customer name and authentication status as outputs with branches such as "authenticated" or "not authenticated". All parameters are customizable to meet your specific needs. Additionally, advanced versioning and aliasing capabilities allow you to manage module updates more seamlessly. You can create immutable version snapshots and map aliases to specific versions. When you update an alias to point to a new version, all flows using that module automatically reference the updated version. These new features make flow modules more powerful and reusable, allowing you to build and maintain flows more efficiently. To learn more about these features, see the Amazon Connect Administrator Guide. This feature is available in all AWS Regions that offer Amazon Connect. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website.
- Amazon U7i instances now available in Asia Pacific (Jakarta) Region by aws@amazon.com on November 24, 2025 at 3:00 pm
Starting today, Amazon EC2 High Memory U7i instances with 6TB of memory (u7i-6tb.112xlarge) are available in the Asia Pacific (Jakarta) Region. U7i-6tb instances are part of the AWS 7th generation of High Memory instances and are powered by custom fourth-generation Intel Xeon Scalable processors (Sapphire Rapids). U7i-6tb instances offer 6TB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment. U7i-6tb instances offer 448 vCPUs, support up to 100 Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100 Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers running mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.
- Amazon Redshift now supports federated permissions across multi-warehouse architectures by aws@amazon.com on November 24, 2025 at 3:00 pm
Amazon Redshift now supports federated permissions, which simplify permissions management across multiple Redshift data warehouses. Customers are adopting multi-warehouse architectures to scale and isolate workloads and are looking for simplified, consistent permissions management across warehouses. With Redshift federated permissions, you define data permissions once from any Redshift warehouse and automatically enforce them across all warehouses in the account. Amazon Redshift warehouses with federated permissions are auto-mounted in every Redshift warehouse, and you can use existing workforce identities with AWS IAM Identity Center or existing IAM roles to query data across warehouses. Regardless of which warehouse is used for querying, row-level, column-level, and masking controls always apply automatically, delivering fine-grained access compliance. You can get started by registering a Redshift Serverless namespace or Redshift provisioned cluster with the AWS Glue Data Catalog and querying across warehouses using Redshift Query Editor V2 or any supported SQL client. Multiple warehouses give you horizontal scalability: you can add new warehouses without increasing governance complexity, because new warehouses automatically enforce permission policies and analysts immediately see all databases from registered warehouses. Amazon Redshift federated permissions is available at no additional cost in supported AWS Regions. To learn more, visit the Amazon Redshift documentation.
- AWS Elemental MediaTailor now supports HLS Interstitials for live streams by aws@amazon.com on November 24, 2025 at 3:00 pm
AWS Elemental MediaTailor now supports HTTP Live Streaming (HLS) Interstitials for live streams, enabling broadcasters and streaming service providers to deliver seamless, personalized ad experiences across a wide range of modern video players. This capability allows customers to insert interstitial advertisements and promotions directly into live streams using the HLS Interstitials specification (RFC 8216), which is natively supported by popular players including HLS.js, Shaka Player, Bitmovin Player, and Apple devices running iOS 16.4, iPadOS 16.4, tvOS 16.4, and later. With HLS Interstitials, MediaTailor automatically generates the necessary metadata tags (Interstitial class EXT-X-DATERANGE with X-ASSET-LIST attributes) that signal to client players when and how to play interstitial content. This approach eliminates the need for custom player-side stitching logic, reducing development complexity and ensuring consistent playback behavior. The feature integrates with MediaTailor’s existing server-side ad insertion (SSAI) capabilities, delivering frame-accurate transitions with no buffering between content and interstitials. Server-side beaconing continues to work with HLS Interstitials, ensuring ad tracking and measurement workflows remain intact. HLS Interstitials for live streams is particularly valuable for sports broadcasts, live news, and event streaming where precise ad timing and minimal latency are critical. The feature supports pre-roll and mid-roll insertion, giving customers flexibility in how they monetize their live content. This launch complements MediaTailor’s existing HLS Interstitials support for VOD, rounding out support across Linear, Live, FAST, and VOD workflows. MediaTailor makes it easy to test and deploy—customers can rapidly enable or disable HLS Interstitials with a simple query parameter on the multi-variant manifest request, providing per playback session control without changing the underlying MediaTailor configuration. AWS Elemental MediaTailor HLS Interstitials for live streams is available today in all AWS Regions where MediaTailor operates. You pay only for the features you use, with no upfront commitments. To learn more and get started, visit the AWS Elemental MediaTailor documentation and the HLS Interstitials implementation guide.
- AWS Glue announces catalog federation for remote Apache Iceberg catalogs by aws@amazon.com on November 24, 2025 at 3:00 pm
AWS Glue announces the general availability of catalog federation for remote Iceberg catalogs. This capability provides direct and secure access to Iceberg tables stored in Amazon S3 and cataloged in remote catalogs using AWS analytics engines. With catalog federation, you can federate to remote Iceberg catalogs and query remote Iceberg tables using your preferred AWS analytics engines, without moving or copying tables. Metadata is synchronized in real time between the AWS Glue Data Catalog and remote catalogs when data teams query remote tables, which means that query results are always completely up to date. You can now choose the best price-performance for your workloads when analyzing remote Iceberg tables using your preferred AWS analytics engines, while maintaining consistent security controls when discovering or querying data. Catalog federation is supported by a wide variety of analytics engines, including Amazon Redshift, Amazon EMR, Amazon Athena, AWS Glue, third-party engines like Apache Spark, and Amazon SageMaker with serverless notebooks. Catalog federation uses AWS Lake Formation for access controls, allowing you to use fine-grained access controls, cross-account sharing, and trusted identity propagation when sharing remote catalog tables with other data consumers. Catalog federation integrates with catalog implementations that support the Iceberg REST specification. Catalog federation is available in the Lake Formation console and through the AWS Glue and Lake Formation SDKs and APIs. This feature is generally available in all AWS commercial Regions where AWS Glue and Lake Formation are available. With just a few clicks in the console, you can federate to a remote catalog, discover its databases and tables, grant permissions to access table data, and query remote Iceberg tables using AWS analytics engines. To learn more, visit the documentation.


