
A code deployment for the Azure Container Apps service was started on 3 July 2023 via the normal Safe Deployment Practices (SDP), first rolling out to Azure canary and staging regions. This version contained a misconfiguration that blocked the service from starting normally. Due to the misconfiguration, the service bootstrap code threw an exception and was automatically restarted. This caused the bootstrap service to be stuck in a loop where it was being restarted every 5 to 10 seconds.
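As an illustration of this failure mode (not the Azure Container Apps code itself), the minimal Python sketch below shows how a missing configuration value can turn into a restart loop: the bootstrap raises on start-up, and a supervising loop simply waits a few seconds and tries again. The names DEPLOYED_CONFIG, bootstrap, and supervisor are hypothetical.

```python
import time

# Hypothetical deployed configuration; the required endpoint setting is
# missing, standing in for the misconfiguration described above.
DEPLOYED_CONFIG = {"region": "canary"}


def bootstrap(config: dict) -> None:
    """Start-up path: raises if a required setting is absent."""
    endpoint = config["control_plane_endpoint"]  # KeyError -> start-up exception
    print(f"Service started; control plane endpoint: {endpoint}")


def supervisor(restart_delay_seconds: float = 5.0, max_restarts: int = 5) -> None:
    """Restart-on-failure loop: every failed bootstrap triggers another attempt.

    With a configuration error, this becomes a restart loop every few seconds.
    The max_restarts cap is only here to keep the demo finite.
    """
    for attempt in range(1, max_restarts + 1):
        try:
            bootstrap(DEPLOYED_CONFIG)
            return  # Healthy start; stop restarting.
        except Exception as exc:
            print(f"Attempt {attempt}: bootstrap failed ({exc!r}); "
                  f"restarting in {restart_delay_seconds}s")
            time.sleep(restart_delay_seconds)


if __name__ == "__main__":
    supervisor()
```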

Unfortunately, this issue impacted one or more of your Azure resources. We’re providing you with this Post Incident Review (PIR) to summarize what went wrong, how we responded, and the steps Microsoft is taking to learn and improve.


Between 23:15 UTC on 6 July 2023 and 09:00 UTC on 7 July 2023, a subset of data for Azure Monitor Log Analytics and Microsoft Sentinel failed to ingest. Customers in all regions experienced impact. These failures were caused by a deployment of a service within Microsoft with a bug that caused a much higher than expected call volume, which overwhelmed the telemetry management control plane. Additionally, platform logs gathered via Diagnostic Settings failed to route some data to customer destinations such as Log Analytics, Storage, Event Hubs, and Marketplace. In cases where Event or Security Event tables were impacted, incident investigations of a correlated incident may have shown partial or empty results. Security Operations Center (SOC) functionality in Sentinel, including hunting queries, workbooks with custom queries, and notebooks that queried impacted tables with a date range inclusive of the log data that we failed to ingest, might have returned partial or empty results.
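One way customers could assess whether their own workspace saw an ingestion gap is to chart ingestion volume per table across the impact window. The sketch below is an illustrative example, not guidance from the PIR: it uses the azure-monitor-query Python SDK against the Log Analytics Usage table, and the workspace ID placeholder and the specific query are assumptions made for demonstration.

```python
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

# Placeholder: replace with your Log Analytics workspace ID.
WORKSPACE_ID = "<your-workspace-id>"

# Hourly ingestion volume per table across the stated impact window
# (23:15 UTC on 6 July 2023 to 09:00 UTC on 7 July 2023); sustained dips
# may indicate data that failed to ingest.
QUERY = """
Usage
| where TimeGenerated between (datetime(2023-07-06T23:15:00Z) .. datetime(2023-07-07T09:00:00Z))
| summarize IngestedMB = sum(Quantity) by DataType, bin(TimeGenerated, 1h)
| order by DataType asc, TimeGenerated asc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id=WORKSPACE_ID,
    query=QUERY,
    timespan=(
        datetime(2023, 7, 6, 23, 0, tzinfo=timezone.utc),
        datetime(2023, 7, 7, 10, 0, tzinfo=timezone.utc),
    ),
)

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(dict(zip(table.columns, row)))
else:
    # A partial result may still return some rows alongside an error.
    print(response.partial_error)
```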
