Cloud Service Status

Last updated: Mar 22, 2026 10:05:45 UTC

AWS Incidents

2

2 medium

GCP Incidents

0

DO Incidents

0

Total Active

2

Recent Incidents

74 incidents
Provider Service Title Status Severity Started
aws
Service impact: Increased connectivity issues and API Error Rates Active medium Mar 1, 21:56 UTC

Description

We are investigating increased API error rates in a single Availability Zone (mes1-az2) in the ME-SOUTH-1 Region.

Details

External ID aws-13f2ea05d175ef82
Last Updated Mar 1, 21:56 UTC
aws
Service impact: Increased Error Rates Active medium Mar 1, 04:51 UTC

Description

We are investigating issues with AWS services in the ME-CENTRAL-1 Region.

Details

External ID aws-08dd642baf403df6
Last Updated Mar 1, 04:51 UTC
DO
App platform seeing delays in deployments across FRA1 region Resolved low Mar 20, 10:32 UTC

Description

Mar 20, 12:01 UTC - Resolved: The issue causing delays in App Platform deployments has been confirmed resolved. Between approximately 00:08 UTC and 11:46 UTC, users may have noticed delays while creating or updating apps, or may have encountered failed deployments. For failed deployments, please trigger a redeploy, which should resolve the issue. We confirmed that the service is functioning as expected. If you continue to experience any issues, please raise a support ticket for further investigation.

Mar 20, 11:14 UTC - Monitoring: Our Engineering team has deployed a fix for the issue impacting new App Platform deployments using Dedicated Egress IP in the FRA1 region. We are actively monitoring the situation to ensure stability and will provide an update once the incident is fully resolved.

Mar 20, 10:32 UTC - Investigating: Our engineers are investigating an issue impacting new App Platform deployments using Dedicated Egress IP in the FRA1 region. During this time, some users may experience delays when creating new App Platform apps or deploying existing apps. Existing apps are not affected and should continue to function normally.

Details

External ID do-86caf44e1255682b
Last Updated Mar 20, 12:01 UTC
End Time Mar 20, 12:01 UTC

Affected Regions

fra1 Frankfurt 1

Status Timeline

Resolved Mar 20, 12:01 UTC

Status changed from Monitoring to Resolved

Monitoring Mar 20, 11:14 UTC

Status changed from Investigating to Monitoring

DO
Gradient AI Platform agents and services Accessibility Resolved low Mar 20, 08:50 UTC

Description

Mar 20, 14:14 UTC - Resolved: Our Engineering team has implemented a fix, and the issues impacting the Gradient AI Platform have been resolved. All agents are back up and healthy, and service has been fully restored.

Mar 20, 14:04 UTC - Monitoring: A fix has been implemented and services have been restored. We are continuing to monitor the system to ensure stability and will provide further updates if needed.

Mar 20, 11:05 UTC - Update: We've identified the issue and are actively working to restore the affected services. We're making steady progress and closely monitoring the situation.

Mar 20, 09:51 UTC - Identified: We've identified the issue and are currently working on restoring the services. We'll continue to provide updates as progress is made.

Mar 20, 08:50 UTC - Investigating: We are investigating an issue affecting the accessibility of agents and services on the Gradient AI Platform. Users may experience failures or unresponsiveness when attempting to use these features. Our engineering team is actively working to identify the root cause and restore full functionality.

Details

External ID do-2a3eff41e8411d3d
Last Updated Mar 20, 14:14 UTC
End Time Mar 20, 14:14 UTC

Status Timeline

Resolved Mar 20, 14:14 UTC

Status changed from Monitoring to Resolved

Monitoring Mar 20, 14:04 UTC

Status changed from Identified to Monitoring

Identified Mar 20, 09:51 UTC

Status changed from Investigating to Identified

DO
Gradient AI model availability Resolved low Mar 17, 15:00 UTC

Description

Mar 17, 19:49 UTC - Resolved: Our Engineering team has implemented a fix, and the issues impacting model availability and performance have been resolved. All models, including those previously degraded, are back up and healthy. Service has been fully restored.

Mar 17, 15:00 UTC - Investigating: Our Engineering team is investigating reports of Gradient AI model availability issues impacting multiple models, including Llama3.1-8b and Qwen3-32b, as well as embedding models such as GTE Large (v1.5), All-MiniLM-L6-v2, Multi-QA-mpnet-base-dot-v1, and Qwen3 Embedding 0.6B. Additionally, Guardrails are unavailable, affecting associated agents, and users attempting to run inference on the Llama3.3-70b model will see degraded performance.

Details

External ID do-8b98dbfd62b0a45e
Last Updated Mar 17, 19:49 UTC
End Time Mar 17, 19:49 UTC

Status Timeline

Resolved Mar 17, 19:49 UTC

Status changed from Investigating to Resolved

DO
Cloud Control Panel and API Resolved low Mar 16, 17:39 UTC

Description

Mar 16, 17:39 UTC - Resolved: From 16:14 to 16:38 UTC, our Engineering team observed an issue impacting the Cloud control panel and API. During this time, users may have experienced errors when trying to access the Cloud control panel or use the API. The issue was fully resolved as of 16:38 UTC. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel.

Details

External ID do-8ad2579ca22fc19d
Last Updated Mar 16, 17:39 UTC
End Time Mar 16, 17:39 UTC
DO
Degraded performance with BYOK Anthropic models Resolved low Mar 15, 02:55 UTC

Description

Mar 15, 03:31 UTC - Resolved: The issue is now resolved; all Anthropic BYOK models in Gradient AI should work normally. Contact support if issues persist.

Mar 15, 02:55 UTC - Investigating: Our Engineering team is investigating an issue affecting all Gradient AI agents and serverless inference that use BYOK Anthropic models. Impacted users may experience degraded performance. We will provide an update as soon as possible.

Details

External ID do-2585f02e5cb40a8e
Last Updated Mar 15, 03:31 UTC
End Time Mar 15, 03:31 UTC

Status Timeline

Resolved Mar 15, 03:31 UTC

Status changed from Investigating to Resolved

DO
Delay in App Platform Deployments Resolved low Mar 13, 21:30 UTC

Description

Mar 14, 01:47 UTC - Resolved: As of 23:00 UTC, our Engineering team has confirmed that the issue causing delays in App Platform deployments has been fully resolved. The fix implemented earlier has been successful, and we are no longer seeing any delays or errors with deployments. Users should now be able to deploy their apps without issue. If you continue to experience problems, please raise a support ticket for further investigation.

Mar 13, 23:39 UTC - Monitoring: After working with our upstream provider, our Engineering team has implemented a fix for the issue that was causing delays in the deployment of new apps, and is currently monitoring the situation. Users should no longer experience issues creating new apps, and all stalled creation events should provision completely.

Mar 13, 22:01 UTC - Identified: Our Engineering team is seeing delays once again with new App Platform deployments. During this time, users may still experience delays deploying new apps. We're working with our upstream provider to resolve the issue.

Mar 13, 21:30 UTC - Monitoring: Starting at 20:40 UTC, users may have seen delays deploying new apps on App Platform. At this time, our Engineering team is seeing signs of recovery, and users should be able to deploy new apps without issue. We're monitoring the situation to ensure full recovery.

Details

External ID do-b030a551f1b1c907
Last Updated Mar 14, 01:47 UTC
End Time Mar 14, 01:47 UTC

Status Timeline

Resolved Mar 14, 01:47 UTC

Status changed from Monitoring to Resolved

DO
Newly Created Managed Kubernetes Nodes Resolved low Mar 13, 11:26 UTC

Description

Mar 13, 16:35 UTC - Resolved: Our Engineering team has confirmed resolution of the issue causing DNS timeouts for newly provisioned Managed Kubernetes nodes. All cluster services should now be functioning normally. If you continue to experience problems, please open a ticket with our support team.

Mar 13, 13:55 UTC - Monitoring: Our Engineering team has implemented a fix for the issue causing DNS timeouts for newly provisioned Managed Kubernetes nodes. Further investigation confirmed that this issue primarily affected customers running a NAT Gateway within their VPC and a VPC-native cluster. We are actively monitoring the situation to ensure overall stability.

Mar 13, 12:32 UTC - Identified: Our Engineering team is investigating an issue impacting newly provisioned Managed Kubernetes nodes. Only customers who run a NAT Gateway in their VPC and a VPC-native cluster are affected and may experience DNS timeouts.

Mar 13, 11:26 UTC - Investigating: Our Engineering team is investigating an issue impacting newly provisioned Managed Kubernetes nodes. During this time, new nodes may experience DNS timeouts, which could temporarily affect cluster services.

Details

External ID do-d1a7461e4d517224
Last Updated Mar 13, 16:35 UTC
End Time Mar 13, 16:35 UTC

Status Timeline

Resolved Mar 13, 16:35 UTC

Status changed from Monitoring to Resolved

Monitoring Mar 13, 13:55 UTC

Status changed from Identified to Monitoring

Identified Mar 13, 12:32 UTC

Status changed from Investigating to Identified

DO
Ubuntu/Debian Package Mirror Failure Resolved low Mar 9, 19:23 UTC

Description

Mar 9, 19:23 UTC - Resolved: From 17:50 to 19:06 UTC, our Engineering team observed an issue with mirrors.digitalocean.com. During this time, users may have experienced errors when trying to update packages on Debian and Ubuntu images. The issue was fully resolved as of 19:06 UTC. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel.

Details

External ID do-bc55555d6d83c7e5
Last Updated Mar 9, 19:23 UTC
End Time Mar 9, 19:23 UTC
aws
Service degradation: Increased Error Rates Resolved medium Mar 7, 11:53 UTC

Description

We are investigating increased error rates in the EU-CENTRAL-2 Region.

Details

External ID aws-74e4b17ca6cd9c5d
Last Updated Mar 7, 11:53 UTC
End Time Mar 8, 21:26 UTC

Status Timeline

Resolved Mar 8, 21:26 UTC

Auto-resolved: no longer in provider status feed

DO
HTTP 522 Error on App Platform Resolved low Mar 6, 21:22 UTC

Description

Mar 6, 21:22 UTC - Resolved: Our Engineering team identified an issue affecting App Platform. During the incident, users may have experienced HTTP 522 (Connection Timed Out) errors when accessing their apps. The issue is now resolved. If you continue to experience related errors, please contact our Support team by opening a ticket at https://www.digitalocean.com/support/contact/.

Details

External ID do-82301fecee6daf16
Last Updated Mar 6, 21:22 UTC
End Time Mar 6, 21:22 UTC
DO
App Platform Deployments Resolved low Mar 5, 23:28 UTC

Description

Mar 6, 01:12 UTC - Resolved: As of 00:22 UTC, our Engineering team has confirmed that the issue causing delays in App Platform deployments has been fully resolved. The fix implemented earlier has been successful, and we are no longer seeing any delays or errors with deployments. Users should now be able to deploy their apps without issue. If you continue to experience problems, please raise a support ticket for further investigation.

Mar 6, 00:32 UTC - Monitoring: Our Engineering team has implemented a fix for the issue causing delays in App Platform deployments. We are actively monitoring the situation to ensure overall stability and will provide a further update once the issue is confirmed resolved.

Mar 5, 23:28 UTC - Investigating: Our Engineering team is investigating an issue impacting App Platform deployments. During this time, users may experience delays or failures when deploying new and existing App Platform apps.

Details

External ID do-a8ad6212340fd944
Last Updated Mar 6, 01:12 UTC
End Time Mar 6, 01:12 UTC

Status Timeline

Resolved Mar 6, 01:12 UTC

Status changed from Monitoring to Resolved

Monitoring Mar 6, 00:32 UTC

Status changed from Investigating to Monitoring

DO
Internal Load Balancers Connectivity Resolved low Mar 5, 00:23 UTC

Description

Mar 5, 01:52 UTC - Resolved: From 19:57 UTC to 01:03 UTC, customers may have experienced connectivity issues between Internal Load Balancers and their associated target droplets, which could have resulted in service disruption or traffic routing failures. Our Engineering team has confirmed full resolution, and Internal Load Balancers should now be functioning normally. If you continue to experience problems, please open a ticket with our Support team.

Mar 5, 01:19 UTC - Monitoring: Our Engineering team has implemented mitigation measures to address the connectivity issues affecting Internal Load Balancers and their associated target droplets. We are actively monitoring the situation to ensure stability and prevent any recurrence.

Mar 5, 00:23 UTC - Investigating: Our Engineering team is investigating an issue affecting Internal Load Balancers. Customers may experience connectivity loss between Internal Load Balancers and their associated target droplets.

Details

External ID do-0219e5a9a6f98b04
Last Updated Mar 5, 01:52 UTC
End Time Mar 5, 01:52 UTC

Status Timeline

Resolved Mar 5, 01:52 UTC

Status changed from Monitoring to Resolved

Monitoring Mar 5, 01:19 UTC

Status changed from Investigating to Monitoring

DO
Core Infrastructure Maintenance in All Regions 2026-03-03 10:00 UTC Resolved low Mar 3, 10:00 UTC

Description

Mar 6, 15:55 UTC - Completed: This scheduled maintenance is now complete across all regions. Thank you for your patience and understanding throughout this process.

Mar 6, 11:00 UTC - In progress: Scheduled maintenance is currently in progress. We will provide updates as necessary.

Mar 5, 13:05 UTC - Scheduled: Phase 2 maintenance is complete. Phase 3 is scheduled to begin at March 06, 11:00 UTC.

Mar 5, 10:00 UTC - In progress: Scheduled maintenance is currently in progress. We will provide updates as necessary.

Mar 3, 13:03 UTC - Scheduled: Phase 1 maintenance is complete. Phase 2 is scheduled to begin at March 05, 10:00 UTC.

Mar 3, 10:00 UTC - In progress: Scheduled maintenance is currently in progress. We will provide updates as necessary.

Mar 1, 10:44 UTC - Scheduled: Start 2026-03-03 10:00 UTC, end 2026-03-06 13:00 UTC. During this window, our Engineering team will be performing maintenance on core control plane infrastructure in all regions. Existing infrastructure will continue running without issue. The maintenance will be carried out in three phases: March 03, 10:00 to 13:00 UTC; March 05, 10:00 to 13:00 UTC; and March 06, 11:00 to 13:00 UTC. We do not anticipate any impact; however, there is a small possibility that control panel functionality, specifically CRUD (Create, Read, Update, Delete) operations, may be affected during the maintenance window. All running workloads are expected to continue operating normally without interruption. Our team will be actively monitoring the environment throughout the maintenance, and any unexpected events will be promptly communicated through our status page. If you have any questions or concerns regarding this maintenance, please open a support ticket from within your account.

Details

External ID do-1355dc16bdec7f42
Last Updated Mar 6, 15:55 UTC
End Time Mar 6, 15:55 UTC

Status Timeline

Resolved Mar 6, 15:55 UTC

Status changed from Scheduled to Resolved

DO
Core Infrastructure Maintenance in SFO2 and SFO3 Resolved low Mar 2, 13:00 UTC

Description

Mar 2, 16:00 UTC - Completed: The scheduled maintenance has been completed.

Mar 2, 13:00 UTC - In progress: Scheduled maintenance is currently in progress. We will provide updates as necessary.

Feb 28, 13:20 UTC - Scheduled: Start 2026-03-02 13:00 UTC, end 2026-03-02 16:00 UTC. During this window, our Engineering team will be performing maintenance on core control plane infrastructure in SFO2 and SFO3. Existing infrastructure will continue running without issue. We do not anticipate any impact; however, there is a small possibility that control panel functionality, specifically CRUD (Create, Read, Update, Delete) operations, may be affected during the maintenance window. All running workloads are expected to continue operating normally without interruption. Our team will be actively monitoring the environment throughout the maintenance, and any unexpected events will be promptly communicated through our status page. If you have any questions or concerns regarding this maintenance, please open a support ticket from within your account.

Details

External ID do-e77d0410b3e85ee9
Last Updated Mar 2, 16:00 UTC
End Time Mar 2, 16:00 UTC

Affected Regions

sfo2 San Francisco 2 sfo3 San Francisco 3

Status Timeline

Resolved Mar 2, 16:00 UTC

Status changed from Scheduled to Resolved

gcp
Multiple Products - Resolved low Feb 27, 12:36 UTC

Description

Vertex AI Gemini API customers experienced increased error rates when accessing the global endpoint.

Details

External ID 41E5S3mkTGDfkZuJZH5k
Last Updated Mar 9, 05:25 UTC
End Time Feb 27, 14:35 UTC
DO
Intermittent Errors with Llama 3.3-70B Resolved low Feb 26, 19:39 UTC

Description

Feb 26, 22:23 UTC - Resolved: Issue resolved. Cause: a few requests made to the Llama 3.3-70B model caused issues. Impact: intermittent errors when interacting with the model through serverless inference and/or with agents created using this model. Contact support if issues persist.

Feb 26, 21:52 UTC - Monitoring: Fix deployed; monitoring resources related to Llama 3.3-70B. Users should no longer experience intermittent errors when making serverless inference requests via APIs and Agents. Awaiting confirmation before closure.

Feb 26, 16:00 UTC - Investigating: We are investigating an issue affecting the Llama 3.3-70B model. Users may encounter intermittent errors when making serverless inference requests via APIs and Agents. Our engineering team is actively investigating to determine the root cause.

Details

External ID do-52452f162108b76c
Last Updated Feb 26, 22:23 UTC
End Time Feb 26, 22:23 UTC

Status Timeline

Resolved Feb 26, 22:23 UTC

Status changed from Monitoring to Resolved

Monitoring Feb 26, 21:52 UTC

Status changed from Investigating to Monitoring

DO
App Platform Deployments Resolved low Feb 26, 17:14 UTC

Description

Feb 27, 04:55 UTC - Resolved: Our Engineering team has confirmed that the issue causing build failures on App Platform has been resolved. Between approximately 14:30 UTC on the 26th and 00:01 UTC on the 27th, users may have experienced errors when building or deploying applications using older versions of the Node.js buildpack. A fix has been implemented, and build and deployment operations have returned to normal. All App Platform builds are now succeeding as expected. If you continue to experience problems, please open a ticket with our support team.

Feb 26, 17:14 UTC - Investigating: As of 14:30 UTC, our Engineering team is investigating reports of build failures on App Platform for customers using older versions of the Node.js buildpack. Users may experience errors when building their applications, resulting in failed deployments. As a workaround, we recommend upgrading to the latest version of the Node.js buildpack, which may resolve the build failures and allow successful deployments. To upgrade, please follow the instructions outlined here: https://docs.digitalocean.com/products/app-platform/how-to/migrate-nodejs-buildpack/

Details

External ID do-9b34e9ed44ea5f0e
Last Updated Feb 27, 04:55 UTC
End Time Feb 27, 04:55 UTC

Status Timeline

Resolved Feb 27, 04:55 UTC

Status changed from Investigating to Resolved

DO
Core Infrastructure Maintenance in AMS3, FRA1, and LON1 Resolved low Feb 26, 16:00 UTC

Description

Feb 26, 20:00 UTC - Completed: The scheduled maintenance has been completed.

Feb 26, 16:00 UTC - In progress: Scheduled maintenance is currently in progress. We will provide updates as necessary.

Feb 24, 15:04 UTC - Scheduled: Start 2026-02-26 16:00 UTC, end 2026-02-26 20:00 UTC. During this window, our Engineering team will be performing maintenance on core control plane infrastructure in AMS3, FRA1, and LON1. Existing infrastructure will continue running without issue. We do not anticipate any impact; however, there is a small possibility that control panel functionality, specifically CRUD (Create, Read, Update, Delete) operations, may be affected during the maintenance window. All running workloads are expected to continue operating normally without interruption. Our team will be actively monitoring the environment throughout the maintenance, and any unexpected events will be promptly communicated through our status page. If you have any questions or concerns regarding this maintenance, please open a support ticket from within your account.

Details

External ID do-7709cf7850da4a12
Last Updated Feb 26, 20:00 UTC
End Time Feb 26, 20:00 UTC

Affected Regions

ams3 Amsterdam 3 fra1 Frankfurt 1 lon1 London 1

Status Timeline

Resolved Feb 26, 20:00 UTC

Status changed from Scheduled to Resolved

DO
Core Infrastructure Maintenance in BLR1 and SGP1 Resolved low Feb 25, 15:00 UTC

Description

Feb 25, 18:00 UTC - Completed: The scheduled maintenance has been completed.

Feb 25, 15:00 UTC - In progress: Scheduled maintenance is currently in progress. We will provide updates as necessary.

Feb 23, 16:05 UTC - Scheduled: Start 2026-02-25 15:00 UTC, end 2026-02-25 18:00 UTC. During this window, our Engineering team will be performing maintenance on core control plane infrastructure in BLR1 and SGP1. Existing infrastructure will continue running without issue. We do not anticipate any impact; however, there is a small possibility that control panel functionality, specifically CRUD (Create, Read, Update, Delete) operations, may be affected during the maintenance window. All running workloads are expected to continue operating normally without interruption. Our team will be actively monitoring the environment throughout the maintenance, and any unexpected events will be promptly communicated through our status page. If you have any questions or concerns regarding this maintenance, please open a support ticket from within your account.

Details

External ID do-b520ad6cf61d6651
Last Updated Feb 25, 18:00 UTC
End Time Feb 25, 18:00 UTC

Affected Regions

blr1 Bangalore 1 sgp1 Singapore 1

Status Timeline

Resolved Feb 25, 18:00 UTC

Status changed from Scheduled to Resolved

aws
Service impact: Intermittent missing or delayed EC2 instance and status check metrics Resolved medium Feb 25, 10:14 UTC

Description

We are experiencing intermittent missing or delayed EC2 instance and status check metrics in the US-EAST-1 Region. Alarms on delayed or missing metrics may transition into an INSUFFICIENT_DATA state. We are taking multiple parallel paths to mitigate this issue. While underlying resources are not affected by this issue, customers with automated actions based off of delayed or missing metric data may see their automations start. EC2 APIs are not impacted and therefore EC2 AutoScaling will not be affected by this issue.

Details

External ID aws-0843e3d33143ca0d
Last Updated Feb 25, 10:14 UTC
End Time Feb 26, 21:01 UTC

Status Timeline

Resolved Feb 26, 21:01 UTC

Auto-resolved: no longer in provider status feed

DO
Core Infrastructure Maintenance in SYD1 Resolved low Feb 24, 15:00 UTC

Description

Completed Feb 24, 17:00 UTC

The scheduled maintenance has been completed.

In progress Feb 24, 15:00 UTC

Scheduled maintenance is currently in progress. We will provide updates as necessary.

Scheduled Feb 22, 15:12 UTC

Start: 2026-02-24 15:00 UTC
End: 2026-02-24 17:00 UTC

During this window, our Engineering team will perform maintenance on core control plane infrastructure in SYD1. Existing infrastructure will continue running without issue. We do not anticipate any impact; however, there is a small possibility that control panel functionality, specifically CRUD (Create, Read, Update, Delete) operations, may be affected during the maintenance window. All running workloads are expected to continue operating normally without interruption.

Our team will actively monitor the environment throughout the maintenance, and any unexpected events will be promptly communicated through our status page. If you have any questions or concerns, please open a support ticket from within your account.

Details

External ID do-426ec76b207e13bd
Last Updated Feb 24, 17:00 UTC
End Time Feb 24, 17:00 UTC

Affected Regions

syd1 Sydney 1

Status Timeline

Resolved Feb 24, 17:00 UTC

Status changed from Scheduled to Resolved

DO
Network Maintenance - SFO1 Resolved low Feb 24, 09:00 UTC

Description

Completed Feb 24, 21:28 UTC

The scheduled maintenance has been completed.

In progress Feb 24, 09:00 UTC

Scheduled maintenance is currently in progress. We will provide updates as necessary.

Scheduled Feb 22, 09:36 UTC

Start: 2026-02-24 09:00 UTC
End: 2026-02-24 22:00 UTC

During this window, our Networking team will make changes to the core networking infrastructure to improve performance and scalability in the SFO1 region.

Expected impact: These upgrades are designed and tested to be seamless, and we do not expect any impact to customer traffic. If an unexpected issue arises, network traffic for the SFO1 region might be affected for a short period of time; we will endeavor to keep any such impact to a minimum.

If you have any questions related to this maintenance, please send us a ticket from your cloud support page: https://cloudsupport.digitalocean.com/s/createticket

Details

External ID do-59b8b942cfc8653a
Last Updated Feb 24, 21:28 UTC
End Time Feb 24, 21:28 UTC

Affected Regions

sfo1 San Francisco 1

Status Timeline

Resolved Feb 24, 21:28 UTC

Status changed from Scheduled to Resolved

aws
Service impact: AWS Direct Connect connectivity loss Resolved medium Feb 20, 16:38 UTC

Description

We are investigating connectivity issues impacting AWS Direct Connect connectivity to the US-EAST-1 Region.

Details

External ID aws-202701d56318da99
Last Updated Feb 20, 16:38 UTC
End Time Feb 22, 02:26 UTC

Status Timeline

Resolved Feb 22, 02:26 UTC

Auto-resolved: no longer in provider status feed

DO
Core Infrastructure Maintenance NYC3 Resolved low Feb 19, 19:00 UTC

Description

Completed Feb 20, 23:00 UTC

The scheduled maintenance has been completed.

In progress Feb 19, 19:00 UTC

Scheduled maintenance is currently in progress. We will provide updates as necessary.

Scheduled Feb 18, 10:16 UTC

Start: 2026-02-19 19:00 UTC
End: 2026-02-20 23:00 UTC

During this window, our Engineering team will perform maintenance on core control plane infrastructure in NYC3. Existing infrastructure will continue running without issue. We do not anticipate any impact; however, there is a small possibility that control panel functionality, specifically CRUD (Create, Read, Update, Delete) operations, may be affected during the maintenance window. All running workloads are expected to continue operating normally without interruption.

Our team will actively monitor the environment throughout the maintenance, and any unexpected events will be promptly communicated through our status page. If you have any questions or concerns, please open a support ticket from https://cloudsupport.digitalocean.com/s/createticket.

Details

External ID do-3543ce682bdcc133
Last Updated Feb 20, 23:00 UTC
End Time Feb 20, 23:00 UTC

Affected Regions

nyc3 New York 3

Status Timeline

Resolved Feb 20, 23:00 UTC

Status changed from Scheduled to Resolved

DO
Control Panel Visibility Resolved low Feb 19, 17:34 UTC

Description

Resolved Feb 19, 18:42 UTC

The issue impacting the visibility of the Control Panel has been confirmed to be resolved. Between approximately 16:08 and 17:56 UTC, users may have noticed unusual behavior when accessing the console, performing resize operations, or viewing the Control Panel. Our team has taken corrective measures to restore the service, and it is now functioning as expected. If you continue to experience any issues, please raise a support ticket for further investigation.

Monitoring Feb 19, 18:05 UTC

Our team has implemented a fix for the issue affecting the visibility of the Control Panel and is actively monitoring to ensure overall stability. Users should no longer encounter abnormalities when accessing the console, resizing Droplets, or viewing the Control Panel. We will provide a further update once the issue is fully confirmed to be resolved.

Investigating Feb 19, 17:34 UTC

Our team is investigating an issue impacting the visibility of the Control Panel. Users may notice unexpected behavior, such as being prompted for login credentials when accessing the console, being unable to select radio buttons for plans during resize, or columns appearing squished. We will share further updates as soon as more information becomes available.

Details

External ID do-4f8193def711c0f6
Last Updated Feb 19, 18:42 UTC
End Time Feb 19, 18:42 UTC

Status Timeline

Resolved Feb 19, 18:42 UTC

Status changed from Monitoring to Resolved

Monitoring Feb 19, 18:05 UTC

Status changed from Investigating to Monitoring

DO
Spaces Availability in NYC3 Resolved low Feb 16, 07:36 UTC

Description

Resolved Feb 16, 10:58 UTC

Our Engineering team has confirmed that the issue impacting the availability of Spaces and the Container Registry in the NYC3 region has been fully resolved. A fix was implemented, services have been restored, and all operations are now succeeding normally. If you experience any further issues, please contact Support by creating a Support ticket from within your account.

Monitoring Feb 16, 07:36 UTC

Between 05:34 and 06:32 UTC, our Engineering team identified the issue impacting the availability of Spaces and the Container Registry in the NYC3 region, and a fix has been implemented. During this time, users may have experienced errors while interacting with Spaces, and CRUD (create, read, update, delete) operations within the Container Registry may have failed or returned errors. We are monitoring the platform to ensure services remain stable and will provide a final update once the issue is fully resolved.

Details

External ID do-96b53cce4e99ecfb
Last Updated Feb 16, 10:58 UTC
End Time Feb 16, 10:58 UTC

Affected Regions

nyc3 New York 3

Status Timeline

Resolved Feb 16, 10:58 UTC

Status changed from Monitoring to Resolved

DO
Droplet Limit Increase Feature Resolved low Feb 16, 06:38 UTC

Description

Resolved Feb 16, 09:24 UTC

Our Engineering team has confirmed that the issue affecting the Droplet limit increase feature within the Cloud Control Panel has been fully resolved. Requests to increase Droplet limits submitted through the Control Panel are now being processed correctly, and Support tickets are being generated as expected. If you experience any further issues, please contact Support by creating a Support ticket from within your account.

Monitoring Feb 16, 08:37 UTC

The issue affecting the Droplet limit increase feature within the Cloud Control Panel has been identified, and a fix has been implemented. Requests submitted through the Control Panel are now generating Support tickets as expected. Our team continues to monitor the system to ensure full functionality and stability.

Investigating Feb 16, 06:38 UTC

Our Engineering team is investigating an issue affecting the Droplet limit increase feature within the Cloud Control Panel. At this time, requests to increase Droplet limits submitted through the Control Panel are not being processed, and customer submissions are not generating Support tickets as expected. We are working to identify the root cause and restore normal functionality as quickly as possible. If you urgently require a Droplet limit increase, please contact Support directly by creating a Support ticket from within your account.

Details

External ID do-8c4286996d127d3e
Last Updated Feb 16, 09:24 UTC
End Time Feb 16, 09:24 UTC

Status Timeline

Resolved Feb 16, 09:24 UTC

Status changed from Monitoring to Resolved

Monitoring Feb 16, 08:37 UTC

Status changed from Investigating to Monitoring

DO
Delay in App Platform Deployments Resolved low Feb 12, 19:06 UTC

Description

Resolved Feb 12, 19:34 UTC

As of 18:55 UTC, our Engineering team has confirmed that the issue causing delays in App Platform deployments has been fully resolved. The fix implemented earlier was successful, and we no longer see delays or errors with deployments. Users should now be able to deploy their apps without issues. If you continue to experience any problems, please raise a support ticket for further investigation.

Monitoring Feb 12, 19:17 UTC

Our Engineering team has implemented a fix for the issue causing delays in App Platform deployments. We are actively monitoring the situation to ensure overall stability; users may already notice improvements while deploying apps.

Investigating Feb 12, 19:06 UTC

Our engineers are investigating an issue impacting new App Platform deployments. During this time, some users may experience delays when creating new App Platform apps. Existing apps are not affected and should continue to function normally.

Details

External ID do-2a620155b3610f93
Last Updated Feb 12, 19:34 UTC
End Time Feb 12, 19:34 UTC

Status Timeline

Resolved Feb 12, 19:34 UTC

Status changed from Monitoring to Resolved

Monitoring Feb 12, 19:17 UTC

Status changed from Investigating to Monitoring

DO
MongoDB Cluster Creation Resolved low Feb 12, 05:42 UTC

Description

Resolved Feb 12, 12:06 UTC

Our Engineering team has confirmed full resolution of the issue with MongoDB clusters. If you continue to experience any issues, please open a Support ticket right away.

Monitoring Feb 12, 09:50 UTC

Our Engineering team has implemented a fix for the issue with MongoDB clusters, and services should be functioning as expected. We are monitoring the situation and will post a final update once we confirm this is fully resolved.

Identified Feb 12, 09:04 UTC

Our Engineering team has identified the cause of the create, fork, and resize event failures for MongoDB clusters in all of our regions and is actively working on a fix. We will post an update as soon as additional information is available.

Update Feb 12, 07:34 UTC

Our Engineering team continues to investigate the create, fork, and resize event failures for MongoDB clusters in all of our regions.

Investigating Feb 12, 05:42 UTC

Our Engineering team is investigating an issue with all events for MongoDB clusters in all of our regions. During this time, users may face issues with create, fork, and resize operations on MongoDB clusters.

Details

External ID do-51a8fd55304e4d8b
Last Updated Feb 12, 12:06 UTC
End Time Feb 12, 12:06 UTC

Status Timeline

Resolved Feb 12, 12:06 UTC

Status changed from Monitoring to Resolved

Monitoring Feb 12, 09:50 UTC

Status changed from Identified to Monitoring

Identified Feb 12, 09:04 UTC

Status changed from Investigating to Identified

DO
App platform seeing delays in deployments across all regions. Resolved low Feb 11, 11:50 UTC

Description

Resolved Feb 11, 13:08 UTC

The issue causing delays in App Platform deployments has been confirmed to be resolved. Between approximately 08:52 and 13:01 UTC, users may have noticed delays while creating or updating apps, or may have encountered failed deployments. For failed deployments, please trigger a redeploy, which should resolve the issue. The service is functioning as expected; if you continue to experience any issues, please raise a support ticket for further investigation.

Monitoring Feb 11, 12:03 UTC

Our team has implemented a fix for the issue causing delays in App Platform deployments and is actively monitoring to ensure overall stability. Users may already notice improvements while deploying apps.

Investigating Feb 11, 11:50 UTC

Our engineers are investigating an issue impacting new App Platform deployments. During this time, some users may experience delays when creating new App Platform apps. Existing apps are not affected and should continue to function normally.

Details

External ID do-bb038b84af3695a1
Last Updated Feb 11, 13:08 UTC
End Time Feb 11, 13:08 UTC

Status Timeline

Resolved Feb 11, 13:08 UTC

Status changed from Monitoring to Resolved

Monitoring Feb 11, 12:03 UTC

Status changed from Investigating to Monitoring

aws
Service is operating normally: [RESOLVED] Change Propagation Delays Resolved medium Feb 10, 20:18 UTC

Description

Between 12:20 PM and 1:55 PM PST, we experienced elevated DNS resolution errors for CloudFront distributions served from a subset of edge locations globally. During this time, customers may have received NXDOMAIN responses. Engineers were automatically engaged, immediately began investigating multiple parallel paths, and mitigated the errors by taking the affected fleet of DNS servers out of service. Additionally, between 12:20 PM and 8:08 PM PST, we experienced longer-than-usual propagation times for changes to CloudFront configurations. The propagation delay was limited to certain types of changes, such as creating/deleting distributions, or updating distributions for DNS or TLS certificate related changes. Following mitigation of the DNS resolution errors at 1:55 PM, end-user requests for content delivery from our edge locations were not affected and were served normally. Invalidations operated normally throughout the event. Both issues have been resolved, and all services are operating normally. We are confident this issue will not recur.

Details

External ID aws-13578bff087dde4b
Last Updated Feb 10, 20:18 UTC
End Time Feb 12, 04:25 UTC

Status Timeline

Resolved Feb 10, 22:30 UTC

Auto-resolved: no longer in provider status feed

Resolved Feb 10, 19:06 UTC

We continue to make progress towards mitigating the change propagation delays to CloudFront distributions. We have completed successful configuration updates to a set of edge POPs and are now propagating to edge POPs in other stripes. The delayed propagation impact is limited to certain types of changes, such as creating/deleting distributions, or updating distributions for DNS or TLS certificate related changes. End-user requests for content delivery from our edge locations and invalidations continue to be served normally at this stage. We estimate approximately 60 minutes for full recovery.

Resolved Feb 10, 18:06 UTC

We continue to work toward mitigating the change propagation delays to CloudFront distributions. We have further confirmed that the delayed propagation impact is limited to certain types of changes, such as creating/deleting distributions, or updating the distributions for DNS or TLS certificate related changes. End-user requests for content delivery from our edge locations and invalidations are being served normally at this stage. Our recovery is taking longer than anticipated in the previous update and we now estimate another 2 hours for full recovery. We will provide another update within 60 minutes, or sooner if information becomes available.

Resolved Feb 10, 17:01 UTC

We are on track to apply mitigations to resolve the change propagation delays to CloudFront distributions. We are implementing the mitigations in a striped manner on the affected systems that consume the configurations on the edge POPs. Customers' changes to CloudFront distributions will not propagate until we have fully mitigated this issue. End-user requests for content delivery from our edge locations and invalidations are being served normally at this stage. We expect full recovery in approximately 2 hours. We will provide another update within 60 minutes, or sooner if information becomes available.

Resolved Feb 10, 16:02 UTC

We continue to apply mitigation steps for delays in propagating changes to CloudFront distributions. We are actively engaged and working on two parallel paths to mitigate the issue. Customers' changes to CloudFront distributions will not propagate until we have fully mitigated this issue. End-user requests for content delivery from our edge locations and invalidations are being served normally at this stage. We expect full recovery is still 2-3 hours away. We will provide another update within 60 minutes, or sooner if information becomes available.

Resolved Feb 10, 15:06 UTC

We continue to work toward mitigating the current issue, which is delays in propagating changes to CloudFront distributions. Cache invalidation is not affected by this issue. In parallel, we are actively investigating options to increase the velocity of our recovery efforts. In order to ensure that we do not cause additional impact, we are proceeding cautiously but safely. During this time, customers can continue making changes, but these changes will not propagate until we have fully mitigated this issue. We expect full recovery is a couple of hours away.

Resolved Feb 10, 14:34 UTC

We can confirm full recovery for DNS resolution errors for CloudFront Distributions. Due to some of the mitigation actions we have taken, customers will experience delays in propagating changes to CloudFront distributions, including the creation of new CloudFront distributions. We continue to work toward recovering change propagation delays and will provide an update in the next 30-60 minutes.

Resolved Feb 10, 13:58 UTC

We can confirm significant recovery for the DNS resolution errors for CloudFront distributions. Customers are still experiencing delays propagating changes to CloudFront distributions. We continue to work toward full recovery and will provide additional information in the next 30-60 minutes.

Resolved Feb 10, 13:46 UTC

We are seeing early signs of recovery, and continue to work toward full recovery.

Resolved Feb 10, 13:40 UTC

We can confirm errors for DNS resolution for some CloudFront distributions. During this time, customers may receive an NXDOMAIN response. Additionally, customers may also experience delayed propagation for changes to CloudFront distributions. We have identified the root cause of the issue and are actively working on multiple paths to resolve the errors. We have verified that our initial mitigation effort on a portion of the affected subsystem was successful, and we are actively working toward performing that mitigation across the fleet. We recommend customers continue to retry any failed requests while we work toward mitigation. A few services that use CloudFront distributions for delivering content may also be affected at this time.

Resolved Feb 10, 13:15 UTC

We are investigating DNS resolution failures for some specific CloudFront distributions and will provide additional information in the next 30-60 minutes.

DO
Network Maintenance - AMS2 Resolved low Feb 10, 09:00 UTC

Description

Completed Feb 10, 17:00 UTC

The scheduled maintenance has been completed.

In progress Feb 10, 09:00 UTC

Scheduled maintenance is currently in progress. We will provide updates as necessary.

Scheduled Feb 8, 04:38 UTC

Start: 2026-02-10 09:00 UTC
End: 2026-02-10 17:00 UTC

During this window, our Networking team will make changes to the core networking infrastructure to improve performance and scalability in the AMS2 region.

Expected impact: These upgrades are designed and tested to be seamless, and we do not expect any impact to customer traffic. If an unexpected issue arises, network traffic for the AMS2 region might be affected for a short period of time; we will endeavour to minimise any such impact.

If you have any questions related to this maintenance, please send us a ticket from your cloud support page: https://cloudsupport.digitalocean.com/s/createticket

Details

External ID do-c55108196d59c921
Last Updated Feb 10, 17:00 UTC
End Time Feb 10, 17:00 UTC

Affected Regions

ams2 Amsterdam 2

Status Timeline

Resolved Feb 10, 17:00 UTC

Status changed from Scheduled to Resolved

aws
Service impact: [Resolved] Network Connectivity Resolved medium Feb 7, 10:30 UTC

Description

Between 6:36 AM and 7:09 AM PST, we experienced intermittent network connectivity issues between two Availability Zones (sae1-az1 and sae1-az3) in the SA-EAST-1 Region. This issue resulted in elevated API error rates and latencies for AWS services in the SA-EAST-1 Region. The issue has been resolved and all services are operating normally.

Details

External ID aws-abed7e976af79dfd
Last Updated Feb 7, 10:30 UTC
End Time Feb 7, 18:53 UTC

Status Timeline

Resolved Feb 7, 18:53 UTC

Auto-resolved: no longer in provider status feed

DO
Cloud Control Panel Resolved low Feb 3, 18:53 UTC

Description

Resolved Feb 3, 18:53 UTC

As of 17:55 UTC, our Engineering team has resolved the timeouts affecting the Cloud Control Panel and API. The issue was caused by a temporary overload on our infrastructure, resulting in 5xx errors for API requests and gateway timeouts for Cloud Control Panel users. If you continue to experience any problems, please open a ticket with our Support team.

Monitoring Feb 3, 18:03 UTC

Our Engineering team has implemented a fix for the timeouts affecting the Cloud Control Panel and API and is monitoring the situation. Services have recovered, and users should no longer experience 5xx errors when using the API or gateway timeouts when accessing the Cloud Control Panel. We will post an update as soon as the issue is fully resolved.

Identified Feb 3, 17:51 UTC

Our Engineering team has identified the cause of the issue impacting the Cloud Control Panel and API and is actively working on deploying a fix. At this time, users will continue to see timeouts and 5xx errors, but may intermittently see requests succeed.

Investigating Feb 3, 17:27 UTC

Our Engineering team is investigating an issue impacting the Cloud Control Panel and API. Users attempting to make API requests could see 5xx errors, and users attempting to access the Cloud Control Panel may see gateway or page timeouts. We will share an update once we have more information.

Details

External ID do-299f3dbfb7685e30
Last Updated Feb 3, 18:53 UTC
End Time Feb 3, 18:53 UTC
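Dashboards like this one can also be polled programmatically. DigitalOcean's status page is hosted on Atlassian Statuspage, which exposes a standard JSON API; the endpoint name and the exact schema below (`status.indicator`, `incidents[].name`/`status`) are assumptions based on the common Statuspage v2 format, shown here against a hard-coded sample payload rather than a live request:

```python
import json

# Minimal sketch: summarize a Statuspage-style "summary.json" payload.
# The live endpoint would be something like
# https://status.digitalocean.com/api/v2/summary.json (assumption).

def summarize(payload: dict) -> dict:
    """Return the overall indicator and any unresolved incident titles."""
    unresolved = [
        inc["name"]
        for inc in payload.get("incidents", [])
        if inc.get("status") != "resolved"
    ]
    return {
        "indicator": payload.get("status", {}).get("indicator", "unknown"),
        "unresolved": unresolved,
    }

if __name__ == "__main__":
    sample = json.loads("""
    {
      "status": {"indicator": "minor", "description": "Partial System Outage"},
      "incidents": [
        {"name": "Increased API Error Rates", "status": "investigating"},
        {"name": "App Platform delays in FRA1", "status": "resolved"}
      ]
    }
    """)
    print(summarize(sample))
```

In a real monitor you would fetch the JSON over HTTPS on a schedule and alert when `indicator` leaves `"none"`.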
DO
Kubernetes Clusters and Droplets in FRA1 region Resolved low Jan 28, 17:37 UTC

Description

<p><small>Jan <var data-var='date'>28</var>, <var data-var='time'>17:37</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has resolved the issue affecting Kubernetes clusters and Droplet events in the FRA1 region. Between approximately 00:17 UTC and 14:30 UTC, customers may have experienced issues provisioning Kubernetes clusters and mounting volumes. All services should now be functioning normally.<br /><br />If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.</p><p><small>Jan <var data-var='date'>28</var>, <var data-var='time'>14:47</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has implemented a fix to address the issue affecting Kubernetes clusters and Droplet events in the FRA1 region and is actively monitoring the situation. Customers should no longer experience issues provisioning Kubernetes clusters or mounting volumes. We will provide an update as soon as the issue is fully resolved.</p><p><small>Jan <var data-var='date'>28</var>, <var data-var='time'>13:30</var> UTC</small><br><strong>Identified</strong> - Our Engineering team has identified the root cause of the issue impacting Kubernetes clusters and Droplet events in the FRA1 region and is actively working on a fix. In the meantime, users may continue to experience issues. We appreciate your patience and will share updates as more information becomes available.</p><p><small>Jan <var data-var='date'>28</var>, <var data-var='time'>11:27</var> UTC</small><br><strong>Investigating</strong> - Our Engineering team is investigating an issue with Kubernetes clusters in the FRA1 region. During this time, a subset of users may experience issues while provisioning Kubernetes clusters and mounting volumes. Additionally, users may notice Droplet events appearing to be stuck or delayed in this region. We apologize for the inconvenience and will share an update once we have more information.</p>

Details

External ID do-df9468555ac654ed
Last Updated Jan 28, 17:37 UTC
End Time Jan 28, 17:37 UTC

Affected Regions

fra1 Frankfurt 1
aws
Service is operating normally: [RESOLVED] Elevated latencies for network change propagation Resolved medium Jan 28, 11:22 UTC

Description

Between 7:08 AM and 10:48 AM PST, we experienced elevated latencies for network change propagation in the EU-WEST-1 Region. This was due to a delay in propagation of configuration updates on a sub-system which supports external and cross-account connectivity. Customers experienced delays when attempting to connect to newly launched instances using their assigned public IPs, as well as delays in establishing new VPC Peering connections. Other AWS Services were also impacted by this issue, experiencing delayed or stuck workflows such as pulling container images. Engineers were automatically engaged at 6:29 AM, prior to customer impact beginning. At 7:20 AM, the root cause was identified. By 9:20 AM we began observing early signs of recovery. Our mitigation efforts completed at 10:43 AM and full recovery was observed at 10:48 AM. We recommend customers retry any failed requests. The issue has been resolved and all services are operating normally.

Details

External ID aws-fecc2861b22d2590
Last Updated Jan 28, 11:22 UTC
End Time Feb 4, 18:47 UTC

Status Timeline

Jan 28, 11:22 UTC

Between 7:08 AM and 10:48 AM PST, we experienced elevated latencies for network change propagation in the EU-WEST-1 Region. This was due to a delay in propagation of configuration updates on a sub-system which supports external and cross-account connectivity. Customers experienced delays when attempting to connect to newly launched instances using their assigned public IPs, as well as delays in establishing new VPC Peering connections. Other AWS Services were also impacted by this issue, experiencing delayed or stuck workflows such as pulling container images. Engineers were automatically engaged at 6:29 AM, prior to customer impact beginning. At 7:20 AM, the root cause was identified. By 9:20 AM we began observing early signs of recovery. Our mitigation efforts completed at 10:43 AM and full recovery was observed at 10:48 AM. We recommend customers retry any failed requests. The issue has been resolved and all services are operating normally.

Jan 28, 10:51 UTC

We are seeing significant signs of recovery. We continue to work toward full resolution.

Jan 28, 10:45 UTC

We continue to work toward fully mitigating the issue that is resulting in elevated latencies for network change propagation delays in the EU-WEST-1 Region. We have a high degree of confidence that our current mitigation efforts will fully mitigate the issue. Our current estimation is that these mitigations will complete within the next 60 to 120 minutes. We recommend customers continue to retry failed requests. We will provide additional information as recovery continues to progress.

Jan 28, 09:47 UTC

We are seeing early signs of progressive recovery. We continue to work toward full recovery and will continue to provide updates as we work toward full mitigation. We recommend customers continue to retry failed requests. As recovery progresses, additional requests will succeed.

Jan 28, 09:36 UTC

We can confirm elevated latencies for network change propagation in the EU-WEST-1 Region. This causes delays when attempting to connect to newly launched instances using their assigned public IPs, as well as delays in establishing new VPC Peering connections. Other AWS Services are also impacted by this issue and may be impacted by delayed or stuck workflows such as pulling container images. Engineers were automatically engaged and we are actively working on mitigating this issue. We plan to provide an update within the next 60 minutes.

Jan 28, 09:14 UTC

We are investigating elevated latencies for network change propagation in the EU-WEST-1 Region. This is resulting in impact to other services, such as timeouts when pulling images. We will provide you with further information shortly.
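The repeated advice above to retry failed requests is worth automating. Below is a minimal, illustrative sketch of retry with exponential backoff and full jitter; the function name and limits are hypothetical, not an AWS SDK API (most AWS SDKs already ship configurable retry modes):

```python
import random
import time

def retry_with_backoff(call, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Retry `call` on exception, sleeping with exponential backoff + jitter.

    Illustrative only: production code should retry solely on transient
    failures (timeouts, 5xx) and honor any Retry-After hints from the server.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts; surface the last error
            # Full jitter: sleep a random amount up to the capped exponential.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))
```

For SDK clients, prefer the built-in retry configuration over a hand-rolled loop like this; the sketch is mainly useful around plain HTTP calls.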

DO
Droplet Based Events in FRA1 Resolved low Jan 28, 04:27 UTC

Description

<p><small>Jan <var data-var='date'>28</var>, <var data-var='time'>04:27</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has confirmed that the issue impacting our Droplet-based products in the FRA1 region has been completely mitigated. Users should no longer see issues with their Droplets and Droplet-related services.<br /><br />If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.</p><p><small>Jan <var data-var='date'>28</var>, <var data-var='time'>04:11</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has identified the cause of the issue impacting our Droplet-based products in the FRA1 region and applied a fix. The impact has begun to subside, and users should be able to connect to their Droplets and start to see events being processed successfully.<br /><br />We're now monitoring the fix for stability and will post an update once we are confident it is successful.</p><p><small>Jan <var data-var='date'>28</var>, <var data-var='time'>00:45</var> UTC</small><br><strong>Investigating</strong> - Our Engineering team is currently investigating an issue affecting events in FRA1. During this time, customers may experience delays or errors when creating or deleting Droplets, as well as when using Droplet-based products such as Load Balancers, Kubernetes Clusters, or Databases.<br /><br />Our teams are actively working to identify the root cause and restore full service as quickly as possible. We apologize for the inconvenience and will provide updates as more information becomes available.</p>

Details

External ID do-8ad6fc8ddbc3fc5d
Last Updated Jan 28, 04:27 UTC
End Time Jan 28, 04:27 UTC

Affected Regions

fra1 Frankfurt 1
DO
Network Maintenance - NYC3 Resolved low Jan 27, 13:00 UTC

Description

<p><small>Jan <var data-var='date'>27</var>, <var data-var='time'>13:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Jan <var data-var='date'>27</var>, <var data-var='time'>09:00</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Jan <var data-var='date'>25</var>, <var data-var='time'>09:28</var> UTC</small><br><strong>Scheduled</strong> - Start: 2026-01-27 09:00 UTC<br />End: 2026-01-27 12:00 UTC<br /><br />During the above window, our Networking team will be making changes to the core networking infrastructure to improve performance and scalability in the NYC3 region.<br /><br />Expected impact:<br /><br />We do not anticipate any downtime for Droplets or Droplet-related services, including Managed Databases, Load Balancers, App Platform, and Managed Kubernetes, as this maintenance has been carefully designed and tested to be seamless. In the unlikely event that an undetected misconfiguration occurs, a subset of customers could experience temporary network disruption. We will endeavor to keep this to a minimum for the duration of the change.<br /><br />If you have any questions related to this maintenance, please send us a ticket from your cloud support page. https://cloudsupport.digitalocean.com/s/createticket</p>

Details

External ID do-390077d1e57856d4
Last Updated Jan 27, 13:00 UTC
End Time Jan 27, 13:00 UTC

Affected Regions

nyc3 New York 3
DO
Cloud Control Panel and API Resolved low Jan 26, 22:44 UTC

Description

<p><small>Jan <var data-var='date'>26</var>, <var data-var='time'>22:44</var> UTC</small><br><strong>Resolved</strong> - From 20:45 UTC to 21:06 UTC, users may have experienced an issue affecting the Cloud Control Panel, API, and related services.<br /><br />Our Engineering team has confirmed that the issue is fully resolved, and all systems are now operating normally.<br /><br />If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.</p><p><small>Jan <var data-var='date'>26</var>, <var data-var='time'>21:27</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has implemented a fix for the issue affecting the Cloud Control Panel, API, and related services. <br /><br />We are observing recovery, and users should now be able to access their accounts and use the API without errors.<br /><br />We are continuing to monitor the situation closely and will provide an update once full resolution is confirmed.</p><p><small>Jan <var data-var='date'>26</var>, <var data-var='time'>21:06</var> UTC</small><br><strong>Investigating</strong> - Our Engineering team is investigating an issue impacting multiple services including the Cloud Control Panel and API.<br /><br />Users may encounter errors when accessing their accounts or using the API.<br /><br />We are actively working to resolve this issue and will provide updates as soon as more information becomes available.</p>

Details

External ID do-80a66b0deb1ccebd
Last Updated Jan 26, 22:44 UTC
End Time Jan 26, 22:44 UTC
DO
App Platform Deployments Resolved low Jan 20, 18:25 UTC

Description

<p><small>Jan <var data-var='date'>20</var>, <var data-var='time'>18:25</var> UTC</small><br><strong>Resolved</strong> - The issue impacting App Platform deployments has been successfully resolved. Users should no longer encounter delays during the build phase or deployments getting stuck. All services are now confirmed to be stable and operating normally.<br /><br />We appreciate your patience throughout this incident. If you continue to experience any issues, please create a support ticket for further analysis.</p><p><small>Jan <var data-var='date'>20</var>, <var data-var='time'>18:05</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has implemented the necessary changes to address the issue impacting both new and in-progress App Platform deployments. Our team is currently monitoring the situation. Users should now notice improvements in deployment performance.<br /><br />We appreciate your patience. We'll update once the issue is confirmed to be resolved.</p>

Details

External ID do-2ba39c0d9ca9dc66
Last Updated Jan 20, 18:25 UTC
End Time Jan 20, 18:25 UTC
DO
Account access and Payment Resolved low Jan 15, 09:18 UTC

Description

<p><small>Jan <var data-var='date'>15</var>, <var data-var='time'>09:18</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has resolved the issue causing payment failures via PayNow. Users should no longer see issues making payments via PayNow or logging into their accounts on our platform. Services should now be operating normally.<br /><br />If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.</p><p><small>Jan <var data-var='date'>15</var>, <var data-var='time'>07:16</var> UTC</small><br><strong>Investigating</strong> - Our Engineering team is investigating an issue with suspended users being unable to access their accounts. During this time, users may experience issues signing in or accessing their accounts, as well as payment failures. We apologize for the inconvenience and will share an update once we have more information.</p>

Details

External ID do-c7936230adf41521
Last Updated Jan 15, 09:18 UTC
End Time Jan 15, 09:18 UTC
DO
App Platform Static Websites in NYC3 Region Resolved low Dec 18, 22:34 UTC

Description

<p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>22:34</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has confirmed resolution of the issue. Users should no longer experience errors when attempting to deploy new static sites in NYC3 on App Platform.<br /><br />If you experience any further problems or have any questions, please open a support ticket within your account.</p><p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>22:06</var> UTC</small><br><strong>Update</strong> - Our Engineering team has deployed a fix for the issue. Users should no longer experience errors when attempting to deploy new static sites in NYC3 on App Platform.<br /><br />We will post an update once we've confirmed that the issue is fully resolved.</p><p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>19:25</var> UTC</small><br><strong>Investigating</strong> - As of 17:45 UTC, our Engineering team is investigating reports of static site deployment failures in the NYC3 region on App Platform. Users may experience errors when attempting to deploy new static sites, resulting in failed deployments.<br /><br />Existing static sites are still accessible and functioning normally.<br /><br />Our team is actively working on identifying the root cause and implementing the fix.<br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>

Details

External ID do-e52ff008bbdddc24
Last Updated Dec 18, 22:34 UTC
End Time Dec 18, 22:34 UTC

Affected Regions

nyc3 New York 3
DO
API and Cloud Requests Resolved low Dec 18, 21:34 UTC

Description

<p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>21:34</var> UTC</small><br><strong>Resolved</strong> - Our Engineering team has confirmed full resolution of the issue. Users should no longer experience errors when making requests to the Cloud Control Panel or API.<br /><br />If you experience any further problems or have any questions, please open a support ticket within your account.</p><p><small>Dec <var data-var='date'>18</var>, <var data-var='time'>18:46</var> UTC</small><br><strong>Investigating</strong> - As of 17:47 UTC, our Engineering team is investigating reports of intermittent 504 errors when making requests to api.digitalocean.com and cloud.digitalocean.com. Users may experience sporadic errors, resulting in a 504 response code, when attempting to interact with our API or Cloud services.<br /><br />At this point, the issue appears to be intermittent, and not all requests are being affected. <br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>

Details

External ID do-e8b7a2b138791715
Last Updated Dec 18, 21:34 UTC
End Time Dec 18, 21:34 UTC
DO
Spaces Access Keys and DigitalOcean Container Registry Resolved low Dec 15, 16:19 UTC

Description

<p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>16:19</var> UTC</small><br><strong>Resolved</strong> - From 08:28 to 13:03 UTC, our Engineering team observed an issue with Spaces Access Keys for DOCR in the AMS3 region. During this time, users encountered the error "403 (InvalidAccessKeyId): The access key ID you provided does not exist in our records" when accessing Spaces keys. Our team fully resolved the issue as of 13:03 UTC. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience caused.</p>

Details

External ID do-1b4aa07819fe0e98
Last Updated Dec 15, 16:19 UTC
End Time Dec 15, 16:19 UTC
DO
Recovery Console Accessibility Resolved low Dec 11, 00:00 UTC

Description

<p><small>Dec <var data-var='date'>11</var>, <var data-var='time'>00:00</var> UTC</small><br><strong>Resolved</strong> - From 18:57 UTC to 22:05 UTC, customers may have experienced issues accessing the Recovery Console due to a service interruption. During this time, Droplet functionality remained unaffected, and customers were still able to use the Recovery ISO option via SSH.<br /><br />Our Engineering team has confirmed that the issue is now fully resolved, and Recovery Console access has been fully restored and is operating normally.<br /><br />If you continue to experience any difficulties, please open a ticket with our Support team. We apologize for the inconvenience caused.</p><p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>22:43</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has deployed a fix to resolve the issue causing the Recovery Console to be unavailable.<br /><br />We are currently monitoring the situation to ensure access is fully restored and stable. Please note that Droplet functionality was not impacted by this issue.<br /><br />We will post another update once we confirm the issue is fully resolved.</p><p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>20:33</var> UTC</small><br><strong>Investigating</strong> - Our Engineering team is actively investigating an issue causing the Recovery Console to be unavailable. <br /><br />Droplet functionality is not impacted. If customers need the Recovery ISO, they can still select the "Boot from Recovery ISO" option in the Recovery tab, as described in the guide at https://docs.digitalocean.com/products/droplets/how-to/recovery/recovery-iso/, but will need to use SSH to access their Droplets. <br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>

Details

External ID do-3825a8583f88fcd7
Last Updated Dec 11, 00:00 UTC
End Time Dec 11, 00:00 UTC
DO
Core Infrastructure Maintenance 2025-12-10 18:00 UTC Resolved low Dec 10, 20:00 UTC

Description

<p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>20:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>18:00</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Dec <var data-var='date'> 7</var>, <var data-var='time'>17:43</var> UTC</small><br><strong>Scheduled</strong> - Start: 2025-12-10 18:00 UTC<br />End: 2025-12-10 20:00 UTC<br /><br /><br />During the above window, our Engineering team will be performing maintenance on principal infrastructure in order to improve reliability of the services. Please note that existing infrastructure will continue running without issue. This maintenance impacts create, read, update, and delete (CRUD) operations in all regions.<br /><br />Expected Impact:<br /><br />During the maintenance window, users may experience increased latency for the following platform operations:<br /><br />Cloud Control Panel and API operations<br />Event processing<br />Droplet creates, resizes, rebuilds, and power events<br />Managed Kubernetes reconciliation and scaling<br />Load Balancer operations<br />Container Registry operations<br />App Platform operations<br />Managed Database creation and scaling<br /><br />We expect to see two periods of 10 second impact, for a total of 20 seconds within the hour window. If unexpected impact occurs or continues for longer than expected, we will provide updates via our public status page.<br /><br />If you have any questions related to this issue please send us a ticket from your cloud support page. https://cloudsupport.digitalocean.com/s/createticket<br /><br />Thank you,<br />Team DigitalOcean</p>

Details

External ID do-5660c0cc1585dba9
Last Updated Dec 10, 20:00 UTC
End Time Dec 10, 20:00 UTC
DO
App Platform Static Websites Resolved low Dec 10, 19:08 UTC

Description

<p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>19:08</var> UTC</small><br><strong>Resolved</strong> - As of 18:15 UTC, our Engineering team has confirmed the issue impacting accessibility of App Platform static websites has been resolved. Services have been restored and are now functioning normally.<br /><br />We appreciate your patience and regret the inconvenience caused. If you continue to experience any issues, feel free to open a Support ticket for further investigation.</p><p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>18:41</var> UTC</small><br><strong>Monitoring</strong> - Our Engineering team has implemented a fix for the issue impacting the availability of App Platform static websites. Users should now experience improved performance when accessing the sites. <br /><br />We are actively monitoring the situation and will provide an update once we can confirm the issue has been fully resolved.</p><p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>18:08</var> UTC</small><br><strong>Investigating</strong> - Our Engineering team is currently investigating an issue impacting App Platform static websites. During this period, users may notice 404 Not Found errors while accessing the sites. <br /><br />Our team is actively working on identifying the root cause and implementing the fix.<br /><br />We apologize for the inconvenience and will share an update once we have more information.</p>

Details

External ID do-f122d742b8e97944
Last Updated Dec 10, 19:08 UTC
End Time Dec 10, 19:08 UTC
DO
Rescheduled: Core Infrastructure Maintenance SFO2 Resolved low Dec 10, 12:21 UTC

Description

<p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>12:21</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Dec <var data-var='date'>10</var>, <var data-var='time'>09:00</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Dec <var data-var='date'> 9</var>, <var data-var='time'>21:26</var> UTC</small><br><strong>Scheduled</strong> - Hello, <br /><br />We are reaching out again to inform you that the core control plane infrastructure maintenance in SFO2 region which was previously scheduled to complete on 2025-12-02 09:00 UTC has been rescheduled to the following window:<br /><br />Start: 2025-12-10 09:00 UTC<br />End: 2025-12-10 15:00 UTC<br /><br />We apologize for any inconvenience this short notice causes, and thank you for your understanding. You may find the initial maintenance notice along with a description of any expected impact related to this work included at the bottom of this message.<br /><br />If you have questions or concerns about this maintenance, please reach out to us by opening up a ticket on your account.<br /><br />---BEGIN INITIAL MAINTENANCE NOTICE---<br /><br />Start: 2025-12-02 09:00 UTC<br />End: 2025-12-02 15:00 UTC<br /><br />Hello,<br /><br />During the above window, our Engineering team will be performing maintenance on core control plane infrastructure in SFO2. Please note that the existing infrastructure will continue running without issue.<br /><br />We do not anticipate any impact; however, there is a small possibility that control panel functionality specifically CRUD (Create, Read, Update, Delete) operations may be affected during the maintenance window. 
All running workloads are expected to continue operating normally without interruption.<br /><br />Our team will be actively monitoring the environment throughout the maintenance, and any unexpected events will be promptly communicated through our status page.<br /><br />If you have any questions or concerns regarding this maintenance, please feel free to open a support ticket from within your account. We’re here to help.<br /><br />Thank you,<br /><br />Team DigitalOcean</p>

Details

External ID do-a094b755b63c3910
Last Updated Dec 10, 12:21 UTC
End Time Dec 10, 12:21 UTC

Affected Regions

sfo2 San Francisco 2