Cloud Service Status
Last updated: May 6, 2026 11:47:31 UTC

AWS incidents: 2 (2 medium)
GCP incidents: 0
DO incidents: 0
Total active: 2
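The summary counts above (per-provider and per-severity totals over the active incidents) amount to a simple group-by over the incident list. A minimal sketch; the `provider`, `status`, and `severity` field names are assumptions mirroring the fields shown for each incident on this page:

```python
from collections import Counter

def summarize(incidents):
    """Count active incidents per provider and per severity.

    `incidents` is assumed to be a list of dicts with "provider",
    "status", and "severity" keys, mirroring this page's fields.
    """
    active = [i for i in incidents if i["status"] == "Active"]
    by_provider = Counter(i["provider"] for i in active)
    by_severity = Counter(i["severity"] for i in active)
    return by_provider, by_severity

# Example mirroring the counts above: two active AWS incidents, both medium.
incidents = [
    {"provider": "aws", "status": "Active", "severity": "medium"},
    {"provider": "aws", "status": "Active", "severity": "medium"},
    {"provider": "DO", "status": "Resolved", "severity": "low"},
]
providers, severities = summarize(incidents)
print(providers["aws"], severities["medium"])  # prints "2 2"
```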
Recent Incidents
101 incidents total; the most recent are listed below with provider, title, status, severity, and start time.
aws | Service impact: Increased connectivity issues and API Error Rates
Status: Active | Severity: medium | Started: Mar 1, 21:56 UTC
We are investigating increased API error rates in a single Availability Zone (mes1-az2) in the ME-SOUTH-1 Region.

aws | Service impact: Increased Error Rates
Status: Active | Severity: medium | Started: Mar 1, 04:51 UTC
We are investigating issues with AWS services in the ME-CENTRAL-1 Region.

DO | SFO2 Network Maintenance
Status: Resolved | Severity: low | Started: May 4, 13:00 UTC
Affected region: sfo2 (San Francisco 2)

- May 4, 15:00 UTC (Completed): The scheduled maintenance has been completed.
- May 4, 13:00 UTC (In progress): Scheduled maintenance is currently in progress. We will provide updates as necessary.
- May 2, 12:39 UTC (Scheduled): Start 2026-05-04 13:00 UTC, end 2026-05-04 15:00 UTC. During this window, our Networking team will be making changes to the core networking infrastructure to improve performance and scalability in the SFO2 region. Expected impact: we do not anticipate any downtime for Droplets or Droplet-related services, including Managed Databases, Load Balancers, App Platform, and Managed Kubernetes, as this maintenance has been carefully designed and tested to be seamless. In the unlikely event that an undetected misconfiguration occurs, a subset of customers could experience temporary network disruption. We will endeavor to keep this to a minimum for the duration of the change. If you have any questions related to this issue, please send us a ticket from your cloud support page: https://cloudsupport.digitalocean.com/s/createticket

Timeline: status changed from Scheduled to Resolved at May 4, 15:00 UTC.

DO | Core Infrastructure Maintenance May 4th, 2026, 13:00 UTC
Status: Resolved | Severity: low | Started: May 4, 13:00 UTC

- May 4, 21:01 UTC (Completed): The scheduled maintenance has been completed.
- May 4, 13:01 UTC (In progress): Scheduled maintenance is currently in progress. We will provide updates as necessary.
- May 2, 13:25 UTC (Scheduled): Start 2026-05-04 13:00 UTC, end 2026-05-04 21:00 UTC. During this window, our Engineering team will perform maintenance on core control plane infrastructure. Existing infrastructure will continue running without issue, but the maintenance may affect create, read, update, and delete (CRUD) operations in all regions. Expected impact: users may experience brief periods of increased latency with Cloud Control Panel and API operations; event processing; Droplet creates, resizes, rebuilds, and power events; Managed Kubernetes reconciliation and scaling; Load Balancer operations; Container Registry operations; App Platform operations; and Managed Database creation and scaling. We do not expect any impact to customer traffic. If an unexpected control plane issue arises, we will keep any impact to a minimum and may revert if required. Questions: https://cloudsupport.digitalocean.com/s/createticket

Timeline: status changed from Scheduled to Resolved at May 4, 21:01 UTC.

DO | Core Infrastructure Maintenance May 4, 2026, 13:00 UTC
Status: Resolved | Severity: low | Started: May 4, 13:00 UTC

- May 4, 21:00 UTC (Completed): The scheduled maintenance has been completed.
- May 4, 13:00 UTC (In progress): Scheduled maintenance is currently in progress. We will provide updates as necessary.
- May 2, 09:15 UTC (Scheduled): Start 2026-05-04 13:00 UTC, end 2026-05-04 21:00 UTC. During this window, our Engineering team will perform maintenance on core control plane infrastructure. Existing infrastructure will continue running without issue, but the maintenance may affect create, read, update, and delete (CRUD) operations in all regions. Expected impact: users may experience brief periods of increased latency with Cloud Control Panel and API operations; event processing; Droplet creates, resizes, rebuilds, and power events; Managed Kubernetes reconciliation and scaling; Load Balancer operations; Container Registry operations; App Platform operations; and Managed Database creation and scaling. We do not expect any impact to customer traffic. If an unexpected control plane issue arises, we will keep any impact to a minimum and may revert if required. Questions: https://cloudsupport.digitalocean.com/s/createticket

Timeline: status changed from Scheduled to Resolved at May 4, 21:00 UTC.

DO | FRA1 Network Maintenance
Status: Resolved | Severity: low | Started: Apr 30, 16:00 UTC
Affected region: fra1 (Frankfurt 1)

- Apr 30, 19:00 UTC (Completed): The scheduled maintenance has been completed.
- Apr 30, 16:00 UTC (In progress): Scheduled maintenance is currently in progress. We will provide updates as necessary.
- Apr 28, 16:19 UTC (Scheduled): Start Apr 30, 2026, 16:00 UTC, end Apr 30, 2026, 19:00 UTC. During this window, our Networking team will be making changes to the core networking infrastructure to improve performance and scalability in the FRA1 region. Expected impact: we do not anticipate any downtime for Droplets or Droplet-related services, including Managed Databases, Load Balancers, App Platform, and Managed Kubernetes, as this maintenance has been carefully designed and tested to be seamless. In the unlikely event that an undetected misconfiguration occurs, a subset of customers could experience temporary network disruption. We will endeavor to keep this to a minimum for the duration of the change. Questions: https://cloudsupport.digitalocean.com/s/createticket

Timeline: status changed from Scheduled to Resolved at Apr 30, 19:00 UTC.

DO | SFO2 Network Maintenance
Status: Resolved | Severity: low | Started: Apr 30, 13:00 UTC
Affected region: sfo2 (San Francisco 2)

- Apr 30, 06:43 UTC (Completed): During the dry run for the scheduled SFO2 network maintenance, our team identified potential risks that need to be addressed to ensure a smoother and safer implementation. As a result, we are cancelling the current maintenance window and rescheduling it to 2026-05-04 13:00-15:00 UTC. Thank you for your understanding and patience as we work to improve performance and scalability. Questions: https://cloudsupport.digitalocean.com/s/createticket
- Apr 28, 13:16 UTC (Scheduled): Start Apr 30, 2026, 13:00 UTC, end Apr 30, 2026, 15:00 UTC. During this window, our Networking team will be making changes to the core networking infrastructure to improve performance and scalability in the SFO2 region. Expected impact: we do not anticipate any downtime for Droplets or Droplet-related services, including Managed Databases, Load Balancers, App Platform, and Managed Kubernetes, as this maintenance has been carefully designed and tested to be seamless. In the unlikely event that an undetected misconfiguration occurs, a subset of customers could experience temporary network disruption.

Timeline: status changed from Scheduled to Resolved at Apr 30, 06:43 UTC.

DO | NYC1 Network Maintenance
Status: Resolved | Severity: low | Started: Apr 30, 10:00 UTC
Affected region: nyc1 (New York 1)

- Apr 30, 12:00 UTC (Completed): The scheduled maintenance has been completed.
- Apr 30, 10:00 UTC (In progress): Scheduled maintenance is currently in progress. We will provide updates as necessary.
- Apr 28, 10:14 UTC (Scheduled): Start Apr 30, 2026, 10:00 UTC, end Apr 30, 2026, 12:00 UTC. During this window, our Networking team will be making changes to the core networking infrastructure to improve performance and scalability in the NYC1 region. Expected impact: we do not anticipate any downtime for Droplets or Droplet-related services, including Managed Databases, Load Balancers, App Platform, and Managed Kubernetes, as this maintenance has been carefully designed and tested to be seamless. In the unlikely event that an undetected misconfiguration occurs, a subset of customers could experience temporary network disruption. Questions: https://cloudsupport.digitalocean.com/s/createticket

Timeline: status changed from Scheduled to Resolved at Apr 30, 12:00 UTC.

DO | Core Infrastructure Maintenance Apr 29, 2026, 13:00 UTC
Status: Resolved | Severity: low | Started: Apr 29, 13:00 UTC

- Apr 29, 15:29 UTC (Completed): During the dry run for the scheduled core infrastructure maintenance, our team identified potential risks that need to be addressed to ensure a smoother and safer implementation. As a result, we are cancelling the current maintenance window and rescheduling it to 2026-05-04 13:00-21:00 UTC. Questions: https://cloudsupport.digitalocean.com/s/createticket
- Apr 29, 13:00 UTC (In progress): Scheduled maintenance is currently in progress. We will provide updates as necessary.
- Apr 27, 09:35 UTC (Scheduled): Start 2026-04-29 13:00 UTC, end 2026-04-29 21:00 UTC. During this window, our Engineering team will perform maintenance on core control plane infrastructure. Existing infrastructure will continue running without issue, but the maintenance may affect create, read, update, and delete (CRUD) operations in all regions. Expected impact: users may experience brief periods of increased latency with Cloud Control Panel and API operations; event processing; Droplet creates, resizes, rebuilds, and power events; Managed Kubernetes reconciliation and scaling; Load Balancer operations; Container Registry operations; App Platform operations; and Managed Database creation and scaling. We do not expect any impact to customer traffic. If an unexpected control plane issue arises, we will keep any impact to a minimum and may revert if required.

Timeline: status changed from Scheduled to Resolved at Apr 29, 15:29 UTC.

DO | Elevated 5xx "context canceled" errors impacting serverless inference customers
Status: Resolved | Severity: low | Started: Apr 28, 13:45 UTC

- Apr 28, 19:33 UTC (Resolved): All services are operating normally. We will continue to monitor the system to ensure ongoing reliability. Thank you for your patience while we worked to resolve this issue.
- Apr 28, 19:00 UTC (Monitoring): Service for Serverless Inference has been restored. We have implemented tighter rate limits to help prevent recurrence and are closely monitoring system performance. Some users may still experience intermittent latency as we complete final stabilization efforts. Our team remains actively engaged to ensure full recovery.
- Apr 28, 15:59 UTC (Identified): We have identified an issue affecting our service and are currently working to implement a fix. Our team is actively investigating and taking the necessary steps to restore normal operations as quickly as possible.
- Apr 28, 13:45 UTC (Investigating): Serverless inference customers are experiencing elevated 5xx errors, including "context canceled" responses. This may result in intermittent request failures.

Timeline: Investigating → Identified (Apr 28, 15:59 UTC) → Monitoring (Apr 28, 19:00 UTC) → Resolved (Apr 28, 19:33 UTC).

DO | Serverless Inference: Intermittent Rate Limiting Affecting Some Customers Using Anthropic Models
Status: Resolved | Severity: low | Started: Apr 27, 10:38 UTC

- Apr 27, 11:37 UTC (Resolved): The issue is resolved, and service is operating normally.
- Apr 27, 11:07 UTC (Monitoring): We identified the cause of intermittent HTTP 429 responses affecting some customers using Anthropic models on DigitalOcean Serverless Inference and applied a mitigation. Service has recovered, and we are monitoring stability.
- Apr 27, 10:38 UTC (Investigating): We are investigating an issue affecting some customers using DigitalOcean Serverless Inference with Anthropic models. Over the last two hours, impacted customers may have experienced intermittent request failures, including HTTP 429 responses, on some Anthropic model requests.

Timeline: Investigating → Monitoring (Apr 27, 11:09 UTC) → Resolved (Apr 27, 11:37 UTC).

aws | Service impact: Increased Connectivity Issues
Status: Resolved | Severity: medium | Started: Apr 27, 04:27 UTC
We are investigating instance connectivity issues in a single Availability Zone (euw3-az2) in the EU-WEST-3 Region.

Timeline: auto-resolved at Apr 28, 17:15 UTC (no longer in provider status feed).

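An "auto-resolved" entry like the one above implies the dashboard periodically diffs its tracked incidents against the provider's status feed and closes anything that has disappeared. A sketch of that reconciliation step; the payload shape is an assumption modeled on the common Statuspage-style `{"incidents": [...]}` JSON that status pages like DigitalOcean's expose:

```python
def auto_resolve(tracked, feed_payload):
    """Mark tracked incidents resolved if they no longer appear in the
    provider's unresolved-incidents feed.

    `tracked` maps incident id -> record dict; `feed_payload` is assumed
    to follow the Statuspage v2 shape: {"incidents": [{"id": ...}, ...]}.
    """
    live_ids = {inc["id"] for inc in feed_payload.get("incidents", [])}
    for inc_id, record in tracked.items():
        if record["status"] != "Resolved" and inc_id not in live_ids:
            record["status"] = "Resolved"
            record["note"] = "Auto-resolved: no longer in provider status feed"
    return tracked

# Incident "a" has dropped out of the feed, so it gets auto-resolved:
tracked = {"a": {"status": "Active"}, "b": {"status": "Active"}}
feed = {"incidents": [{"id": "b"}]}
auto_resolve(tracked, feed)
```

In practice the feed would be fetched on a timer and the diff run on each poll; the sketch only shows the pure reconciliation logic so it stays testable offline.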
DO | Intermittent errors impacting some Serverless Inference models in ATL1
Status: Resolved | Severity: low | Started: Apr 23, 22:26 UTC

- Apr 23, 23:51 UTC (Resolved): This incident has been resolved.
- Apr 23, 22:26 UTC (Investigating): As of 21:53 UTC, our Engineering team is investigating reports of increased internal errors for the Llama 3.3 70B, GPT OSS 120B, GPT OSS 20B, Qwen3 32B, and Deepseek R1 70B models hosted in the ATL1 region, impacting Serverless Inference. Users with models hosted in ATL1 may experience intermittent errors when using Serverless Inference.

Timeline: status changed from Investigating to Resolved at Apr 23, 23:51 UTC.

DO | App Platform Deployments
Status: Resolved | Severity: low | Started: Apr 23, 08:08 UTC

- Apr 23, 10:09 UTC (Resolved): Our Engineering team has confirmed that the issue impacting App Platform deployments and Kubernetes (DOKS) nodes was fully resolved at 09:22 UTC. All App Platform deployments are now succeeding as expected, and customers who previously encountered build failures should be able to deploy their applications without further issues. If you continue to experience any problems, please open a ticket with our support team.
- Apr 23, 09:52 UTC (Monitoring): Our Engineering team has implemented a fix for the issue affecting App Platform deployments and Kubernetes (DOKS) nodes. We are actively monitoring the situation to ensure overall stability. Users may already notice improvements when deploying apps and DOKS nodes.
- Apr 23, 08:31 UTC (Update): In addition to the build failures on App Platform, which may result in failed deployments, we are observing Kubernetes (DOKS) nodes being marked as unhealthy by load balancers, which may impact traffic routing for affected services.
- Apr 23, 08:08 UTC (Investigating): Our Engineering team is investigating reports of build failures on App Platform. Users may experience errors when attempting to build their applications, resulting in failed deployments.

Timeline: Investigating → Monitoring (Apr 23, 09:52 UTC) → Resolved (Apr 23, 10:09 UTC).

DO | Cloud UI for Managed Kubernetes
Status: Resolved | Severity: low | Started: Apr 23, 07:01 UTC

- Apr 23, 08:46 UTC (Resolved): Our Engineering team has confirmed full resolution of the issue with the Cloud UI for Managed Kubernetes. All services should be functioning as expected. If you continue to experience problems, please open a ticket with our support team.
- Apr 23, 07:14 UTC (Update): Our Engineering team is investigating an issue impacting the Managed Kubernetes UI across all regions. During this time, users with a Member role may find that the Kubernetes UI page does not load in the cloud console. As a workaround, the DigitalOcean API and doctl (CLI) continue to function normally, and you can use them to manage your Kubernetes resources in the meantime.
- Apr 23, 07:01 UTC (Investigating): Our Engineering team is investigating an issue impacting the Managed Kubernetes UI across all regions. Some users may find that the Kubernetes UI page does not load during DigitalOcean onboarding.

Timeline: status changed from Investigating to Resolved at Apr 23, 08:46 UTC.

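The workaround noted in the Managed Kubernetes UI incident above (managing clusters through the API or doctl while the UI page fails to load) comes down to calling the DigitalOcean REST endpoint directly. A minimal standard-library sketch; `GET /v2/kubernetes/clusters` with a bearer token is the documented DigitalOcean API call, and everything else here is illustrative:

```python
import urllib.request

API_BASE = "https://api.digitalocean.com/v2"

def list_clusters_request(token):
    """Build an authenticated request for listing Managed Kubernetes
    clusters (GET /v2/kubernetes/clusters)."""
    return urllib.request.Request(
        f"{API_BASE}/kubernetes/clusters",
        headers={"Authorization": f"Bearer {token}"},
    )

# Sending the request is a live API call, so it is shown commented out:
# import json
# with urllib.request.urlopen(list_clusters_request("YOUR_TOKEN")) as resp:
#     clusters = json.load(resp)["kubernetes_clusters"]
```

With doctl, the equivalent is `doctl kubernetes cluster list`.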
DO | Cloud Control Panel and API
Status: Resolved | Severity: low | Started: Apr 22, 11:06 UTC

- Apr 22, 11:47 UTC (Resolved): Our Engineering team has resolved the issue that was impacting the DigitalOcean API and the Cloud Control Panel; the system is now operating normally. During the incident, users may have experienced difficulties accessing Droplets, viewing the Droplets page, or performing CRUD (Create, Read, Update, Delete) operations via the API. If you continue to experience any issues, please submit a support ticket through the Cloud Control Panel.
- Apr 22, 11:25 UTC (Monitoring): Our Engineering team has implemented a fix for the issue affecting the DigitalOcean API and the Cloud Control Panel. We are actively monitoring the system to ensure full stability and will provide a final update once the issue is completely resolved.
- Apr 22, 11:06 UTC (Investigating): Our Engineering team is investigating an issue affecting the Cloud Control Panel and API. API requests to create, destroy, or trigger events on Droplets may not succeed; the Droplets page in the Cloud Control Panel may not load properly; and users could experience issues reviewing the Droplet listing.

Timeline: Investigating → Monitoring (Apr 22, 11:25 UTC) → Resolved (Apr 22, 11:47 UTC).

DO | DNS Resolution Issues with .co TLD
Status: Resolved | Severity: low | Started: Apr 17, 21:22 UTC

- Apr 18, 14:10 UTC (Resolved): Our Engineering team has confirmed that the issue affecting DNS resolution of the .co top-level domain (TLD) has been resolved. DNS resolution for .co domains is now working as expected.
- Apr 17, 22:53 UTC (Monitoring): Our Engineering team has implemented a fix for the issue affecting DNS resolution of the .co TLD. Users relying on DigitalOcean DNS resolvers should no longer experience resolution issues. Our engineers are monitoring the situation and will post an update once the issue is fully resolved.
- Apr 17, 21:22 UTC (Identified): Our Engineering team is aware of a widespread external issue affecting the .co TLD. While this incident originates outside of DigitalOcean's infrastructure, you may experience errors when querying a .co domain, regardless of the DNS resolver being used. Our engineers are deploying temporary backend mitigations to help minimize the impact on customers.

Timeline: Identified → Monitoring (Apr 17, 22:53 UTC) → Resolved (Apr 18, 14:10 UTC).

DO | Spaces Availability in NYC3
Status: Resolved | Severity: low | Started: Apr 17, 18:27 UTC
Affected region: nyc3 (New York 3)

- Apr 17, 18:27 UTC (Resolved): From 17:22 to 17:46 UTC, our Engineering team observed an issue impacting Spaces availability in the NYC3 region. During this time, customers may have encountered 500 errors and degraded performance while accessing Spaces buckets. The issue has been fully resolved. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel.

DO | Core Infrastructure Maintenance in All Regions 2026-04-16 09:00 UTC
Status: Resolved | Severity: low | Started: Apr 16, 09:00 UTC

- Apr 22, 18:31 UTC (Completed): The scheduled maintenance has been completed.
- Apr 22, 09:07 UTC (In progress): Scheduled maintenance is currently in progress. We will provide updates as necessary.
- Apr 21, 22:15 UTC (Scheduled): Phase 3 maintenance is complete. Phase 4 is scheduled to begin on 22 April 2026 (Wednesday) at 09:00 UTC.
- Apr 21, 09:07 UTC (In progress): Scheduled maintenance is currently in progress.
- Apr 20, 15:22 UTC (Scheduled): Phase 2 maintenance is complete. Phase 3 is scheduled to begin on 21 April 2026 (Tuesday) at 09:00 UTC.
- Apr 20, 09:25 UTC (In progress): Scheduled maintenance is currently in progress.
- Apr 16, 15:05 UTC (Scheduled): Phase 1 maintenance is complete. Phase 2 is scheduled to begin on 20 April 2026 (Monday) at 09:00 UTC.
- Apr 16, 09:10 UTC (In progress): Scheduled maintenance is currently in progress.
- Apr 14, 09:37 UTC (Scheduled): During the windows below, our Engineering team will perform maintenance on core control plane infrastructure across all regions. Existing infrastructure will continue running without issue. The maintenance will be carried out in four phases: 16 April 2026 (Thursday), 09:00-15:00 UTC; 20 April 2026 (Monday), 09:00-15:00 UTC; 21 April 2026 (Tuesday), 09:00-22:00 UTC; and 22 April 2026 (Wednesday), 09:00-22:00 UTC. Expected impact: we do not anticipate any impact, though there is a small possibility that Control Panel functionality, specifically CRUD (Create, Read, Update, Delete) operations, may be affected during the maintenance windows. All running workloads are expected to continue operating normally without interruption. Our team will actively monitor the environment throughout, and any unexpected events will be promptly communicated through our status page. Questions: https://cloudsupport.digitalocean.com/s/createticket

Timeline: status changed from Scheduled to Resolved at Apr 22, 18:31 UTC.

|
DO
|
Serverless Inference | Serverless Inference | Resolved | low | Apr 15, 01:07 UTC |
Description:
Apr 15, 02:56 UTC - Resolved: From 23:20 UTC to 02:00 UTC, users may have experienced elevated error rates due to service instability, which resulted in intermittent HTTP 500 errors and terminated connections. Our Engineering team has confirmed full resolution of the issue, and all systems are now operating normally. If you continue to experience any issues, please open a ticket with our support team. We apologize for any inconvenience caused.
Apr 15, 02:24 UTC - Monitoring: Our Engineering team has implemented a fix for the issue causing elevated error rates due to service instability. We are currently monitoring the situation to ensure stability and confirm that error rates, including HTTP 500 responses, have returned to normal levels. We will provide a further update once we confirm the issue is fully resolved.
Apr 15, 01:07 UTC - Investigating: Our Engineering team is investigating an issue causing elevated error rates due to service instability, which is terminating open connections and producing some HTTP 500 errors. Some requests may fail while we work to resolve it. We apologize for the inconvenience and will share an update once we have more information.
Status Timeline
Resolved
Apr 15, 02:56 UTC
Status changed from Monitoring to Resolved
Monitoring
Apr 15, 02:24 UTC
Status changed from Investigating to Monitoring |
|||||
|
DO
|
App Platform Deployments | App Platform Deployments | Resolved | low | Apr 14, 17:57 UTC |
Description:
Apr 14, 17:57 UTC - Resolved: From 16:07 to 16:50 UTC, our Engineering team observed an issue with App Platform deployments in all regions. During this time, deployments of both new and existing apps would have been affected. Our team fully resolved the issue as of 16:50 UTC, and all new and existing App deployments should now be functioning as expected. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience caused. |
|||||
|
DO
|
Managed Database Resizes | Managed Database Resizes | Resolved | low | Apr 14, 12:53 UTC |
Description:
Apr 14, 15:19 UTC - Resolved: Our Engineering team has resolved the issue with resize operations for Managed Databases, which should now be operating normally. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience.
Apr 14, 14:30 UTC - Monitoring: Our Engineering team has taken action to mitigate the issue with resize operations for Managed Databases and implemented a fix. We are monitoring the situation and will post an update as soon as we confirm that the issue is fully resolved.
Apr 14, 12:53 UTC - Investigating: Our Engineering team is investigating an issue impacting resize operations for Managed Databases. During this time, users may experience errors when attempting to resize a Managed Database via the Cloud Control Panel or API in all regions. We apologize for the inconvenience and will share an update once we have more information.
Status Timeline
Resolved
Apr 14, 15:19 UTC
Status changed from Monitoring to Resolved
Monitoring
Apr 14, 14:30 UTC
Status changed from Investigating to Monitoring |
|||||
|
DO
|
Droplet Availability in All Regions | Droplet Availability in All Regions | Resolved | low | Apr 10, 20:32 UTC |
Description:
Apr 10, 21:06 UTC - Resolved: Our Engineering team has confirmed full resolution of the issue with creating Droplets in all regions. Users should be able to create Droplets without issue. We apologize for the inconvenience. If you continue to face any issues, please open a support ticket from within your account.
Apr 10, 20:32 UTC - Monitoring: Our Engineering team has identified an issue with Droplet creation in all regions. A root cause has been found, a fix has been put in place, and we are currently monitoring the situation to ensure full resolution. Users should be able to create new Droplets at this time. We will continue to monitor and will post an update as soon as the issue is fully resolved. We apologize for the inconvenience.
Status Timeline
Resolved
Apr 10, 21:06 UTC
Status changed from Monitoring to Resolved |
|||||
|
DO
|
MongoDB Maintenance | MongoDB Maintenance - BLR1, NYC3, SFO2, SGP1, SYD1, TOR1 | Resolved | low | Apr 9, 18:00 UTC |
Description:
Apr 9, 22:51 UTC - Completed: The scheduled maintenance has been completed.
Apr 9, 18:00 UTC - In progress: Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 7, 18:23 UTC - Update: We will be undergoing scheduled maintenance during this time.
Apr 7, 18:18 UTC - Scheduled: Start: 2026-04-09 18:00 UTC. End: 2026-04-10 24:00 UTC. During the above window, our Engineering team will perform maintenance on core MongoDB services in the BLR1, NYC3, SFO2, SGP1, SYD1 & TOR1 regions to enhance security and improve auditing and compliance. Please note that existing databases and workloads will continue to function normally and will not be impacted. Expected impact: We do not anticipate any service disruptions during this window. Your existing databases and workloads will continue to run normally without interruption. In the event that an unexpected issue occurs, administrative actions, such as creating, deleting, or scaling Managed MongoDB databases in the BLR1, NYC3, SFO2, SGP1, SYD1 & TOR1 regions, may experience delays. If an unexpected issue arises, we will work to keep any impact to a minimum and may revert the changes if required. If you have any questions related to this event, please open a ticket from your cloud support page: https://cloudsupport.digitalocean.com/s/createticket
Affected Regions
blr1
Bangalore 1
nyc3
New York 3
sfo2
San Francisco 2
sgp1
Singapore 1
syd1
Sydney 1
tor1
Toronto 1
Status Timeline
Resolved
Apr 9, 22:51 UTC
Status changed from Update to Resolved
Update
Apr 9, 18:00 UTC
Status changed from Scheduled to Update |
|||||
|
DO
|
MongoDB Maintenance | MongoDB Maintenance - AMS3, ATL1, LON1, NYC1, NYC2, SFO3 | Resolved | low | Apr 7, 18:00 UTC |
Description:
Apr 7, 22:25 UTC - Completed: The scheduled maintenance has been completed.
Apr 7, 18:00 UTC - In progress: Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 5, 18:12 UTC - Scheduled: Start: 2026-04-07 18:00 UTC. End: 2026-04-08 00:00 UTC. During the above window, our Engineering team will perform maintenance on core MongoDB services in the AMS3, ATL1, LON1, NYC1, NYC2 & SFO3 regions to enhance security and improve auditing and compliance. Please note that existing databases and workloads will continue to function normally and will not be impacted. Expected impact: We do not anticipate any service disruptions during this window. Your existing databases and workloads will continue to run normally without interruption. In the event that an unexpected issue occurs, administrative actions, such as creating, deleting, or scaling Managed MongoDB databases in the AMS3, ATL1, LON1, NYC1, NYC2 & SFO3 regions, may experience delays. If an unexpected issue arises, we will work to keep any impact to a minimum and may revert the changes if required. If you have any questions related to this event, please open a ticket from your cloud support page: https://cloudsupport.digitalocean.com/s/createticket
Affected Regions
ams3
Amsterdam 3
lon1
London 1
nyc1
New York 1
nyc2
New York 2
sfo3
San Francisco 3
Status Timeline
Resolved
Apr 7, 22:25 UTC
Status changed from Scheduled to Resolved |
|||||
|
DO
|
Control Plane | Control Plane | Resolved | low | Apr 7, 17:49 UTC |
Description:
Apr 7, 17:49 UTC - Resolved: Our Engineering team has resolved the control plane disruption that occurred from 17:06 to 17:18 UTC. During this time, users may have experienced intermittent issues with managing their resources through the Cloud Control Panel or DigitalOcean API. The root cause of the disruption was identified and addressed, and all services are now operating normally. If you continue to experience any problems, please open a ticket with our Support team. We apologize for any inconvenience this may have caused. |
|||||
|
DO
|
Serverless Inference | Serverless Inference - High error rates for open source models ( Qwen 3 32B) | Resolved | low | Apr 7, 12:49 UTC |
Description:
Apr 7, 15:50 UTC - Resolved: Service has been fully restored, and the model is now operating normally. We have implemented improvements to enhance stability and reduce the likelihood of similar issues in the future.
Apr 7, 12:55 UTC - Identified: We are currently investigating reports of elevated latency affecting requests to this model when using Serverless Inference and Agents. Earlier observations indicated increased error rates for the open-source Qwen 3 32B model. The Ray dashboard also showed multiple workers in a pending state, suggesting capacity constraints. Our analysis determined that the model was experiencing higher-than-expected request volume without sufficient resources to scale accordingly. To address this, the node pool size has been increased to improve available capacity. However, there are still insufficient nodes to fully support the desired number of model replicas. Following the node pool expansion, a new pod-related error has been identified. Our Engineering team is actively working to resolve this issue and restore full service performance.
Apr 7, 12:49 UTC - Investigating: Serverless inference for alibaba-qwen3-32b (Qwen 3 32B) in tor1 has been experiencing high error rates since 10:46 UTC.
Status Timeline
Resolved
Apr 7, 15:50 UTC
Status changed from Identified to Resolved
Identified
Apr 7, 12:55 UTC
Status changed from Investigating to Identified |
|||||
|
DO
|
Serverless Inference Issue | Serverless Inference Issue | Resolved | low | Apr 6, 12:28 UTC |
Description:
Apr 6, 18:02 UTC - Resolved: This incident has been resolved.
Apr 6, 15:15 UTC - Monitoring: A fix has been implemented and we are monitoring the results.
Apr 6, 12:28 UTC - Investigating: Our Engineering team is investigating an issue with Serverless Inference. At this time, users may experience high error rates for open-source models (Llama 3.3 70B). We apologize for the inconvenience and will share an update once we have more information.
Status Timeline
Resolved
Apr 6, 18:02 UTC
Status changed from Monitoring to Resolved
Monitoring
Apr 6, 15:15 UTC
Status changed from Investigating to Monitoring |
|||||
|
DO
|
FRA1 MongoDB Maintenance | FRA1 MongoDB Maintenance | Resolved | low | Mar 27, 19:00 UTC |
Description:
Mar 27, 23:16 UTC - Completed: The scheduled maintenance has been completed.
Mar 27, 19:00 UTC - In progress: Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 25, 19:01 UTC - Scheduled: Start: 2026-03-27 19:00 UTC. End: 2026-03-28 02:00 UTC. During the above window, our Engineering team will perform maintenance on core MongoDB services in the FRA1 region to enhance security and improve auditing and compliance. Please note that existing databases and workloads will continue to function normally and will not be impacted. Expected impact: We do not anticipate any service disruptions during this window. Your existing databases and workloads will continue to run normally without interruption. In the event that an unexpected issue occurs, administrative actions, such as creating, deleting, or scaling Managed MongoDB databases in the FRA1 region, may experience delays. If an unexpected issue arises, we will work to keep any impact to a minimum and may revert the changes if required. If you have any questions related to this event, please open a ticket from your cloud support page: https://cloudsupport.digitalocean.com/s/createticket
Affected Regions
fra1
Frankfurt 1
Status Timeline
Resolved
Mar 27, 23:16 UTC
Status changed from Scheduled to Resolved |
|||||
|
DO
|
App platform seeing delays in deployments across FRA1 region | App platform seeing delays in deployments across FRA1 region | Resolved | low | Mar 20, 10:32 UTC |
Description:
Mar 20, 12:01 UTC - Resolved: The issue causing delays in App Platform deployments has been confirmed resolved. Between approximately 00:08 UTC and 11:46 UTC, users may have noticed delays while creating or updating apps, or may have encountered failed deployments. For failed deployments, please trigger a redeploy, which should resolve the issue. We have confirmed that the service is functioning as expected. Once again, we sincerely apologize for the inconvenience caused and appreciate your understanding. If you continue to experience any issues, please raise a support ticket for further investigation. We'll be happy to assist you.
Mar 20, 11:14 UTC - Monitoring: Our Engineering team has deployed a fix to resolve the issue impacting new App Platform deployments using Dedicated Egress IP in the FRA1 region. We are actively monitoring the situation to ensure stability and will provide an update once the incident has been fully resolved. Thank you for your patience, and we apologize for the inconvenience.
Mar 20, 10:32 UTC - Investigating: Our engineers are currently investigating an issue impacting new App Platform deployments using Dedicated Egress IP in the FRA1 region. During this time, some users may experience delays when creating new App Platform apps or deploying existing apps. Existing apps are not affected and should continue to function normally. We apologize for any inconvenience, and we'll share more information as it becomes available.
Affected Regions
fra1
Frankfurt 1
Status Timeline
Resolved
Mar 20, 12:01 UTC
Status changed from Monitoring to Resolved
Monitoring
Mar 20, 11:14 UTC
Status changed from Investigating to Monitoring |
|||||
|
DO
|
Gradient AI Platform agents and services Accessibility | Gradient AI Platform agents and services Accessibility | Resolved | low | Mar 20, 08:50 UTC |
Description:
Mar 20, 14:14 UTC - Resolved: Our Engineering team has implemented a fix, and the issues impacting the Gradient AI Platform have been resolved. All agents are back up and healthy, and service has been fully restored.
Mar 20, 14:04 UTC - Monitoring: A fix has been implemented and services have been restored. We are continuing to monitor the system to ensure stability and will provide further updates if needed.
Mar 20, 11:05 UTC - Update: We've identified the issue and are actively working to restore the affected services. We're making steady progress and closely monitoring the situation. Further updates will be shared as they become available.
Mar 20, 09:51 UTC - Identified: We've identified the issue and are currently working on restoring the services. We'll continue to provide updates as progress is made.
Mar 20, 08:50 UTC - Investigating: We are currently investigating an issue affecting the accessibility of agents and services on the Gradient AI Platform. Users may experience failures or unresponsiveness when attempting to use these features. Our Engineering team is actively working to identify the root cause and restore full functionality. We apologize for the inconvenience and will share an update once we have more information.
Status Timeline
Resolved
Mar 20, 14:14 UTC
Status changed from Monitoring to Resolved
Monitoring
Mar 20, 14:04 UTC
Status changed from Identified to Monitoring
Identified
Mar 20, 09:51 UTC
Status changed from Investigating to Identified |
|||||
|
DO
|
Gradient AI model availability | Gradient AI model availability | Resolved | low | Mar 17, 15:00 UTC |
Description:
Mar 17, 19:49 UTC - Resolved: Our Engineering team has implemented a fix, and the issues impacting model availability and performance have been resolved. All models, including those previously degraded, are back up and healthy. Service has been fully restored.
Mar 17, 15:00 UTC - Investigating: Our Engineering team is investigating reports of Gradient AI model availability issues impacting multiple models. Users may experience availability issues with models including Llama3.1-8b and Qwen3-32b, as well as embedding models such as GTE Large (v1.5), All-MiniLM-L6-v2, Multi-QA-mpnet-base-dot-v1, and Qwen3 Embedding 0.6B. Additionally, Guardrails are not available, affecting associated agents, and users attempting to run inference on the Llama3.3-70b model will see degraded performance. We apologize for the inconvenience and will share an update once we have more information.
Status Timeline
Resolved
Mar 17, 19:49 UTC
Status changed from Investigating to Resolved |
|||||
|
DO
|
Cloud Control Panel and API | Cloud Control Panel and API | Resolved | low | Mar 16, 17:39 UTC |
Description:
Mar 16, 17:39 UTC - Resolved: From 16:14 to 16:38 UTC, our Engineering team observed an issue impacting the Cloud Control Panel and API. During this time, users may have experienced errors when trying to access the Cloud Control Panel or use the API. Our team fully resolved the issue as of 16:38 UTC. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience caused. |
|||||
|
DO
|
Degraded performance with BYOK Anthropic models | Degraded performance with BYOK Anthropic models | Resolved | low | Mar 15, 02:55 UTC |
Description:
Mar 15, 03:31 UTC - Resolved: The issue is now resolved; all Anthropic BYOK models in Gradient AI should work normally. Contact support if issues persist.
Mar 15, 02:55 UTC - Investigating: Our Engineering team is investigating an issue affecting all Gradient AI agents and serverless inference that rely on BYOK Anthropic models. Impacted users may experience degraded performance. We will provide an update as soon as possible.
Status Timeline
Resolved
Mar 15, 03:31 UTC
Status changed from Investigating to Resolved |
|||||
|
DO
|
Delay in App Platform Deployments | Delay in App Platform Deployments | Resolved | low | Mar 13, 21:30 UTC |
Description:
Mar 14, 01:47 UTC - Resolved: As of 23:00 UTC, our Engineering team has confirmed that the issue causing delays in App Platform deployments has been fully resolved. The fix implemented earlier has been successful, and we are no longer seeing any delays or errors with deployments. Users should now be able to deploy their apps successfully and without any issues. We apologize again for the inconvenience caused. If you continue to experience any issues, please raise a support ticket for further investigation.
Mar 13, 23:39 UTC - Monitoring: After working with our upstream provider, our Engineering team has implemented a fix to resolve the issue that was causing delays in the deployment of new apps, and they are currently monitoring the situation. During this time, users should no longer experience issues with creating new apps, and all stalled creation events should provision completely. We will post an update as soon as the issue is fully resolved.
Mar 13, 22:01 UTC - Identified: Our Engineering team is starting to see delays once again with new App Platform deployments. During this time, users may still experience delays with deploying new apps. We're working with our upstream provider to resolve the issue. We again apologize for the inconvenience and will post further updates once we have more information.
Mar 13, 21:30 UTC - Monitoring: Starting at 20:40 UTC, users may have seen delays with deploying new apps on App Platform. At this time, our Engineering team is seeing signs of recovery, and users should be able to deploy new apps without issue. We're currently monitoring the situation to ensure full recovery. We apologize for the inconvenience and will post an update once the issue has been confirmed resolved.
Status Timeline
Resolved
Mar 14, 01:47 UTC
Status changed from Monitoring to Resolved |
|||||
|
DO
|
Newly Created Managed Kubernetes Nodes | Newly Created Managed Kubernetes Nodes | Resolved | low | Mar 13, 11:26 UTC |
Description:
Mar 13, 16:35 UTC - Resolved: Our Engineering team has confirmed resolution of the issue causing DNS timeouts for newly provisioned Managed Kubernetes nodes. All cluster services should now be functioning normally. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Mar 13, 13:55 UTC - Monitoring: Our Engineering team has implemented a fix to address the issue causing DNS timeouts for newly provisioned Managed Kubernetes nodes. Further investigation has confirmed that this issue primarily affected customers utilizing a NAT Gateway within their VPC and running a VPC-native cluster. We are actively monitoring the situation to ensure overall stability. We appreciate your patience and will provide a further update once the issue is fully confirmed resolved.
Mar 13, 12:32 UTC - Identified: Our Engineering team is investigating an issue impacting newly provisioned Managed Kubernetes nodes. Only customers who run a NAT Gateway in their VPC and a VPC-native cluster are affected, and they may experience DNS timeouts. We apologize for the inconvenience and will share an update once we have more information.
Mar 13, 11:26 UTC - Investigating: Our Engineering team is investigating an issue impacting newly provisioned Managed Kubernetes nodes. During this time, new nodes may experience DNS timeouts, which could temporarily affect cluster services. We apologize for the inconvenience and will share an update once we have more information.
Status Timeline
Resolved
Mar 13, 16:35 UTC
Status changed from Monitoring to Resolved
Monitoring
Mar 13, 13:55 UTC
Status changed from Identified to Monitoring
Identified
Mar 13, 12:32 UTC
Status changed from Investigating to Identified |
|||||
|
DO
|
Ubuntu/Debian Package Mirror Failure | Ubuntu/Debian Package Mirror Failure | Resolved | low | Mar 9, 19:23 UTC |
Description:
Mar 9, 19:23 UTC - Resolved: From 17:50 to 19:06 UTC, our Engineering team observed an issue with mirrors.digitalocean.com. During this time, users may have experienced errors when trying to update packages on Debian and Ubuntu images. Our team fully resolved the issue as of 19:06 UTC. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience caused. |
|||||
|
aws
|
Service degradation: Increased Error Rates | Resolved | medium | Mar 7, 11:53 UTC | |
Description: We are investigating increased error rates in the EU-CENTRAL-2 Region.
Status Timeline
Resolved
Mar 8, 21:26 UTC
Auto-resolved: no longer in provider status feed |
|||||
|
DO
|
HTTP 522 Error on App Platform | HTTP 522 Error on App Platform | Resolved | low | Mar 6, 21:22 UTC |
Description:
Mar 6, 21:22 UTC - Resolved: Our Engineering team identified an issue affecting the App Platform. During the incident, users may have experienced HTTP 522 (Connection Timed Out) errors when accessing their apps. The issue now appears to be resolved. We apologize for the inconvenience caused. If you continue to experience any related errors, please contact our Support team by opening a ticket at https://www.digitalocean.com/support/contact/. |
|||||
|
DO
|
App Platform Deployments | App Platform Deployments | Resolved | low | Mar 5, 23:28 UTC |
Description:
Mar 6, 01:12 UTC - Resolved: As of 00:22 UTC, our Engineering team has confirmed that the issue causing delays in App Platform deployments has been fully resolved. The fix implemented earlier has been successful, and we are no longer seeing any delays or errors with deployments. Users should now be able to deploy their apps successfully and without any issues. We apologize again for the inconvenience caused. If you continue to experience any issues, please raise a support ticket for further investigation.
Mar 6, 00:32 UTC - Monitoring: Our Engineering team has implemented a fix to address the issue causing delays in App Platform deployments. We are actively monitoring the situation to ensure overall stability. We appreciate your patience and will provide a further update once the issue is fully confirmed resolved.
Mar 5, 23:28 UTC - Investigating: Our Engineering team is currently investigating an issue impacting App Platform deployments. During this time, users may experience delays or failures when deploying new and existing App Platform apps. We apologize for any inconvenience, and we'll share more information as it becomes available.
Status Timeline
Resolved
Mar 6, 01:12 UTC
Status changed from Monitoring to Resolved
Monitoring
Mar 6, 00:32 UTC
Status changed from Investigating to Monitoring |
|||||
|
DO
|
Internal Load Balancers Connectivity | Internal Load Balancers Connectivity | Resolved | low | Mar 5, 00:23 UTC |
Description:
Mar 5, 01:52 UTC - Resolved: From 19:57 UTC to 01:03 UTC, customers may have experienced connectivity issues between Internal Load Balancers and their associated target Droplets, which could have resulted in service disruption or traffic routing failures. Our Engineering team has confirmed full resolution of the issue, and Internal Load Balancers should now be functioning normally. If you continue to experience any problems, please open a ticket with our Support team. We apologize for any inconvenience caused.
Mar 5, 01:19 UTC - Monitoring: Our Engineering team has implemented mitigation measures to address the connectivity issues affecting Internal Load Balancers and their associated target Droplets. We are actively monitoring the situation to ensure stability and prevent any recurrence. We will provide a further update once we confirm the issue is fully resolved.
Mar 5, 00:23 UTC - Investigating: Our Engineering team is investigating an issue affecting Internal Load Balancers. Customers may experience connectivity loss between Internal Load Balancers and their associated target Droplets. We apologize for the inconvenience and will share an update as soon as more information becomes available.
Status Timeline
Resolved
Mar 5, 01:52 UTC
Status changed from Monitoring to Resolved
Monitoring
Mar 5, 01:19 UTC
Status changed from Investigating to Monitoring
| DO | Core Infrastructure Maintenance in All Regions 2026-03-03 10:00 UTC | Resolved | low | Mar 3, 10:00 UTC |
Description

Mar 6, 15:55 UTC | Completed
This scheduled maintenance is now complete across all regions. Thank you for your patience and understanding throughout this process.

Mar 6, 11:00 UTC | In progress
Scheduled maintenance is currently in progress. We will provide updates as necessary.

Mar 5, 13:05 UTC | Scheduled
Phase 2 maintenance is complete. Phase 3 is scheduled to begin at March 06, 11:00 UTC.

Mar 5, 10:00 UTC | In progress
Scheduled maintenance is currently in progress. We will provide updates as necessary.

Mar 3, 13:03 UTC | Scheduled
Phase 1 maintenance is complete. Phase 2 is scheduled to begin at March 05, 10:00 UTC.

Mar 3, 10:00 UTC | In progress
Scheduled maintenance is currently in progress. We will provide updates as necessary.

Mar 1, 10:44 UTC | Scheduled
Start: 2026-03-03 10:00 UTC
End: 2026-03-06 13:00 UTC

During the above window, our Engineering team will be performing maintenance on core control plane infrastructure in all regions. Please note that the existing infrastructure will continue running without issue. This maintenance will be carried out in three phases as outlined below:

March 03, 10:00 to 13:00 UTC
March 05, 10:00 to 13:00 UTC
March 06, 11:00 to 13:00 UTC

We do not anticipate any impact; however, there is a small possibility that control panel functionality, specifically CRUD (Create, Read, Update, Delete) operations, may be affected during the maintenance window. All running workloads are expected to continue operating normally without interruption. Our team will be actively monitoring the environment throughout the maintenance, and any unexpected events will be promptly communicated through our status page. If you have any questions or concerns regarding this maintenance, please feel free to open a support ticket from within your account. We're here to help.

Status Timeline
Resolved
Mar 6, 15:55 UTC
Status changed from Scheduled to Resolved
| DO | Core Infrastructure Maintenance in SFO2 and SFO3 | Resolved | low | Mar 2, 13:00 UTC |
Description

Mar 2, 16:00 UTC | Completed
The scheduled maintenance has been completed.

Mar 2, 13:00 UTC | In progress
Scheduled maintenance is currently in progress. We will provide updates as necessary.

Feb 28, 13:20 UTC | Scheduled
Start: 2026-03-02 13:00 UTC
End: 2026-03-02 16:00 UTC

During the above window, our Engineering team will be performing maintenance on core control plane infrastructure in SFO2 and SFO3. Please note that the existing infrastructure will continue running without issue. We do not anticipate any impact; however, there is a small possibility that control panel functionality, specifically CRUD (Create, Read, Update, Delete) operations, may be affected during the maintenance window. All running workloads are expected to continue operating normally without interruption. Our team will be actively monitoring the environment throughout the maintenance, and any unexpected events will be promptly communicated through our status page. If you have any questions or concerns regarding this maintenance, please feel free to open a support ticket from within your account. We're here to help.

Affected Regions
sfo2
San Francisco 2
sfo3
San Francisco 3
Status Timeline
Resolved
Mar 2, 16:00 UTC
Status changed from Scheduled to Resolved
| gcp | Multiple Products | Resolved | low | Feb 27, 12:36 UTC |
Description
Vertex AI Gemini API customers experienced increased error rates when accessing the global endpoint.
| DO | Intermittent Errors with Llama 3.3-70B | Resolved | low | Feb 26, 19:39 UTC |
Description

Feb 26, 22:23 UTC | Resolved
Issue resolved.
Cause: A few requests made to the Llama 3.3-70B model caused issues.
Impact: Intermittent errors when interacting with the model through serverless inference and/or with agents created using this model.
Contact support if issues persist.

Feb 26, 21:52 UTC | Monitoring
Fix deployed. Monitoring resources related to Llama 3.3-70B. Users should no longer experience intermittent errors when making serverless inference requests via APIs and Agents. Awaiting confirmation before closure.

Feb 26, 16:00 UTC | Investigating
We are currently investigating an issue affecting the Llama 3.3-70B model.
Symptoms: Users may encounter intermittent errors when making serverless inference requests via APIs and Agents.
Current Status: Our engineering team is actively investigating the issue to determine the root cause.

Status Timeline
Resolved
Feb 26, 22:23 UTC
Status changed from Monitoring to Resolved
Monitoring
Feb 26, 21:52 UTC
Status changed from Investigating to Monitoring
| DO | App Platform Deployments | Resolved | low | Feb 26, 17:14 UTC |
Description

Feb 27, 04:55 UTC | Resolved
Our Engineering team has confirmed that the issue causing build failures on App Platform has been resolved. Between approximately 14:30 UTC on the 26th and 00:01 UTC on the 27th, users may have experienced errors when attempting to build or deploy applications using older versions of the Node.js buildpack. A fix has been implemented, and build and deployment operations have been restored to normal. All App Platform builds are now succeeding as expected. Customers who previously encountered build failures should now be able to deploy their applications without further issues. If you continue to experience any problems, please open a ticket with our support team. Thank you for your patience, and we apologize for any inconvenience.

Feb 26, 17:14 UTC | Investigating
As of 14:30 UTC, our Engineering team is investigating reports of build failures on App Platform for customers using older versions of the Node.js buildpack. Users may experience errors when attempting to build their applications, resulting in failed deployments. Our Engineering team is working to fix the issue and will share an update once we have more information. In the meantime, as a workaround, we recommend that customers upgrade to the latest version of the Node.js buildpack, which may resolve the build failures and allow for successful deployments. To upgrade, please follow the instructions outlined here: https://docs.digitalocean.com/products/app-platform/how-to/migrate-nodejs-buildpack/ We apologize for the inconvenience this issue may be causing and appreciate your patience as we work to resolve it.

Status Timeline
Resolved
Feb 27, 04:55 UTC
Status changed from Investigating to Resolved
| DO | Core Infrastructure Maintenance in AMS3, FRA1, and LON1 | Resolved | low | Feb 26, 16:00 UTC |
Description

Feb 26, 20:00 UTC | Completed
The scheduled maintenance has been completed.

Feb 26, 16:00 UTC | In progress
Scheduled maintenance is currently in progress. We will provide updates as necessary.

Feb 24, 15:04 UTC | Scheduled
Start: 2026-02-26 16:00 UTC
End: 2026-02-26 20:00 UTC

During the above window, our Engineering team will be performing maintenance on core control plane infrastructure in AMS3, FRA1, and LON1. Please note that the existing infrastructure will continue running without issue. We do not anticipate any impact; however, there is a small possibility that control panel functionality, specifically CRUD (Create, Read, Update, Delete) operations, may be affected during the maintenance window. All running workloads are expected to continue operating normally without interruption. Our team will be actively monitoring the environment throughout the maintenance, and any unexpected events will be promptly communicated through our status page. If you have any questions or concerns regarding this maintenance, please feel free to open a support ticket from within your account. We're here to help.

Affected Regions
ams3
Amsterdam 3
fra1
Frankfurt 1
lon1
London 1
Status Timeline
Resolved
Feb 26, 20:00 UTC
Status changed from Scheduled to Resolved
| DO | Core Infrastructure Maintenance in BLR1 and SGP1 | Resolved | low | Feb 25, 15:00 UTC |
Description

Feb 25, 18:00 UTC | Completed
The scheduled maintenance has been completed.

Feb 25, 15:00 UTC | In progress
Scheduled maintenance is currently in progress. We will provide updates as necessary.

Feb 23, 16:05 UTC | Scheduled
Start: 2026-02-25 15:00 UTC
End: 2026-02-25 18:00 UTC

During the above window, our Engineering team will be performing maintenance on core control plane infrastructure in BLR1 and SGP1. Please note that the existing infrastructure will continue running without issue. We do not anticipate any impact; however, there is a small possibility that control panel functionality, specifically CRUD (Create, Read, Update, Delete) operations, may be affected during the maintenance window. All running workloads are expected to continue operating normally without interruption. Our team will be actively monitoring the environment throughout the maintenance, and any unexpected events will be promptly communicated through our status page. If you have any questions or concerns regarding this maintenance, please feel free to open a support ticket from within your account. We're here to help.

Thank you,
Team DigitalOcean

Affected Regions
blr1
Bangalore 1
sgp1
Singapore 1
Status Timeline
Resolved
Feb 25, 18:00 UTC
Status changed from Scheduled to Resolved
| aws | Service impact: Intermittent missing or delayed EC2 instance and status check metrics | Resolved | medium | Feb 25, 10:14 UTC |
Description
We are experiencing intermittent missing or delayed EC2 instance and status check metrics in the US-EAST-1 Region. Alarms on delayed or missing metrics may transition into an INSUFFICIENT_DATA state. We are taking multiple parallel paths to mitigate this issue. While underlying resources are not affected by this issue, customers with automated actions based on delayed or missing metric data may see those automations trigger. EC2 APIs are not impacted, and therefore EC2 Auto Scaling will not be affected by this issue.

Status Timeline
Resolved
Feb 26, 21:01 UTC
Auto-resolved: no longer in provider status feed
| DO | Core Infrastructure Maintenance in SYD1 | Resolved | low | Feb 24, 15:00 UTC |
Description

Feb 24, 17:00 UTC | Completed
The scheduled maintenance has been completed.

Feb 24, 15:00 UTC | In progress
Scheduled maintenance is currently in progress. We will provide updates as necessary.

Feb 22, 15:12 UTC | Scheduled
Start: 2026-02-24 15:00 UTC
End: 2026-02-24 17:00 UTC

During the above window, our Engineering team will be performing maintenance on core control plane infrastructure in SYD1. Please note that the existing infrastructure will continue running without issue. We do not anticipate any impact; however, there is a small possibility that control panel functionality, specifically CRUD (Create, Read, Update, Delete) operations, may be affected during the maintenance window. All running workloads are expected to continue operating normally without interruption. Our team will be actively monitoring the environment throughout the maintenance, and any unexpected events will be promptly communicated through our status page. If you have any questions or concerns regarding this maintenance, please feel free to open a support ticket from within your account. We're here to help.

Affected Regions
syd1
Sydney 1
Status Timeline
Resolved
Feb 24, 17:00 UTC
Status changed from Scheduled to Resolved