<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Webdock Status - Incident history</title>
    <link>https://status.webdock.io</link>
    <description>Webdock</description>
    <pubDate>Fri, 6 Mar 2026 03:21:37 +0000</pubDate>
    
<item>
  <title>A single host needs a reboot</title>
  <description>
    Type: Incident
    Duration: 1 hour and 4 minutes

    Affected Components: Denmark: General Infrastructure
    Mar 6, 03:21:37 GMT+0 - Investigating - One of the hosts (an LXD-based legacy host) seems unresponsive or crashed. This one needs a reboot. Mar 6, 04:26:01 GMT+0 - Resolved - This incident has been resolved.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 1 hour and 4 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 6&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:21:37&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  One of the hosts (an LXD-based legacy host) seems unresponsive or crashed. This one needs a reboot.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 6&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;04:26:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 6 Mar 2026 03:21:37 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmmebvlw700y9i99hfmu0lw5s</link>
  <guid>https://status.webdock.io/incident/cmmebvlw700y9i99hfmu0lw5s</guid>
</item>

<item>
  <title>A single host needs a reboot</title>
  <description>
    Type: Incident
    Duration: 16 minutes

    Affected Components: Denmark: General Infrastructure
    Mar 5, 03:16:29 GMT+0 - Investigating - One of the hosts (an LXD-based legacy host) seems unresponsive or crashed. This one needs a reboot. Mar 5, 03:32:33 GMT+0 - Resolved - This incident has been resolved.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 16 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 5&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:16:29&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  One of the hosts (an LXD-based legacy host) seems unresponsive or crashed. This one needs a reboot.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 5&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:32:33&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 5 Mar 2026 03:16:29 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmmcw953l0w5z50p1jh9s4h22</link>
  <guid>https://status.webdock.io/incident/cmmcw953l0w5z50p1jh9s4h22</guid>
</item>

<item>
  <title>A single host needs a reboot</title>
  <description>
    Type: Incident
    Duration: 6 minutes

    Affected Components: Denmark: General Infrastructure
    Mar 4, 12:35:33 GMT+0 - Investigating - One of the hosts (an LXD-based legacy host) seems unresponsive or crashed. This one needs a reboot. Mar 4, 12:41:20 GMT+0 - Resolved - This incident has been resolved. Sorry for the inconvenience caused.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 6 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 4&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:35:33&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  One of the hosts (an LXD-based legacy host) seems unresponsive or crashed. This one needs a reboot.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 4&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:41:20&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved. Sorry for the inconvenience caused.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 4 Mar 2026 12:35:33 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmmc0s9m100wlro8jszakrg93</link>
  <guid>https://status.webdock.io/incident/cmmc0s9m100wlro8jszakrg93</guid>
</item>

<item>
  <title>Issue with another host (LXD-based)</title>
  <description>
    Type: Incident
    Duration: 14 minutes

    Affected Components: Denmark: General Infrastructure
    Mar 2, 14:08:30 GMT+0 - Identified - This one needs a reboot as well. Apologies for the inconvenience. Mar 2, 14:22:09 GMT+0 - Resolved - This incident has been resolved.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 14 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:08:30&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  This one needs a reboot as well. Apologies for the inconvenience.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:22:09&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 2 Mar 2026 14:08:30 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmm99834d05lsocckac7nzk8u</link>
  <guid>https://status.webdock.io/incident/cmm99834d05lsocckac7nzk8u</guid>
</item>

<item>
  <title>Network issue in Denmark</title>
  <description>
    Type: Incident
    Duration: 6 minutes

    Affected Components: Denmark: Network Infrastructure
    Mar 2, 12:49:19 GMT+0 - Investigating - Some IP ranges are not being routed correctly via BGP at the moment. This seems to be related to a fault in Frankfurt where a network device went down. We are currently investigating this incident. Mar 2, 12:55:06 GMT+0 - Resolved - This incident has been resolved. It was caused by a DC technician in Frankfurt pulling a power cable they shouldn&#039;t have. The device needed to boot and rebuild its routing tables; the impact was limited to parts of our IP space and lasted about ~10 minutes.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 6 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:49:19&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  Some IP ranges are not being routed correctly via BGP at the moment. This seems to be related to a fault in Frankfurt where a network device went down. We are currently investigating this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:55:06&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved. It was caused by a DC technician in Frankfurt pulling a power cable they shouldn&#039;t have. The device needed to boot and rebuild its routing tables; the impact was limited to parts of our IP space and lasted about ~10 minutes.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 2 Mar 2026 12:49:19 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmm96e5ft04zkocckzv7k1h1n</link>
  <guid>https://status.webdock.io/incident/cmm96e5ft04zkocckzv7k1h1n</guid>
</item>

<item>
  <title>A single host needs a reboot</title>
  <description>
    Type: Incident
    Duration: 7 minutes

    Affected Components: Denmark: General Infrastructure
    Mar 2, 09:56:00 GMT+0 - Identified - The host needs a reboot. Instances will again be reachable in 5-10 minutes. Apologies for any inconvenience caused. Mar 2, 10:02:31 GMT+0 - Resolved - This incident has been resolved.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 7 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:56:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The host needs a reboot. Instances will again be reachable in 5-10 minutes. Apologies for any inconvenience caused.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:02:31&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 2 Mar 2026 09:56:00 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmm907dkl036cvnuvapwy61oz</link>
  <guid>https://status.webdock.io/incident/cmm907dkl036cvnuvapwy61oz</guid>
</item>

<item>
  <title>Problems communicating with Epyc Host</title>
  <description>
    Type: Incident
    Duration: 1 hour and 28 minutes

    Affected Components: Denmark: General Infrastructure
    Feb 25, 00:46:26 GMT+0 - Identified - We are having issues communicating with one of our Epyc hosts - all customers there are running fine; we just have problems interacting with and sending commands to the host system. Feb 25, 01:55:21 GMT+0 - Identified - We now have to escalate this to a downtime scenario, as we are completely unable to reach the control plane of this host. The host will need a reboot and some maintenance. This unfortunately means all customers on this host will now go down for up to 2 hours, depending on the severity of the issue. We will update once we know more. Feb 25, 02:14:42 GMT+0 - Resolved - Fortunately the issue was not as critical as feared and all services came up after a reboot. We saw at most 5 minutes of downtime here today. Thank you for your patience.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 1 hour and 28 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;00:46:26&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are having issues communicating with one of our Epyc hosts - all customers there are running fine; we just have problems interacting with and sending commands to the host system.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;01:55:21&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We now have to escalate this to a downtime scenario, as we are completely unable to reach the control plane of this host. The host will need a reboot and some maintenance. This unfortunately means all customers on this host will now go down for up to 2 hours, depending on the severity of the issue. We will update once we know more.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;02:14:42&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  Fortunately the issue was not as critical as feared and all services came up after a reboot. We saw at most 5 minutes of downtime here today. Thank you for your patience.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 25 Feb 2026 00:46:26 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmm1bdc7t03mn9d6c2rm5p7ni</link>
  <guid>https://status.webdock.io/incident/cmm1bdc7t03mn9d6c2rm5p7ni</guid>
</item>

<item>
  <title>Issue with a single host (LXD-based)</title>
  <description>
    Type: Incident
    Duration: 28 minutes

    Affected Components: Denmark: General Infrastructure
    Feb 22, 14:44:53 GMT+0 - Identified - The host needs a reboot for proper functioning. Sorry for the inconvenience. Feb 22, 15:12:59 GMT+0 - Resolved - This incident has been resolved.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 28 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 22&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:44:53&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The host needs a reboot for proper functioning. Sorry for the inconvenience.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 22&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:12:59&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sun, 22 Feb 2026 14:44:53 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmlxv02bl0aq14exrbt6d6ncg</link>
  <guid>https://status.webdock.io/incident/cmlxv02bl0aq14exrbt6d6ncg</guid>
</item>

<item>
  <title>Issue with LXD host</title>
  <description>
    Type: Incident
    Duration: 16 minutes

    Affected Components: Denmark: General Infrastructure
    Feb 16, 16:56:00 GMT+0 - Investigating - We are currently investigating this incident. Feb 16, 17:12:22 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 16 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:56:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;17:12:22&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 16 Feb 2026 16:56:00 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmlpf1krk021vm9vlz5o8i9hz</link>
  <guid>https://status.webdock.io/incident/cmlpf1krk021vm9vlz5o8i9hz</guid>
</item>

<item>
  <title>Issue with a single host (LXD-based)</title>
  <description>
    Type: Incident
    Duration: 8 minutes

    Affected Components: Denmark: General Infrastructure
    Feb 10, 09:13:39 GMT+0 - Identified - The host requires a reboot. Instances will face downtime for 2-5 minutes.

Sorry for the inconvenience. Feb 10, 09:21:29 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 8 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 10&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:13:39&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The host requires a reboot. Instances will face downtime for 2-5 minutes.

Sorry for the inconvenience.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 10&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:21:29&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 10 Feb 2026 09:13:39 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmlgdvvrq000hy4pvgd55emgy</link>
  <guid>https://status.webdock.io/incident/cmlgdvvrq000hy4pvgd55emgy</guid>
</item>

<item>
  <title>Issue with LXD host</title>
  <description>
    Type: Incident
    Duration: 31 minutes

    Affected Components: Denmark: Network Infrastructure
    Jan 20, 16:07:40 GMT+0 - Investigating - We are currently investigating this incident. Jan 20, 16:38:06 GMT+0 - Identified - The issue has been fixed. Jan 20, 16:38:19 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 31 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:07:40&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:38:06&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The issue has been fixed.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:38:19&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 20 Jan 2026 16:07:40 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmkmsffep016dxjngse32op08</link>
  <guid>https://status.webdock.io/incident/cmkmsffep016dxjngse32op08</guid>
</item>

<item>
  <title>LXD-based host needs reboot</title>
  <description>
    Type: Incident
    Duration: 38 minutes

    Affected Components: Denmark: General Infrastructure
    Jan 16, 04:47:15 GMT+0 - Investigating - The host will need a reboot for about 5-10 minutes. We apologize for any inconvenience. Jan 16, 05:24:56 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 38 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;04:47:15&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  The host will need a reboot for about 5-10 minutes. We apologize for any inconvenience.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:24:56&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 16 Jan 2026 04:47:15 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmkgeczop07y5z5eaf83s0awi</link>
  <guid>https://status.webdock.io/incident/cmkgeczop07y5z5eaf83s0awi</guid>
</item>

<item>
  <title>Issue with another EPYC host</title>
  <description>
    Type: Incident
    Duration: 7 hours and 13 minutes

    Affected Components: Denmark: General Infrastructure
    Jan 14, 17:11:44 GMT+0 - Resolved - We have completed all migrations. This should conclude this incident. We apologize for any inconvenience caused. Jan 14, 09:58:22 GMT+0 - Identified - Our DC team is looking into the issue. It appears one of the CPUs has failed (dual-CPU setup). The team is working to bring the host back up.

Apologies for the inconvenience. Jan 14, 10:31:04 GMT+0 - Resolved - This incident has been resolved. Once again, sorry for the inconvenience. Jan 14, 10:43:34 GMT+0 - Postmortem - ### Incident Post-Mortem – Unexpected Server Reboots

**Affected system:** Single compute node (Dell R6525, dual AMD EPYC)

### Summary

One compute node experienced repeated unexpected reboots caused by hardware-level **Machine Check Exceptions (MCEs)** reported by the system firmware and operating system. The issue was resolved after on-site hardware intervention, and the system is now operating normally.

### Impact

Customers hosted on this node experienced service interruptions during the reboot loop. No data loss occurred.

### Root Cause (most likely)

The most likely cause was a **marginal CPU socket contact** (pin pressure / seating issue) on one processor socket. This can occasionally occur even on new systems and may only surface after some time in production.

When the CPUs were removed, inspected, reseated, and swapped between sockets, the errors stopped and have not recurred.

### Other causes considered

While investigating, we also evaluated and ruled out:

* ECC memory failures (no memory errors were logged by firmware or iDRAC)
* Operating system or kernel issues
* Sustained thermal overload

Other less likely contributors include transient socket power instability or inter-CPU fabric retraining issues, both of which can be cleared by a full power-off and reseat.

### Background

The server was newly installed approximately **1½ months ago** and successfully passed a **10-hour full system stress test** before being placed into production. The issue developed later and was not present during initial burn-in.

### Resolution &amp; current status

* CPUs were reseated and swapped between sockets
* System firmware counters were cleared
* The server is now stable and operating normally under load
* Ongoing monitoring has been increased Jan 14, 13:18:38 GMT+0 - Identified - The issue has reappeared. We are looking into it. Jan 14, 13:51:15 GMT+0 - Monitoring - It turns out the fault did follow the CPU, so the CPU is simply bad. We have now removed the CPU and booted the system in a single-CPU configuration. However, this resulted in our NVMe drives no longer being visible. For this reason we are switching the healthy CPU to the other CPU socket, in the hope that the PCIe lanes for the drives are tied to that socket and we can run on that single socket. If it turns out both CPUs are required for the NVMe drives to come up correctly, we will need to reinsert the bad CPU and migrate all customers away from this system as quickly as possible. We will update once we know more.

Unfortunately we do not have a spare CPU of this exact type available in the DC, so these are the options open to us at the moment. Jan 14, 14:16:31 GMT+0 - Monitoring - Unfortunately it turns out this system cannot support a single-CPU layout while allowing our NVMe drives to function. The only remaining option is to reinsert the faulty CPU and live-migrate all users away from this system as quickly as we can. You will receive migration start and end notifications by email. We expect to complete the migrations before the faulty CPU acts up again. We will update here once the migrations are complete and this issue is fully resolved. We do not have a firm ETA; this may take a couple of hours. You should see your instance come up before long; at some point it will go down for a minute or two while being started in the new location, after which you should see no further disruption. Jan 14, 14:36:15 GMT+0 - Monitoring - All customers are now starting on the unstable system. You should see your VPS come up very soon. Migrations will begin shortly.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 7 hours and 13 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 14&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;17:11:44&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  We have completed all migrations. This should conclude this incident. We apologize for any inconvenience caused.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 14&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:58:22&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Our DC team is looking into the issue. It appears one of the CPUs has failed (dual-CPU setup). The team is working to bring the host back up.

Apologies for the inconvenience.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 14&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:31:04&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved. Once again, sorry for the inconvenience.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 14&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:43:34&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Postmortem&lt;/strong&gt; -
  ### Incident Post-Mortem – Unexpected Server Reboots

**Affected system:** Single compute node (Dell R6525, dual AMD EPYC)

### Summary

One compute node experienced repeated unexpected reboots caused by hardware-level **Machine Check Exceptions (MCEs)** reported by the system firmware and operating system. The issue was resolved after on-site hardware intervention, and the system is now operating normally.

### Impact

Customers hosted on this node experienced service interruptions during the reboot loop. No data loss occurred.

### Root Cause (most likely)

The most likely cause was a **marginal CPU socket contact** (pin pressure / seating issue) on one processor socket. This can occasionally occur even on new systems and may only surface after some time in production.

When the CPUs were removed, inspected, reseated, and swapped between sockets, the errors stopped and have not recurred.

### Other causes considered

While investigating, we also evaluated and ruled out:

* ECC memory failures (no memory errors were logged by firmware or iDRAC)
* Operating system or kernel issues
* Sustained thermal overload

Other less likely contributors include transient socket power instability or inter-CPU fabric retraining issues, both of which can be cleared by a full power-off and reseat.

### Background

The server was newly installed approximately **1½ months ago** and successfully passed a **10-hour full system stress test** before being placed into production. The issue developed later and was not present during initial burn-in.

### Resolution &amp; current status

* CPUs were reseated and swapped between sockets
* System firmware counters were cleared
* The server is now stable and operating normally under load
* Ongoing monitoring has been increased.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 14&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:18:38&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The issue has reappeared. We are looking into it.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 14&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:51:15&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  It turns out the fault did follow the CPU, so the CPU is simply bad. We have now removed the CPU and booted the system in a single-CPU configuration. However, this resulted in our NVMe drives no longer being visible. For this reason we are switching the healthy CPU to the other CPU socket, in the hope that the PCIe lanes for the drives are tied to that socket and we can run on that single socket. If it turns out both CPUs are required for the NVMe drives to come up correctly, we will need to reinsert the bad CPU and migrate all customers away from this system as quickly as possible. We will update once we know more.

Unfortunately we do not have a spare CPU of this exact type available in the DC, so these are the options open to us at the moment.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 14&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:16:31&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  Unfortunately it turns out this system cannot support a single-CPU layout while allowing our NVMe drives to function. The only remaining option is to reinsert the faulty CPU and live-migrate all users away from this system as quickly as we can. You will receive migration start and end notifications by email. We expect to complete the migrations before the faulty CPU acts up again. We will update here once the migrations are complete and this issue is fully resolved. We do not have a firm ETA; this may take a couple of hours. You should see your instance come up before long; at some point it will go down for a minute or two while being started in the new location, after which you should see no further disruption.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 14&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:36:15&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  All customers are now starting on the unstable system. You should see your VPS come up very soon. Migrations will begin shortly.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 14 Jan 2026 09:58:22 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmkduldot00u29t48fst5dynm</link>
  <guid>https://status.webdock.io/incident/cmkduldot00u29t48fst5dynm</guid>
</item>

<item>
  <title>LXD-based host needs reboot</title>
  <description>
    Type: Incident
    Duration: 48 minutes

    Affected Components: Denmark: General Infrastructure
    Jan 12, 11:45:40 GMT+0 - Identified - The server will be down for 5-10 minutes. Sorry for the inconvenience. Jan 12, 12:34:04 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 48 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:45:40&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The server will be down for 5-10 minutes. Sorry for the inconvenience.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:34:04&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 12 Jan 2026 11:45:40 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmkb3jnhz001cur2cojz6siwe</link>
  <guid>https://status.webdock.io/incident/cmkb3jnhz001cur2cojz6siwe</guid>
</item>

<item>
  <title>One of our AMD Epyc hosts needs a reboot</title>
  <description>
    Type: Incident
    Duration: 6 hours and 2 minutes

    Affected Components: Denmark: General Infrastructure
    Jan 12, 08:49:07 GMT+0 - Identified - The physical host needs a reboot. Instances running there will see 15-20 minutes of downtime. Jan 12, 10:02:33 GMT+0 - Monitoring - There seems to be a problem with some cabling in the physical host. The DC team is on this. Sorry for the inconvenience. Jan 12, 10:21:58 GMT+0 - Monitoring - Some bad news: unfortunately our DC team encountered a rare, critical two-drive failure. We&#039;ll reload all the servers on the affected host from the latest snapshots we have. Our sincere apologies for this. Jan 12, 14:51:16 GMT+0 - Resolved - ### This incident has been resolved.
  
Post-mortem: Dual NVMe Drive Failure on EPYC Host

Here is our post-mortem for today's incident, which caused extended downtime for approximately **317 EPYC-based customer instances**.

---

### Summary

Earlier today, a single EPYC hypervisor experienced a storage failure following a planned administrative restart. The restart itself was routine and performed to address degraded disk I/O performance that had been observed over the preceding days.

Following the reboot, the system failed to come back online due to the unexpected loss of **two NVMe drives**, which together formed a complete ZFS top-level mirror vdev. The simultaneous loss of both members of a mirror rendered the ZFS pool unimportable and resulted in extended downtime while recovery operations were performed.

---

### Timeline and Detection

Prior to the restart, we performed standard pre-maintenance checks:

* The ZFS storage pool reported as **ONLINE**
* No critical ZFS alerts were present
* No hardware warnings or failures were reported by **Dell iDRAC / IPMI**
* There was a single historical ZFS write error recorded on one device, but this was not accompanied by device faulting, checksum storms, or pool degradation

This type of isolated write error is something we occasionally observe across large fleets and, based on long operational experience, does **not normally indicate imminent or catastrophic failure**. The expectation was therefore to proceed with a controlled reboot, followed by a scrub if necessary.

At the time of the restart, there were **no predictive indicators** from either the storage layer or the hardware management layer that suggested an elevated risk of failure.

---

### Failure Event

Upon reboot, the system failed to import its ZFS pool. Investigation revealed that **two NVMe drives were no longer available to the system**. These two drives together constituted an entire mirror vdev at the top level of the pool.

In ZFS, the loss of a complete top-level vdev makes a pool unrecoverable by design, as data is striped across vdevs and cannot be reconstructed without at least one surviving replica.
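
As a sketch of why this matters, consider a hypothetical pool laid out like the affected one (pool and device names here are invented for illustration, not taken from the actual host), as `zpool status` would present it:

```text
  pool: tank
 state: ONLINE
config:
        NAME          STATE
        tank          ONLINE
          mirror-0    ONLINE
            nvme0n1   ONLINE
            nvme1n1   ONLINE
          mirror-1    ONLINE
            nvme2n1   ONLINE
            nvme3n1   ONLINE
```

Data is striped across the two top-level vdevs, mirror-0 and mirror-1. Losing nvme2n1 alone would leave mirror-1 degraded but the pool importable; losing both nvme2n1 and nvme3n1 removes mirror-1 entirely, so roughly half of every file's blocks have no surviving replica and the pool cannot be imported.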

This immediately escalated the incident from a routine maintenance task to a full host recovery operation.

---

### Hardware Investigation

With assistance from on-site datacenter technicians, extensive hardware diagnostics were performed:

* Drives were reseated and moved to known-good NVMe bays
* Backplane, cabling, and PCIe connectivity were verified
* BIOS and iDRAC inventory were reviewed
* Power cycling and cold starts were attempted

The results were conclusive:

* **One drive was no longer detected at all** by the system
* **The second drive was detected but reported a capacity of 0 GB and failed initialization**

At this point, it was clear that **both NVMe drives had suffered irreversible failure**, likely at the controller or firmware level.

---

### Why This Was Exceptionally Unlikely

This failure mode is statistically extreme:

* Both drives were enterprise-grade NVMe devices
* Both were members of a mirror specifically designed to tolerate single-device failure
* There were **no SMART, iDRAC, or ZFS indicators** suggesting a pending fault
* The failures occurred effectively simultaneously and only became fully visible after a reboot

In many thousands of host-years of operation, we have not previously encountered a scenario where **both members of a ZFS mirror failed in such close succession without advance warning**.

The absence of meaningful alerts meant that there was no operational signal that would normally justify preemptive action such as taking the host out of service prior to the reboot.

---

### Impact

* Approximately **317 customer instances** on the affected host experienced downtime
* The host itself required full storage reinitialization
* Customer instances were restored from backup snapshots via our Incus-based recovery infrastructure

Because the incident occurred while the **current daily backup cycle was still in progress**, restore points varied:

* Approximately **20% of instances** were recovered from backups taken earlier the same morning
* Approximately **80% of instances** were recovered from the most recent completed weekly backup, taken **the previous morning (CET)**

---

### Recovery and Resolution

Once it was clear that the local ZFS pool could not be recovered:

* The affected storage pool was destroyed and recreated
* The host was re-initialized cleanly
* Customer instances were restored from the most recent available snapshots
* All affected services were brought back online

---

### Lessons Learned and Preventive Measures

Although this incident stemmed from an extremely improbable hardware failure, we are still taking concrete steps to reduce the blast radius of similar edge cases in the future:

* More conservative handling and escalation of **any ZFS device-level errors**, even when isolated
* Additional scrutiny around storage health prior to maintenance reboots on high-density hosts
* Adjustments to maintenance timing relative to active backup windows
* Review of power and firmware interactions specific to NVMe devices under sustained I/O load
* Continued evaluation of pool layout and recovery strategies to further limit worst-case scenarios

---

### Closing Notes

This incident was not caused by a single mistake, misconfiguration, or ignored alert. It was the result of a rare and unfortunate convergence of hardware failures that only became fully apparent at reboot time.

While ZFS behaved exactly as designed — refusing to mount a pool whose integrity could not be proven — the lack of advance warning made the outcome both surprising and severe.

We regret the disruption caused and appreciate the patience shown while recovery was underway. Incidents like this feed directly into improving our operational resilience and recovery procedures going forward. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 6 hours and 2 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:49:07&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The physical host needs a reboot. Instances running there will see 15-20 minutes of downtime.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:02:33&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  There seems to be a problem with some cabling in the physical host. The DC guys are on this. Sorry for the inconvenience.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:21:58&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  Some bad news. Unfortunately our DC guys saw a rare critical 2-drive failure. We&#039;ll reload all the servers on the affected host with the latest snapshots we have. Our sincere apologies for this.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:51:16&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.
  
### Post-mortem: Dual NVMe Drive Failure on EPYC Host

Here is our post-mortem for the incident today, which caused extended downtime for approximately **317 EPYC-based customer instances**.

---

### Summary

Earlier today, a single EPYC hypervisor experienced a storage failure following a planned administrative restart. The restart itself was routine and performed to address degraded disk I/O performance that had been observed over the preceding days.

Following the reboot, the system failed to come back online due to the unexpected loss of **two NVMe drives**, which together formed a complete ZFS top-level mirror vdev. The simultaneous loss of both members of a mirror rendered the ZFS pool unimportable and resulted in extended downtime while recovery operations were performed.

---

### Timeline and Detection

Prior to the restart, we performed standard pre-maintenance checks:

* The ZFS storage pool reported as **ONLINE**
* No critical ZFS alerts were present
* No hardware warnings or failures were reported by **Dell iDRAC / IPMI**
* There was a single historical ZFS write error recorded on one device, but this was not accompanied by device faulting, checksum storms, or pool degradation

This type of isolated write error is something we occasionally observe across large fleets and, based on long operational experience, does **not normally indicate imminent or catastrophic failure**. The expectation was therefore to proceed with a controlled reboot, followed by a scrub if necessary.

At the time of the restart, there were **no predictive indicators** from either the storage layer or the hardware management layer that suggested an elevated risk of failure.

---

### Failure Event

Upon reboot, the system failed to import its ZFS pool. Investigation revealed that **two NVMe drives were no longer available to the system**. These two drives together constituted an entire mirror vdev at the top level of the pool.

In ZFS, the loss of a complete top-level vdev makes a pool unrecoverable by design, as data is striped across vdevs and cannot be reconstructed without at least one surviving replica.

This immediately escalated the incident from a routine maintenance task to a full host recovery operation.

---

### Hardware Investigation

With assistance from on-site datacenter technicians, extensive hardware diagnostics were performed:

* Drives were reseated and moved to known-good NVMe bays
* Backplane, cabling, and PCIe connectivity were verified
* BIOS and iDRAC inventory were reviewed
* Power cycling and cold starts were attempted

The results were conclusive:

* **One drive was no longer detected at all** by the system
* **The second drive was detected but reported a capacity of 0 GB and failed initialization**

At this point, it was clear that **both NVMe drives had suffered irreversible failure**, likely at the controller or firmware level.

---

### Why This Was Exceptionally Unlikely

This failure mode is statistically extreme:

* Both drives were enterprise-grade NVMe devices
* Both were members of a mirror specifically designed to tolerate single-device failure
* There were **no SMART, iDRAC, or ZFS indicators** suggesting a pending fault
* The failures occurred effectively simultaneously and only became fully visible after a reboot

In many thousands of host-years of operation, we have not previously encountered a scenario where **both members of a ZFS mirror failed in such close succession without advance warning**.

The absence of meaningful alerts meant that there was no operational signal that would normally justify preemptive action such as taking the host out of service prior to the reboot.

---

### Impact

* Approximately **317 customer instances** on the affected host experienced downtime
* The host itself required full storage reinitialization
* Customer instances were restored from backup snapshots via our Incus-based recovery infrastructure

Because the incident occurred while the **current daily backup cycle was still in progress**, restore points varied:

* Approximately **20% of instances** were recovered from backups taken earlier the same morning
* Approximately **80% of instances** were recovered from the most recent completed weekly backup, taken **the previous morning (CET)**

---

### Recovery and Resolution

Once it was clear that the local ZFS pool could not be recovered:

* The affected storage pool was destroyed and recreated
* The host was re-initialized cleanly
* Customer instances were restored from the most recent available snapshots
* All affected services were brought back online

---

### Lessons Learned and Preventive Measures

Although this incident stemmed from an extremely improbable hardware failure, we are still taking concrete steps to reduce the blast radius of similar edge cases in the future:

* More conservative handling and escalation of **any ZFS device-level errors**, even when isolated
* Additional scrutiny around storage health prior to maintenance reboots on high-density hosts
* Adjustments to maintenance timing relative to active backup windows
* Review of power and firmware interactions specific to NVMe devices under sustained I/O load
* Continued evaluation of pool layout and recovery strategies to further limit worst-case scenarios

---

### Closing Notes

This incident was not caused by a single mistake, misconfiguration, or ignored alert. It was the result of a rare and unfortunate convergence of hardware failures that only became fully apparent at reboot time.

While ZFS behaved exactly as designed — refusing to mount a pool whose integrity could not be proven — the lack of advance warning made the outcome both surprising and severe.

  We regret the disruption caused and appreciate the patience shown while recovery was underway. Incidents like this feed directly into improving our operational resilience and recovery procedures going forward.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 12 Jan 2026 08:49:07 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmkax8mii00u5kwzkkf9lv7fq</link>
  <guid>https://status.webdock.io/incident/cmkax8mii00u5kwzkkf9lv7fq</guid>
</item>

<item>
  <title>issue with KVM host</title>
  <description>
    Type: Incident
    Duration: 6 minutes

    Affected Components: Denmark: General Infrastructure
    Jan 10, 15:32:25 GMT+0 - Investigating - We are currently investigating this incident. Jan 10, 15:38:45 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 6 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 10&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:32:25&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 10&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:38:45&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 10 Jan 2026 15:32:25 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmk8grk0m19g89aba1vlc0bod</link>
  <guid>https://status.webdock.io/incident/cmk8grk0m19g89aba1vlc0bod</guid>
</item>

<item>
  <title>Issue with KVM host</title>
  <description>
    Type: Incident
    Duration: 12 minutes

    Affected Components: Denmark: Network Infrastructure
    Jan 8, 20:08:00 GMT+0 - Investigating - We are currently investigating this incident. Jan 8, 20:20:06 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 12 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 8&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:08:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 8&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:20:06&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 8 Jan 2026 20:08:00 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmk5vq9g301bvfgr9cb30cx7s</link>
  <guid>https://status.webdock.io/incident/cmk5vq9g301bvfgr9cb30cx7s</guid>
</item>

<item>
  <title>Flaky network for some /24 in DK DC1</title>
  <description>
    Type: Incident
    Duration: 2 hours and 53 minutes

    Affected Components: Denmark: Network Infrastructure
    Dec 30, 12:21:03 GMT+0 - Resolved - For the past two hours or so, no issues whatsoever. We are still investigating what happened exactly, and it&#039;s looking like some misconfiguration happened on our side. We will reopen this if we see further problems. Dec 30, 09:28:25 GMT+0 - Investigating - We are seeing intermittent packet loss on certain prefixes (IP ranges) in DK DC-1. This may be another small-scale DOS which is not being properly mitigated. We are currently investigating this incident. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 2 hours and 53 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:21:03&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  For the past two hours or so, no issues whatsoever. We are still investigating what happened exactly, and it&#039;s looking like some misconfiguration happened on our side. We will reopen this if we see further problems.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:28:25&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are seeing intermittent packet loss on certain prefixes (IP ranges) in DK DC-1. This may be another small-scale DOS which is not being properly mitigated. We are currently investigating this incident.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 30 Dec 2025 09:28:25 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmjsdx2t202ezdxui8j9ocq21</link>
  <guid>https://status.webdock.io/incident/cmjsdx2t202ezdxui8j9ocq21</guid>
</item>

<item>
  <title>Network issue in DK DC</title>
  <description>
    Type: Incident
    Duration: 56 minutes

    Affected Components: Denmark: Network Infrastructure
    Dec 24, 23:07:11 GMT+0 - Investigating - Something is happening with our network at this time. We are currently investigating this incident. Dec 24, 23:19:13 GMT+0 - Identified - This is looking like a DOS event. We are investigating why auto-mitigation failed us. Dec 24, 23:30:20 GMT+0 - Monitoring - Still seeing up to 90% packet loss from some regions, depending on the route the traffic takes to reach us. It&#039;s a single 10Gbit line which is currently overloaded. Our NOC team is looking at blocking the source of the DOS (which should be automatic, not sure why that is not happening) or shifting the traffic to our healthy 100Gbit lines. We will hopefully have this resolved soon. Dec 24, 23:46:27 GMT+0 - Identified - NOC team is still working on this. We are unsure on this side why the fix is taking so long, especially given our pretty good DOS protection - this may be some sort of novel/interesting attack we haven&#039;t seen before. We will update once our NOC gives us information. Dec 24, 23:59:21 GMT+0 - Monitoring - We implemented a fix and are currently monitoring the result. Dec 25, 00:03:11 GMT+0 - Resolved - This incident has been resolved. We apologize for the inconvenience this evening! 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 56 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;23:07:11&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  Something is happening with our network at this time. We are currently investigating this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;23:19:13&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  This is looking like a DOS event. We are investigating why auto-mitigation failed us.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;23:30:20&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  Still seeing up to 90% packet loss from some regions, depending on the route the traffic takes to reach us. It&#039;s a single 10Gbit line which is currently overloaded. Our NOC team is looking at blocking the source of the DOS (which should be automatic, not sure why that is not happening) or shifting the traffic to our healthy 100Gbit lines. We will hopefully have this resolved soon.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;23:46:27&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  NOC team is still working on this. We are unsure on this side why the fix is taking so long, especially given our pretty good DOS protection - this may be some sort of novel/interesting attack we haven&#039;t seen before. We will update once our NOC gives us information.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;23:59:21&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We implemented a fix and are currently monitoring the result.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;00:03:11&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved. We apologize for the inconvenience this evening!&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 24 Dec 2025 23:07:11 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmjkmiurp09mx53rulob3njti</link>
  <guid>https://status.webdock.io/incident/cmjkmiurp09mx53rulob3njti</guid>
</item>

<item>
  <title>Issue with a single host (lxd-based)</title>
  <description>
    Type: Incident
    Duration: 7 minutes

    Affected Components: Denmark: General Infrastructure
    Dec 24, 07:21:51 GMT+0 - Identified - The host needs a reboot, there will be 5-10 minutes of downtime. Sorry for the inconvenience. Dec 24, 07:28:22 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 7 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:21:51&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The host needs a reboot; there will be 5-10 minutes of downtime. Sorry for the inconvenience.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:28:22&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 24 Dec 2025 07:21:51 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmjjor77d055mq1gqjsx0ef8o</link>
  <guid>https://status.webdock.io/incident/cmjjor77d055mq1gqjsx0ef8o</guid>
</item>

<item>
  <title>Another LXD-based host needs a reboot as the system became unresponsive</title>
  <description>
    Type: Incident
    Duration: 8 minutes

    Affected Components: Denmark: General Infrastructure
    Dec 16, 08:31:12 GMT+0 - Identified - The host will be rebooted now. There will be 5-10 minutes of downtime. Dec 16, 08:39:00 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 8 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:31:12&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The host will be rebooted now. There will be 5-10 minutes of downtime.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:39:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 16 Dec 2025 08:31:12 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmj8bpl2n006p11dnz64zly6e</link>
  <guid>https://status.webdock.io/incident/cmj8bpl2n006p11dnz64zly6e</guid>
</item>

<item>
  <title>An lxd-based host needs a reboot</title>
  <description>
    Type: Incident
    Duration: 6 minutes

    Affected Components: Denmark: General Infrastructure
    Dec 15, 10:54:37 GMT+0 - Identified - The host needs a reboot. The server will see 5-10 minutes of downtime. Apologies for the inconvenience. Dec 15, 11:00:57 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 6 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 15&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:54:37&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The host needs a reboot. The server will see 5-10 minutes of downtime. Apologies for the inconvenience.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 15&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:00:57&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 15 Dec 2025 10:54:37 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmj71e5qc06yv10wr6172qcnn</link>
  <guid>https://status.webdock.io/incident/cmj71e5qc06yv10wr6172qcnn</guid>
</item>

<item>
  <title>A single host needs a reboot</title>
  <description>
    Type: Incident
    Duration: 9 minutes

    Affected Components: Denmark: General Infrastructure
    Dec 2, 03:44:06 GMT+0 - Investigating - The host needs a reboot for normal operation. There will be 2-5 minutes of downtime. Sorry for the inconvenience. Dec 2, 03:53:16 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 9 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:44:06&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  The host needs a reboot for normal operation. There will be 2-5 minutes of downtime. Sorry for the inconvenience.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:53:16&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 2 Dec 2025 03:44:06 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmio1afsr044zly8k6oxelcde</link>
  <guid>https://status.webdock.io/incident/cmio1afsr044zly8k6oxelcde</guid>
</item>

<item>
  <title>A single host needs a reboot</title>
  <description>
    Type: Incident
    Duration: 12 minutes

    Affected Components: Denmark: General Infrastructure
    Dec 1, 04:00:35 GMT+0 - Identified - One of our lxd-based container hosts has become unresponsive and needs a reboot. Dec 1, 04:12:32 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 12 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;04:00:35&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  One of our lxd-based container hosts has become unresponsive and needs a reboot.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;04:12:32&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 1 Dec 2025 04:00:35 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmimmfs5o037won34qt8muiyb</link>
  <guid>https://status.webdock.io/incident/cmimmfs5o037won34qt8muiyb</guid>
</item>

<item>
  <title>A single host needs a reboot</title>
  <description>
    Type: Incident
    Duration: 8 minutes

    Affected Components: Denmark: General Infrastructure
    Nov 18, 06:46:22 GMT+0 - Identified - One of our lxd-based hosts needs a reboot as it has become unresponsive. Nov 18, 06:54:02 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 8 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 18&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:46:22&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  One of our lxd-based hosts needs a reboot as it has become unresponsive.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 18&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:54:02&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 18 Nov 2025 06:46:22 +0000</pubDate>
  <link>https://status.webdock.io/incident/cmi47mwg601b5kslid8pw2vgq</link>
  <guid>https://status.webdock.io/incident/cmi47mwg601b5kslid8pw2vgq</guid>
</item>

<item>
  <title>Fiber optics and cable replacement </title>
  <description>
    Type: Maintenance
    Duration: 8 minutes

    Affected Components: Denmark: Network Infrastructure
    Oct 11, 17:15:00 GMT+0 - Identified - Our NOC team has identified an issue with a flapping link and a high CRC error rate over the past 2 days. We will attempt to fix the issue by changing the optics. Oct 11, 17:15:01 GMT+0 - Identified - Maintenance is now in progress. Oct 11, 17:22:54 GMT+0 - Completed - Maintenance has completed successfully. No impact to customer workloads was seen. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 8 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 11&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;17:15:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Our NOC team has identified an issue with a flapping link and a high CRC error rate over the past 2 days. We will attempt to fix the issue by changing the optics.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 11&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;17:15:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 11&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;17:22:54&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully. No impact to customer workloads was seen.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 11 Oct 2025 17:15:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cmgmj9z0l0a4z2dweodmb7ccc</link>
  <guid>https://status.webdock.io/maintenance/cmgmj9z0l0a4z2dweodmb7ccc</guid>
</item>

<item>
  <title>Webdock Website and Dashboard maintenance</title>
  <description>
    Type: Maintenance
    Duration: 3 hours

    Affected Components: Webdock Dashboard, Webdock Website, Webdock REST API
    Oct 2, 08:00:01 GMT+0 - Identified - Maintenance is now in progress Oct 2, 08:00:00 GMT+0 - Identified - Edit: this maintenance has been bumped by +1 day from what was originally planned.  
  
We will be splitting our front-end website and docs from our dashboard systems tomorrow during the day, sometime between 10.00 and 13.00 CET. This should cause minimal disruption, but we do expect a couple of minutes of potential web server error messages as SSL certificates are being generated and DNS is switching over. Oct 2, 11:00:00 GMT+0 - Completed - Maintenance has completed successfully 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 3 hours</p>
    <p><strong>Affected Components:</strong> Webdock Dashboard, Webdock Website, Webdock REST API</p>
    &lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Edit: this maintenance has been bumped by +1 day from what was originally planned.  
  
We will be splitting our front-end website and docs from our dashboard systems tomorrow during the day, sometime between 10.00 and 13.00 CET. This should cause minimal disruption, but we do expect a couple of minutes of potential web server error messages as SSL certificates are being generated and DNS is switching over.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 2 Oct 2025 08:00:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cmg6el8kv00xif33oc02ht4re</link>
  <guid>https://status.webdock.io/maintenance/cmg6el8kv00xif33oc02ht4re</guid>
</item>

<item>
  <title>Electric Works in DK-DC1</title>
  <description>
    Type: Maintenance
    Duration: 2 hours

    Affected Components: Denmark: General Infrastructure
    Sep 25, 06:00:00 GMT+0 - Identified - This Thursday morning some electrical works will be happening at DK DC1. We are having a new power supply added, increasing our allowed power draw from the grid. The procedure means that mains power will need to be cut for about 5-10 minutes. This should be handled by our UPS and generator systems - but just in case something unexpected happens, we are announcing this 2-hour maintenance window. Sep 25, 06:00:01 GMT+0 - Identified - Maintenance is now in progress Sep 25, 08:00:00 GMT+0 - Completed - Maintenance has completed successfully 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 2 hours</p>
    <p><strong>Affected Components:</strong> Denmark: General Infrastructure</p>
    &lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  This Thursday morning some electrical works will be happening at DK DC1. We are having a new power supply added, increasing our allowed power draw from the grid. The procedure means that mains power will need to be cut for about 5-10 minutes. This should be handled by our UPS and generator systems - but just in case something unexpected happens, we are announcing this 2-hour maintenance window.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 25 Sep 2025 06:00:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cmfv04pap00lkxc92tfuxp8a3</link>
  <guid>https://status.webdock.io/maintenance/cmfv04pap00lkxc92tfuxp8a3</guid>
</item>

<item>
  <title>Electric works in DK DC1</title>
  <description>
    Type: Maintenance
    Duration: 1 day, 7 hours and 44 minutes

    Affected Components: Denmark: General Infrastructure
    Jun 27, 07:00:00 GMT+0 - Identified - We are activating new solar power generating capacity in DK-DC1 today. The electricians do not expect any disruption of power, nor do they anticipate any emergency power testing today, so no disruption is expected. Jun 27, 15:00:00 GMT+0 - Completed - Maintenance has completed successfully Jun 26, 07:15:46 GMT+0 - Identified - Maintenance is now in progress. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 1 day, 7 hours and 44 minutes</p>
    <p><strong>Affected Components:</strong> Denmark: General Infrastructure</p>
    &lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are activating new solar power generating capacity in DK-DC1 today. The electricians do not expect any disruption of power, nor do they anticipate any emergency power testing today, so no disruption is expected.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:15:46&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 27 Jun 2025 07:00:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cmcd1sqim002ids0xy6tn22nw</link>
  <guid>https://status.webdock.io/maintenance/cmcd1sqim002ids0xy6tn22nw</guid>
</item>

<item>
  <title>Maintenance Regarding Routing Policy</title>
  <description>
    Type: Maintenance
    Duration: 15 minutes

    Affected Components: Denmark: Network Infrastructure
    Jun 20, 10:50:01 GMT+0 - Identified - Maintenance is now in progress Jun 20, 11:05:00 GMT+0 - Completed - Maintenance has completed successfully Jun 20, 10:50:00 GMT+0 - Identified - A 15-minute maintenance window to adjust routing policy on our core routers. No downtime is expected. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 15 minutes</p>
    <p><strong>Affected Components:</strong> Denmark: Network Infrastructure</p>
    &lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:50:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:05:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:50:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  A 15-minute maintenance window to adjust routing policy on our core routers. No downtime is expected.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 20 Jun 2025 10:50:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cmc4nsqva00175do6wjuc45tm</link>
  <guid>https://status.webdock.io/maintenance/cmc4nsqva00175do6wjuc45tm</guid>
</item>

<item>
  <title>Short Network Maintenance</title>
  <description>
    Type: Maintenance
    Duration: 23 hours and 42 minutes

    Affected Components: Denmark: Network Infrastructure
    Jun 18, 13:33:10 GMT+0 - Completed - Maintenance has completed successfully. Jun 19, 13:15:00 GMT+0 - Identified - We will adjust the BGP policy on our core routers; this will lead to BGP session flaps between the core routers and spine/leaf switches. Traffic disruption is not expected. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 23 hours and 42 minutes</p>
    <p><strong>Affected Components:</strong> Denmark: Network Infrastructure</p>
    &lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 18&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:33:10&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:15:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We will adjust the BGP policy on our core routers; this will lead to BGP session flaps between the core routers and spine/leaf switches. Traffic disruption is not expected.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 19 Jun 2025 13:15:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cmc1ym0lx000bttyz7wffhfig</link>
  <guid>https://status.webdock.io/maintenance/cmc1ym0lx000bttyz7wffhfig</guid>
</item>

<item>
  <title>Maintenance on Core Routers</title>
  <description>
    Type: Maintenance
    Duration: 15 minutes

    Affected Components: Denmark: Network Infrastructure
    Jun 19, 13:00:01 GMT+0 - Identified - Maintenance is now in progress Jun 19, 13:15:00 GMT+0 - Completed - Maintenance has completed successfully Jun 19, 13:00:00 GMT+0 - Identified - We are planning a scheduled maintenance to adjust the BGP policy on our core routers. No downtime is expected. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 15 minutes</p>
    <p><strong>Affected Components:</strong> Denmark: Network Infrastructure</p>
    &lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:15:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are planning a scheduled maintenance to adjust the BGP policy on our core routers. No downtime is expected.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 19 Jun 2025 13:00:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cmc3dytfo000km71jscp9vj3m</link>
  <guid>https://status.webdock.io/maintenance/cmc3dytfo000km71jscp9vj3m</guid>
</item>

<item>
  <title>Storage backend maintenance</title>
  <description>
    Type: Maintenance
    Duration: 18 hours and 41 minutes

    Affected Components: Denmark: Storage Backend
    Jun 4, 12:18:43 GMT+0 - Completed - Maintenance has completed successfully. Jun 5, 07:00:00 GMT+0 - Identified - We are doing maintenance work on our backup storage backend today. The work will be ongoing throughout the day, and for periods of time you will not be able to interact with snapshots. This work also means that some subset of our customers will not receive an automatic system snapshot today, but these should resume tomorrow as normal. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 18 hours and 41 minutes</p>
    <p><strong>Affected Components:</strong> Denmark: Storage Backend</p>
    &lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 4&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:18:43&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 5&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are doing maintenance work on our backup storage backend today. The work will be ongoing throughout the day, and for periods of time you will not be able to interact with snapshots. This work also means that some subset of our customers will not receive an automatic system snapshot today, but these should resume tomorrow as normal.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 5 Jun 2025 07:00:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cmbhn4znb0001d5de7bso2k3c</link>
  <guid>https://status.webdock.io/maintenance/cmbhn4znb0001d5de7bso2k3c</guid>
</item>

<item>
  <title>Host needs kernel upgrade and restart</title>
  <description>
    Type: Maintenance
    Duration: 11 minutes

    
    May 5, 07:36:29 GMT+0 - Identified - Maintenance is now in progress. May 5, 07:47:54 GMT+0 - Completed - Maintenance complete and all customers are up May 5, 07:15:44 GMT+0 - Identified - We have a host machine which needs a kernel upgrade and reboot. Your VPS will go down for at most 5-10 minutes, usually a lot less. You can check if your server(s) are affected by logging in to Webdock as this will then be shown at the top of the page in a red alert window. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 11 minutes</p>
    
    &lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 5&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:36:29&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 5&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:47:54&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance complete and all customers are up.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 5&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:15:44&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We have a host machine which needs a kernel upgrade and reboot. Your VPS will go down for at most 5-10 minutes, usually a lot less. You can check if your server(s) are affected by logging in to Webdock, as this will then be shown at the top of the page in a red alert window.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 5 May 2025 07:15:44 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cmaaro85c000j11pm9j0zddxm</link>
  <guid>https://status.webdock.io/maintenance/cmaaro85c000j11pm9j0zddxm</guid>
</item>

<item>
  <title>Host needs kernel upgrade and restart</title>
  <description>
    Type: Maintenance
    Duration: 13 minutes

    Affected Components: Denmark: General Infrastructure
    Apr 30, 16:38:39 GMT+0 - Completed - The reboot has been completed and all customer instances are up. We apologize for the brief downtime. Apr 30, 16:26:01 GMT+0 - Identified - We have a host machine which needs a kernel upgrade and reboot. Your VPS will go down for at most 5-10 minutes, usually a lot less. You can check if your server(s) are affected by logging in to Webdock as this will then be shown at the top of the page in a red alert window. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 13 minutes</p>
    <p><strong>Affected Components:</strong> Denmark: General Infrastructure</p>
    &lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:38:39&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  The reboot has been completed and all customer instances are up. We apologize for the brief downtime.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:26:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We have a host machine which needs a kernel upgrade and reboot. Your VPS will go down for at most 5-10 minutes, usually a lot less. You can check if your server(s) are affected by logging in to Webdock, as this will then be shown at the top of the page in a red alert window.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 30 Apr 2025 16:26:01 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cma45ey5e00d8c0crnhoc3o0c</link>
  <guid>https://status.webdock.io/maintenance/cma45ey5e00d8c0crnhoc3o0c</guid>
</item>

<item>
  <title>Fiber rewiring in single rack in DK DC Monday Feb. 17th 16.00-19.00</title>
  <description>
    Type: Maintenance
    Duration: 3 hours

    Affected Components: Denmark: Network Infrastructure
    Feb 17, 18:00:00 GMT+0 - Completed - Maintenance has completed successfully Feb 17, 15:00:00 GMT+0 - Identified - On Monday we will be tidying up wiring in a rack in our Denmark DC for better airflow and accessibility. The procedure involves replacing fiber uplinks one by one. As we have a fully redundant and routed infrastructure, if no mistakes are made, our customers should not notice anything happening during this maintenance window. However, there is always a small chance that a mistake happens or a cable gets snagged, in which case a brief network outage could occur. For this reason we are announcing this maintenance window. Feb 17, 15:00:01 GMT+0 - Identified - Maintenance is now in progress 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 3 hours</p>
    <p><strong>Affected Components:</strong> Denmark: Network Infrastructure</p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 17&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;18:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 17&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  On Monday we will be tidying up wiring in a rack in our Denmark DC for better airflow and accessibility. The procedure involves replacing fiber uplinks one by one. As we have a fully redundant and routed infrastructure, if no mistakes are made, our customers should not notice anything happening during this maintenance window. However, there is always a small chance that a mistake happens or a cable gets snagged, in which case a brief network outage could occur. For this reason we are announcing this maintenance window.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 17&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 17 Feb 2025 15:00:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cm74l6q91007g13m6bmy47e8d</link>
  <guid>https://status.webdock.io/maintenance/cm74l6q91007g13m6bmy47e8d</guid>
</item>

<item>
  <title>Rescheduled: Electrics maintenance for single rack in DK DC1 Thursday January 30th</title>
  <description>
    Type: Maintenance
    Duration: 1 hour and 12 minutes

    Affected Components: Denmark: General Infrastructure
    Jan 30, 07:00:01 GMT+0 - Identified - Maintenance is now in progress Jan 30, 07:00:00 GMT+0 - Identified - This maintenance has been rescheduled for Thursday January 30th, due to some required equipment not making it to us in time.   
  
On Thursday we will be performing some electrics related maintenance for the rack which tripped fuses when we did our generator upgrade and power outage tests the other day. The maintenance consists of putting less sensitive fuses on both of our circuits and potentially replacing a power transfer switch. We hope that the work will happen without incident, but there is a chance that power may go away from that rack again - we cannot guarantee this will not happen. This will impact most of our Ryzen workloads, our Epyc workloads and our core services such as our dashboard. If a power incident occurs, we will be operational again within about 5-10 minutes and our entire team will be standing by to make sure everything comes up as quickly as possible after a potential failure.  

We absolutely need to solve this issue: as the situation is now, if we have an unplanned power outage there is a chance this rack will go down, and we don&#039;t want that. In fact, everything we&#039;ve designed in our DC is designed to prevent exactly this. We hope to get this issue resolved fully on Thursday, hopefully without incident. Jan 30, 08:11:47 GMT+0 - Completed - The maintenance for today is complete. All customer VPS servers are up and running. Unfortunately we saw another outage in the rack, but that helped us diagnose the issue, which was a faulty power transfer switch, as we had surmised. The switch has been replaced and the issue is solved. This means that in case of an unplanned power outage, all our emergency power systems in that rack will behave properly. After the initial crash we did repeated tests and everything was nominal for all racks and equipment in the DC.  
  
We apologize for any inconvenience caused here this morning. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 1 hour and 12 minutes</p>
    <p><strong>Affected Components:</strong> Denmark: General Infrastructure</p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  This maintenance has been rescheduled for Thursday January 30th, due to some required equipment not making it to us in time.   
  
On Thursday we will be performing some electrics related maintenance for the rack which tripped fuses when we did our generator upgrade and power outage tests the other day. The maintenance consists of putting less sensitive fuses on both of our circuits and potentially replacing a power transfer switch. We hope that the work will happen without incident, but there is a chance that power may go away from that rack again - we cannot guarantee this will not happen. This will impact most of our Ryzen workloads, our Epyc workloads and our core services such as our dashboard. If a power incident occurs, we will be operational again within about 5-10 minutes and our entire team will be standing by to make sure everything comes up as quickly as possible after a potential failure.  

We absolutely need to solve this issue: as the situation is now, if we have an unplanned power outage there is a chance this rack will go down, and we don&#039;t want that. In fact, everything we&#039;ve designed in our DC is designed to prevent exactly this. We hope to get this issue resolved fully on Thursday, hopefully without incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:11:47&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  The maintenance for today is complete. All customer VPS servers are up and running. Unfortunately we saw another outage in the rack, but that helped us diagnose the issue, which was a faulty power transfer switch, as we had surmised. The switch has been replaced and the issue is solved. This means that in case of an unplanned power outage, all our emergency power systems in that rack will behave properly. After the initial crash we did repeated tests and everything was nominal for all racks and equipment in the DC.  
  
We apologize for any inconvenience caused here this morning.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 30 Jan 2025 07:00:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cm6g7rssq004txutrgvd07uw2</link>
  <guid>https://status.webdock.io/maintenance/cm6g7rssq004txutrgvd07uw2</guid>
</item>

<item>
  <title>Electrics maintenance for single rack in DK DC1 Tuesday January 28th</title>
  <description>
    Type: Maintenance
    Duration: 1 hour and 25 minutes

    Affected Components: Denmark: General Infrastructure
    Jan 28, 07:00:01 GMT+0 - Identified - Maintenance is now in progress Jan 28, 07:00:00 GMT+0 - Identified - On Tuesday January 28th we will be performing some electrics related maintenance for the rack which tripped fuses when we did our generator upgrade and power outage tests the other day. The maintenance consists of putting less sensitive fuses on both of our circuits and potentially replacing a power transfer switch. We hope that the work will happen without incident, but there is a chance that power may go away from that rack again - we cannot guarantee this will not happen. This will impact most of our Ryzen workloads, our Epyc workloads and our core services such as our dashboard. If a power incident occurs, we will be operational again within about 5-10 minutes and our entire team will be standing by to make sure everything comes up as quickly as possible after a potential failure.

We absolutely need to solve this issue: as the situation is now, if we have an unplanned power outage there is a chance this rack will go down, and we don&#039;t want that. In fact, everything we&#039;ve designed in our DC is designed to prevent exactly this. We hope to get this issue resolved fully on Tuesday, hopefully without incident. Jan 28, 08:24:47 GMT+0 - Completed - The maintenance has been rescheduled for Thursday, January 30th 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 1 hour and 25 minutes</p>
    <p><strong>Affected Components:</strong> Denmark: General Infrastructure</p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 28&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 28&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  On Tuesday January 28th we will be performing some electrics related maintenance for the rack which tripped fuses when we did our generator upgrade and power outage tests the other day. The maintenance consists of putting less sensitive fuses on both of our circuits and potentially replacing a power transfer switch. We hope that the work will happen without incident, but there is a chance that power may go away from that rack again - we cannot guarantee this will not happen. This will impact most of our Ryzen workloads, our Epyc workloads and our core services such as our dashboard. If a power incident occurs, we will be operational again within about 5-10 minutes and our entire team will be standing by to make sure everything comes up as quickly as possible after a potential failure.

We absolutely need to solve this issue: as the situation is now, if we have an unplanned power outage there is a chance this rack will go down, and we don&#039;t want that. In fact, everything we&#039;ve designed in our DC is designed to prevent exactly this. We hope to get this issue resolved fully on Tuesday, hopefully without incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 28&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:24:47&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  The maintenance has been rescheduled for Thursday, January 30th.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 28 Jan 2025 07:00:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cm695avkr002vhvf8o5xnlqak</link>
  <guid>https://status.webdock.io/maintenance/cm695avkr002vhvf8o5xnlqak</guid>
</item>

<item>
  <title>Network Maintenance in Montreal January 23rd</title>
  <description>
    Type: Maintenance
    Duration: 13 hours and 5 minutes

    
    Jan 22, 23:00:00 GMT+0 - Identified - Maintenance is now in progress Jan 22, 23:00:00 GMT+0 - Identified - Update: this maintenance window has been rescheduled to January 23rd. The new information from our DC provider is as follows:

Time Frame:  
Start Time: 6:00AM EST  
End Time: 8:00AM EST  
Expected Downtime: 30 minutes  
  
As before, we are expecting only a brief loss of network connectivity. Thank you for your patience. Jan 23, 12:05:27 GMT+0 - Completed - The maintenance is complete and we have connectivity again. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 13 hours and 5 minutes</p>
    
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 22&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;23:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 22&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;23:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Update: this maintenance window has been rescheduled to January 23rd. The new information from our DC provider is as follows:

Time Frame:  
Start Time: 6:00AM EST  
End Time: 8:00AM EST  
Expected Downtime: 30 minutes  
  
As before, we are expecting only a brief loss of network connectivity. Thank you for your patience.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 23&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:05:27&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  The maintenance is complete and we have connectivity again.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 22 Jan 2025 23:00:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cm5p8cibd004udi8kou2k9emk</link>
  <guid>https://status.webdock.io/maintenance/cm5p8cibd004udi8kou2k9emk</guid>
</item>

<item>
  <title>General Network Maintenance</title>
  <description>
    Type: Maintenance
    Duration: 2 days

    Affected Components: Denmark: Network Infrastructure
    Jan 21, 08:30:00 GMT+0 - Identified - Today and tomorrow we are doing some general network maintenance. The changes are intended to improve routing within our Denmark DC. They should not affect any customer workloads other than a tiny percentage of VPS servers which currently have an erroneous configuration. In cases where a VPS is affected, it is only impacted for a few seconds as routing is being reconfigured. The impact should be minimal and likely not noticeable by our customers. If you do experience a network issue that lasts more than a minute or two, please reach out to our support team. Jan 21, 08:30:01 GMT+0 - Identified - Maintenance is now in progress Jan 23, 08:30:00 GMT+0 - Completed - Maintenance has completed successfully 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 2 days</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:30:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Today and tomorrow we are doing some general network maintenance. The changes made are to improve our routing within our Denmark DC. The changes should not affect any customer workloads other than a tiny percentage of VPS servers which currently have erroneous configuration. In cases where a VPS is affected, it is only impacted for a few seconds as routing is being reconfigured. The impact should be minimal and likely not noticeable by our customers. If you do experience some sort of network issue that lasts more than a minute or two, please reach out to our support team.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:30:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 23&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:30:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 21 Jan 2025 08:30:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cm667eox5005ot1rkw93j3013</link>
  <guid>https://status.webdock.io/maintenance/cm667eox5005ot1rkw93j3013</guid>
</item>

<item>
  <title>Generator upgrade in Denmark</title>
  <description>
    Type: Maintenance
    Duration: 5 hours

    Affected Components: Denmark: General Infrastructure
    Jan 16, 12:00:00 GMT+0 - Completed - Maintenance has completed successfully Jan 16, 07:00:01 GMT+0 - Identified - Maintenance is now in progress Jan 16, 07:00:00 GMT+0 - Identified - Our emergency power system in Denmark is getting an upgrade today in the form of a new generator. We do not expect any disruption. The only theoretical scenario where things could go wrong is if we have a mains power outage at the exact same time the generator upgrade is happening and that outage lasts more than 50 minutes. The odds of this are extremely low and we do not expect any disruption of service today. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 5 hours</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Our emergency power system in Denmark is getting an upgrade today in the form of a new generator. We do not expect any disruption. The only theoretical scenario where things could go wrong is if we have a mains power outage at the exact same time the generator upgrade is happening and that outage lasts more than 50 minutes. The odds of this are extremely low and we do not expect any disruption of service today.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 16 Jan 2025 07:00:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cm5yyy5ic003sgrku6i6oecbw</link>
  <guid>https://status.webdock.io/maintenance/cm5yyy5ic003sgrku6i6oecbw</guid>
</item>

<item>
  <title>Host needs kernel upgrade and restart in Canada</title>
  <description>
    Type: Maintenance
    Duration: 15 minutes

    
    Dec 20, 09:27:38 GMT+0 - Identified - Maintenance is now in progress. Dec 20, 09:27:16 GMT+0 - Identified - We have a host machine in Canada which needs a kernel upgrade and reboot. Your VPS will go down for at most 5-10 minutes, usually a lot less. You can check if your server(s) are affected by logging in to Webdock as this will then be shown at the top of the page in a red alert window. Dec 20, 09:42:08 GMT+0 - Completed - Maintenance complete and all customer servers operational 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 15 minutes</p>
    
    &lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:27:38&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:27:16&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We have a host machine in Canada which needs a kernel upgrade and reboot. Your VPS will go down for at most 5-10 minutes, usually a lot less. You can check if your server(s) are affected by logging in to Webdock as this will then be shown at the top of the page in a red alert window.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:42:08&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance complete and all customer servers operational.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 20 Dec 2024 09:27:16 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cm4wjrhlb0069ebe2v1718ojh</link>
  <guid>https://status.webdock.io/maintenance/cm4wjrhlb0069ebe2v1718ojh</guid>
</item>

<item>
  <title>Network Maintenance in Denmark</title>
  <description>
    Type: Maintenance
    Duration: 7 days, 23 hours and 13 minutes

    Affected Components: Denmark: Network Infrastructure
    Dec 2, 13:01:00 GMT+0 - Identified - Maintenance is now in progress. Nov 27, 13:01:00 GMT+0 - Identified - We will be performing a rolling update of network configuration across our entire fleet in Denmark today and tomorrow. This update is to ensure the stability of our network and solve a long-standing issue whereby some customers have experienced sudden loss of connectivity due to routes being dropped on our hosts. This update will eliminate this long-standing issue.  
  
Unfortunately, a subset of our customer instances needs to be touched whereby our team adjusts the network configuration inside your instance. This will typically only impact customers who have created servers with us recently - older instances and instances created within the last couple of weeks are unaffected and will not be touched.  
  
The operations usually just mean a reload of network configuration whereby - at worst - you will see a few packets lost and that&#039;s it. Only in rare cases where network configuration was already non-standard or bad beforehand in a customer instance will scenarios come up where you may see some disruption. We expect this to be minimal.  
  
Since this is a large task, we expect this maintenance to last throughout the day today and all day tomorrow, possibly longer depending on progress, in which case we will update here.  
  
Thank you for your patience and understanding. Nov 27, 10:05:20 GMT+0 - Identified - This network maintenance has been ongoing in stops and starts for the past week or so. We expect the final changes to be completed within the next 2 days. These changes should have NO impact on your server or workloads. Nov 29, 09:19:24 GMT+0 - Identified - We have been slowly pecking away at this in the safest manner possible, and for that reason only a small portion of our fleet in Denmark has been converted to the new configuration. We have decided to let this portion run until Monday before doing the remainder and finally completing this maintenance. As stated before: This maintenance should have no impact on your server whatsoever. Dec 5, 09:18:26 GMT+0 - Completed - After slow, methodical work over the past two weeks we have completed all the network changes to our entire fleet in Denmark with no disruption to customers. Everything has been checked, double checked and triple checked :D We are happy to close this maintenance at last. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 7 days, 23 hours and 13 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:01:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:01:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We will be performing a rolling update of network configuration across our entire fleet in Denmark today and tomorrow. This update is to ensure the stability of our network and solve a long-standing issue whereby some customers have experienced sudden loss of connectivity due to routes being dropped on our hosts. This update will eliminate this long-standing issue.  
  
Unfortunately, a subset of our customer instances needs to be touched whereby our team adjusts the network configuration inside your instance. This will typically only impact customers who have created servers with us recently - older instances and instances created within the last couple of weeks are unaffected and will not be touched.  
  
The operations usually just mean a reload of network configuration whereby - at worst - you will see a few packets lost and that&#039;s it. Only in rare cases where network configuration was already non-standard or bad beforehand in a customer instance will scenarios come up where you may see some disruption. We expect this to be minimal.  
  
Since this is a large task, we expect this maintenance to last throughout the day today and all day tomorrow, possibly longer depending on progress, in which case we will update here.  
  
Thank you for your patience and understanding.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:05:20&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  This network maintenance has been ongoing in stops and starts for the past week or so. We expect the final changes to be completed within the next 2 days. These changes should have NO impact on your server or workloads.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 29&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:19:24&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We have been slowly pecking away at this in the safest manner possible, and for that reason only a small portion of our fleet in Denmark has been converted to the new configuration. We have decided to let this portion run until Monday before doing the remainder and finally completing this maintenance. As stated before: This maintenance should have no impact on your server whatsoever.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 5&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:18:26&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  After slow, methodical work over the past two weeks we have completed all the network changes to our entire fleet in Denmark with no disruption to customers. Everything has been checked, double checked and triple checked :D We are happy to close this maintenance at last.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 2 Dec 2024 13:01:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cm3pwnu6m0008qo87u4hpta07</link>
  <guid>https://status.webdock.io/maintenance/cm3pwnu6m0008qo87u4hpta07</guid>
</item>

<item>
  <title>Host Memory Upgrade in Denmark</title>
  <description>
    Type: Maintenance
    Duration: 52 minutes

    Affected Components: Denmark: General Infrastructure
    Dec 1, 20:46:01 GMT+0 - Completed - Everything is up and looks good. Maintenance completed. Thank you for your patience this evening! Dec 1, 20:38:03 GMT+0 - Identified - We literally hit some cable snags which we had to work through when pulling the system from the rack. The system is back up and all instances are starting. We are watching and doing final touches. Dec 1, 20:00:00 GMT+0 - Identified - We need to add in some RAM sticks to a system which has been unstable for many weeks now, due to OOM events. By adding more RAM we hope to resolve the issues on that host. The procedure means we will need to shut down the host completely, pull it out of the rack and add in the new RAM modules and start it up again. Ideally the procedure should take no more than 20 minutes (so 20 minutes of downtime) but we are reserving a full hour for this Sunday evening in case of issues. If you are affected by this maintenance you will receive a separate email informing you of this. Dec 1, 19:54:10 GMT+0 - Identified - The maintenance will start in a few minutes. We expect your VPS will go down for somewhere between 15 and 30 minutes depending on how things go. Our target is no more than 20 minutes for the entire operation. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 52 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:46:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Everything is up and looks good. Maintenance completed. Thank you for your patience this evening!&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:38:03&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We literally hit some cable snags which we had to work through when pulling the system from the rack. The system is back up and all instances are starting. We are watching and doing final touches.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We need to add in some RAM sticks to a system which has been unstable for many weeks now, due to OOM events. By adding more RAM we hope to resolve the issues on that host. The procedure means we will need to shut down the host completely, pull it out of the rack and add in the new RAM modules and start it up again. Ideally the procedure should take no more than 20 minutes (so 20 minutes of downtime) but we are reserving a full hour for this Sunday evening in case of issues. If you are affected by this maintenance you will receive a separate email informing you of this.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;19:54:10&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The maintenance will start in a few minutes. We expect your VPS will go down for somewhere between 15 and 30 minutes depending on how things go. Our target is no more than 20 minutes for the entire operation.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sun, 1 Dec 2024 20:00:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cm42kc7u20001lypfv9n0jd8b</link>
  <guid>https://status.webdock.io/maintenance/cm42kc7u20001lypfv9n0jd8b</guid>
</item>

<item>
  <title>AlmaLinux9 / Centos9 Images Not Available</title>
  <description>
    Type: Maintenance
    Duration: 12 days, 23 hours and 13 minutes

    Affected Components: Denmark: General Infrastructure
    Nov 7, 14:05:06 GMT+0 - Identified - We have identified an issue with our AlmaLinux9/Centos9 images. For this reason these images are not available while we debug. We hope to have these images available again soon. Nov 7, 14:06:50 GMT+0 - Identified - Maintenance is now in progress. Nov 20, 13:19:39 GMT+0 - Completed - Images were re-enabled a while ago, but somehow we forgot to update this maintenance post :D Sorry about that. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 12 days, 23 hours and 13 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 7&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:05:06&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We have identified an issue with our AlmaLinux9/Centos9 images. For this reason these images are not available while we debug. We hope to have these images available again soon.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 7&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:06:50&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:19:39&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Images were re-enabled a while ago, but somehow we forgot to update this maintenance post :D Sorry about that.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 7 Nov 2024 14:05:06 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cm37dscqu000vxj2z5bn2ycay</link>
  <guid>https://status.webdock.io/maintenance/cm37dscqu000vxj2z5bn2ycay</guid>
</item>

<item>
  <title>Storage Backend Down for Maintenance</title>
  <description>
    Type: Maintenance
    Duration: 3 hours and 16 minutes

    Affected Components: Denmark: Storage Backend
    Nov 7, 12:38:32 GMT+0 - Completed - Maintenance completed and snapshot operations available again in Denmark. Nov 7, 09:21:48 GMT+0 - Identified - We need to do some debugging on our storage backend today, so it will be unavailable for a short while. Nov 7, 09:22:46 GMT+0 - Identified - Maintenance is now in progress. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 3 hours and 16 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 7&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:38:32&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance completed and snapshot operations available again in Denmark.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 7&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:21:48&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We need to do some debugging on our storage backend today, so it will be unavailable for a short while.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 7&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:22:46&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 7 Nov 2024 09:21:48 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cm373nkb7002r4epelseyjxnx</link>
  <guid>https://status.webdock.io/maintenance/cm373nkb7002r4epelseyjxnx</guid>
</item>

<item>
  <title>Network maintenance on a single host in Denmark</title>
  <description>
    Type: Maintenance
    Duration: 3 hours and 28 minutes

    Affected Components: Denmark: Network Infrastructure
    Oct 24, 07:55:00 GMT+0 - Identified - We are performing network-related maintenance on one of our newer hosts in Denmark today which has had issues with missing routes over the past few days. This may cause slight disruption of network connectivity for your VPS. If you are affected, you will see a big red alert once logged in to the Webdock dashboard. Oct 24, 11:22:48 GMT+0 - Completed - The required changes to network configuration have been completed with no downtime for customers. Oct 24, 07:55:01 GMT+0 - Identified - Maintenance is now in progress 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 3 hours and 28 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:55:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are performing network-related maintenance on one of our newer hosts in Denmark today which has had issues with missing routes over the past few days. This may cause slight disruption of network connectivity for your VPS. If you are affected, you will see a big red alert once logged in to the Webdock dashboard.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:22:48&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  The required changes to network configuration have been completed with no downtime for customers.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:55:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 24 Oct 2024 07:55:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cm2n09gr90014y08fcoqhgkbl</link>
  <guid>https://status.webdock.io/maintenance/cm2n09gr90014y08fcoqhgkbl</guid>
</item>

<item>
  <title>Fiber provider doing scheduled maintenance on a single fiber pair in DK DC1 on October 3rd</title>
  <description>
    Type: Maintenance
    Duration: 6 hours

    Affected Components: Denmark: Network Infrastructure
    Oct 3, 04:01:00 GMT+0 - Completed - Maintenance has completed successfully Oct 2, 22:01:00 GMT+0 - Identified - One of our Fiber providers has announced a maintenance window on October 3rd from midnight to 06:00 AM CEST. We should be fully redundant and the maintenance should not impact any customers. Oct 2, 22:01:01 GMT+0 - Identified - Maintenance is now in progress 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 6 hours</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 3&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;04:01:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;22:01:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  One of our Fiber providers has announced a maintenance window on October 3rd from midnight to 06:00 AM CEST. We should be fully redundant and the maintenance should not impact any customers.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;22:01:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 2 Oct 2024 22:01:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cm0xpkp07001727w5imy3g2hs</link>
  <guid>https://status.webdock.io/maintenance/cm0xpkp07001727w5imy3g2hs</guid>
</item>

<item>
  <title>Host needs kernel upgrade and restart in Denmark</title>
  <description>
    Type: Maintenance
    Duration: 11 minutes

    Affected Components: Denmark: General Infrastructure
    Oct 2, 16:45:53 GMT+0 - Completed - Maintenance complete and all customers up. Your VPS should be more responsive now. Oct 2, 16:33:39 GMT+0 - Identified - We have a host machine in Denmark which is hanging on certain operations and which needs a kernel upgrade and reboot. Your VPS will go down for at most 5-10 minutes, usually a lot less. You can check if your server(s) are affected by logging in to Webdock as this will then be shown at the top of the page in a red alert window. Oct 2, 16:34:39 GMT+0 - Identified - Maintenance is now in progress. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 11 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:45:53&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance complete and all customers up. Your VPS should be more responsive now.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:33:39&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We have a host machine in Denmark which is hanging on certain operations and which needs a kernel upgrade and reboot. Your VPS will go down for at most 5-10 minutes, usually a lot less. You can check if your server(s) are affected by logging in to Webdock as this will then be shown at the top of the page in a red alert window.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:34:39&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 2 Oct 2024 16:33:39 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cm1s37s3m0009hrczwu9otq7a</link>
  <guid>https://status.webdock.io/maintenance/cm1s37s3m0009hrczwu9otq7a</guid>
</item>

<item>
  <title>Webdock Dashboard unavailable for 2-3 hours tomorrow morning CEST</title>
  <description>
    Type: Maintenance
    Duration: 1 hour and 58 minutes

    Affected Components: Webdock Dashboard, Webdock Website
    Sep 12, 08:00:00 GMT+0 - Identified - We will be upgrading and improving the Webdock Dashboard tomorrow morning September 12th between 10:00 AM and 1:00 PM CEST. You will not be able to access the Webdock dashboard in this time frame, or the dashboard may come online briefly and then go away again with an error message in your browser as we finalize deployment. For this reason, you should complete any work you are doing in our dash before the maintenance window. Your server will NOT go down or otherwise be affected.  
  
We hope the maintenance will not take the entire 3 hours, but we are reserving a lot of time just to be sure we can get it all done as it is a complicated procedure we are doing. Thank you for your patience with us! Sep 12, 09:58:04 GMT+0 - Completed - The maintenance went well and the dashboard was up after about an hour. We believe everything is nominal, but will be monitoring all systems closely. Thank you for your patience today. Sep 12, 08:00:01 GMT+0 - Identified - Maintenance is now in progress 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 1 hour and 58 minutes</p>
    <p><strong>Affected Components:</strong> , </p>
    &lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We will be upgrading and improving the Webdock Dashboard tomorrow morning September 12th between 10:00 AM and 1:00 PM CEST. You will not be able to access the Webdock dashboard in this time frame, or the dashboard may come online briefly and then go away again with an error message in your browser as we finalize deployment. For this reason, you should complete any work you are doing in our dash before the maintenance window. Your server will NOT go down or otherwise be affected.  
  
We hope the maintenance will not take the entire 3 hours, but we are reserving a lot of time just to be sure we can get it all done as it is a complicated procedure we are doing. Thank you for your patience with us!&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:58:04&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  The maintenance went well and the dashboard was up after about an hour. We believe everything is nominal, but will be monitoring all systems closely. Thank you for your patience today.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 12 Sep 2024 08:00:00 +0000</pubDate>
  <link>https://status.webdock.io/maintenance/cm0xstzyf002h14045y11khuo</link>
  <guid>https://status.webdock.io/maintenance/cm0xstzyf002h14045y11khuo</guid>
</item>

  </channel>
  </rss>