All Systems Operational
Git Operations Operational
API Requests Operational
Webhooks Operational
Issues Operational
Pull Requests Operational
Actions Operational
Packages Operational
Pages Operational
Codespaces Operational
Copilot Operational
Status key: Operational / Degraded Performance / Partial Outage / Major Outage / Maintenance
Past Incidents
May 29, 2024

No incidents reported today.

May 28, 2024
Resolved - This incident has been resolved.
May 28, 21:24 UTC
Update - Codespaces is operating normally.
May 28, 21:24 UTC
Update - A fix has been applied and we are seeing some recovery. We will continue to monitor for a bit before marking this issue resolved.
May 28, 21:19 UTC
Update - We are still investigating root cause and remediation options. In the meantime, here is a workaround to be able to pull images from DockerHub:

1. Make a free DockerHub account at https://hub.docker.com (or use an existing account if you have one).
2. Create a DockerHub secret/PAT from https://hub.docker.com/settings/security (Read permission should be sufficient).
3. Go to https://github.com/settings/codespaces

Add three Codespace secrets:

- DOCKERHUB_CONTAINER_REGISTRY_PASSWORD (equal to the DockerHub PAT you created)
- DOCKERHUB_CONTAINER_REGISTRY_SERVER (equal to https://index.docker.io/v1/)
- DOCKERHUB_CONTAINER_REGISTRY_USER (equal to your DockerHub username)

4. Make sure these secrets are set as visible to the target repo.
5. Create/rebuild your Codespace

The steps above are distilled from the official docs: https://docs.github.com/en/codespaces/reference/allowing-your-codespace-to-access-a-private-registry#example-secrets. A scripted equivalent is sketched below.
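For anyone scripting the workaround, here is a minimal sketch that creates the same three Codespaces user secrets through the REST API instead of the settings UI. It assumes the Codespaces user-secrets endpoints (`GET /user/codespaces/secrets/public-key`, `PUT /user/codespaces/secrets/{secret_name}`) and the libsodium sealed-box encryption GitHub secrets use; the token, DockerHub username, PAT, and repository ID are placeholders you would substitute.

```python
# Hedged sketch: set the three DockerHub Codespaces user secrets via the
# GitHub REST API. Requires `pip install requests pynacl` and a GitHub token
# allowed to manage Codespaces user secrets. All credential values below are
# placeholders, not real values.
from base64 import b64encode

import requests
from nacl import encoding, public

GITHUB_TOKEN = "ghp_..."            # placeholder GitHub token
DOCKERHUB_USER = "your-dockerhub-username"
DOCKERHUB_PAT = "dckr_pat_..."      # placeholder DockerHub PAT from step 2
REPO_IDS = [123456789]              # numeric ID(s) of the target repo (step 4)

API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {GITHUB_TOKEN}",
    "Accept": "application/vnd.github+json",
}


def encrypt(public_key_b64: str, value: str) -> str:
    """Encrypt a secret value with the Codespaces public key (sealed box)."""
    key = public.PublicKey(public_key_b64.encode("utf-8"), encoding.Base64Encoder())
    sealed = public.SealedBox(key).encrypt(value.encode("utf-8"))
    return b64encode(sealed).decode("utf-8")


# Fetch the public key used to encrypt Codespaces user secrets.
key_resp = requests.get(f"{API}/user/codespaces/secrets/public-key", headers=HEADERS)
key_resp.raise_for_status()
key = key_resp.json()

secrets = {
    "DOCKERHUB_CONTAINER_REGISTRY_PASSWORD": DOCKERHUB_PAT,
    "DOCKERHUB_CONTAINER_REGISTRY_SERVER": "https://index.docker.io/v1/",
    "DOCKERHUB_CONTAINER_REGISTRY_USER": DOCKERHUB_USER,
}

for name, value in secrets.items():
    resp = requests.put(
        f"{API}/user/codespaces/secrets/{name}",
        headers=HEADERS,
        json={
            "encrypted_value": encrypt(key["key"], value),
            "key_id": key["key_id"],
            # Step 4: make the secret visible to the target repo(s).
            "selected_repository_ids": REPO_IDS,
        },
    )
    resp.raise_for_status()
    print(f"set {name}: HTTP {resp.status_code}")
```

After the secrets exist, rebuilding the Codespace (step 5) picks them up and authenticated pulls from DockerHub should succeed.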

May 28, 20:53 UTC
Update - Some Codespaces are currently failing to be created properly for images hosted on DockerHub. Other registries should be unaffected. We are investigating the root cause and will report back shortly.
May 28, 20:23 UTC
Investigating - We are investigating reports of degraded performance for Codespaces
May 28, 20:17 UTC
May 27, 2024

No incidents reported.

May 26, 2024

No incidents reported.

May 25, 2024

No incidents reported.

May 24, 2024

No incidents reported.

May 23, 2024
Resolved - On May 23, 2024, between 15:31 and 16:02 UTC, the Codespaces service reported a degraded experience across all regions. Upon further investigation, this was found to be an error-reporting issue with no user-facing impact: newly implemented error reporting began raising alerts on existing non-user-facing errors that are handled later in the flow, at the controller level, and do not affect users. We are working to improve our reporting rollout process to reduce issues like this in the future, including updating monitors and dashboards to exclude this class of error. We are also reclassifying and correcting internal API responses to better represent when errors are user facing, for more accurate reporting.
May 23, 16:02 UTC
Update - We are investigating increased error rates for customers attempting to start Codespaces across all regions; around 15% of attempts are affected. Affected customers may retry starting their Codespace. We are continuing to investigate.
May 23, 15:41 UTC
Investigating - We are investigating reports of degraded performance for Codespaces
May 23, 15:31 UTC
May 22, 2024

No incidents reported.

May 21, 2024
Resolved - On May 21, 2024, between 11:40 UTC and 19:06 UTC, various services experienced elevated latency due to a configuration change in an upstream cloud provider.

GitHub Copilot Chat experienced P50 latency of up to 2.5s and P95 latency of up to 6s. GitHub Actions was degraded, with 20-60 minute delays for workflow run updates. GitHub Enterprise Importer customers experienced longer migration run times due to the GitHub Actions delays. Additionally, billing-related metrics for budget notifications and UI reporting were delayed, leading to outdated billing details. No data was lost, and systems caught up after the incident.

At 12:31 UTC, we detected increased latency to cloud hosts. At 14:09 UTC, non-critical traffic was paused, which did not result in restoration of service. At 14:27 UTC, we identified high CPU load within a network gateway cluster caused by a scheduled operating system upgrade that resulted in unintended, uneven distribution of traffic within the cluster. We initiated deployment of additional hosts at 16:35 UTC. Rebalancing completed by 17:58 UTC with system recovery observed at 18:03 UTC and completion at 19:06 UTC.

We have identified gaps in our monitoring and alerting for load thresholds. We have prioritized these fixes to improve time to detection and mitigation of this class of issues.

May 21, 19:06 UTC
Update - Actions is operating normally.
May 21, 18:14 UTC
Update - We are beginning to see recovery for any delays to Actions Workflow Runs, Workflow Job Runs, and Check Steps. Customers who are still experiencing jobs which appear to be stuck may re-run the workflow in order to see a completed state. We are also seeing recovery for GitHub Enterprise Importer migrations. We are continuing to monitor recovery.
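For customers re-running stuck workflows in bulk, here is a minimal sketch that triggers a re-run through the REST API rather than the UI, assuming the standard re-run endpoint (`POST /repos/{owner}/{repo}/actions/runs/{run_id}/rerun`); the token, owner, repo, and run ID are placeholders.

```python
# Hedged sketch: re-run a workflow run that appears hung so it reaches a
# completed state. All identifiers below are placeholders.
import requests

GITHUB_TOKEN = "ghp_..."            # placeholder token permitted to re-run workflows
OWNER, REPO = "my-org", "my-repo"   # placeholder repository
RUN_ID = 1234567890                 # ID of the run that appears stuck

resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runs/{RUN_ID}/rerun",
    headers={
        "Authorization": f"Bearer {GITHUB_TOKEN}",
        "Accept": "application/vnd.github+json",
    },
)
resp.raise_for_status()  # expect 201 Created when the re-run is queued
```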
May 21, 18:03 UTC
Update - We are continuing to investigate delays to status updates to Actions Workflow Runs, Workflow Job Runs, and Check Steps. This is impacting 100% of customers using these features, with an average delay of 20 minutes and a P99 delay of 1 hour. Customers may see that their Actions workflows have completed, but the run may appear to be hung waiting for its status to update. This is also impacting GitHub Enterprise Importer migrations. Migrations may take longer to complete. We are working with our provider to address the issue and will continue to provide updates as we learn more.
May 21, 17:41 UTC
Update - We are continuing to investigate delays to status updates to Actions Workflow Runs, Workflow Job Runs, and Check Steps. Customers may see that their Actions workflows have completed, but the run may appear to be hung waiting for its status to update. This is also impacting GitHub Enterprise Importer migrations. Migrations may take longer to complete. We are working with our provider to address the issue and will continue to provide updates as we learn more.
May 21, 17:14 UTC
Update - We are continuing to investigate delays to Actions Workflow Runs, Workflow Job Runs, and Check Steps and will provide further updates as we learn more.
May 21, 16:02 UTC
Update - We have identified a change in a third party network configuration and are working with the provider to address the issue. We will continue to provide updates as we learn more.
May 21, 15:00 UTC
Update - We have identified network connectivity issues causing delays in Actions Workflow Runs, Workflow Job Runs, and Check Steps. We are continuing to investigate.
May 21, 14:34 UTC
Update - We are investigating delayed updates to Actions job statuses.
May 21, 13:58 UTC
Investigating - We are investigating reports of degraded performance for Actions
May 21, 12:45 UTC
May 20, 2024
Resolved - Between May 19 at 3:40 AM UTC and May 20 at 5:40 PM UTC, the service responsible for rendering Jupyter notebooks was degraded. During this time, customers were unable to render Jupyter notebooks.

This occurred due to an issue with a Redis dependency, which was mitigated by a restart. An issue with our monitoring led to a delay in our response. We are working to improve the quality and accuracy of our monitors to reduce the time to detection.

May 20, 17:05 UTC
Update - We are beginning to see recovery in rendering Jupyter notebooks and are continuing to monitor.
May 20, 17:01 UTC
Update - Customers may experience errors when viewing rendered Jupyter notebooks from PR diff pages or the files tab.
May 20, 16:50 UTC
Investigating - We are currently investigating this issue.
May 20, 16:47 UTC
May 19, 2024

No incidents reported.

May 18, 2024

No incidents reported.

May 17, 2024

No incidents reported.

May 16, 2024
Resolved - On May 16, 2024, between 4:10 UTC and 5:02 UTC, customers experienced various delays in background jobs, primarily UI updates for Actions. This issue was due to degradation in our background job service, affecting 22.4% of total jobs. Across all affected services, the average job delay was 2m 22s. Actions jobs themselves were unaffected; this issue affected the timeliness of UI updates, with an average delay of 11m 40s and a maximum of 20m 14s.

This incident was due to a performance problem on a single processing node, where Actions UI updates were being processed. Additionally, a misconfigured monitor did not alert immediately, resulting in a 25m late detection time and a 37m total increase in time to mitigate.

We mitigated the incident by removing the problem node from the cluster, and service was restored. No data was lost, and all jobs executed successfully.

To reduce our time to detection and mitigation of issues like this one in the future, we have repaired our misconfigured monitor and added additional monitoring to this service.

May 16, 05:15 UTC
Investigating - We are investigating reports of degraded performance for Actions
May 16, 04:43 UTC
May 15, 2024

No incidents reported.