According to our service metrics, everything is now working normally again. Apologies for any inconvenience. We'll be working internally to ensure we prevent incidents like this, and better mitigate their effects, in the future.
Posted Nov 02, 2021 - 16:37 UTC
Scaling up the service resources has resolved the problem. Unfortunately, we were not able to process many of the events sent by Atlassian during this incident. There is now a delay of approximately 10 minutes in processing events from Atlassian as our systems recover. For clarity, some scripts will run after a delay, and some search results powered by Enhanced Search will be out of date by a few minutes.
Posted Nov 02, 2021 - 16:18 UTC
We have scaled up the CPU and memory resources for the service that processes events, and the service is looking healthier. We'll continue to monitor and mitigate the high load and post an update shortly.
Posted Nov 02, 2021 - 16:04 UTC
Our services that receive events from Atlassian in order to trigger scripts are currently overwhelmed by a high volume of requests. We are scaling up the number of instances of the service and attempting to block the source of the high request volume. Some customers will experience a delay in how quickly their scripts run once an event has occurred in Jira or Confluence, and some search results powered by Enhanced Search may be out of date.
Posted Nov 02, 2021 - 15:37 UTC
This incident affected: ScriptRunner for Jira Cloud (ScriptRunner Service), ScriptRunner for Confluence Cloud (ScriptRunner Service), and Enhanced Search for Jira Cloud (Enhanced Search Services).