Incidents | Minerva
Incidents reported on status page for Minerva
https://status.minerva.io/

LinkedIn Contact Data recovered | Wed, 28 Jan 2026 02:09:07 +0000 | https://status.minerva.io/#21dc7f609b24c6e0d465fb27f4d7c74152ad6248adbcf9c86cb2a6d0614351c5
Enrich recovered | Wed, 28 Jan 2026 02:08:15 +0000 | https://status.minerva.io/#14bcc3c983ec960db273f494266ef57b20184aa297128a5b3af3546537c59d48
Enrich went down | Wed, 28 Jan 2026 01:33:03 +0000 | https://status.minerva.io/#14bcc3c983ec960db273f494266ef57b20184aa297128a5b3af3546537c59d48
LinkedIn Contact Data went down | Wed, 28 Jan 2026 01:29:08 +0000 | https://status.minerva.io/#21dc7f609b24c6e0d465fb27f4d7c74152ad6248adbcf9c86cb2a6d0614351c5
LinkedIn Contact Data recovered | Tue, 20 Jan 2026 03:40:52 +0000 | https://status.minerva.io/#c3b8fd066cbb6c42a40b787ee2616b555b713f85eafc1287b6f9686a08f4a1c6
Enrich recovered | Tue, 20 Jan 2026 03:37:47 +0000 | https://status.minerva.io/#04f4184ca0cdffa8b7a6d74b377aced9cd2b0a76d7c14dfa245f4c9b53772337
LinkedIn Contact Data went down | Tue, 20 Jan 2026 03:19:27 +0000 | https://status.minerva.io/#c3b8fd066cbb6c42a40b787ee2616b555b713f85eafc1287b6f9686a08f4a1c6
Enrich went down | Tue, 20 Jan 2026 03:18:47 +0000 | https://status.minerva.io/#04f4184ca0cdffa8b7a6d74b377aced9cd2b0a76d7c14dfa245f4c9b53772337

Complete API Service Outage
https://status.minerva.io/incident/804678
Wed, 14 Jan 2026 23:23:00 -0000

Status: Resolved
Duration: 2 hours, 8 minutes
Date: January 13, 2026
Affected Services: All API endpoints (Enrich, Resolve, LinkedIn Contact Data), OAuth 2.0 Authentication
Impact: Complete service unavailability for all customers

TIMELINE (All times EST)
Jan 13, 2026 - 1:12 PM - Investigating: Our monitoring systems detected widespread failures across API endpoints. Engineering team notified and investigating.
Jan 13, 2026 - 1:45 PM - Identified: Root cause determined to be AWS Lambda account-level concurrency exhaustion (1,000 concurrent executions limit) caused by a high-volume customer operation. This triggered cascading database connection pool exhaustion, resulting in complete API unavailability.
Jan 13, 2026 - 2:55 PM - Monitoring: Applied reserved concurrency limits to critical authentication services. Monitoring for stability and additional throttling.
Jan 13, 2026 - 3:40 PM - Resolved: All services restored to normal operation. API response times returned to baseline. No data loss or corruption occurred.

WHAT HAPPENED
A large-scale data operation from a single customer generated approximately 83,000 API requests within one hour, exhausting our AWS Lambda account-wide concurrency limit. This caused Lambda function throttling across all customer-facing services, database connection pool exhaustion (approximately 4,000 simultaneous connections), and complete unavailability of authentication and API endpoints.

Data Security: No customer data was compromised, exposed, or modified. This was strictly an availability incident.
RESOLUTION
- Applied reserved concurrency allocation to critical authentication services.
- Identified and throttled the root-cause Lambda function.
- Restored the database connection pool to normal levels (47 active, 169 idle connections).
- Validated that all services were operational before marking the incident resolved.

PREVENTIVE MEASURES
Immediate (Completed):
- Reserved concurrency for all critical Lambda functions (illustrative sketch below).
- Enhanced monitoring with automated on-call escalation.
This Week:
- Per-customer rate limiting at the API Gateway layer (illustrative sketch below).
- Deprecation of the legacy routing component that was the root cause.
- Customer documentation on API batching best practices (illustrative sketch below).
Next 30 Days:
- Load testing to validate 200k+ requests/hour capacity.
- Bulk operations coordination policy for high-volume integrations.
- Database connection pool optimization.
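For readers who want to see what the completed "Reserved concurrency" measure looks like in practice, the following is a minimal sketch assuming boto3, with illustrative function names and limits rather than Minerva's actual configuration. Reserving concurrency for a function both guarantees it a slice of the account-wide pool and caps its own usage, which is what keeps one runaway workload from starving authentication.

```python
# Minimal sketch: reserve Lambda concurrency for critical functions.
# Function names and limits are hypothetical placeholders, not Minerva's config.
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical critical functions and the concurrency reserved for each.
RESERVED_CONCURRENCY = {
    "oauth-token-service": 100,   # OAuth 2.0 authentication
    "enrich-api-handler": 200,    # Enrich endpoint
    "resolve-api-handler": 100,   # Resolve endpoint
}

for function_name, limit in RESERVED_CONCURRENCY.items():
    # put_function_concurrency both guarantees this many concurrent executions
    # to the function and caps it there, leaving headroom in the shared
    # account-wide pool (1,000 executions by default).
    lambda_client.put_function_concurrency(
        FunctionName=function_name,
        ReservedConcurrentExecutions=limit,
    )
    print(f"Reserved {limit} concurrent executions for {function_name}")
```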
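The "Per-customer rate limiting at the API Gateway layer" item could take several forms; the sketch below assumes Amazon API Gateway usage plans keyed by per-customer API keys. All identifiers, rates, and quotas are illustrative assumptions, not Minerva's actual limits.

```python
# Minimal sketch: per-customer throttling via an API Gateway usage plan.
# The API id, stage, key name, and limits are hypothetical examples.
import boto3

apigw = boto3.client("apigateway")

# An API key identifies one customer's traffic.
key = apigw.create_api_key(name="customer-example-co", enabled=True)

# A usage plan caps steady-state rate, burst, and a daily quota for that key.
plan = apigw.create_usage_plan(
    name="standard-tier-example",
    apiStages=[{"apiId": "abc123def4", "stage": "prod"}],  # hypothetical API/stage
    throttle={"rateLimit": 25.0, "burstLimit": 50},        # requests per second
    quota={"limit": 50000, "period": "DAY"},               # requests per day
)

# Attaching the key to the plan applies those limits to that customer only.
apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)
```

On the customer side, the "API batching best practices" documentation will presumably recommend pacing large backfills rather than sending tens of thousands of requests in a single burst. A rough client-side sketch, with a placeholder endpoint and illustrative pacing values:

```python
# Rough sketch: pacing a bulk enrichment job from the client side.
# The endpoint URL, payload shape, and pacing values are illustrative only.
import time
import requests

API_URL = "https://api.example.com/v1/enrich"  # placeholder, not a real endpoint
BATCH_SIZE = 100           # records per request
REQUESTS_PER_MINUTE = 60   # stay well below account-level limits

def enrich_in_batches(records, api_key):
    delay = 60.0 / REQUESTS_PER_MINUTE
    for start in range(0, len(records), BATCH_SIZE):
        batch = records[start:start + BATCH_SIZE]
        response = requests.post(
            API_URL,
            json={"records": batch},
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=30,
        )
        response.raise_for_status()
        time.sleep(delay)  # pace requests instead of firing them all at once
```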
Enrich recovered | Wed, 14 Jan 2026 18:18:14 +0000 | https://status.minerva.io/#cb14401e556e06e2de63ba14a8324812dbc2d2071593f72cc6226131eb72e29c
LinkedIn Contact Data recovered | Wed, 14 Jan 2026 18:03:21 +0000 | https://status.minerva.io/#122ac884ac3ae382a17f3f5813acb0d3d3dab67dd79ca37bcbca7dc4d0df5c81
LinkedIn Contact Data went down | Wed, 14 Jan 2026 17:39:20 +0000 | https://status.minerva.io/#122ac884ac3ae382a17f3f5813acb0d3d3dab67dd79ca37bcbca7dc4d0df5c81
LinkedIn Contact Data recovered | Wed, 14 Jan 2026 17:24:05 +0000 | https://status.minerva.io/#2cb7cbf1cccc34457ea2948c6f790ad094c059b63e35e71be10702c04cd58acf
Enrich went down | Wed, 14 Jan 2026 17:03:40 +0000 | https://status.minerva.io/#cb14401e556e06e2de63ba14a8324812dbc2d2071593f72cc6226131eb72e29c
LinkedIn Contact Data went down | Wed, 14 Jan 2026 16:32:41 +0000 | https://status.minerva.io/#2cb7cbf1cccc34457ea2948c6f790ad094c059b63e35e71be10702c04cd58acf
LinkedIn Contact Data recovered | Wed, 14 Jan 2026 15:04:06 +0000 | https://status.minerva.io/#f50a14d5ea68e9a857907f5e780eeaac97bdfc5355e198bcbc5bd78e8d8c44fe
LinkedIn Contact Data went down | Wed, 14 Jan 2026 10:49:15 +0000 | https://status.minerva.io/#f50a14d5ea68e9a857907f5e780eeaac97bdfc5355e198bcbc5bd78e8d8c44fe
LinkedIn Contact Data recovered | Wed, 14 Jan 2026 00:49:10 +0000 | https://status.minerva.io/#7766d5796ca9fce3533db11b369fb0e5cf7cf4b644c990f5d678d72fa9afc655
Enrich recovered | Tue, 13 Jan 2026 23:48:14 +0000 | https://status.minerva.io/#b259c834fb595a06ccd675c15f69b76e4b8defe5431847ccea91d0b2bb83f093
Enrich went down | Tue, 13 Jan 2026 21:48:35 +0000 | https://status.minerva.io/#b259c834fb595a06ccd675c15f69b76e4b8defe5431847ccea91d0b2bb83f093
LinkedIn Contact Data went down | Tue, 13 Jan 2026 18:49:14 +0000 | https://status.minerva.io/#7766d5796ca9fce3533db11b369fb0e5cf7cf4b644c990f5d678d72fa9afc655
LinkedIn Contact Data recovered | Sat, 10 Jan 2026 09:49:09 +0000 | https://status.minerva.io/#991f4754533bbca38ad15d29c24f467225f8360a4053478863740ca492b01024
Enrich recovered | Sat, 10 Jan 2026 09:48:13 +0000 | https://status.minerva.io/#b198d8fe748d73472bbd42e971acc2f4f1149e0c7ec2838581169651ec197f48
LinkedIn Contact Data went down | Sat, 10 Jan 2026 00:49:13 +0000 | https://status.minerva.io/#991f4754533bbca38ad15d29c24f467225f8360a4053478863740ca492b01024
Enrich went down | Sat, 10 Jan 2026 00:48:23 +0000 | https://status.minerva.io/#b198d8fe748d73472bbd42e971acc2f4f1149e0c7ec2838581169651ec197f48

AWS US-East-1 Region Outage Impacting Minerva Services
https://status.minerva.io/incident/747429
Mon, 20 Oct 2025 15:19:00 -0000

Status: In Progress
Incident Start: October 20 · 3:11 a.m. ET
Region Affected: US-East-1 (N. Virginia)
Severity: Moderate – Third-Party Infrastructure Impact

Summary:
Earlier today, Amazon Web Services (AWS) experienced a widespread outage in its US-East-1 region, affecting a number of dependent services globally. Some Minerva BI Inc. workloads that rely on this region experienced temporary latency and job queue delays between 3:11 a.m. and 6:35 a.m. ET. AWS is investigating the root cause of the issue.

Impact:
- Periodic errors or slower response times on enrichment and reporting endpoints.
- Temporary delay in processing scheduled enrichment jobs.
- No data loss, unauthorized access, or security incidents detected.

Resolution:
Service was automatically restored once AWS infrastructure recovered. All systems have been validated, and backlogged tasks were re-queued and processed successfully.

Next Steps:
- Internal review of recovery procedures and incident response documentation.
- Confirmation that RTO/RPO thresholds were met.
- Post-mortem logged per SOC 2 Availability (A1.2–A1.4) and CC7.3 controls.

Customer Communication:
This notice satisfies Minerva BI Inc.'s commitment to transparency regarding material service interruptions. No action is required from customers.