Do we have another SalesLoft-style incident on our hands? Based on what we've uncovered so far, it certainly looks that way.
On November 19, Salesforce sent out a notification at 10:57 PM Eastern alerting customers to unusual activity involving Gainsight, a popular customer success application that connects to the Salesforce platform. Their response was swift: they revoked all Gainsight access tokens immediately.
What caught our attention was just how fast Salesforce moved. When we looked at Gainsight's own notification, it became clear this wasn't a coordinated effort. Gainsight only found out something was wrong when their connections stopped working, and by 03:00 the following day they were scrambling to figure out what had happened. Salesforce didn't call ahead. They just pulled the plug.
After the fallout from SalesLoft, Salesforce clearly isn't messing around anymore.
For those unfamiliar with the SalesLoft incident (we covered it extensively in the first episode of our podcast), here's the short version: attackers compromised SalesLoft's infrastructure and stole OAuth credentials that allowed them to connect to Salesforce instances with whatever permissions those credentials had. They used that access to scrape case data and extract additional credentials from it.
One of the companies that publicly confirmed they were compromised during SalesLoft? Gainsight. They acknowledged that a token was stolen.
Now we're seeing Gainsight at the center of another Salesforce-related security event. The pattern is hard to ignore.
Several news publications reached out to ShinyHunters, the group that claimed responsibility for SalesLoft, asking if they were behind this one too. Their reported response: yes, and they claim to have accessed another 250 to 300 Salesforce instances using credentials obtained from Gainsight.
Now, ShinyHunters has taken credit in the past for things they didn't do, so take that with appropriate skepticism. But the circumstantial evidence is piling up.
So what do we actually know about this incident? Not much, honestly. Neither Salesforce nor Gainsight has provided technical details about what triggered this response. We don't have indicators of compromise. We don't have a timeline showing when the suspicious activity started versus when tokens were revoked. We don't have confirmation on attribution.
What we do know: both notifications acknowledge that something triggered an investigation, and Salesforce responded by cutting all Gainsight access to the platform.
One of the first things we wanted to understand was how Gainsight's integration architecture differs from SalesLoft, because that directly impacts the scope of potential exposure.
With SalesLoft, every user who wanted to use the AI features of the SalesLoft Drift application would authorize their own OAuth token. This meant large organizations could have hundreds of tokens in play, each scoped to an individual user's permissions. More tokens to steal, but potentially narrower access per token.
Gainsight works differently. Based on their technical documentation, you set up a connector at the organizational level rather than per user. One connection for the whole org.
For defenders, this is a mixed bag. The good news: if you can identify that connector identity, you have a single focal point for investigation. The bad news: org-level connectors typically have broader permissions than individual user tokens. A compromised Gainsight connector could provide significantly wider access than a compromised SalesLoft user token would have.
Without official IOCs from Salesforce or Gainsight, we started digging through telemetry across our client base to understand what normal Gainsight activity looks like. If you're going to spot anomalies, you first need to know what the baseline is.
Here's what we found:
Legitimate Gainsight traffic to Salesforce consistently originates from AWS IP addresses. These appear to be Lambda functions or similar compute making API calls into Salesforce. Across the environments we analyzed over the past month, every Gainsight connection came from AWS infrastructure.
If you're seeing Gainsight connector activity from non-AWS sources, treat that as suspicious immediately.
(Side note: Gainsight's documentation lists allowed IPs starting with just a single octet, which is hilariously unhelpful. They apparently provide more specific ranges if you contact them directly.)
When we looked at what Gainsight typically does in Salesforce, the activity aligned well with their connector documentation. Query API calls represent the bulk of normal traffic. The events are consistent and predictable, which is exactly what you'd expect from a non-human identity.
This consistency is actually great for detection. Unlike human users who access different resources at different times from different locations, NHIs behave the same way over and over. Anomaly detection becomes much more straightforward.
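To make that concrete, here's a minimal sketch of what baselining a connector's call volume can look like. It's illustrative only: it assumes you've already extracted the connector's API events into (timestamp, event type) tuples from whatever log source you use (Event Monitoring exports, a SIEM, etc.), and the 3-sigma threshold is an arbitrary starting point, not a tuned value.

```python
from collections import Counter
from statistics import mean, pstdev

def hourly_anomalies(events, threshold_sigmas=3.0):
    """Flag hours where the connector's call volume deviates sharply
    from its own historical average.

    `events` is an iterable of (timestamp, event_type) tuples for a
    single non-human identity, e.g. parsed out of Event Monitoring logs.
    """
    # Bucket calls per hour (NHIs tend to be flat and repetitive here).
    per_hour = Counter(
        ts.replace(minute=0, second=0, microsecond=0) for ts, _ in events
    )

    counts = list(per_hour.values())
    if len(counts) < 24:  # not enough history to call anything anomalous
        return []

    avg, sigma = mean(counts), pstdev(counts)
    return [
        (hour, count)
        for hour, count in sorted(per_hour.items())
        if sigma and abs(count - avg) > threshold_sigmas * sigma
    ]

# Example usage with events you've already parsed:
# for hour, count in hourly_anomalies(connector_events):
#     print(f"{hour}: {count} calls (outside normal range)")
```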
Based on our research and what we observed during the SalesLoft incident, here's what we recommend looking for if you have Gainsight connected to Salesforce.
Identify your Gainsight connector identity first. This sounds obvious but isn't always easy. These connectors aren't always named "Gainsight Integration" or something helpful. Sometimes it's a generic service account name or a GUID, and without insight into typical actions, you might not even know a connector is associated with Gainsight. Use behavioral patterns (AWS source IPs, consistent Query API usage) to identify candidates, then validate against your integration inventory.
The following IP addresses have been associated with legitimate Gainsight access over the past month:
52.203.252.39
34.234.28.149
34.225.86.95
18.211.252.246
34.205.235.172
Look for connections from unexpected sources. Anything outside AWS IP space should get your attention. Across every environment we analyzed, legitimate Gainsight traffic originated exclusively from AWS infrastructure. During SalesLoft, we saw Tor IP addresses being used. If your logs show Gainsight connector activity from non-AWS sources, investigate immediately.
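Here's a rough sketch of that check against an exported Event Monitoring CSV. The allowlist is just the five IPs listed above (treat it as a starting point, not an exhaustive list), and the CLIENT_IP/USER_ID column names follow the Event Log File schema but can vary by event type, so adjust them to match your export.

```python
import csv

# IPs we observed for legitimate Gainsight traffic (listed above). A starting
# allowlist only -- confirm current ranges with Gainsight directly.
KNOWN_GAINSIGHT_IPS = {
    "52.203.252.39",
    "34.234.28.149",
    "34.225.86.95",
    "18.211.252.246",
    "34.205.235.172",
}

def flag_unexpected_sources(log_csv_path, connector_user_id):
    """Return rows where the Gainsight connector's traffic came from an IP
    outside the allowlist. Column names are assumptions -- adjust to your
    Event Log File export."""
    hits = []
    with open(log_csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row.get("USER_ID") != connector_user_id:
                continue
            if row.get("CLIENT_IP") not in KNOWN_GAINSIGHT_IPS:
                hits.append(row)
    return hits

# suspicious = flag_unexpected_sources("ApiTotalUsage-2025-11-19.csv", "<connector user id>")
```

If you haven't pinned down the connector identity yet, the same loop grouped by USER_ID (rather than filtered) will surface candidates: look for identities whose traffic is query-heavy and comes exclusively from these addresses.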
Watch for unusual event types. Normal Gainsight behavior is highly predictable and aligns closely with their connector documentation, with Query API calls making up the bulk of legitimate activity. If you're seeing event types that have never appeared before or actions that don't fit the integration's purpose, dig into those.
The following event types are part of normal activity for Gainsight within Salesforce:
ApiTotalUsage:DELETE
ApiTotalUsage:GET
ApiTotalUsage:PATCH
ApiTotalUsage:POST
ApiTotalUsage:create
ApiTotalUsage:describeSObject
ApiTotalUsage:describeSObjects
ApiTotalUsage:query
Login:oauthrefreshtoken
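A quick way to operationalize that baseline is a simple set difference, sketched below. It assumes you normalize your log rows into the same "EventType:Action" strings used in the list above; the example value in the comment is a placeholder, not a confirmed Salesforce event name.

```python
import csv

# Event types we've observed as normal Gainsight activity (listed above).
BASELINE_EVENTS = {
    "ApiTotalUsage:DELETE", "ApiTotalUsage:GET", "ApiTotalUsage:PATCH",
    "ApiTotalUsage:POST", "ApiTotalUsage:create", "ApiTotalUsage:describeSObject",
    "ApiTotalUsage:describeSObjects", "ApiTotalUsage:query",
    "Login:oauthrefreshtoken",
}

def new_event_types(observed_events):
    """Return event types seen from the connector that fall outside the
    baseline. `observed_events` is any iterable of "EventType:Action"
    strings extracted from your logs for the Gainsight identity."""
    return sorted(set(observed_events) - BASELINE_EVENTS)

# Anything returned here deserves a closer look, e.g.:
# print(new_event_types(["ApiTotalUsage:query", "UnexpectedEvent:action"]))
```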
Pay attention to Bulk API usage, Bulk API 2.0 in particular. This is where the most damage gets done: it's how attackers pull large volumes of records quickly. Compare current activity against your baseline and flag any spikes or new bulk job creations that weren't there before.
Check for premature job deletion. During SalesLoft, we observed attackers deleting jobs before they hit their normal timeout to clean up evidence. Legitimate integrations typically let jobs complete and expire naturally. If you're seeing jobs created and then immediately deleted, especially bulk extraction jobs, treat that as a red flag.
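One way to watch for both of these is to snapshot Bulk API 2.0 query jobs on a schedule and diff the snapshots: new jobs are worth a look, and jobs that disappear before reaching a terminal state are the bigger red flag. The sketch below uses the Bulk API 2.0 job-listing endpoint (GET /services/data/&lt;version&gt;/jobs/query) as we understand it from Salesforce's documentation; verify the endpoint, pagination, and field names against your API version before relying on it.

```python
import requests

def list_query_jobs(instance_url, access_token, api_version="v58.0"):
    """List Bulk API 2.0 query jobs via the REST listing endpoint.
    Pagination is omitted for brevity."""
    resp = requests.get(
        f"{instance_url}/services/data/{api_version}/jobs/query",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return {job["id"]: job for job in resp.json().get("records", [])}

def diff_job_snapshots(previous, current):
    """Compare two snapshots taken on a schedule. New jobs merit review;
    jobs that vanish before completing are the stronger signal, since
    legitimate jobs normally finish and expire on their own."""
    new_jobs = [current[j] for j in current.keys() - previous.keys()]
    vanished = [
        previous[j] for j in previous.keys() - current.keys()
        if previous[j].get("state") not in ("JobComplete", "Aborted", "Failed")
    ]
    return new_jobs, vanished
```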
Hunt for Python user agents. This was a consistent indicator during SalesLoft and suggests scripted access rather than legitimate application traffic. Python-based user agent strings in requests from what should be your Gainsight connector are a strong indication that something else is using those credentials.
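Here's a minimal sketch of that hunt against a log export. The USER_AGENT column name and the marker list are assumptions; point it at whichever field actually carries the user agent in your data.

```python
import csv

# User-agent substrings that usually indicate scripted access rather than
# the Gainsight application itself (assumed markers, extend as needed).
SCRIPTED_UA_MARKERS = ("python-requests", "python-urllib", "aiohttp", "python/")

def find_scripted_user_agents(log_csv_path, connector_user_id, ua_field="USER_AGENT"):
    """Return connector rows whose user agent looks like a script.
    `ua_field` is an assumption -- the column name varies by log source."""
    hits = []
    with open(log_csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row.get("USER_ID") != connector_user_id:
                continue
            ua = row.get(ua_field, "").lower()
            if any(marker in ua for marker in SCRIPTED_UA_MARKERS):
                hits.append(row)
    return hits
```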
Look at query patterns. During SalesLoft, attackers targeted case data using SOQL queries with wildcard LIKE operators. Case objects are particularly valuable because attackers were able to extract additional credentials from that data. If you're seeing similar broad query patterns from your Gainsight connector, that's worth deeper analysis.
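A rough sketch of that hunt: flag any captured query from the connector that hits the Case object with a wildcard LIKE filter. The QUERY column name is an assumption; use whichever field holds the query text in your log source.

```python
import csv
import re

# Broad wildcard LIKE patterns over Case data were a hallmark of the
# SalesLoft activity; this flags similar queries from the connector.
BROAD_LIKE = re.compile(r"LIKE\s+'%[^']*%'", re.IGNORECASE)
FROM_CASE = re.compile(r"\bFROM\s+Case\b", re.IGNORECASE)

def find_broad_case_queries(log_csv_path, connector_user_id, query_field="QUERY"):
    """Return connector rows whose captured SOQL targets Case objects with a
    wildcard LIKE filter. `query_field` is an assumption -- adjust to match
    whichever column holds the query text in your export."""
    hits = []
    with open(log_csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row.get("USER_ID") != connector_user_id:
                continue
            soql = row.get(query_field, "")
            if FROM_CASE.search(soql) and BROAD_LIKE.search(soql):
                hits.append(row)
    return hits
```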
With technical details being sparse on this attack, it's difficult to name specific detection methods at this stage. That said, our anomaly detection engines are well positioned to catch this type of activity. We also have a comprehensive set of Salesforce detections in place based on what we learned from the SalesLoft incident. Those detections would likely catch any data theft or reconnaissance activity if this threat actor targeted our customers.
Salesforce took a lot of heat for SalesLoft, some of it deserved (particularly around logging levels), some of it arguably unfair given the compromise originated with a third party. Either way, they clearly don't want a repeat of that publicity.
The speed of their response here shows they've adjusted their posture. But this incident highlights a persistent problem in the SaaS ecosystem: third-party integrations create concentrated risk. When one integration provider gets compromised, the downstream impact can hit hundreds of organizations simultaneously.
And most security teams still don't have great visibility into what their non-human identities are actually doing.
We're continuing to monitor this situation and will share updates as more technical details become available. For a deeper walkthrough of the detection strategies, specific indicators from the SalesLoft incident, and our full analysis, listen to the complete podcast episode here: "Gainsight → Salesforce: Another OAuth Supply-Chain Scare?"
If your organization uses Gainsight with Salesforce, now is the time to identify that connector, establish your baseline, and start hunting. Don't wait for official IOCs that may never come. Get in touch with our team if you need help investigating your environment or building better visibility into your non-human identity attack surface.