Non-Cyber Threat Intel: A Core Competency for Insider Risk Management Programs?
In late 2023, a few intense professional conversations finally crystallized why my teams have historically struggled to find resources useful for understanding external non-cyber threat actors. The related terms “threat intel” and “counterintelligence” are still poorly understood outside of niche military units and federal agencies. In fact, within commercial industry, “threat intel” almost exclusively refers to cyber-enabled threat actors such as Advanced Persistent Threats (APTs), not the higher-order entities driving an APT’s priorities.
I recently asked my industry peers the following question: “What does ‘non-cyber threat intelligence’ mean to you in the context of insider risk management programs (IRMPs)?” This article captures leading perspectives, reviews the two concepts in relation to the traditional intelligence cycle and non-traditional techniques, and offers ideas about which corporate functions may be useful for collecting and fusing holistic threat intel.
The commercial and defense industries’ adoption of more comprehensive risk intelligence functional teams is still nascent, and such teams are typically found only within security departments such as executive protection or crisis response, or within enterprise risk functions such as supply chain security or Environmental, Social, and Governance (ESG) risk. Pacific Gas’s head of threat intelligence, Ryan Matulka, summed up this observation perfectly: “The label ‘non-cyber threat intelligence’ says more about how the organization thinks about intel than it says about threats. I've always found internal activities are strongly influenced by external drivers, so we ignore the whole context at our peril. I also believe the best insider risk/threat program is built on a solid foundation of intel or intel-like processes.”
Kicker
One of my peers in a multinational corporation described a real-world scenario that resulted in the discovery of an internal spy. After detecting a seemingly unrelated anomaly, the investigation led the team to specific external intelligence streams that filled in the full picture. The approach was along the lines of Conflict of Interest (COI) Continuous Monitoring, another nascent process used by more mature teams. Detecting COIs requires a robust understanding of external threat actors and access to global databases containing business and patent filings, research funding recipients, etc. Once the investigation produced richer findings, a threat signature emerged that helped the team identify a dozen additional spies throughout the company. All had been quiet on the home front for several years until this discovery – it was a wake-up call. The company had been quietly penetrated by non-cyber threat actors for years. Their cyber defenses were excellent, so which risk detection processes were missing or incomplete?
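At its core, COI Continuous Monitoring is a cross-referencing exercise: compare the trusted workforce against external records (business registrations, patent filings, funding awards) and flag overlaps with outside organizations. The minimal sketch below illustrates only that core idea with hypothetical data; every name, field, and record here is invented for illustration, and a production pipeline would rely on vendor data feeds and fuzzy identity resolution rather than exact string matching.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Employee:
    name: str
    employer: str

@dataclass(frozen=True)
class ExternalFiling:
    # e.g., a business registration, patent application, or grant award
    person_name: str
    organization: str
    filing_type: str

def flag_potential_cois(employees, filings):
    """Flag employees who appear in external filings tied to outside organizations.

    Exact name matching is purely illustrative; real systems resolve identities
    across transliterations, aliases, and partial records.
    """
    flags = []
    for emp in employees:
        for filing in filings:
            if filing.person_name == emp.name and filing.organization != emp.employer:
                flags.append((emp.name, filing.organization, filing.filing_type))
    return flags

# Hypothetical sample data
employees = [Employee("A. Researcher", "Acme Corp")]
filings = [
    ExternalFiling("A. Researcher", "Rival Labs", "patent_application"),
    ExternalFiling("B. Manager", "Acme Corp", "business_registration"),
]

print(flag_potential_cois(employees, filings))
# -> [('A. Researcher', 'Rival Labs', 'patent_application')]
```

The value is not in the matching logic itself but in the external data sets it runs against – exactly the intelligence streams discussed in the scenario above.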
Jumping the Fence
Why is this a pressing topic? Cyber defense is required before a new enterprise can become initially operationally capable, so we should agree that it is a core business process. Once the cyber defensive perimeter is built, we can look at the other standard risks and controls, choose lean options, and slowly mature capabilities. My thesis is that there is a hidden risk we are missing – the human factor. When meta-level threat actors can no longer penetrate cyber defenses with run-of-the-mill cyber threat actors, they shift resources toward direct interaction with the workforce and business processes.
My peer and former colleague, Ryan Rambo, described the effect this hidden risk poses to our companies with a great illustration: "Why build a 30' fence when the enemy can only scale an 8' fence? A 10' fence is plenty to prevent an enemy attack and costs a hell of a lot less. From a cyber perspective, we spend so much time and resources on cyber defenses to meet compliance requirements or incident responses that we forget to consider what [else] the adversary is capable of doing or has successfully done in the past." After all, the real adversaries are rarely the cyber criminals executing attacks we see daily at the tactical level.
Who’s In Charge Here?
Why should we identify the higher-order threat actors? In short, when cyber defenses are strong, threat actors look for workarounds like our trusted workforces, unprotected facilities, supply chains and partners, and leaks of restricted information in the public domain. How tall are each of those fences? Are they easier to compromise?
It may seem counterintuitive to look outside for indicators of insider misconduct. In my experience, trusted insiders often had one or more external co-conspirators. We can only put an insider's activities into proper context once we understand the external entity's identity and the illicit collaboration's objective. The intelligence cycle is a well-known framework and is ideal for this task, as we will see below.
What Does Threat Intel Consist Of?
Jerry Davis is a board member at the DHS CISA's Cybersecurity Safety Review Board, and he noted that "we should be able to identify nodes useful for prioritizing responses, investments, etc. This is called OB (Order of Battle) analysis in the military. In this context, an OB overlay seems relevant if combined with a counterintelligence overlay and internal asset/risk/control registers."
The commercial industry has been slow to adopt this perspective and to formalize processes designed to counter external threat actors other than the usual Advanced Persistent Threats (APTs). The external risks to restricted information are so numerous that the task can be daunting, which is why Davis's paradigm makes sense: to narrow our focus, we need only identify and track those external entities with the potential to do us harm (and leave the rest alone). But which corporate function is chartered with this task?
When I began working in the defense and commercial industries, there was mature investment in cyber defense and cyber threat intel, but no standardized approach to answering this question. Further, there was an almost complete, and intentional, avoidance of external data sets when monitoring for malicious employee behavior on Information Technology (IT) assets. Within Insider Risk Management Programs (IRMPs), analysts can only partially understand an employee's behavior by viewing activity logs or select non-technical data points such as human resources (HR) actions.
Security vs. Privacy – Data Sources
Employees (like you and me) have personal lives! We have hopes, dreams, and ambitions. We have gripes, pain points, and ideas. Some turn into side gigs that break employee agreements or laws. How does a standard IRMP detect misconduct in the private domain that can potentially have strategic consequences for an employer, such as by contracting with a competitor to supply trade secrets? Which data sources are best suited to uncovering these types of misconduct? Which should we avoid to protect organizational cultures? Let’s start with a short scenario, see which adversarial intelligence collection techniques may be implicated, and look for the best opportunities to increase security without compromising employee privacy.
Scenario
An employee needs more income to pay bills, missed a promotion, and was recently put on a performance improvement plan. The employee secretly accepted a second remote position in a related field with flexible hours. The new contract required the employee to refrain from disclosing the position to the current employer. In this scenario, the arrangement violates the original employer's Employee Agreement, but not the law.
Let's take this scenario a step further: The new contract required the moonlighting employee to provide trade secrets and recruit other company employees, help construct and consult on a mirror lab at a competing company, and help the new employer identify and recruit talent. This is an incredibly common situation, and we only know about the ones that involve successful detection. So, let's take a look at some of the things these clandestine contracts often require; then, we can more easily identify some external data sources with the potential to assist with detections (of conflicts of interest).
Overview: Intelligence Collection Techniques
We can quickly identify multiple threat vectors by using the intelligence cycle as a framework for understanding adversarial collection processes. Starting with cyber-enabled collections, we can view these attacks as offensive Signals Intelligence (SIGINT). At a high level, adversaries such as competitors, nation-state-controlled agencies and businesses, and criminal enterprises produce intelligence along the same loose model:
[1] Collection Requirements > Planning & Direction > Collection > Processing and Exploitation > Analysis and Production > Dissemination
Planners look at one or more of the following collection processes when considering the best ways to achieve objectives. In the context of an IRMP, they may be attempting to compromise employees or facilities with placement and access to restricted information; compromise business processes such as supply chains; or surveil security processes to identify weaknesses (among many other lines of effort). Besides SIGINT, the most relevant collection disciplines to this discussion include:
Human Intelligence (HUMINT) – This includes targeting trusted insiders with placement and access to desired restricted information. HUMINT is most successful when targeted employees overshare information about their placement and access in the public domain. This allows threat actors to customize their approach, select the best motivators and rewards, and identify exploitable vulnerabilities. Threat actors use all other techniques prior to contacting a recruitment target to increase the chances of a successful encounter.
Imagery Intelligence (IMINT) – This includes long-range photography at test sites, satellite imagery, concealed photography in sensitive locations, etc.
Open Source Intelligence (OSINT) – This includes accessing publicly available information about a company, its assets, plans, personnel, etc.
Measurement and Signature Intelligence (MASINT) – This includes capturing test data by positioning collection assets near test sites, etc.
Technical Intelligence (TECHINT) – This includes reverse engineering and benchmarking and is generally acceptable unless clandestine collection means are used.
Others include Financial Intelligence (FININT), Geospatial Intelligence (GEOINT), Social Media Intelligence (SOCMINT), Supply Chain Intelligence, Acoustic Intelligence (ACINT), Biometric Intelligence, and the list goes on.
In large companies, no single corporate functional team can take care of all of these threat vectors, so these requirements are typically covered by a combination of multiple functional teams. Here are a few examples at a high level:
Compliance: These teams manage risks associated with regulatory and legal requirements.
Security: These teams cover many requirements, including security intelligence and support for incident and misconduct investigations.
Enterprise Risk: These teams are responsible for identifying and countering strategic-level threat actors with the potential to affect the company's viability. While excellent at detecting known, reported risks, they typically do not have a line of effort geared toward discovering new threat actors or vectors.
Cyber defense: These teams defend against malicious external cyber-enabled entities but typically do not monitor employee behavior unless an incident specifically implicates employee misconduct.
Intellectual Property Protection: These teams track the company's IP portfolio and development projects.
Supply Chain Risk: These teams assist with vetting potential partners and suppliers and often consist of specific functions standard to defense industry counterintelligence teams – enhanced due diligence (EDD) is a common process enabled by mature subscription-based vendor offerings for risk discovery, government resources, and OSINT. Once contracts are signed, many of these suppliers gain authorized access to restricted information, bringing them into the scope of an IRMP.
Environmental, Social, and Governance (ESG) Risk: These teams are responsible for understanding all global risk factors to company strategy, operations, financials, assets, and reputation.
Corporate Communications: These teams create bespoke messaging campaigns and otherwise maintain visibility of the public’s interaction with public-facing channels, customer sentiment, and brand security (e.g., leaks of restricted information into the public domain).
Program managers and chief engineers: These individuals share responsibility for designing security controls for their projects, and they are experts in their domains. Many are also experts in identifying the types of employee misconduct that can lead to third-party leaks or other compromises of restricted information.
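The pattern running through this list is coverage: each collection discipline is countered, partially or not at all, by some combination of functional teams. One way to make that explicit is a simple coverage register that surfaces vectors no one is chartered to watch. The sketch below is a hypothetical example of that idea; the discipline-to-team mapping is illustrative, not a claim about any real organization.

```python
# Hypothetical mapping of adversarial collection disciplines to the corporate
# functions that partially cover them; assignments are illustrative only.
coverage = {
    "SIGINT": ["Cyber Defense"],
    "HUMINT": ["Security", "Insider Risk Management"],
    "OSINT": ["Corporate Communications", "Security"],
    "IMINT": [],          # often unowned in commercial settings
    "MASINT": [],
    "TECHINT": ["Intellectual Property Protection"],
    "Supply Chain Intelligence": ["Supply Chain Risk"],
}

def coverage_gaps(coverage_map):
    """Return the collection disciplines no functional team is chartered to counter."""
    return sorted(d for d, owners in coverage_map.items() if not owners)

print(coverage_gaps(coverage))
# -> ['IMINT', 'MASINT']
```

Even at this toy scale, the exercise forces the question the rest of this article asks: who owns the gaps?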
Human-Enabled Penetrations – Countering the “Worst Case Scenario”
Let's turn this discussion upside down for a moment and consider what happens when a traditional cyber threat actor successfully recruits an insider. John Roberts, Dow’s Director of Global Intelligence and Protection, mentioned that this type of threat scenario can trigger multiple functional teams' playbooks, complicate detection and response processes, and sometimes get lost in the shuffle. This scenario implies that we must understand who is driving the cyber threat actors and identify all of their lines of effort in order to comprehensively deny their access. We can do this by fusing cyber and non-cyber threat information automatically at the point of detection (best case scenario), or by collaborating manually to deconflict these cases as they arise, but there is still no function chartered with fully understanding the non-cyber threat picture as it relates to insider risk.
Recommendations
The above scenario highlights the crux of my argument: The commercial industry needs a formal intelligence and counterintelligence corporate function to understand non-cyber threat actors' multiple lines of effort. Corporate Enterprise Risk seems like the right place for this requirement to land, but of course, we all know that one size does not fit all.
Such a team will depend on domain experience from specialists in the following disciplines:
Counterintelligence: This is the field of countering adversarial collection activities or shaping their perceptions of company activities, plans, etc.
Supply Chain and Logistics Intelligence: This is the field of Enhanced Due Diligence and Business Continuity, with touchpoints in many other disciplines.
Information Security Continuous Monitoring (ISCM): This field, together with other Risk-Based Analytics (RBA) processes, is common to teams involved in employee behavioral monitoring, data loss prevention, compliance, audit, incident response and forensics, and others.
Risk Intelligence: This is the field of collecting the information required to make business decisions – in this context, to reduce risk. The intelligence collection methods discussed above can be divided into multiple functional teams as long as fusion is possible.
Alternatively, all of the “super-hero” two-person teams can continue to do everything, but only if properly resourced. At a minimum, small teams must have the following resources:
Secure web browsers: Not to mince words, security analysts must have a robust managed-attribution capability when countering this level of adversarial sophistication.
Automated Conflict of Interest Tips & Cues: Risk intelligence vendors like Sayari, Strider Technologies, FiveCast, Thomson Reuters, LexisNexis, Flashpoint, and a host of others offer access to data sets useful for detecting COIs: global patent and business filings, research funding recipients, talent recruitment initiatives, etc.
Internal fusion of cyber threat intel and non-cyber risk intel typically needs to occur outside of those two (or more) silos. Enterprise Risk Management, Corporate Security, Brand Security, Insider Risk Management, InfoSec Continuous Monitoring, and GRC all have parts to play in daily operations, as overseen by senior leaders within HR, Legal, Audit, and Compliance.
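Mechanically, fusion means correlating signals from separate silos on a common subject so a single case file emerges instead of two disconnected tickets. Here is a minimal sketch of that join, assuming each silo emits (subject, description) pairs; the subject identifiers and alert descriptions are invented, and a real system would also deconflict identities and enrich from many more sources.

```python
from collections import defaultdict

def fuse_signals(cyber_alerts, risk_signals):
    """Group cyber and non-cyber signals by subject; keep subjects seen in both silos.

    Inputs are lists of (subject_id, description) tuples from each silo.
    """
    fused = defaultdict(lambda: {"cyber": [], "non_cyber": []})
    for subject, desc in cyber_alerts:
        fused[subject]["cyber"].append(desc)
    for subject, desc in risk_signals:
        fused[subject]["non_cyber"].append(desc)
    # Subjects appearing in both silos deserve the earliest joint review.
    return {s: v for s, v in fused.items() if v["cyber"] and v["non_cyber"]}

# Hypothetical alerts from two silos
cyber_alerts = [("emp-042", "bulk download of restricted files")]
risk_signals = [("emp-042", "named on a competitor's patent filing"),
                ("emp-077", "undisclosed outside business registration")]

print(fuse_signals(cyber_alerts, risk_signals))
```

The interesting design question is where this join runs: automatically at the point of detection (best case), or manually during case deconfliction, as described above.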
Call to Action
Within the fields of counterintelligence, insider risk management, information security, and risk management, let's elevate our field by collectively advocating for the inclusion of "non-cyber threat intelligence" in IRMPs' formally approved charters, concepts of operation, and core competencies. Once approved, we can more easily justify investments, even if they arrive a year or two later than we might want. Once implemented, other functional teams without the resources to address this challenge comprehensively can begin to rely on IRMPs to surface risk signals with enough context to support risk-based business decisions and HR actions.
In practice, I think “strategic threat intelligence” may be an accurate way to describe the difference between the tactical threat actor activity we counter at the cyber perimeter and the meta-level actors responsible for managing a broad array of collection techniques and lower-level actors. At the strategic level, our requirements and objectives are much closer to functions such as ESG, Supply Chain Risk, and select Physical Security departments than cyber defense.
© 2024 Dave Holder. All rights reserved.
Dave Holder is a security specialist with broad experience in the fields of counterintelligence, law enforcement, workplace investigations, and insider risk management. "The views and opinions expressed in this article are solely my own and do not reflect the official policy or position of any agency or company I am affiliated with."
[1] Image credit: FBI, accessed at fbi.gov on 1/7/2023