New findings highlight the potential consequences of neglecting the human dimension in cybersecurity strategies
The existing literature on intrusion detection systems (IDSs) heavily emphasizes technological advancements, particularly the integration of artificial intelligence (AI) for threat detection. Numerous studies showcase new AI-driven methods for identifying potential security breaches, yet there is a noticeable lack of attention to the human aspect of IDS implementation. Critical factors such as how these systems should be introduced within organizations and the scope of their monitoring capabilities are often overlooked.
Organizations need to be aware that cyber threats, including insider threats, may already be present within their systems. The focus on detecting and predicting insider threats has grown, particularly after high-profile incidents like the Snowden case in 2013. Many businesses have responded by deploying IDSs to monitor their computer systems and networks, aiming to identify and prevent malicious activities by employees. While these systems are essential, their implementation can have unintended consequences if not carefully managed.
Network IDSs versus host-based IDSs
IDSs are generally categorized into two types: network IDSs and host-based IDSs. Network IDSs monitor network traffic by analyzing data packets to detect suspicious activities. On the other hand, host-based IDSs collect data from individual computers, such as system calls and keystroke dynamics. Despite their widespread use, host-based IDSs have limitations, particularly because many data exfiltration cases occur over networks rather than on individual hosts. For instance, insiders often use their business email accounts to send sensitive company data externally, which a host-based IDS might not catch.
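The contrast between the two data sources can be made concrete with a minimal sketch. The record formats, address ranges, and rules below are hypothetical assumptions chosen only to illustrate what each type of IDS inspects; they are not taken from any real product.

```python
# Illustrative only: a network-style check over traffic records versus a
# host-style check over per-machine events. All fields and thresholds are
# invented for this sketch.

INTERNAL_PREFIX = "10.0."  # assumed internal address range

def network_alerts(packets):
    """Network IDS view: inspect traffic records (src, dst, size in bytes)."""
    alerts = []
    for src, dst, size in packets:
        # Flag large transfers leaving the internal network, e.g. a bulk
        # attachment emailed to an external address.
        if src.startswith(INTERNAL_PREFIX) and not dst.startswith(INTERNAL_PREFIX) and size > 10_000_000:
            alerts.append(f"large outbound transfer {src} -> {dst} ({size} bytes)")
    return alerts

def host_alerts(events):
    """Host-based IDS view: inspect per-machine events (user, system call)."""
    suspicious_calls = {"raw_disk_read", "credential_dump"}
    return [f"{user} invoked {call}" for user, call in events if call in suspicious_calls]

if __name__ == "__main__":
    packets = [("10.0.0.4", "203.0.113.9", 25_000_000), ("10.0.0.4", "10.0.0.7", 1_200)]
    events = [("alice", "file_open"), ("bob", "credential_dump")]
    print(network_alerts(packets))  # catches the outbound transfer
    print(host_alerts(events))      # catches the suspicious system call
```

The point of the sketch is simply that the two approaches see different evidence: the outbound email transfer is visible only to the network view, while the suspicious system call is visible only to the host view.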
Signature-based versus anomaly-based detection
IDSs use two primary techniques for intrusion detection: signature-based and anomaly-based detection. Signature-based detection relies on pattern-matching techniques to identify known threats by comparing them against a database of threat signatures. While this method is easy to implement, it is limited by its reliance on the constant updating of signature databases and its inability to detect unknown threats, such as zero-day vulnerabilities. Anomaly-based detection, in contrast, involves training the IDS to recognize normal behavior within a system or network, allowing it to flag unusual activities as potential threats. Although anomaly-based detection can identify new and previously unknown attacks, it is prone to generating false positives, especially in dynamic work environments where normal activities can vary widely.
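A minimal sketch, assuming invented signatures, baselines, and thresholds (none drawn from a particular IDS), shows why the two techniques fail in different ways: signature matching misses anything not already in its database, while anomaly scoring flags whatever departs from a learned baseline, including harmless but unusual activity.

```python
import statistics

# Hypothetical contrast between the two detection techniques described above.
SIGNATURES = {"DROP TABLE", "../../etc/passwd"}  # known-bad patterns

def signature_detect(payload: str) -> bool:
    """Signature-based: flag only payloads matching a known pattern."""
    return any(sig in payload for sig in SIGNATURES)

def anomaly_detect(history, value, z_threshold=3.0) -> bool:
    """Anomaly-based: flag values far from the learned 'normal' baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0
    return abs(value - mean) / stdev > z_threshold

if __name__ == "__main__":
    print(signature_detect("SELECT 1; DROP TABLE users"))    # True: known signature
    print(signature_detect("entirely new zero-day payload")) # False: unknown threat missed
    daily_uploads_mb = [10, 12, 9, 11, 10, 13, 12]           # learned baseline
    print(anomaly_detect(daily_uploads_mb, 500))             # True: unusual spike flagged
    print(anomaly_detect(daily_uploads_mb, 11))              # False: within normal range
```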
To improve the accuracy of IDSs and reduce false positives, some studies suggest incorporating behavioral and psychological indicators. For example, algorithms have been developed to predict the risk level of employees based on psychosocial factors like personality traits and anti-authority behavior. However, these indicators are supplementary and cannot alone determine the presence of a malicious insider. Moreover, relying on such data can create ethical and privacy concerns, potentially damaging trust within the organization.
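Purely as a sketch of the idea, the following combines technical alerts with a small supplementary behavioral weight. The factor names and weights are invented and do not come from any published algorithm; as noted above, such indicators cannot by themselves establish malicious intent, and collecting them raises the ethical and privacy concerns discussed here.

```python
# Hypothetical sketch: behavioral indicators as a small supplement to
# technical evidence, never a substitute for it.

def combined_risk(technical_alerts: int, psychosocial: dict) -> float:
    """Weight technical evidence heavily; treat behavioral signals as supplements."""
    weights = {"anti_authority_incidents": 0.05, "disgruntlement_reports": 0.05}
    score = min(technical_alerts * 0.3, 0.8)  # technical signal dominates, capped
    for factor, weight in weights.items():
        score += weight * psychosocial.get(factor, 0)
    return min(score, 1.0)

if __name__ == "__main__":
    print(combined_risk(technical_alerts=2, psychosocial={"anti_authority_incidents": 1}))
```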
Increasing detection to discourage malicious employees
The deterrent effect of IDSs is based on the assumption that increasing the likelihood of detection will discourage employees from violating security policies. Research on digital monitoring technologies, such as internet usage tracking and network activity recording, supports this idea, showing that perceived certainty and severity of penalties can lead to greater compliance with security rules. However, this deterrent effect is most pronounced among employees who already share the organization’s values. For others, especially those who feel their autonomy is threatened, digital surveillance can have the opposite effect, reducing their organizational commitment and leading to increased non-compliance.
Be careful of negative reactions
Surveillance technologies like IDSs can also create negative reactions among employees, particularly if they perceive these measures as invasive or mistrustful. Psychological reactance theory suggests that individuals resist threats to their personal freedom, and this resistance can manifest as deviant behavior in response to surveillance. For instance, employees in organizations that promote autonomy may react negatively when surveillance measures are introduced, leading to greater non-compliance with security policies.
To mitigate these negative effects, organizations should consider implementing feedback mechanisms to involve employees in the design of IDS procedures, which can reduce perceptions of privacy invasion. Additionally, granting employees more autonomy in their daily tasks can help offset the feeling of being controlled and reduce resistance to surveillance. Communicating the rationale behind IDS implementation in a way that aligns with employees’ ethical orientations can also help alleviate concerns and foster a more cooperative work environment.
To IDS or not to IDS?
In conclusion, while IDSs are valuable tools for detecting insider threats, their implementation must be handled carefully to avoid unintended consequences that could undermine organizational solidarity and commitment. Organizations should strive to balance security measures with maintaining a positive workplace culture, recognizing that the values and motivations of employees play a crucial role in ensuring compliance and preventing insider threats.
– Andreanne Bergeron