Module 3: Introduction to Security Concepts

Glossary of Terms

CIA Triad: A foundational security model that guides policies for information security. It consists of three principles: Confidentiality (data is accessible only to authorized users), Integrity (data is accurate and reliable), and Availability (data is accessible when needed).

AAA Model: A framework for controlling access to resources. It consists of three processes: Authentication (verifying identity), Authorization (granting appropriate permissions), and Accounting (tracking user activities).

Zero Trust: A security framework based on the principle of "never trust, always verify." It eliminates implicit trust and requires continuous verification for every user and device, regardless of their location.

Information Privacy: The relationship between the collection and dissemination of data, technology, public expectations of privacy, and the surrounding legal issues.

Personally Identifiable Information (PII): Any data that can be used to identify, contact, or locate a specific individual. Examples include name, Social Security number, IP address, and biometric data.

De-anonymization: The process of re-identifying individuals from a dataset that was supposed to be anonymous by cross-referencing it with other available data sources.

Data Breach: A security incident where unauthorized parties gain access to sensitive or confidential data. The primary focus is the theft of information.

Security Breach: A broader term for any incident that compromises a system or network. It can include data breaches, malware infections, or attacks that affect system availability, not just data confidentiality.

Insider Threat: A security threat that originates from within an organization, such as a current or former employee, contractor, or business associate who misuses their authorized access.

Deception Technology: Proactive security measures designed to mislead and detect attackers by deploying decoy assets within a network.

Honeypot: A decoy computer system designed to attract and trap attackers, allowing security teams to study their methods without exposing real assets.

Honeynet: A network of two or more honeypots designed to simulate a real network, providing a more comprehensive view of an attacker's behavior.

Honeyfiles / Honeytokens: Fake decoy files (honeyfiles) or pieces of data like credentials (honeytokens) placed within a system. Any interaction with them triggers an alert, signaling a potential breach.

Ethics: The moral principles that govern a person's behavior or the conducting of an activity. In cybersecurity, it involves responsible data handling, respecting privacy, and being transparent.

Core Security Models

The CIA Triad

The CIA Triad is a cornerstone model for information security, composed of three core principles that guide the development of security policies.

  • Confidentiality: Ensures that information is accessible only to authorized individuals. It's about preventing the unauthorized disclosure of data. (e.g., encryption).
  • Integrity: Ensures the accuracy and reliability of data by protecting it from unauthorized alteration. (e.g., hashing, digital signatures).
  • Availability: Ensures that systems and data are accessible to authorized users when needed. It's about preventing disruption of service. (e.g., redundancy, backups).
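The Integrity principle can be demonstrated with a few lines of code. This is a minimal sketch using Python's standard `hashlib`: a trusted baseline digest is recorded, and any later alteration of the data, however small, produces a different digest. (The "quarterly report" data is purely illustrative.)

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of `data` as a hex string."""
    return hashlib.sha256(data).hexdigest()

original = b"Quarterly report: revenue up 4%"
baseline = sha256_hex(original)  # digest recorded while the data is trusted

# An attacker (or a transmission error) changes a single character.
tampered = b"Quarterly report: revenue up 40%"

# Integrity check: re-hash and compare against the trusted baseline.
print(sha256_hex(original) == baseline)  # True  -> data is unchanged
print(sha256_hex(tampered) == baseline)  # False -> data was modified
```

Digital signatures build on the same idea: the digest is additionally signed with a private key, so a verifier can confirm both that the data is unaltered and who produced it.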

The AAA Model

The AAA model provides a framework for intelligently controlling access to computer resources.

  • Authentication: This is the process of verifying a user's identity. It answers the question, "Who are you?" (e.g., using a password, fingerprint, or security token).
  • Authorization: After a user is authenticated, this process determines what they are allowed to do. It answers the question, "What are you allowed to access?" (e.g., granting a user "read-only" access to a file).
  • Accounting (Auditing): This process tracks what a user does while accessing the system. It answers the question, "What did you do?" (e.g., logging file access and changes).
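The three A's can be sketched as three small functions called in sequence. This is a toy illustration, not a production design: the user store, salt, and permission table are all hypothetical, and a real system would use a dedicated password-hashing scheme and a proper audit backend.

```python
import hashlib
import hmac
from datetime import datetime, timezone

# Hypothetical user store: username -> (salted password hash, permissions).
SALT = b"demo-salt"
USERS = {
    "alice": (hashlib.sha256(SALT + b"s3cret").hexdigest(), {"report.txt": "read"}),
}
AUDIT_LOG = []  # accounting: a record of who did what, and when

def authenticate(user: str, password: bytes) -> bool:
    """Authentication -- "Who are you?" Verify the password against the stored hash."""
    record = USERS.get(user)
    if record is None:
        return False
    candidate = hashlib.sha256(SALT + password).hexdigest()
    return hmac.compare_digest(record[0], candidate)  # constant-time comparison

def authorize(user: str, resource: str, action: str) -> bool:
    """Authorization -- "What are you allowed to access?" """
    return USERS[user][1].get(resource) == action

def account(user: str, resource: str, action: str, allowed: bool) -> None:
    """Accounting -- "What did you do?" Append an entry to the audit trail."""
    AUDIT_LOG.append((datetime.now(timezone.utc).isoformat(), user, resource, action, allowed))

if authenticate("alice", b"s3cret"):
    allowed = authorize("alice", "report.txt", "read")
    account("alice", "report.txt", "read", allowed)
```

Note the ordering: accounting records the outcome whether or not authorization succeeded, so denied attempts are visible in the audit trail too.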

The Zero Trust Model

Zero Trust is a modern security framework that shifts away from the traditional "trust but verify" mindset of perimeter-based security.

  • Core Principle: "Never Trust, Always Verify." This model assumes that a breach is inevitable or has likely already occurred.
  • Eliminates Implicit Trust: No user or device is trusted by default, even if it is inside the corporate network. Traditional security often trusted anyone inside the network perimeter.
  • Enforces Least Privilege: Users are granted the absolute minimum level of access required to perform their specific tasks.
  • Continuous Verification: Every access request is continuously authenticated and authorized before resources are granted, often using contextual factors like location, device health, and user risk profile.
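The decision logic behind continuous verification can be sketched as a policy function evaluated on every request. Everything here is an assumption for illustration: the factor names and the risk threshold are hypothetical, and real Zero Trust products combine many more signals.

```python
def allow_request(user_authenticated: bool,
                  device_healthy: bool,
                  location_expected: bool,
                  risk_score: float) -> bool:
    """Hypothetical Zero Trust policy check, run on EVERY request.

    No implicit trust: being "inside the network" grants nothing.
    All contextual checks must pass, every time.
    """
    return (
        user_authenticated
        and device_healthy
        and location_expected
        and risk_score < 0.5  # assumed threshold; tuned per organization
    )

print(allow_request(True, True, True, 0.1))   # True  -> access granted
print(allow_request(True, False, True, 0.1))  # False -> unhealthy device rejected
```

Contrast this with perimeter security, where the equivalent function would effectively be `return is_inside_network`.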

Data Privacy & PII

Information Privacy and Regulation

Information privacy concerns the protection of personal data and is governed by an increasingly complex landscape of laws. These regulations dictate how organizations must handle the PII of individuals.

  • GDPR (General Data Protection Regulation): A landmark EU law that gives individuals significant control over their personal data. It applies to any organization processing the data of EU citizens, regardless of location.
  • CCPA/CPRA (California Consumer Privacy Act / California Privacy Rights Act): A key US privacy law giving California consumers rights over their personal information, such as the right to know, delete, and opt-out of the sale of their data. Many other US states have followed with similar laws.

Personally Identifiable Information (PII)

PII is any information that can be used to distinguish or trace an individual's identity, either alone or when combined with other information.

  • Direct Identifiers: Name, Social Security number, driver's license number, biometric records.
  • Indirect Identifiers: Information that can be combined to identify someone, such as birthdate, geographic location, IP address, and device IDs.

De-anonymization: When Anonymous Data Isn't

De-anonymization is the process of re-identifying individuals from a dataset that was meant to be anonymous. This is often done by cross-referencing the "anonymous" data with other public data sources.
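The cross-referencing step is essentially a database join on quasi-identifiers. The sketch below uses entirely fabricated data: an "anonymous" medical dataset that still carries birth date and ZIP code, and a public voter roll that lists the same fields next to names.

```python
# Hypothetical "anonymous" dataset: names removed, but quasi-identifiers remain.
medical = [
    {"birth": "1961-07-02", "zip": "02138", "diagnosis": "hypertension"},
    {"birth": "1989-03-14", "zip": "90210", "diagnosis": "asthma"},
]

# Hypothetical public dataset that shares those quasi-identifiers.
voter_roll = [
    {"name": "J. Doe", "birth": "1961-07-02", "zip": "02138"},
]

# Re-identification: join the two datasets on (birth date, ZIP code).
reidentified = [
    {"name": v["name"], "diagnosis": m["diagnosis"]}
    for m in medical
    for v in voter_roll
    if (m["birth"], m["zip"]) == (v["birth"], v["zip"])
]
print(reidentified)  # [{'name': 'J. Doe', 'diagnosis': 'hypertension'}]
```

The lesson: removing names is not anonymization. Combinations of seemingly harmless indirect identifiers can be unique to one person.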

Case Study - The Strava Heat Map (2017): The fitness app Strava released a global "heatmap" showing the aggregated GPS tracks of its users. While no names were attached, the data was not truly anonymous. Journalists and researchers were able to identify the locations and patrol routes of secret military bases in conflict zones by observing the consistent running patterns of soldiers using the app. This was a powerful real-world example of de-anonymization.

Breaches & Threats

Data Breach vs. Security Breach

While often used interchangeably, these terms have distinct meanings. The CIA Triad helps explain the difference.

  • A Data Breach specifically involves the unauthorized access and theft of sensitive information, compromising Confidentiality.
    Case Study - Target (2013): Attackers installed malware on point-of-sale systems, stealing the payment card details of roughly 41 million customers and the PII of up to 70 million. This was a classic data breach.
  • A Security Breach is a broader term for any incident that compromises a system or network. It can affect any part of the CIA Triad.
    Case Study - MGM Resorts (2023): When attackers breached MGM's systems, hotel room key cards and casino slot machines were rendered inoperable. While data was also stolen, the primary public impact was on Availability, making it a security breach that went beyond data theft.

Common Security Threats

  • Malware: Includes viruses, worms (self-propagating), Trojans (disguised malware), ransomware (encrypts data and demands payment for its release), and spyware (monitors user activity).
  • Phishing: Social engineering attacks using fraudulent emails or websites to trick users into revealing sensitive information.

Insider Threats

An insider threat is a security risk that comes from within an organization. These can be more dangerous than external threats because insiders already have legitimate access and knowledge of the systems.

  • Malicious Insider: Intentionally causes harm, often motivated by revenge, financial gain, or ideology.
  • Negligent/Accidental Insider: Unintentionally causes harm through carelessness, like falling for a phishing attack or sending sensitive data to the wrong recipient.
    Case Study - Edward Snowden (2013): As an NSA contractor, Snowden had authorized access to classified documents. Motivated by ideology, he leaked a massive trove of these documents, revealing global surveillance programs. This is a prime example of a malicious insider threat.

Proactive Defense & Ethics

Deception Technology: The "Fly Trap" Approach

Deception technology is a proactive defense strategy that uses decoys to mislead, detect, and study attackers. By luring them into a controlled environment, security teams can learn about their methods without risking real assets.

  • Honeypots: A decoy system (e.g., a server) designed to look like a legitimate and attractive target. Any interaction with the honeypot is, by definition, malicious and triggers an alert.
  • Honeynets: A network of multiple honeypots that simulates a larger, more realistic network environment to analyze more complex, coordinated attacks.
  • Honeyfiles: Decoy files placed on a network with enticing names (e.g., `Employee_Salaries_2025.xlsx`). When an attacker opens the file, it secretly alerts the security team.
  • Honeytokens: Fake data or credentials (e.g., a fake API key or user account) embedded in a system. If this token is ever used, it immediately signals a breach.
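The honeytoken idea fits in a few lines: the decoy grants no real access, and its only job is to trip an alert. This sketch is illustrative; the key format and alert mechanism are assumptions, and a real deployment would route alerts to a SIEM or paging system.

```python
# Hypothetical honeytoken: a fake API key planted in a config file or code repo.
# It is never used legitimately, so ANY appearance of it signals an intruder.
HONEYTOKEN = "AKIA-DECOY-000000"
ALERTS = []

def observe_api_key(api_key: str) -> None:
    """Called for every API request; any use of the decoy key raises an alert."""
    if api_key == HONEYTOKEN:
        ALERTS.append(f"honeytoken {api_key} used -- possible breach, investigate")

observe_api_key("AKIA-REAL-123456")  # normal traffic: no alert
observe_api_key(HONEYTOKEN)          # an attacker tries the planted key
print(ALERTS)
```

Because legitimate users have no reason to touch the decoy, honeytokens produce essentially zero false positives, which is what makes deception technology attractive as a detection layer.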

Ethics in Cybersecurity

Ethics refers to the moral principles that govern behavior. For cybersecurity professionals, who hold immense power over sensitive data and critical systems, practicing sound ethics is a moral imperative.

  • Responsible Data Handling: Adhering to the principles of confidentiality and integrity when managing data and complying with privacy regulations.
  • Respecting Privacy: Designing systems with 'privacy by design' principles, which involves minimizing data collection and ensuring transparency with users.
  • Transparency: Being open and honest about the capabilities and limitations of security technologies and clearly communicating risks to stakeholders.
  • Professional Conduct: Acting honorably and responsibly, and ethically disclosing vulnerabilities when they are discovered (e.g., to the vendor first, not to the public).

Fill in the Blank Questions

True/False Questions

Multiple Choice Questions