CompTIA Security+ Exam Notes


Saturday, January 31, 2026

SOC 2 Type 1 vs. Type 2 Explained: Differences, Use Cases, and Why It Matters


SOC 2 (Service Organization Control 2) is an audit framework developed by the AICPA to evaluate how well a service organization protects customer data based on the Trust Services Criteria:

  • Security (required)
  • Availability
  • Processing Integrity
  • Confidentiality
  • Privacy

SOC 2 reports come in two forms: Type 1 and Type 2, each serving different purposes and offering different levels of assurance.

SOC 2 Type 1 — What It Is

Definition

A SOC 2 Type 1 report evaluates the design of an organization’s security controls at a single point in time.

It answers the question:

“Are the controls designed properly as of today?”

 What It Evaluates

  • Policies, configurations, and procedures exist and are designed correctly to meet the Trust Services Criteria.
  • No operating-effectiveness testing is performed; only design suitability is assessed.

Timing

  • Point‑in‑time snapshot
  • Typically completed in weeks, much faster than Type 2

Use Cases

  • Early‑stage companies needing fast compliance
  • Organizations with newly implemented controls
  • Businesses needing proof of security to close deals quickly

Limitations

  • Does not prove that controls actually operate consistently over time
  • Many enterprise customers reject Type 1 reports

SOC 2 Type 2 — What It Is

Definition

A SOC 2 Type 2 report evaluates both:

  • Design of controls
  • Operating effectiveness of those controls over a period of 3–12 months

It answers:

“Do the controls work reliably over time?”

What It Evaluates

  • Auditor tests real evidence: logs, tickets, change records, access reviews
  • Demonstrates continuous control operation

Timing

  • Review period: 3–12 months
  • Total audit timeline: 6–20 months

Use Cases

  • Required by enterprise customers
  • Companies in regulated industries
  • SaaS vendors that store sensitive customer data

Strengths

  • Provides the highest level of assurance
  • Demonstrates operational maturity
  • Widely required in vendor security assessments (RFPs)

Key Differences: SOC 2 Type 1 vs. Type 2

  • Scope: Type 1 evaluates control design only; Type 2 also tests operating effectiveness.
  • Timing: Type 1 is a point‑in‑time snapshot; Type 2 covers a 3–12 month review period.
  • Speed: Type 1 is typically completed in weeks; Type 2 can take 6–20 months end to end.
  • Assurance: Type 1 provides limited assurance; Type 2 provides the highest level of assurance.
  • Acceptance: many enterprise customers reject Type 1 reports and require Type 2.

Which One Should an Organization Choose?

Choose Type 1 if:

  • You need something fast to unblock deals
  • Your controls were recently implemented
  • You’re validating that your control design is correct before deeper auditing

Choose Type 2 if:

  • You sell to mid‑market or enterprise customers
  • You operate in regulated industries (finance, health, government)
  • You want long‑term credibility with vendors and partners

According to SOC2auditors.org, 98% of Fortune 500 companies require a Type 2 report, making it the de facto standard for serious B2B SaaS.

Summary


Both are valuable, but Type 2 is the industry standard for trust and vendor due diligence.


Friday, January 30, 2026

CVSS v4.0 Explained: What’s New, Why It Matters, and How It’s Used


What is CVSS v4.0?

CVSS v4.0 (released November 1, 2023) is the latest version of the Common Vulnerability Scoring System, an open standard used globally to communicate the severity of software, hardware, and firmware vulnerabilities.

It provides a numerical severity score from 0 to 10 and a corresponding vector string that explains how the score was calculated.

CVSS v4.0 introduces changes to improve granularity, accuracy, flexibility, and real‑world relevance in vulnerability scoring.

CVSS v4.0 Metric Groups

CVSS v4.0 consists of four metric groups:

Base, Threat, Environmental, and Supplemental.

1. Base Metrics

These are the intrinsic characteristics of a vulnerability: attributes that do not change across environments or over time.

They form the foundation of the CVSS score.

Key updates in CVSS v4.0 Base metrics include:

  • Attack Requirements (AT): New metric describing conditions needed for exploitation.
  • User Interaction (UI) was expanded to None, Passive, and Active, providing finer-grained control.
  • Impact metrics revamped:

    • Vulnerable System impacts (VC, VI, VA)
    • Subsequent System impacts (SC, SI, SA)
    • These replace “Scope” from CVSS v3.1.
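A v4.0 score is accompanied by a vector string listing these Base metrics. As a rough sketch (the metric abbreviations match the v4.0 Base group described above; the parsing approach itself is just an illustration, not an official tool), splitting such a string into its metrics might look like:

```python
def parse_cvss4_vector(vector: str) -> dict:
    """Split a CVSS v4.0 vector string into a {metric: value} dict."""
    prefix, _, metrics = vector.partition("/")
    if prefix != "CVSS:4.0":
        raise ValueError("not a CVSS v4.0 vector")
    # Each remaining part looks like "AV:N"; split once on the colon.
    return dict(part.split(":", 1) for part in metrics.split("/"))

v = parse_cvss4_vector(
    "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N"
)
# v["AT"] is the new Attack Requirements metric; VC/VI/VA and SC/SI/SA
# are the Vulnerable System and Subsequent System impact metrics.
```

Note how the Vulnerable System (VC/VI/VA) and Subsequent System (SC/SI/SA) impacts appear as separate metrics, replacing v3.1's single Scope flag.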

2. Threat Metrics

These describe real‑world exploitation conditions that can change over time, such as exploit availability and active attacks.

They now replace the Temporal metrics in CVSS v3.1. 

They allow organizations to calculate a more realistic severity based on:

  • in‑the‑wild attacks
  • existence of exploit code
  • technical maturity of exploits

3. Environmental Metrics

These represent the unique characteristics of the environment where a vulnerability exists.

They help organizations tailor scores to their infrastructure. 

Examples include:

  • system value
  • controls in place
  • business impact
  • compensating security mechanisms

4. Supplemental Metrics (New)

A brand‑new group providing additional context without modifying the numeric score.

This includes information such as safety‑related impacts or automation‑relevant data. [first.org]

These metrics are useful for:

  • medical device cybersecurity (e.g., FDA recognition) 
  • industrial systems
  • compliance reporting
  • fine‑grained prioritization

Qualitative Severity Ratings (v4.0)

According to NVD, CVSS v4.0 uses:

  • Low: 0.1–3.9
  • Medium: 4.0–6.9
  • High: 7.0–8.9
  • Critical: 9.0–10.0
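These bands map directly to a simple lookup. A minimal sketch (including the 0.0 "None" rating from the specification, which the list above omits):

```python
def cvss4_severity(score: float) -> str:
    """Map a CVSS v4.0 numeric score to its qualitative rating."""
    if score == 0.0:
        return "None"      # 0.0 is rated "None" in the specification
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```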

Key Improvements Over CVSS v3.1

1. Better Definition of User Interaction

Passive vs. Active user interaction helps distinguish:

  • Passive → user only needs to be present
  • Active → user must perform an action

2. Attack Requirements (AT) Metric

Separates “conditions needed to exploit” from “exploit complexity,” making scoring more precise.

 3. Removal/Replacement of Scope

CVSS v3.1’s Scope was often misunderstood.

CVSS v4.0 uses separate impact metrics for “Vulnerable System” and “Subsequent Systems.”

4. New Supplemental Metrics

These allow non‑score‑affecting context, such as safety, automation, and exploit vectorization.

 5. Better Alignment with Real‑World Exploitation

The new Threat metrics track real‑world activity more cleanly than v3’s Temporal metrics.

Why CVSS v4.0 Matters

More Accurate Severity Assessments

More precise metrics → fewer inflated or misleading scores.

Improved Prioritization

Organizations can incorporate environment- and threat‑specific data to improve remediation decisions.

Better Reporting and Compliance

Used by NVD, FIRST, cybersecurity vendors, and regulators such as the FDA.

Enhanced Granularity for Critical Infrastructure

New Supplemental metrics help sectors like healthcare, ICS/OT, and cloud services add context without modifying the core score.

How CVSS v4.0 Is Used Today

NVD (National Vulnerability Database) supports CVSS v4.0 Base scores.

(As of 2024–2025, Threat and Environmental metrics must be user‑calculated.)

Cybersecurity vendors (Qualys, Checkmarx, etc.) are adopting v4.

FDA Recognized Standard for medical device cybersecurity.

Summary

CVSS v4.0 is the most refined and flexible version of the Common Vulnerability Scoring System to date. Its four metric groups (Base, Threat, Environmental, and Supplemental) offer more nuanced scoring, real‑world relevance, and improved context compared to previous versions.

Key improvements include:

  • New Attack Requirements metric
  • Improved User Interaction classification
  • Replacement of Scope with clearer system impact metrics
  • Introduction of Supplemental Metrics
  • Better alignment with threat intelligence

CVSS v4.0 provides organizations with more accurate, adaptable, and actionable vulnerability severity assessments.

Thursday, January 29, 2026

Directory Brute Force Attacks Explained: How Hidden Web Paths Are Discovered

What Is a Directory Brute Force Attack?

A directory brute-force attack (also called directory enumeration, path brute-forcing, or content discovery) is a technique used in cybersecurity to identify hidden or unlinked directories and files on a web server.

These locations may not appear anywhere on the public website, but they still exist on the server, sometimes containing:

  • Admin portals
  • Backups
  • Development endpoints
  • Configuration files
  • Old versions of the site
  • Sensitive documents

Security testers attempt to identify these areas to detect potential misconfigurations, while attackers seek them to gain unauthorized access.

Why Directories Can Be Hidden But Accessible

Web servers store files in a folder structure, such as:

  • /admin
  • /backups
  • /private
  • /.git
  • /api/v1/

Even if a site doesn’t link to these directories publicly:

  • They may still be reachable if the server doesn’t block them.
  • They may leak through predictable naming patterns.
  • Developers sometimes forget to remove old or test folders.

Since URLs can be guessed (e.g., example.com/admin), attackers test huge numbers of possible paths to find what the server reveals.

How Directory Brute Forcing Works (High-Level Technical View)

Again, this is conceptual, not instructional.

1. A list of common directory/file names exists in the attacker’s tool or process

  • These lists contain thousands of guesses based on:
    • Common naming conventions (e.g., /admin, /login)
    • Framework defaults (e.g., /wp-admin for WordPress)
    • Backup file names (backup.zip, db_old.sql)
    • Hidden directories (/.git/, /test/, /old/)

2. Each potential path is tested against the target website

The web server responds differently depending on whether the path exists:

  • 200 OK → the path exists and is accessible
  • 301/302 Redirect → the path likely exists elsewhere
  • 401 Unauthorized / 403 Forbidden → the path exists but access is restricted
  • 404 Not Found → the path does not exist
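Conceptually, the response handling in these steps reduces to a small classifier. The sketch below interprets status codes only and sends no requests (the code-to-meaning mapping reflects typical HTTP semantics during authorized content discovery):

```python
# Typical meanings of HTTP status codes seen during authorized
# content discovery. No network traffic is generated here.
RESPONSE_MEANINGS = {
    200: "path exists and is accessible",
    301: "redirect (path likely exists)",
    302: "redirect (path likely exists)",
    401: "path exists but requires authentication",
    403: "path exists but access is forbidden",
    404: "path does not exist",
}

def classify_response(status: int) -> str:
    """Interpret a status code returned for a candidate path."""
    return RESPONSE_MEANINGS.get(status, "unexpected; review manually")

def interesting(status: int) -> bool:
    """A discovery process records every code except a plain 404."""
    return status in (200, 301, 302, 401, 403)
```

A real content-discovery tool would iterate a wordlist, request each candidate path, and record only the responses `interesting` flags for analysis.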
3. Responses are analyzed

A tester looks for:

  • Valid locations that the site didn’t intend to expose
  • Forbidden directories that confirm a sensitive area exists
  • Patterns of interest, such as staging environments

4. Discovered content may reveal vulnerabilities

Once a hidden directory is found, it could expose:

  • Admin login pages
  • Backup archives containing sensitive data
  • Source code repositories
  • Misconfigurations
  • Unpatched services

Security teams then fix these issues to harden the system.

Why It Matters for Security

For defenders:

  • Directory brute force testing is essential in penetration testing and web application security assessments.
  • It helps identify accidental exposures before attackers find them.
  • It uncovers outdated or forgotten content (“shadow IT”).

For attackers:

  • They may use directory discovery to:
    • Find an entry point for intrusion
    • Access sensitive information
    • Identify vulnerable components
    • Map the structure of a website for further attacks

Common Preventive Measures

Organizations can mitigate risks by:

  • Disabling directory listing on the server
  • Restricting access using authentication or IP allowlists
  • Using non-predictable naming for sensitive paths
  • Implementing Web Application Firewalls (WAFs)
  • Monitoring for unusual patterns of requests
  • Removing old or unused directories

The goal is to make it harder (or impossible) for an attacker to guess sensitive paths.
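As an illustration, the first two measures might look like this in an nginx server block (a sketch only; the paths and allowlist range are hypothetical and should be adapted to your environment):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    autoindex off;                # disable directory listing globally

    location /backups/ {          # hypothetical sensitive path
        allow 10.0.0.0/8;         # internal IP allowlist
        deny  all;
    }

    location /admin/ {            # require authentication
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
```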

Summary

A directory brute force attack is a method of systematically guessing URL paths to find hidden directories or files on a web server. It doesn’t rely on software vulnerabilities; it relies on predictable naming patterns and forgotten resources. While it’s a legitimate security testing technique, attackers also use it to uncover sensitive content.

Wednesday, January 28, 2026

A Comprehensive Guide to Simultaneous Authentication of Equals (SAE) in WPA3


SAE is a password‑authenticated key exchange (PAKE) protocol used in WPA3‑Personal Wi‑Fi networks.

It replaces the older PSK (Pre‑Shared Key) approach used in WPA2.

SAE is based on the Dragonfly key exchange protocol and provides a far more secure method for establishing encryption keys on wireless networks.

1. Why SAE Exists

Under WPA2-PSK, a weak password made the network vulnerable to:

  • Offline dictionary attacks
    • Attackers could capture the 4‑way handshake and brute‑force it offline without interacting with the network.
  • No forward secrecy
    • If the PSK was discovered later, past traffic could be decrypted.

SAE solves these problems.

2. What SAE Does

SAE provides:

  • Mutual authentication
    • Both the client and the access point demonstrate knowledge of the password without revealing it.
  • Forward Secrecy
    • The encryption keys change for each session.
    • If the password leaks later, old traffic cannot be decrypted.
  • Protection from Offline Cracking
    • An attacker cannot capture a handshake and brute‑force it later.
    • They must perform live, interactive attempts—slowing attacks drastically.
  • Resistance to Passive Attacks
    • Simply listening to the traffic gives no useful information about the password.

3. How SAE Works (Step-by-Step)

SAE is a two‑phase handshake:

Phase 1 – Commit Exchange

Both sides (client and AP):

1. Convert the shared Wi‑Fi password into a Password Element (PWE).

  • PWE is derived from the password and the two MAC addresses.
  • Ensures the handshake is unique for each client–AP pair.

2. Generate a random number (their private “secret”).

3. Compute:

  • A commit scalar
  • A commit element

4. Exchange these values openly over the air.

Important:

Even though the commit values are public, they cannot be used to derive the password.

Phase 2 – Confirm Exchange

Both sides:

1. Compute the shared secret key using:

  • Their own private random number
  • The other party’s commit element

2. Derive a session key (PMK).

3. Exchange confirm messages proving they derived the same key.

If confirm messages match → authentication succeeds.
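The commit/key-derivation flow above can be illustrated with a deliberately simplified sketch. This is a toy Diffie-Hellman-style exchange over a prime field, not real SAE: actual SAE uses the Dragonfly construction over standardized elliptic-curve or MODP groups, with anti-clogging and confirm-message framing omitted here. The structure (password → PWE, random + mask → commit scalar/element, identical shared key on both sides) mirrors the phases described above:

```python
import hashlib
import secrets

# Toy prime field; real SAE uses standardized ECC/MODP groups.
P = 2**255 - 19
Q = P - 1  # exponent modulus for this toy group

def derive_pwe(password: str, mac_a: str, mac_b: str) -> int:
    """Toy Password Element: hash the password + both MACs into a group element."""
    macs = "|".join(sorted((mac_a, mac_b)))          # same PWE on both sides
    seed = hashlib.sha256(f"{password}|{macs}".encode()).digest()
    return pow(5, int.from_bytes(seed, "big") % Q, P)

class SaePeer:
    def __init__(self, pwe: int):
        self.pwe = pwe
        self.rand = secrets.randbelow(Q - 2) + 2     # private random number
        self.mask = secrets.randbelow(Q - 2) + 2     # private mask

    def commit(self):
        """Phase 1: send a scalar and an element; neither reveals the password."""
        scalar = (self.rand + self.mask) % Q
        element = pow(self.pwe, Q - self.mask, P)    # pwe^(-mask) mod P
        return scalar, element

    def shared_key(self, peer_scalar: int, peer_element: int) -> str:
        """Both sides arrive at pwe^(rand_a * rand_b); hash it into a session key."""
        base = (peer_element * pow(self.pwe, peer_scalar, P)) % P
        k = pow(base, self.rand, P)
        return hashlib.sha256(k.to_bytes(32, "big")).hexdigest()

pwe = derive_pwe("correct horse battery", "aa:bb:cc", "dd:ee:ff")
station, ap = SaePeer(pwe), SaePeer(pwe)
sc_s, el_s = station.commit()
sc_a, el_a = ap.commit()

# Phase 2 in miniature: matching values prove both derived the same key.
assert station.shared_key(sc_a, el_a) == ap.shared_key(sc_s, el_s)
```

Note the property the text emphasizes: the commit values travel in the clear, yet an eavesdropper who lacks a private `rand` learns nothing usable for offline password guessing.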

4. Key Properties of SAE

  • Offline Attack Resistance
    • An attacker capturing SAE handshakes gets no password-derivable data.
  • Forward Secrecy
    • Keys change for every session.
  • Anti-Clogging
    • To prevent DoS attacks (spamming commit messages), the AP can require "anti-clogging tokens" before continuing.
  • Mutual Authentication
    • Both sides prove knowledge of the password.

5. How SAE Differs from WPA2‑PSK

  • Handshake: WPA2‑PSK feeds the pre‑shared key directly into the 4‑way handshake; SAE adds a commit/confirm exchange first.
  • Offline cracking: WPA2‑PSK handshakes can be captured and brute‑forced offline; SAE forces live, interactive guesses.
  • Forward secrecy: WPA2‑PSK has none; SAE derives fresh keys for every session.
  • Authentication: WPA2‑PSK proves key possession implicitly; SAE provides explicit mutual authentication.

6. Where SAE Is Used

SAE is the mandatory authentication method for:

  • WPA3‑Personal
  • WPA3 transition mode networks, where SAE runs alongside WPA2‑PSK on the same SSID as an upgrade path

(Wi‑Fi Enhanced Open is a separate WPA3‑era feature based on OWE, not SAE.)

7. Common Terms Related to SAE

  • Dragonfly Key Exchange — underlying cryptographic design.
  • Password Element (PWE) — ECC point representing the password.
  • Commit & Confirm messages — two-step handshake communication.
  • PMK (Pairwise Master Key) — key derived from SAE for the 4‑way handshake.

8. Why SAE Is Considered Secure

Because SAE:

  • Never transmits information usable to guess the password
  • Requires an attacker to interact for every guess
  • Uses elliptic-curve Diffie-Hellman
  • Uses strong hashing of the PWE
  • Provides fresh keys per session

This combination makes it substantially more secure than WPA2-PSK.

Summary

SAE (Simultaneous Authentication of Equals) is the WPA3 authentication method designed to prevent:

  • Offline dictionary attacks
  • Decryption of old traffic
  • Reuse of stale session keys
  • Weaknesses inherent to WPA2-PSK

It accomplishes this through a secure, mutual, password-authenticated key exchange that provides forward secrecy and robust resistance to brute-force attacks.

Tuesday, January 27, 2026

SY0-701 Exam prep questions


Understanding CYOD: The Enterprise Model That Blends Flexibility and Control


What Is CYOD (Choose Your Own Device)?
CYOD (Choose Your Own Device) is an enterprise mobility strategy in which an organization offers employees a pre-approved list of devices (laptops, tablets, smartphones) and allows them to choose the model they prefer.

The key idea: employees get choice, but the company maintains control.

In a CYOD program:
  • The company buys or leases the devices, or in some cases subsidizes them.
  • The employee chooses from a controlled selection of hardware.
  • The IT department manages the devices for security, compliance, and support.
  • The devices are registered, secured, and maintained as corporate assets.

This creates a balance between employee freedom and organizational security.

Why CYOD Exists
With the rise of mobile work, companies needed a way to support:
  • Employee preference for modern devices
  • Corporate security requirements
  • Standardized IT support
  • Efficient lifecycle management
CYOD emerged as a middle ground between two extremes:

  • BYOD (bring any device you own)
  • COBO (corporate-owned, business-only, no choice)

How CYOD Works
1. IT Defines the Approved Device List
IT teams choose devices based on:
  • Security capabilities
  • Operating system versions
  • Enterprise feature support
  • Vendor relationships
  • Budget
Example device lists:
  • Smartphones: iPhone 15, Samsung Galaxy S24, Google Pixel 9
  • Laptops: Dell Latitude, HP EliteBook, MacBook Air/Pro
  • Tablets: iPad, Surface Pro
2. Employees Select Their Preferred Device
Employees choose from the list based on:
  • Familiarity
  • Comfort
  • Performance needs
  • Accessibility requirements
3. Devices Are Configured and Secured
IT handles:
  • OS hardening
  • MDM enrollment (e.g., Intune, MobileIron, VMware Workspace ONE)
  • Encryption
  • Compliance settings
  • Company apps installation
4. Device Lifecycle Management
IT manages:
  • Warranty and repairs
  • Software updates
  • Security monitoring
  • Replacement cycles (typically 2–4 years)
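The enrollment gate IT enforces in steps 1–3 can be sketched as a simple compliance check. The device models come from the example lists above; the record fields (`encrypted`, `mdm_enrolled`) are hypothetical names invented for illustration:

```python
# Hypothetical CYOD compliance gate: a device must be on the approved
# list, encrypted, and MDM-enrolled before it gets corporate access.
APPROVED_MODELS = {
    "iPhone 15", "Samsung Galaxy S24", "Google Pixel 9",
    "Dell Latitude", "HP EliteBook", "MacBook Air", "MacBook Pro",
    "iPad", "Surface Pro",
}

def is_compliant(device: dict) -> bool:
    return (
        device.get("model") in APPROVED_MODELS
        and device.get("encrypted", False)
        and device.get("mdm_enrolled", False)
    )

laptop = {"model": "Dell Latitude", "encrypted": True, "mdm_enrolled": True}
byod_phone = {"model": "OnePlus 12", "encrypted": True, "mdm_enrolled": False}
# is_compliant(laptop) is True; the unapproved, unenrolled phone fails.
```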
Benefits of CYOD
1. Stronger Security and Compliance
Since devices are standardized and IT-approved:
  • Fewer vulnerabilities
  • Consistent patching
  • Controlled OS versions
  • Easier compliance with regulations (HIPAA, GDPR, PCI-DSS, etc.)
2. Better IT Support
With fewer device variations, support teams can:
  • Troubleshoot faster
  • Maintain shared device images
  • Use unified MDM policies
3. Higher Employee Satisfaction
Employees still get:
  • A device they like
  • Freedom to choose between brands/styles
  • Modern, high‑quality hardware
4. Cost Control
Organizations can negotiate bulk pricing, manage warranties, and plan refresh cycles efficiently.

Challenges of CYOD

1. Higher Cost Than BYOD
Because companies still purchase or subsidize the devices.
2. Limited Personalization
Employees must choose only from the approved list.
3. Device Management Overhead
IT still must:
  • Manage device inventory
  • Maintain MDM tools
  • Provide support
4. Balancing Choice With Standardization
Too many device options can overwhelm IT; too few options frustrate employees.

CYOD vs. BYOD vs. COPE vs. COBO

  • BYOD (Bring Your Own Device): employee-owned hardware; maximum freedom, hardest to secure.
  • CYOD (Choose Your Own Device): company-owned hardware chosen from an approved list; balanced freedom and control.
  • COPE (Corporate-Owned, Personally Enabled): company-owned hardware that permits personal use; more control, less choice.
  • COBO (Corporate-Owned, Business-Only): company-owned, business use only; maximum control, no choice.

CYOD strikes a balance between user freedom and enterprise control.

When Companies Use CYOD

CYOD works especially well for:
  • Organizations with strict security needs that still want a modern user experience
  • Remote or hybrid workplaces
  • Companies with large mobile workforces
  • Businesses that want consistent hardware standards
  • Companies adopting zero‑trust security models
Industries that commonly use CYOD:
  • Healthcare
  • Finance
  • Technology
  • Government
  • Education
  • Manufacturing
In Summary
CYOD gives employees choice while allowing organizations to maintain strict control over hardware, security, and support.

It offers:
  • Greater security than BYOD
  • More flexibility than COBO
  • Better user satisfaction than COPE
  • Predictable support and lifecycle costs

The Hidden Biases in AI: How Data Shapes Fairness and Accuracy

 Data Bias in Artificial Intelligence

Data bias in artificial intelligence (AI) refers to systematic errors or unfair patterns that arise when the data used to train an AI system is not fully representative, is skewed, or reflects existing societal inequalities. Because AI models learn patterns from the data they are given, any bias in that data can lead to biased outcomes.

Here’s a clear breakdown:

What Causes Data Bias?
1. Historical Bias
Even if data is collected perfectly, it can still reflect past inequalities or norms.
Example: Hiring data from a company that historically hired mostly men will cause an AI résumé screener to prefer male candidates.

2. Sampling Bias
The dataset doesn't represent the full population or scenario the AI will be used for.
Example: A facial recognition system trained mostly on lighter‑skinned faces performs poorly on darker‑skinned individuals.

3. Measurement Bias
Inaccurate or inconsistent data collection affects outcomes.
Example: Using self‑reported health metrics from one demographic but clinical measurements from another.

4. Label Bias
Human annotators bring their own assumptions into the labeling process.
Example: Annotators label certain dialects of speech as “aggressive” more often.

5. Algorithmic Amplification
Even small biases in data can be amplified by feedback loops.
Example: If a predictive policing tool directs more police to certain neighborhoods, more crimes will be recorded there, reinforcing the model’s belief that those areas need more policing.

Why Data Bias Matters

Fairness Issues
Biased AI systems can unfairly penalize or discriminate against groups of people based on race, gender, age, disability, or socioeconomic status.

Accuracy Problems
Bias reduces model performance by making predictions less generalizable.

Legal & Ethical Risks
Organizations can face regulatory penalties or reputational damage if their AI systems cause harm or discrimination.

Real-World Examples
  • Facial recognition models have shown higher error rates for women and people with darker skin tones.
  • Automated loan approval systems have been found to give worse terms to certain demographic groups.
  • Medical algorithms have sometimes underestimated risk for certain ethnic groups due to flawed data.
How to Reduce Data Bias

1. Improve Data Diversity
Ensure datasets include all relevant groups and scenarios.
2. Conduct Bias Audits
Regularly test data and models for performance disparities.
3. Use Fairness Techniques
Methods such as re-weighting, re-sampling, or algorithmic fairness constraints.
4. Increase Transparency
Document how data was collected, cleaned, and labeled (e.g., through model cards or data sheets).
5. Involve Diverse Teams
Different perspectives reduce the chance of blind spots.
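One of the fairness techniques above, re-weighting, can be sketched concretely: give every group the same total training weight regardless of how often it appears in the dataset. The group labels below are illustrative, and this is one simple re-weighting scheme among several:

```python
from collections import Counter

def group_reweight(groups):
    """Weight each sample so every group contributes equal total weight."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # Inverse-frequency weights: w_i = n / (k * count[group_i])
    return [n / (k * counts[g]) for g in groups]

# Illustrative: group "a" is over-represented 3:1 versus group "b".
weights = group_reweight(["a", "a", "a", "b"])
# Each group's weights now sum to 2.0, balancing the dataset.
```

These weights would typically be passed to a model's training loop (e.g., as per-sample loss weights) so the under-represented group is no longer drowned out.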

In a Nutshell
Data bias in AI isn’t just a technical issue; it’s a human issue. AI mirrors the data it learns from, so creating fair and accurate systems requires attention to how data is collected, labeled, and applied.