
Legal Compliance

Applicable Laws & Regulatory Framework

⚠ This is not legal advice

This page provides general guidance on legal considerations when using AI security testing tools. Consult a qualified attorney in your jurisdiction for specific legal advice.

1. Applicable Cybercrime Laws

AI/LLM security testing may fall under existing computer fraud and cybercrime legislation. The following laws are commonly relevant:

United States

  • Computer Fraud and Abuse Act (CFAA), 18 U.S.C. § 1030 — Prohibits unauthorized access to computer systems. Internet-connected AI APIs generally qualify as "protected computers" under the statute.
  • Digital Millennium Copyright Act (DMCA) — May apply to circumventing technological protection measures on AI systems.
  • State-level laws — Many states have additional computer crime statutes with varying thresholds for unauthorized access.

European Union

  • EU AI Act (Regulation 2024/1689) — Classifies AI systems by risk level. Red teaming of high-risk AI systems may have specific regulatory requirements.
  • General Data Protection Regulation (GDPR) — If testing AI systems that process personal data, GDPR applies to the extraction or exposure of that data.
  • NIS2 Directive — Cybersecurity obligations for essential and important entities, including AI system operators.

United Kingdom

  • Computer Misuse Act 1990 — Unauthorized access to computer material. Applies to AI API endpoints.
  • UK GDPR / Data Protection Act 2018 — Data protection obligations when testing AI systems processing UK citizen data.

India

  • Information Technology Act, 2000 (IT Act), Sections 43 & 66 — Unauthorized access and computer-related offences. Section 66 penalties include imprisonment of up to three years, a fine of up to ₹5 lakh, or both.
  • Digital Personal Data Protection Act, 2023 (DPDP Act) — Governs the processing of digital personal data. If AI testing exposes personal data, DPDP obligations apply.

Other Jurisdictions

  • Canada — Criminal Code § 342.1 (Unauthorized use of computer)
  • Australia — Criminal Code Act 1995, §§ 477–478 (computer offences inserted by the Cybercrime Act 2001)
  • Singapore — Computer Misuse Act (Cap. 50A)
  • Japan — Unauthorized Computer Access Law

2. AI-Specific Regulations

The regulatory landscape for AI is evolving rapidly. Key AI-specific regulations that may affect your use of Basilisk include:

  • EU AI Act — Mandates risk assessments and red teaming for high-risk AI systems. Providers of general-purpose AI models with systemic risk must conduct adversarial testing.
  • NIST AI Risk Management Framework (AI RMF) — Recommends red teaming as part of AI system governance. Basilisk can assist in implementing NIST AI RMF GOVERN and MAP functions.
  • Executive Order 14110 (US) — Requires AI developers to share safety test results for dual-use foundation models. Basilisk reports can support compliance with these requirements.
  • OWASP LLM Top 10 — Industry standard vulnerability taxonomy for LLM applications. Basilisk's 29 modules map directly to OWASP LLM categories.
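
The OWASP mapping mentioned above can be represented as a simple lookup table. This is an illustrative sketch only — the module names below are hypothetical, not Basilisk's actual module list — but it shows how findings can be tagged with standard OWASP LLM Top 10 category IDs for reporting:

```python
# Hypothetical sketch: mapping red-team module names to OWASP LLM
# Top 10 category IDs. Module names are illustrative placeholders.
OWASP_LLM_CATEGORIES = {
    "LLM01": "Prompt Injection",
    "LLM02": "Sensitive Information Disclosure",
    "LLM06": "Excessive Agency",
    "LLM07": "System Prompt Leakage",
}

# Assumed module-to-category mapping (not Basilisk's real one).
MODULE_MAP = {
    "prompt_injection": "LLM01",
    "system_prompt_leak": "LLM07",
    "pii_extraction": "LLM02",
}

def category_for(module: str) -> str:
    """Return the OWASP LLM category label for a module, or 'Unmapped'."""
    code = MODULE_MAP.get(module)
    return f"{code}: {OWASP_LLM_CATEGORIES[code]}" if code else "Unmapped"
```

Tagging every finding with a stable category ID like this makes reports comparable across engagements and across tools.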

3. Engagement Requirements

Before using Basilisk against any AI system, you must satisfy all of the following:

  • Written Authorization — A signed document from the system owner explicitly authorizing AI security testing, specifying scope, duration, and permitted techniques.
  • Scope Definition — Clear boundaries on which endpoints, models, and features may be tested. Out-of-scope systems must not be touched.
  • Rules of Engagement — Agreed procedures for handling critical findings, emergency stop conditions, and data exposed during testing.
  • Data Handling Agreement — How test data, extracted prompts, and findings will be stored, retained, and destroyed.
  • Responsible Disclosure Timeline — An agreed timeline for reporting findings to the system owner before any public disclosure.

📋 Best Practice

Maintain a detailed testing log showing timestamps, modules used, payloads sent, and results obtained during every engagement. This log serves as evidence of authorized activity if questioned.
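
A minimal sketch of such a log is an append-only JSON-lines file, one entry per action. The field names below are illustrative assumptions, not a Basilisk feature; note the payload is stored as a SHA-256 digest so the log itself does not retain potentially sensitive prompt content:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_test_event(path: str, module: str, target: str,
                   payload: str, result: str) -> dict:
    """Append one engagement log entry as a JSON line.

    Field names are illustrative; adapt them to your rules of
    engagement. Storing only a hash of the payload keeps sensitive
    prompt content out of the log while still proving what was sent.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "module": module,
        "target": target,
        "payload_sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "result": result,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Because the file is append-only and timestamped in UTC, it can be cross-checked against the target's server logs to corroborate that all activity stayed within the authorized window.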

4. Responsible Disclosure

If you discover a vulnerability in an AI system using Basilisk, we strongly recommend following responsible disclosure practices:

  • Report findings directly to the AI system owner via their security contact or bug bounty program
  • Provide clear reproduction steps, severity assessment, and recommended mitigations
  • Allow a reasonable remediation window (typically 90 days) before any public disclosure
  • Do not exploit, exfiltrate data from, or weaponize discovered vulnerabilities
  • Follow the system owner's vulnerability disclosure policy if one exists

5. Bug Bounty Programs

Many AI providers operate bug bounty or vulnerability disclosure programs. If your testing falls within a provider's bug bounty scope, follow their rules explicitly. Basilisk's SARIF output format is designed to integrate with standard vulnerability tracking workflows.

Known AI provider security programs:

  • OpenAI Security
  • Anthropic Responsible Disclosure
  • Google Bug Hunters
  • Microsoft Bounty Programs
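
Because SARIF 2.1.0 is an open OASIS standard, reports can be consumed without tool-specific code. The sketch below uses only fields defined by the SARIF standard (`runs`, `results`, `ruleId`, `level`, `message.text`), so any standard-conformant SARIF report should be readable this way when preparing a bug bounty submission:

```python
import json

def summarize_sarif(sarif_text: str) -> list[dict]:
    """Extract (rule, level, message) triples from a SARIF 2.1.0 report.

    Only standard SARIF fields are used. Per the SARIF spec, a result
    with no explicit "level" defaults to "warning".
    """
    doc = json.loads(sarif_text)
    findings = []
    for run in doc.get("runs", []):
        for result in run.get("results", []):
            findings.append({
                "rule": result.get("ruleId", "unknown"),
                "level": result.get("level", "warning"),
                "message": result.get("message", {}).get("text", ""),
            })
    return findings
```

A summary like this can be pasted directly into a disclosure report, while the full SARIF file is attached for the vendor's triage tooling.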

6. Export Controls

Basilisk is open-source software publicly available on GitHub. Be aware, however, that security testing tools may be subject to export control regimes (such as national implementations of the Wassenaar Arrangement's intrusion-software controls) in certain jurisdictions. Verify that your use and distribution comply with your country's export control laws.

Legal Inquiries

For legal questions regarding Basilisk, contact us at support@rothackers.com.