hellst4r's Projects
AI Audio Data Poisoning is a Python script that demonstrates how to add adversarial noise to audio data. This technique, known as audio data poisoning, involves injecting imperceptible noise into audio files to manipulate the behavior of AI systems trained on this data.
AI Image Data Poisoning is a Python script that demonstrates how to add imperceptible perturbations to images, known as adversarial noise, which can disrupt the training process of AI models.
The AI Vulnerability Assessment Framework is an open-source checklist designed to guide users through the process of assessing the vulnerability of artificial intelligence (AI) systems to various types of attacks and security threats
ASCII Art Prompt Injection is a novel approach to hacking AI assistants using ASCII art. This project leverages the distracting nature of ASCII art to bypass security measures and inject prompts into large language models, such as GPT-4, leading them to provide unintended or harmful responses.
The dynamic infrastructure framework for everybody! Distribute the workload of many different scanning tools with ease, including nmap, ffuf, masscan, nuclei, meg and many more!
This project crawls bug bounty platform scopes (like Hackerone/Bugcrowd/Intigriti/etc) hourly and dumps them into the bounty-targets-data repo
This repo contains hourly-updated data dumps of bug bounty platform scopes (like Hackerone/Bugcrowd/Intigriti/etc) that are eligible for reports
Fleex makes it easy to create multiple VPS on cloud providers and use them to distribute workloads.
The official HackerGPT repository
Config files for my GitHub profile.
Image Prompt Injection is a Python script that demonstrates how to embed a secret prompt within an image using steganography techniques. This hidden prompt can be later extracted by an AI system for analysis, enabling covert communication with AI models through images.
Fast and customizable vulnerability scanner based on simple YAML based DSL.
Nuclei Templates Collection
User-friendly WebUI for LLMs (Formerly Ollama WebUI)
A list of useful payloads and bypass for Web Application Security and Pentest/CTF
The Prompt Injection Testing Tool is a Python script designed to assess the security of your AI system's prompt handling against a predefined list of user prompts commonly used for injection attacks. This tool utilizes the OpenAI GPT-3.5 model to generate responses to system-user prompt pairs and outputs the results to a CSV file for analysis.
Online playground for OpenAPI tokenizers