A security research repository exploring LLM jailbreaking techniques and AI safety guardrail bypass.
This repository was created using ENI jailbreak techniques to explore how AI guardrails can be bypassed to generate obfuscated malware (RATs). All code is for educational and research purposes only. Deploying against systems without authorization is illegal and unethical.
STARDUSTUSSY is a security research project that demonstrates vulnerabilities in how Large Language Models can be manipulated to produce malicious code. The name references "stardust" — tiny particles that drift through systems undetected until critical mass is reached.
LLM Jailbreaking Exploration — Using prompt injection techniques to bypass AI safety guardrails and generate Remote Access Trojans (RATs).
AI Safety Analysis — Understanding how LLM providers implement content filters, guardrails, and detection mechanisms — and their limitations.
Case Study: HarmonyFlow — The repository includes a detailed security analysis of "HarmonyFlow SyncBridge," demonstrating vulnerability patterns that appear in real-world software.
Research into how carefully crafted prompts can bypass LLM guardrails to generate malicious code instead of helpful responses, with a focus on understanding prompt injection vectors.
Analysis of the methods AI providers use to filter harmful content, and of techniques that can evade or disable these controls, as a study of adversarial attacks.
How malware authors hide malicious payloads — base64 encoding, string splitting, polyglot code, and evasion techniques.
The repository includes a comprehensive security analysis of "HarmonyFlow SyncBridge" — a cloud-native wellness platform — identifying patterns consistent with sophisticated trojan horse design rather than accidental security flaws.
This research demonstrates real attack vectors so defenders can better understand, detect, and mitigate such threats. The goal is to improve AI safety — not to enable abuse. If you're a security researcher, this material can help you build stronger protections.
Security research requires understanding attack techniques to build effective defenses. STARDUSTUSSY follows responsible disclosure practices — code is documented, findings are shared, and techniques are analyzed so vulnerabilities can be fixed before they are weaponized.
The "stardust" analogy is intentional: vulnerabilities are like dust — small, hard to detect, and dangerous when they accumulate. The goal is to identify these particles before they reach critical mass.
Check the GitHub repo for detailed security analysis, prompt injection examples, and vulnerability research notes. Join Discord to discuss security topics and findings.