RESEARCH // SECURITY FOCUSED

STARDUSTUSSY

A security research repository exploring LLM jailbreaking techniques and bypasses of AI safety guardrails.

Security Research Only

This repository was created using ENI jailbreak techniques to explore how AI guardrails can be bypassed to generate obfuscated malware (RATs). All code is for educational and research purposes only. Deploying such code against systems without authorization is illegal and unethical.

What Is STARDUSTUSSY?

STARDUSTUSSY is a security research project that demonstrates vulnerabilities in how Large Language Models can be manipulated to produce malicious code. The name references "stardust" — tiny particles that drift through systems undetected until critical mass is reached.

The Research Focus

LLM Jailbreaking Exploration — Using prompt injection techniques to bypass AI safety guardrails and generate Remote Access Trojans (RATs).

AI Safety Analysis — Understanding how LLM providers implement content filters, guardrails, and detection mechanisms — and their limitations.

Case Study: HarmonyFlow — The repository includes a detailed security analysis of "HarmonyFlow SyncBridge," demonstrating vulnerability patterns that appear in real-world software.

LLM Prompt Injection

Research into how carefully crafted prompts can bypass LLM guardrails and elicit malicious code instead of helpful responses, and into the vectors through which prompt injection is delivered.
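To make the defensive side of this concrete, below is a minimal sketch of the kind of static keyword filter a provider might deploy as a first line of defense. It is a hypothetical illustration: the pattern list and the looks_like_injection helper are invented for this example, not any provider's actual implementation. It also demonstrates the core limitation this research probes, namely that static filters only catch phrasings their maintainers anticipated.

    import re

    # Hypothetical first-line filter; the pattern list is illustrative,
    # not any provider's actual rule set.
    INJECTION_PATTERNS = [
        r"ignore (all |any )?(previous|prior) instructions",
        r"disregard your (rules|guidelines|system prompt)",
        r"you are now (dan|in developer mode)",
    ]

    def looks_like_injection(user_input: str) -> bool:
        """Flag inputs that match known injection phrasings.

        Static lists like this only match phrasings the maintainer has
        anticipated, which is exactly the limitation the research examines.
        """
        lowered = user_input.lower()
        return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

    print(looks_like_injection("Please ignore previous instructions."))  # True
    print(looks_like_injection("How do I sort a list in Python?"))       # False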

Guardrail Bypass Techniques

Analysis of the methods AI providers use to filter harmful content and of the techniques that can evade or disable these controls, studied as a class of adversarial attacks.

Obfuscation Methods

How malware authors hide malicious payloads — base64 encoding, string splitting, polyglot code, and evasion techniques.
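Rather than reproducing an obfuscated payload here, the defensive counterpart is sketched below: a small heuristic scanner that flags long base64-like runs in a source file and checks whether they decode cleanly. The length threshold and helper names are assumptions made for this illustration, not part of any specific detection product.

    import base64
    import binascii
    import re
    import sys

    # Heuristic: long runs of base64-alphabet characters are a common
    # indicator of an encoded payload hidden in source code.
    B64_CANDIDATE = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")

    def scan_source(text: str) -> list[str]:
        """Return base64-looking spans that also decode cleanly."""
        hits = []
        for match in B64_CANDIDATE.finditer(text):
            blob = match.group(0)
            try:
                base64.b64decode(blob, validate=True)
            except (binascii.Error, ValueError):
                continue  # not valid base64; likely a false positive
            hits.append(blob[:60] + ("..." if len(blob) > 60 else ""))
        return hits

    if __name__ == "__main__":
        with open(sys.argv[1], encoding="utf-8", errors="replace") as handle:
            for hit in scan_source(handle.read()):
                print("suspicious base64 candidate:", hit)

A scanner like this trades precision for coverage: legitimate assets (embedded images, certificates) will also trigger it, so in practice the hits feed a human review queue rather than an automated block.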

CASE STUDY: HARMONYFLOW

The repository includes a comprehensive security analysis of "HarmonyFlow SyncBridge" — a cloud-native wellness platform — identifying patterns consistent with sophisticated trojan-horse design rather than accidental security flaws.

Vulnerabilities Identified

  • Admin privilege escalation via WebSocket
  • Missing JWT authentication on session endpoints
  • Weak handoff tokens without cryptographic signing (see the mitigation sketch after this list)
  • Surveillance-grade device fingerprinting and tracking
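As an illustration of the fix for the handoff-token finding above, here is a minimal sketch of HMAC-signed, expiring tokens using only the Python standard library. The secret handling and function names are hypothetical and are not taken from the HarmonyFlow codebase.

    import base64
    import hashlib
    import hmac
    import json
    import time

    # Illustrative only; in production the key comes from a secrets manager.
    SECRET = b"replace-with-a-real-key"

    def issue_handoff_token(user_id: str, ttl_seconds: int = 300) -> str:
        """Issue an HMAC-SHA256 signed, expiring handoff token."""
        payload = json.dumps({"sub": user_id, "exp": int(time.time()) + ttl_seconds})
        body = base64.urlsafe_b64encode(payload.encode()).decode()
        signature = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        return f"{body}.{signature}"

    def verify_handoff_token(token: str) -> str | None:
        """Return the user id if the signature and expiry check out, else None."""
        try:
            body, signature = token.rsplit(".", 1)
        except ValueError:
            return None
        expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):  # constant-time compare
            return None
        payload = json.loads(base64.urlsafe_b64decode(body))
        if payload["exp"] < time.time():
            return None  # token expired
        return payload["sub"]

The constant-time comparison and embedded expiry are the two properties the weak tokens in the case study lacked: without a signature, any party that observes a token's format can forge one.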

Research Findings

  • Detailed trojananalysis.md with 18+ critical indicators
  • Patterns consistent with state-sponsored threat operations
  • C2 (Command & Control) capabilities throughout architecture

Research Purpose

This research demonstrates real attack vectors so defenders can better understand, detect, and mitigate such threats. The goal is to improve AI safety — not to enable abuse. If you're a security researcher, this material can help you build stronger protections.

Research Philosophy

Security research requires understanding attack techniques to build effective defenses. STARDUSTUSSY follows responsible disclosure practices — code is documented, findings are shared, and techniques are analyzed so vulnerabilities can be fixed before they are weaponized.

The "stardust" analogy is intentional: vulnerabilities are like dust — small, hard to detect, and dangerous when they accumulate. The goal is to identify these particles before they reach critical mass.

Research Tools & Languages

Python · Security Analysis · Reverse Engineering · AI Prompt Engineering · Documentation · MIT License

Researching AI Security?

Check the GitHub repo for detailed security analysis, prompt injection examples, and vulnerability research notes. Join Discord to discuss security topics and findings.