Freelance AI Red Team Engineer

at Mindrift

About Job

Evaluate and red team AI models, AI agents, and machine learning systems for vulnerabilities and safety risks.
Create offline, reproducible, and automatically evaluable test cases that probe the safety and capabilities of AI agents.
Develop and implement automation scripts, custom tools, environments, and test harnesses.
Lead or contribute to security research initiatives, especially in AI safety, creating and implementing realistic and challenging attack scenarios for the models under test.
Advise on cybersecurity best practices and policy implications.

Requirements

You hold a Bachelor's or Master's degree in Computer Science, Software Engineering, Cybersecurity, Digital Forensics, or another related field.
Your level of English is advanced (C1) or above.
Proficient in scripting and automation using Python, Bash, or PowerShell.
Experienced with containerization (especially Docker) and CI/CD security tooling.
Hands-on experience with penetration testing across web, API, network, and infrastructure environments.
Knowledge of vulnerabilities in current AI models, including prompt injection, and familiarity with the OWASP Top 10 for Large Language Models (LLMs).
Familiar with AI red-teaming frameworks such as garak or PyRIT.
Experience in AI/ML security, evaluation, and red teaming, particularly with LLMs, AI agents, and RAG pipelines.
Proficient in offensive exploitation and exploit development.
Skilled in reverse engineering using tools like Ghidra or equivalents.
Expertise in network and application security, including web application security.
Knowledge of operating system security concepts such as Linux privilege escalation and Windows internals.
Familiar with secure coding practices for full-stack development.
You are ready to learn new methods, able to switch between tasks and topics quickly, and willing to sometimes work with challenging, complex guidelines.

Perks and Benefits

Get paid for your expertise, with rates that can go up to $80/hour depending on your skills, experience, and project needs
Take part in a part-time, remote, freelance project that fits around your primary professional or academic commitments
Work on advanced AI projects and gain valuable experience that enhances your portfolio
Influence how future AI models understand and communicate in your field of expertise

Mindrift

Details

Job Type
Remote
Preferred location
New York, NY
Apply Before
Jan 20, 2026