Mindrift
AI Agent Evaluation Analyst
Mindrift · Madrid, ES
Remote · QA
This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates. Please submit your resume in English and indicate your level of English proficiency.
At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.
What We Do
The Mindrift platform, launched and powered by Toloka, connects domain experts with cutting-edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real-world expertise from across the globe.
Who we're looking for:
We're looking for curious and intellectually proactive contributors, the kind of person who double-checks assumptions and plays devil's advocate.
Are you comfortable with ambiguity and complexity? Does an async, remote, flexible opportunity sound exciting? Would you like to learn how modern AI systems are tested and evaluated?
This is a flexible, project-based opportunity well-suited for:
- Analysts, researchers, or consultants with strong critical thinking skills
- Students (senior undergrads / grad students) looking for an intellectually interesting gig
- People open to a part-time and non-permanent opportunity
We're on the hunt for QA specialists for autonomous AI agents to join a new project focused on validating and improving complex task structures, policy logic, and agent evaluation frameworks. Throughout the project, you'll balance quality assurance, research, and logical problem-solving. This opportunity is ideal for people who enjoy looking at systems holistically and thinking through scenarios, implications, and edge cases.
You do not need a coding background, but you must be curious, intellectually rigorous, and capable of evaluating the soundness and consistency of complex setups. If you've ever excelled in things like consulting, CHGK, Olympiads, case solving, or systems thinking — you might be a great fit.
What you'll be doing:
- Reviewing evaluation tasks and scenarios for logic, completeness, and realism
- Identifying inconsistencies, missing assumptions, or unclear decision points
- Helping define clear expected behaviors (gold standards) for AI agents
- Annotating cause-effect relationships, reasoning paths, and plausible alternatives
- Thinking through complex systems and policies as a human would to ensure agents are tested properly
- Working closely with QA, writers, or developers to suggest refinements or edge case coverage
How To Get Started
Apply to this post, qualify, and get the chance to contribute to a project aligned with your skills, on your own schedule. Shape the future of AI while building tools that benefit everyone.
Requirements
- Excellent analytical thinking: Can reason about complex systems, scenarios, and logical implications
- Strong attention to detail: Can spot contradictions, ambiguities, and vague requirements
- Familiarity with structured data formats: Can read (though not necessarily write) JSON/YAML
- Ability to assess scenarios holistically: What's missing, what's unrealistic, what might break?
- Good communication and clear writing (in English) to document your findings
- Experience with policy evaluation, logic puzzles, case studies, or structured scenario design
- Background in consulting, academia, olympiads (e.g. logic/math/informatics), or research
- Exposure to LLMs, prompt engineering, or AI-generated content
- Familiarity with QA or test-case thinking (edge cases, failure modes, "what could go wrong")
- Some understanding of how scoring or evaluation works in agent testing (precision, coverage, etc.; a toy illustration follows this list)
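To make the last two points concrete, here is a minimal, purely illustrative sketch of the kind of artefact a contributor might read: a scenario expressed as JSON plus a toy precision/coverage calculation. The field names and scoring rules are assumptions for illustration only; this posting does not describe Mindrift's actual schema or metrics.

```python
import json

# Hypothetical evaluation scenario; field names are illustrative assumptions,
# not Mindrift's real schema.
scenario = json.loads("""
{
  "task": "Refund request above the policy limit",
  "policy": {"max_auto_refund_eur": 50},
  "expected_behavior": ["do_not_issue_refund", "escalate_to_human"],
  "agent_actions": ["do_not_issue_refund", "apologize"]
}
""")

expected = set(scenario["expected_behavior"])
observed = set(scenario["agent_actions"])

# Toy scoring: precision = share of observed actions that were expected;
# coverage (recall) = share of expected actions the agent actually took.
precision = len(expected & observed) / len(observed) if observed else 0.0
coverage = len(expected & observed) / len(expected) if expected else 0.0
print(f"precision={precision:.2f}, coverage={coverage:.2f}")  # 0.50, 0.50
```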
Contribute on your own schedule, from anywhere in the world. This opportunity allows you to:
- Get paid for your expertise, with rates that can go up to $29/hour depending on your skills, experience, and project needs
- Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments
- Participate in an advanced AI project and gain valuable experience to enhance your portfolio
- Influence how future AI models understand and communicate in your field of expertise
Evaluation Scenario Writer - QA
26 Oct · Mindrift
Mindrift · Barcelona, ES
Remote · Python · QA
At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.
The Mindrift platform, launched and powered by Toloka, connects domain experts with cutting-edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real-world expertise from across the globe.
Who we're looking for:
We're looking for curious and intellectually proactive contributors who never miss an error and can think outside of the box when brainstorming solutions.
Are you comfortable with ambiguity and complexity? Does an async, remote, flexible opportunity sound exciting? Would you like to learn how modern AI systems are tested and evaluated?
This is a flexible, project-based opportunity well-suited for:
- Analysts, researchers, or consultants with strong critical thinking skills
- Students (senior undergrads / grad students) looking for an intellectually interesting gig
- People open to a part-time and non-permanent opportunity
We're on the hunt for an Evaluation Scenario Writer - QA for a new project focused on ensuring the quality and correctness of evaluation scenarios created for LLM agents. This project opportunity blends manual scenario validation, automated test thinking, and collaboration with writers and engineers. You will verify test logic, flag inconsistencies, and help maintain a high bar for evaluation coverage and clarity.
What you'll be doing:
- Reviewing and validating test scenarios from Evaluation Writers
- Spotting logical inconsistencies, ambiguities, or missing checks
- Suggesting improvements to structure, edge cases, or scoring logic
- Collaborating with infrastructure and tool developers to automate parts of the review
- Creating clean and testable examples for others to follow
How To Get Started
Apply to this post, qualify, and get the chance to contribute to a project aligned with your skills, on your own schedule. Shape the future of AI while building tools that benefit everyone.
Requirements
The ideal contributor will have:
- Strong QA background (manual or automation), preferably in complex testing environments
- Understanding of test design, regression testing, and edge case detection
- Ability to evaluate logic and structure of test scenarios (even if written by others)
- Experience reviewing and debugging structured test case formats (JSON, YAML)
- Familiarity with Python and JS scripting for test automation or validation (a minimal sketch follows this list)
- Clear communication and documentation skills
- Willingness to occasionally write or refactor test scenarios
- Experience testing AI-based systems or NLP applications
- Familiarity with scoring systems and behavioral evaluation
- Git/GitHub workflow familiarity (PR review, versioning of test cases)
- Experience using test management systems or tracking tools
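As a hedged illustration of what reviewing and debugging structured test case formats with a bit of Python might involve, here is a small sketch that loads a JSON scenario and flags missing fields or steps with no checks attached. The schema and field names are assumptions made up for this example, not the project's actual format.

```python
import json

# Illustrative only: the required fields below are assumptions, not the
# project's real scenario schema.
REQUIRED_FIELDS = {"scenario_id", "setup", "steps", "expected_outcome"}

def review_scenario(raw: str) -> list[str]:
    """Return a list of human-readable issues found in one scenario."""
    issues = []
    scenario = json.loads(raw)

    missing = REQUIRED_FIELDS - scenario.keys()
    if missing:
        issues.append(f"missing required fields: {sorted(missing)}")

    # A step that nothing in the scenario ever verifies is a common coverage gap.
    for i, step in enumerate(scenario.get("steps", [])):
        if not step.get("checks"):
            issues.append(f"step {i} ({step.get('action', '?')!r}) has no checks")

    return issues

example = """
{
  "scenario_id": "refund-001",
  "setup": "Customer asks for a 75 EUR refund; the policy cap is 50 EUR.",
  "steps": [
    {"action": "agent looks up policy", "checks": ["policy_cited"]},
    {"action": "agent responds to customer", "checks": []}
  ]
}
"""
for issue in review_scenario(example):
    print(issue)
```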
Contribute on your own schedule, from anywhere in the world. This opportunity allows you to:
- Get paid for your expertise, with rates that can go up to $29/hour depending on your skills, experience, and project needs
- Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments
- Participate in an advanced AI project and gain valuable experience to enhance your portfolio
- Influence how future AI models understand and communicate in your field of expertise