Machine Learning Engineer (Product)
Multiverse Computing · Madrid, ES
Python · Docker · Cloud Computing · Git · Machine Learning
As a Machine Learning Engineer, you will:
* Build data and model pipelines end-to-end: create, source, augment, and validate datasets; stand up training/fine-tuning/evaluation flows; and ship models that meet product and customer requirements (see the illustrative sketch after this list).
* Design rigorous evaluation frameworks to verify task competence and alignment; implement statistical testing, reliability checks, and continuous evaluation.
* Scale training and inference: make effective use of distributed compute, optimize throughput/latency, and identify opportunities for algorithmic or systems-level speedups.
* Improve models post-training: apply SFT and preference-based or reinforcement learning methods to enhance helpfulness, safety, and reasoning.
* Optimize and specialize models: apply compression techniques to meet performance and footprint targets.
* Collaborate across research and engineering: partner with ML engineers, researchers, and software engineers on data curation, evaluation design, training runs, model serving, and observability.
* Contribute to our shared codebase: write clean, well-tested Python; document decisions and artifacts; uphold engineering standards.
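The posting does not describe an internal stack, so here is a minimal, self-contained PyTorch sketch of the kind of training/fine-tuning/evaluation flow listed above. Everything in it (the toy model, the synthetic data, the hyperparameters) is an assumption for illustration only, not Multiverse Computing's actual pipeline.

```python
# Minimal sketch of a train-then-evaluate flow in PyTorch.
# Toy model and synthetic data; all names and hyperparameters are illustrative assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Synthetic "dataset": 10-dim features, binary labels derived from a simple rule.
X = torch.randn(1024, 10)
y = (X.sum(dim=1) > 0).long()
train_ds = TensorDataset(X[:768], y[:768])
eval_ds = TensorDataset(X[768:], y[768:])

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training (stand-in for a fine-tuning run).
model.train()
for epoch in range(3):
    for xb, yb in DataLoader(train_ds, batch_size=64, shuffle=True):
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()

# Held-out evaluation pass: accuracy as a stand-in for a task-grounded metric.
model.eval()
correct, total = 0, 0
with torch.no_grad():
    for xb, yb in DataLoader(eval_ds, batch_size=256):
        correct += (model(xb).argmax(dim=1) == yb).sum().item()
        total += yb.numel()
print(f"held-out accuracy: {correct / total:.3f}")
```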
Required Qualifications
* Bachelor's degree in Computer Science, Math, Physics, Data Science, Operations Research, or related field.
* Strong programming skills in Python and the modern ML stack (e.g., PyTorch), plus fluency with data tooling (NumPy/Pandas) and basic software practices (git, unit tests, CI).
* Solid grounding in language modelling concepts around training, evaluation, model architecture, and data.
* Comfort working with datasets at scale: collection, cleaning, filtering, labelling/annotation strategies, and quality controls.
* Experience using GPU resources and familiarity with containerized workflows (e.g., Docker) and job schedulers or cloud orchestration.
* Ability to read research papers, prototype ideas quickly, and turn them into reproducible, production-ready code.
* Clear, pragmatic communication and a collaborative mindset.
Preferred Qualifications
* PhD in Computer Science, Math, Physics, Data Science, Operations Research, or related field, or equivalent industry experience in machine learning, data science, or related roles, with demonstrated experience with NLP or LLMs.
* Experience building foundational LLMs from the ground up.
Preferred Qualifications By Focus Area
* Model Evaluation: Track record building task-grounded evals for LLMs, implementing or extending evaluation harnesses, and generating synthetic data for both evaluation and training; deep understanding of LLM quirks and their ties to architecture and training dynamics.
* Distributed Training: Hands-on experience debugging multi-node training, profiling/optimizing throughput and memory, and extending training frameworks to new architectures or optimizers; comfort diagnosing flaky cluster issues.
* Model Compression: Strong mathematical background and experience with pruning, quantization, and NAS; ability to formulate and solve constrained optimization problems for accuracy/latency/footprint trade-offs and to integrate results into production.
* Post-Training: Theoretical and practical familiarity with post-training and alignment techniques; experience with SFT and preference/RL-based methods (e.g., DPO/GRPO, RLHF); a minimal DPO loss sketch follows this list.
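As a rough illustration of the preference-based post-training methods named in the Post-Training focus area, here is a minimal sketch of the DPO (Direct Preference Optimization) loss computed from per-sequence log-probabilities. The tensor names, batch size, and beta value are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of the DPO loss from per-sequence log-probabilities.
# Shapes, names, and beta are illustrative assumptions.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """-log sigmoid(beta * ((policy - reference) margin, chosen minus rejected))."""
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    logits = beta * (chosen_margin - rejected_margin)
    return -F.logsigmoid(logits).mean()

# Toy usage with random log-probabilities for a batch of 4 preference pairs.
lp = lambda: torch.randn(4)
print(dpo_loss(lp(), lp(), lp(), lp()))
```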
Python, PyTorch, LLM