Python Docker Cloud Computing Kubernetes AWS Bash Terraform Office

Are you ready to build high-performance data pipelines that turn complex science into real impact for patients? In this role, you will transform raw bioinformatics and scientific data into trusted, reusable assets that drive discovery and decision-making across our research programs.

You will join a team that fuses data engineering with cutting-edge science, using HPC and AWS to deliver reproducible workflows at scale. From first ingestion to consumption by scientists and AI models, you will set the standard for reliability, speed, and governance across our data foundation.

Do you thrive where learning is continuous and bold ideas are encouraged? You will have the freedom to experiment, the support to grow, and the opportunity to see your work influence breakthroughs as they take shape.

Accountabilities:
- Pipeline Engineering: Design, implement, and operate fit-for-purpose data pipelines for bioinformatics and scientific data, from ingestion to consumption.
- Workflow Orchestration: Build reproducible pipelines using frameworks such as Nextflow (preferred) or Snakemake; integrate with schedulers and HPC/cloud resources.
- Data Platforms: Develop data models, warehousing layers, and metadata/lineage; ensure data quality, reliability, and governance.
- Scalability and Performance: Optimize pipelines for throughput and cost across Unix/Linux HPC and cloud environments (AWS preferred); implement observability and reliability practices.
- Collaboration: Translate scientific and business requirements into technical designs; partner with CPSS stakeholders, R&D IT, and DS&AI to co-create solutions.
- Engineering Excellence: Establish and maintain version control, CI/CD, automated testing, code review, and design patterns to ensure maintainability and compliance.
- Enablement: Produce documentation and reusable components; mentor peers and promote best practices in data engineering and scientific computing.

Desirable Skills/Experience:
- Strong programming in Python and Bash for workflow development and scientific computing.
- Experience with containerization and packaging (Docker, Singularity, Conda) for reproducible pipelines.
- Familiarity with data warehousing and analytics platforms (e.g., Redshift, Snowflake, Databricks) and data catalog/lineage tools.
- Experience with observability and reliability tooling (Prometheus/Grafana, ELK, tracing) in HPC and cloud contexts.
- Knowledge of infrastructure as code and cloud orchestration (Terraform, CloudFormation, Kubernetes).
- Understanding of FAIR data principles and domain-specific bioinformatics formats and standards.
- Track record of mentoring engineers and enabling cross-functional teams with reusable components and documentation.
- Experience optimizing performance and cost on AWS, including spot strategies, autoscaling, and storage tiers.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

Why AstraZeneca:
Your engineering craft will fuel science at the crossroads of biology, data, and technology. You will collaborate with researchers, data scientists, and technologists to tackle complex diseases, using modern platforms and inclusive ways of working to turn uncertainty into insight. We value kindness alongside ambition, nurture resilience and curiosity, and pair the resources of a global leader with the agility to move at pace: from hands-on experimentation to shared learning and tangible impact for patients.
