Senior Data Platform Engineer
Boston Scientific · Madrid, ES
Python TSQL Azure Cloud Computing REST AWS DevOps Terraform Spark Office
Additional Locations: France-Île-de-France; Germany-Düsseldorf; Italy-Milan; Netherlands-Kerkrade; Spain-Madrid; United Kingdom-Hemel Hempstead
Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance
At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse, high-performing employees, tackling some of the most important health-industry challenges. With access to the latest tools, information, and training, we'll help you advance your skills and career. Here, you'll be supported in progressing, whatever your ambitions.
A Day in the Life of the EMEA Data Platform Engineer:
You'll start your day by scanning platform health dashboards (pipeline success rates, latency, cost and capacity signals, SLO attainment, and data quality alerts), then act on the highest-value improvements. You'll partner with data product managers and owners, data engineers, AI engineers, data scientists, and application teams to turn business needs into robust, reusable platform capabilities.
When EMEA-specific constraints (e.g., GDPR) surface, you'll collaborate with Security, Legal, and Governance to design compliant patterns without slowing delivery. Working closely with the Global team, you'll ship infrastructure-as-code, automate CI/CD for data workloads, and harden security so teams can build safely by default.
Throughout the week, you'll evolve ingestion and processing frameworks (batch, micro-batch, streaming), improve observability (logging, tracing, lineage), and coach teams on using the platform's self-service tooling and catalog. You'll run blameless incident reviews, remove toil, and continuously raise reliability, performance, and cost efficiency.
Each month, you'll contribute to global architecture forums to converge standards, reuse patterns, and ensure EMEA requirements are reflected in the enterprise roadmap.
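The SLO attainment mentioned above could be computed along these lines; this is a minimal illustrative sketch only, and the run-record shape, pipeline name, and latency target are assumptions, not the actual platform tooling:

```python
# Illustrative sketch: pipeline SLO attainment from run records.
# The PipelineRun shape and the 900-second latency target are hypothetical.
from dataclasses import dataclass

@dataclass
class PipelineRun:
    pipeline: str
    succeeded: bool
    latency_s: float

def slo_attainment(runs: list[PipelineRun], latency_slo_s: float = 900.0) -> float:
    """Fraction of runs that both succeeded and finished within the latency SLO."""
    if not runs:
        return 1.0  # no runs means no violations
    ok = sum(1 for r in runs if r.succeeded and r.latency_s <= latency_slo_s)
    return ok / len(runs)

runs = [
    PipelineRun("sales_ingest", True, 420.0),
    PipelineRun("sales_ingest", True, 1200.0),  # breached the latency SLO
    PipelineRun("sales_ingest", False, 300.0),  # failed run
    PipelineRun("sales_ingest", True, 610.0),
]
print(slo_attainment(runs))  # 2 of 4 runs meet the SLO -> 0.5
```

In practice these signals would come from the platform's observability stack rather than in-memory records, but the attainment calculation is the same.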
The role:
Reporting to the Data Engineering & Platform Manager, the Data Platform Engineer builds and operates the cloud data platform that powers analytics, AI, and operational data products across the region. You will design secure, scalable, and observable platform services (compute, storage, processing, orchestration, quality, lineage, access), deliver them as well-documented, reusable capabilities, and support teams in adopting them at scale. Success requires deep engineering craft, platform thinking, and close collaboration across product, architecture, and governance.
Key Responsibilities:
- Platform engineering & operations: Design, build, and operate cloud data platform components (lake/lakehouse, warehouses, streaming, orchestration, catalogs) with strong SLIs/SLOs, automated recovery, and capacity planning.
- Pipelines & frameworks: Provide reusable templates and libraries for ELT/ETL, CDC, and streaming; standardize patterns for schema evolution, testing, and deployments across domains.
- Security & compliance by design: Implement least-privilege IAM, key management, encryption in transit and at rest, network segmentation, and data classification; ensure GDPR and data-residency adherence with auditable controls.
- Observability & quality: Instrument end-to-end telemetry (logs/metrics/traces), lineage, and data quality checks; build dashboards and alerts to prevent regressions and reduce MTTD/MTTR.
- Automation & IaC: Deliver platform resources with infrastructure-as-code; enable Git-based workflows, CI/CD for data workloads, and policy-as-code guardrails.
- Cost efficiency: Monitor and optimize spend across compute and storage; set budgets and alerts; recommend right-sizing, workload scheduling, and caching/format strategies.
- Collaboration & enablement: Document platform capabilities, publish examples and runbooks, and provide office hours/community support to drive safe self-service adoption.
- Incident & change management: Lead RCAs, prioritize preventative engineering and change controls.
- Standards & reusability: Contribute to reference architectures and shared components, champion interoperability (contracts, semantics) with architecture and governance peers.
- Global alignment: Partner with global platform, product, and security teams to align roadmaps, share learnings, and represent EMEA needs.
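The data quality checks named in the responsibilities above might, in their simplest form, look like the following; this is a hedged sketch only, and the column names, row shape, and 5% null-rate threshold are invented for illustration:

```python
# Illustrative sketch: a reusable data-quality check of the kind the
# "Observability & quality" bullet describes. Columns and thresholds
# are hypothetical.
def null_rate(rows: list[dict], column: str) -> float:
    """Share of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    nulls = sum(1 for row in rows if row.get(column) is None)
    return nulls / len(rows)

def check_quality(rows: list[dict], column: str, max_null_rate: float = 0.05) -> bool:
    """Return True when the null rate stays within the agreed threshold."""
    return null_rate(rows, column) <= max_null_rate

batch = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 2, "amount": None},
    {"order_id": 3, "amount": 7.5},
    {"order_id": 4, "amount": 3.2},
]
print(check_quality(batch, "amount"))  # 1 null in 4 rows = 25% -> False
```

A platform team would typically package checks like these as shared libraries (or lean on tools such as dbt tests) so every domain inherits the same guardrails by default.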
What you will need:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5-8+ years in data engineering/platform engineering with production experience in cloud data stacks, with a strong focus on the Snowflake Data Cloud and AWS as primary platforms. Experience with Informatica, dbt, and Azure is highly valued.
- Proficiency in Python and SQL, strong grasp of distributed data processing (e.g., Spark on AWS EMR) and storage formats (Parquet/Delta/Iceberg).
- Hands-on with Infrastructure as Code (e.g., Terraform), CI/CD (e.g., GitHub Actions/Azure DevOps), containers/orchestrators (e.g., Docker/Kubernetes), and job schedulers (e.g., Airflow/ADF).
- Solid security engineering across AWS and Snowflake: IAM, secrets, encryption, and policy-as-code.
- Observability mindset: metrics, tracing, logging, lineage, and data quality frameworks, ideally leveraging platform-native tools (e.g., AWS CloudWatch, Snowflake monitoring, Informatica data quality, dbt tests).
- Working knowledge of governance and privacy (GDPR), retention, and access models, especially as implemented in Snowflake, AWS, and Azure.
- Excellent communication and collaboration skills across product, engineering, and non-technical stakeholders. Experience in medtech/pharma/consulting is a plus.
What you offer:
- An ownership mentality for reliability, performance, and cost, and the curiosity to automate everything that repeats.
- Pragmatic engineering instincts: you ship secure, simple, and well-tested solutions that scale.
- A passion for enabling others through great developer experience, documentation, and coaching.
- Continuous learning in Data & AI platform capabilities (semantic layers, streaming, GenAI-assisted tooling) and how to apply them responsibly.
What we offer:
- A chance to shape the EMEA data platform foundations and set enterprise-wide standards.
- A collaborative, global network and supportive coaching culture focused on your growth.
- Opportunities to lead high-impact initiatives that accelerate analytics and AI adoption across diverse markets.
EMEA Data Scientist
Boston Scientific · Madrid, ES
Python TSQL Azure Docker Cloud Computing Kubernetes Git AWS Spark Machine Learning Power BI Tableau
Additional Locations: France-Île-de-France; Germany-Düsseldorf; Italy-Milan; Netherlands-Kerkrade; United Kingdom-Hemel Hempstead
Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance
At Boston Scientific, we'll give you the opportunity to harness all that's within you by working in teams of diverse, high-performing employees, tackling some of the most important health-industry challenges. With access to the latest tools, information, and training, we'll help you advance your skills and career. Here, you'll be supported in progressing, whatever your ambitions.
A Day in the Life of the EMEA Data Scientist:
You will spend most of your time (60%) on MLOps and model deployment, and the rest (40%) on data science and model development. You will work across the entire ML lifecycle: from building and maintaining pipelines to deploying and monitoring models, exploring data, and developing machine learning solutions. As a well-rounded ML practitioner, you will balance engineering and data science, and as you collaborate with your peers, you will learn and continually improve your work.
You will work with businesses to understand their data needs and translate them into data structures and model requirements. You will define standards for monitoring data model integrity during projects, lead data modeling and testing, and conduct quality assurance on the analytics tools and methods used. You will apply expertise in machine learning, data mining, and information retrieval to design, prototype, and develop next-generation analytics, and you will develop best practices for analytics, including models, standards, and tools. Finally, you will define metrics to assess the impact of analytics on the business.
The role:
This role, reporting directly to the EMEA AI Development & BI Manager, is a great opportunity for technical talents passionate about driving meaningful innovation for our business.
Key Responsibilities:
- Data Engineering:
- Build and maintain data pipelines for structured and unstructured data.
- Work with SQL/NoSQL databases and data warehouses (e.g., Snowflake, BigQuery, Redshift).
- Ensure data quality, integrity, and availability.
- Data Science:
- Explore, clean, and analyze datasets to derive insights.
- Train, evaluate, and fine-tune machine learning models.
- Assist in developing proof-of-concepts and production-ready ML solutions.
- MLOps:
- Support deployment of ML models into production (batch and real-time).
- Implement model monitoring, retraining, and CI/CD workflows.
- Work with cloud platforms (AWS, GCP, or Azure) and containerization tools (Docker, Kubernetes).
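The model-monitoring and retraining workflow in the MLOps bullets above can be sketched minimally. This is an illustrative assumption, not the team's actual tooling: the drift metric (mean shift measured in baseline standard deviations), the threshold of 2, and the feature values are all invented for the example.

```python
# Illustrative sketch: a minimal feature-drift check for a deployed model,
# comparing serving data against a training-time baseline.
import statistics

def mean_shift(baseline: list[float], current: list[float]) -> float:
    """Absolute shift of the current mean, in baseline standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        return 0.0
    return abs(statistics.mean(current) - mu) / sigma

def needs_retraining(baseline: list[float], current: list[float],
                     threshold: float = 2.0) -> bool:
    """Flag the model for retraining when drift exceeds the threshold."""
    return mean_shift(baseline, current) > threshold

baseline = [10.0, 11.0, 9.5, 10.5, 10.2]   # feature values seen at training time
drifted = [14.0, 15.0, 14.5, 13.8, 14.2]   # feature values seen in production
print(needs_retraining(baseline, drifted))  # True
```

Production setups usually delegate this to lifecycle tools such as MLflow or dedicated monitoring services, but the underlying comparison of serving data against a training baseline is the same.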
What you will need:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 4-5 years of professional experience in data science, data engineering, or MLOps.
- Strong programming skills in Python (pandas, scikit-learn, PySpark preferred).
- Experience with SQL and familiarity with database design.
- Exposure to cloud platforms (AWS, GCP, or Azure).
- Knowledge of ML lifecycle tools (MLflow, Kubeflow, Airflow, Prefect, etc.).
- Familiarity with Git, CI/CD pipelines, and containerization (Docker, Kubernetes).
- Good problem-solving skills and eagerness to learn new technologies.
Nice to Have:
- Hands-on experience with deep learning frameworks (TensorFlow, PyTorch).
- Experience with data visualization tools (Tableau, Power BI, or Looker).
- Knowledge of distributed computing frameworks (Spark, Dask).
- Prior internship or project work in MLOps.
What we offer:
- A compelling career opportunity to lead impactful, innovative initiatives within the EMEA region.
- Opportunity to work on end-to-end data projects.
- Mentorship from senior data scientists and engineers.
- A collaborative, learning-focused environment.
- A coaching culture focused on your success and development.