DevOps Engineer
Accenture · Madrid, ES
Nov 21
Azure Cloud Computing Kubernetes Ansible DevOps
Client & Project: We are seeking new talent to join the Software Engineering team, where you will have the opportunity to collaborate on the DATA I ARiA Platform project. The client is a company engaged in the exploration and production of oil and natural gas, as well as the refining and marketing of petroleum products.
Responsibilities: As a DevOps Engineer, you will be responsible for building and setting up new development tools and infrastructure, drawing on knowledge of continuous integration, delivery, and deployment, cloud technologies, container orchestration, and security. You will build and test end-to-end CI/CD pipelines, ensuring that systems are safe against security threats. You are expected to be a subject matter expert, to collaborate with and manage the team so it performs effectively. You will be responsible for team decisions, engage with multiple teams, and contribute to key decisions while providing solutions to problems for your immediate team and across multiple teams. Advanced proficiency in Ansible on Microsoft Azure is required. Advanced proficiency in Microsoft Azure Data Factory is recommended. Intermediate proficiency in Configuration & Release Management and Cloud Automation DevOps, and advanced proficiency in Microsoft Azure Kubernetes Service (AKS), are suggested.
Design and implement robust CI/CD pipelines to streamline development processes (see the sketch after this list).
Collaborate with cross-functional teams to enhance system performance and security.
Conduct regular assessments of infrastructure and tools to identify areas for improvement.
Mentor team members on best practices in DevOps and cloud technologies.
Stay updated with industry trends and emerging technologies to drive innovation.
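As a loose illustration of the "build and test end-to-end CI/CD pipelines" responsibility above (not part of the original listing), here is a minimal post-deployment smoke test that a pipeline stage might run against a freshly deployed service; the endpoint URL, paths, and retry settings are hypothetical.

```python
# Hypothetical post-deployment smoke test a CI/CD pipeline stage could run
# after deploying a service (e.g. to AKS). URL, path and thresholds are
# placeholders for illustration only.
import sys
import time

import requests

BASE_URL = "https://my-service.example.com"   # assumed service endpoint
HEALTH_PATH = "/healthz"                      # assumed health endpoint
RETRIES = 5
WAIT_SECONDS = 10


def wait_for_healthy() -> bool:
    """Poll the health endpoint until it answers 200 OK or retries run out."""
    for attempt in range(1, RETRIES + 1):
        try:
            resp = requests.get(BASE_URL + HEALTH_PATH, timeout=5)
            if resp.status_code == 200:
                print(f"attempt {attempt}: service healthy")
                return True
            print(f"attempt {attempt}: got HTTP {resp.status_code}, retrying")
        except requests.RequestException as exc:
            print(f"attempt {attempt}: request failed ({exc}), retrying")
        time.sleep(WAIT_SECONDS)
    return False


if __name__ == "__main__":
    # A non-zero exit code fails the pipeline stage and blocks promotion.
    sys.exit(0 if wait_for_healthy() else 1)
```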
About Accenture
Accenture is a leading global professional services company that helps the world's leading businesses, governments and other organizations build their digital core, optimize their operations, accelerate revenue growth and enhance citizen services, creating tangible value at speed and scale. We are a talent- and innovation-led company with approximately 791,000 people serving clients in more than 120 countries. Technology is at the core of change today, and we are one of the world's leaders in helping drive that change, with strong ecosystem relationships. We combine our strength in technology and leadership in cloud, data and AI with unmatched industry experience, functional expertise and global delivery capability. Our broad range of services, solutions and assets across Strategy & Consulting, Technology, Operations, Industry X and Song, together with our culture of shared success and commitment to creating 360° value, enable us to help our clients reinvent and build trusted, lasting relationships.
DevOps Cloud Engineer
Serem · Madrid, ES
Nov 21
Agile Azure Jenkins Docker Cloud Computing Kubernetes Ansible AWS DevOps Terraform
We are looking for a Cloud & DevOps Engineer with experience in infrastructure-as-code and cloud deployment environments who wants to join a top-level technical team, taking part in innovative projects on AWS and Azure. You will have the opportunity to design, automate, and operate cloud solutions that have a direct impact on our clients' businesses.
Responsibilities
Design and build cloud infrastructure (AWS and/or Azure) with Terraform.
Implement and configure CI/CD pipelines with tools such as Jenkins or GitHub Actions.
Automate container deployments, supporting the development team.
Administer and monitor Kubernetes clusters (EKS, AKS); see the sketch after this list.
Take part in designing multi-cluster architectures and Landing Zone deployments.
Apply cloud security measures: identity management, networking, encryption, key management, firewalls, etc.
Help define and implement DevSecOps processes and SDLC best practices.
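A minimal sketch of the cluster-monitoring duty above, using the official Kubernetes Python client; the only assumption is a locally configured kubeconfig for the EKS/AKS cluster, and the notion of "unhealthy" used here is purely illustrative.

```python
# Minimal cluster health report using the `kubernetes` Python client.
# Assumes a kubeconfig is already set up; the health criteria are illustrative.
from kubernetes import client, config


def report_cluster_health() -> None:
    config.load_kube_config()          # reads ~/.kube/config
    v1 = client.CoreV1Api()

    # Nodes: flag anything whose Ready condition is not "True".
    for node in v1.list_node().items:
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"),
            "Unknown",
        )
        if ready != "True":
            print(f"node {node.metadata.name} not Ready (status={ready})")

    # Pods: flag anything not Running or Succeeded.
    for pod in v1.list_pod_for_all_namespaces().items:
        if pod.status.phase not in ("Running", "Succeeded"):
            print(
                f"pod {pod.metadata.namespace}/{pod.metadata.name} "
                f"in phase {pod.status.phase}"
            )


if __name__ == "__main__":
    report_cluster_health()
```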
Requirements
At least 4 years of experience in DevOps or similar roles.
Advanced knowledge of Terraform, Docker, Kubernetes, AWS or Azure, GitHub/Git and Jenkins.
Ability to work in collaborative environments using Agile methodologies.
Technical degree in Computer Engineering, Telecommunications or similar.
Intermediate-to-high level of English (professional fluency is a plus).
Candidates based in Madrid or Barcelona are preferred.
Experience in regulated sectors such as banking or insurance is a plus.
What we especially value
AWS or Azure certifications.
Knowledge of tools such as Ansible, Helm or Vault.
Proactivity, analytical skills and a passion for automation and DevOps best practices.
We foster a multicultural and inclusive working environment; we do not discriminate on the basis of age, gender or beliefs, and we offer equal opportunities to all staff.
We carry out our activities under the principles of environmental care, sustainability and corporate social responsibility, collaborating in reforestation and sustainability projects.
We support the 10 principles of the UN Global Compact and the 17 Sustainable Development Goals on human rights, labour conditions, the environment and anti-corruption.
Our recruitment processes follow high quality standards, with hiring decisions based on the candidate's experience and skills.
We are a leading Spanish technology services and talent attraction company, present in the market since 1995, with more than 600 employees working on national and international projects in the IT sector.
DevOps Automation Specialist
Krell Consulting & Training · Madrid, ES
Nov 21
Remote work Linux Ansible Oracle DevOps PostgreSQL
Description
🚀 Job Offer – DevOps Automation Specialist
📍 Location: Spain. Hybrid model (60% teleworking) in Madrid/Málaga | 100% remote from other cities in Spain
🧩 About Krell-consulting
Krell-consulting is a leading consultancy in the technology sector, committed to excellence, innovation and professional growth. We help our clients optimise their operations through cutting-edge technology solutions. We value talent, collaboration and continuous improvement, and we look for professionals with a growth mindset and a passion for automation.
🔥 Job description
Krell-consulting is looking for a professional with at least 2 years of experience in automation to join our team as a DevOps Automation Specialist. If you are passionate about automation, infrastructure and continuous improvement, this opportunity is for you.
In this role you will be a key player in creating, maintaining and evolving automated solutions for provisioning, configuration and incident resolution on critical infrastructure. You will work closely with development and operations teams to guarantee stable, scalable, high-performance systems.
🛡️ Responsibilities
Identify automation opportunities and efficiency improvements across systems.
Design, implement and maintain automation scripts for infrastructure provisioning and configuration.
Develop and maintain Ansible playbooks and roles; see the sketch after this list.
Manage and optimise Windows and Linux systems and databases (MSSQL, PostgreSQL, Oracle) through automation.
Ensure smooth integration of systems and applications.
Work with technical teams to integrate automation into existing processes.
Monitor and optimise automated solutions to guarantee reliability and scalability.
Take an active part in incident resolution and improve system stability.
Document procedures, configurations and technical results.
Stay up to date with automation and DevOps trends and technologies.
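A hedged sketch of how one of the automation scripts mentioned above might drive an existing Ansible playbook, first in check mode and then for real; the inventory and playbook paths are placeholders, not files referenced in the listing.

```python
# Wrapper that runs an existing Ansible playbook: dry run first (--check),
# then apply. Inventory and playbook names are placeholders.
import subprocess
import sys

INVENTORY = "inventory/hosts.ini"        # assumed inventory path
PLAYBOOK = "playbooks/provision_db.yml"  # assumed playbook path


def run_playbook(check_only: bool) -> int:
    cmd = ["ansible-playbook", "-i", INVENTORY, PLAYBOOK, "--diff"]
    if check_only:
        cmd.append("--check")            # dry run, report what would change
    return subprocess.run(cmd).returncode


if __name__ == "__main__":
    # Dry run first; only apply when the check pass succeeds.
    if run_playbook(check_only=True) != 0:
        sys.exit("check mode failed, aborting the real run")
    sys.exit(run_playbook(check_only=False))
```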
Senior Data Engineer
UST · Madrid, ES
Nov 21
Remote work Oracle Office
Role description
We are looking for the very Top Talent... and we would be delighted if you were to join our team!
In more detail, UST is a multinational company based in North America, certified as a Top Employer and a Great Place to Work, with over 35,000 employees all over the world and a presence in more than 35 countries. We are leaders in digital technology services, and we provide large-scale technology solutions to big companies.
What do we look for?
We are looking for a Senior Data Engineer to contribute to a project with one of our global customers in wealth management. We are building a very strong team of high-calibre professionals who will work closely with the client's IT and business stakeholders to deliver impactful programs.
You will have the opportunity to work on challenging projects with talented colleagues in an international environment while developing your domain knowledge working for a top company.
Work location: hybrid model in Madrid (3 days at the office, 2 days WFH); feel free to apply if you are also open to relocation!
Main tasks and accountabilities will be:
Being responsible for 20+ applications.
Requirements engineering when a new request or new functionality is required.
Process improvement within the applications under your responsibility.
Consulting when an application is migrated to another platform.
What do we expect from you?
Very good Banking knowledge is a must
Good knowledge of Oracle SQL/PLSQL.
Good knowledge of Informatica PowerCenter
Knowledge of IDMC is an advantage
Basic knowledge of UNIX
Experience in Datawarehouse systems
Experience with job scheduling tools like AWA
At least 5 years' experience with the above-mentioned technologies and environments
What can we offer?
23 days of Annual Leave plus the 24th and 31st of December as discretionary days!
Numerous benefits (Health Care Plan, teleworking compensation, Life and Accident Insurances).
'Retribución Flexible' Program (meals, kindergarten, transport, online English lessons, Health Care Plan...)
Free access to several training platforms
Professional stability and career plans
UST also compensates referrals, so you can benefit when you refer professionals.
The option to choose between 12 or 14 payments throughout the year.
Real Work Life Balance measures (flexibility, WFH or remote work policy, compacted hours during summertime...)
UST Club Platform discounts and gym Access discounts
If you would like to know more, don't hesitate to apply and we'll get in touch to fill you in on the details. We are waiting for you!
At UST we are committed to equal opportunities in our selection processes and do not discriminate on the basis of race, gender, disability, age, religion, sexual orientation or nationality. We have a special commitment to Disability & Inclusion, so we are interested in hiring people with a disability certificate.
Data Engineer - PySpark
Arelance · Madrid, ES
Nov 21
Remote work Big Data
At Arelance we know that people are a company's most important asset, so we put a great deal of effort into finding the best professionals for our clients and offering our candidates the best projects.
We are currently looking for a data engineering or data analysis profile with experience in data processing with PySpark for a project in the banking sector.
What are we looking for in you?
- At least 2 years of experience with PySpark
- Experience with ETL
- Knowledge of Big Data environments
- Highly valued: previous experience in banking
Your duties will include:
- Development and maintenance of PySpark pipelines focused on data transformation and data quality (see the sketch after this list)
- Data extraction from an in-house server using PySpark.
- Documentation of processes, standards and data flows.
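A minimal PySpark sketch in the spirit of the first duty above: read raw data, apply a simple transformation and enforce a basic data-quality rule; paths, column names and the quality rule are assumptions for illustration only.

```python
# Illustrative PySpark pipeline: extract, transform, and split valid rows
# from rejects. Paths and column names are made up for this sketch.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_pipeline_example").getOrCreate()

# Extraction: read raw records (parquet is just an example format).
raw = spark.read.parquet("/data/raw/transactions")

# Transformation: normalise a text column and parse the amount.
clean = (
    raw.withColumn("customer_id", F.trim(F.col("customer_id")))
       .withColumn("amount", F.col("amount").cast("double"))
)

# Data quality: keep valid rows and rejects separately for auditing.
valid = clean.filter(F.col("customer_id").isNotNull() & (F.col("amount") > 0))
rejects = clean.subtract(valid)

valid.write.mode("overwrite").parquet("/data/curated/transactions")
rejects.write.mode("overwrite").parquet("/data/rejects/transactions")

print(f"valid rows: {valid.count()}, rejected rows: {rejects.count()}")
spark.stop()
```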
What do we offer in this position?
- Permanent contract with Arelance
- Access to training
- Salary range open to negotiation depending on profile and experience: up to €26K gross per year
- Work model: 100% remote, or hybrid in Madrid
If a great opportunity like this interests you, apply! We want to meet you!
*** Only candidates with a work permit and residence in Spain will be considered ***
Lead DevOps Engineer
Airbus · Madrid, ES
Nov 20
Remote work Docker Cloud Computing OpenShift AWS DevOps Office
We are seeking a passionate and skilled Lead DevOps Engineer to oversee and actively participate in the deployment, operation, and maintenance of our software products, with a focus on AI/ML and Generative AI/LLM models. In this role, you will facilitate the continuous integration and delivery of our AI services, ensure adherence to standards, and validate technical solutions. You will be responsible for improving the efficiency, reliability, and quality of our products, managing necessary improvements, and contributing to our Communities of Practice. Your expertise in cloud (e.g., AWS managed services such as SageMaker, EKS or RDS) and on-prem (OpenShift, Docker or Podman) technologies, together with your ability to navigate complex security constraints, will be crucial to ensuring our national and European sovereignty.
Key Responsibilities:
- Lead the entire product deployment and operation lifecycle, designing CI/CD pipelines and managing infrastructure as code (IaC) for both cloud and on-prem environments.
- Ensure adherence to security and sovereignty standards, maintaining the structural integrity of the given solution.
- Develop and maintain robust and scalable operational stacks tailored to AI/ML and Generative AI/LLM model needs, leveraging OpenShift and Podman.
- Design and implement monitoring strategies, logging, and alerting systems to ensure quality, reliability, and security of product deployments (see the sketch after this list).
- Translate business needs into operational requirements, fostering a collaborative environment that values security and sovereignty.
- Mentor team members, conduct code and infrastructure reviews, and guide the team on best practices.
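A hedged sketch of the monitoring idea in the responsibilities above: exposing basic Prometheus metrics (request count and latency) from a model-serving process so alerting rules can be attached later. prometheus_client and the metric names are an assumed choice; the listing does not prescribe a specific stack.

```python
# Instrument a stand-in model-serving loop with Prometheus metrics.
# Metric names, port and the fake inference work are illustrative only.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter(
    "model_predictions_total", "Predictions served", ["outcome"]
)
LATENCY = Histogram("model_prediction_seconds", "Prediction latency")


def predict(features):
    """Stand-in for a real model call; records latency and success/failure."""
    with LATENCY.time():
        try:
            time.sleep(random.uniform(0.01, 0.05))  # fake inference work
            PREDICTIONS.labels(outcome="ok").inc()
            return 1
        except Exception:
            PREDICTIONS.labels(outcome="error").inc()
            raise


if __name__ == "__main__":
    start_http_server(8000)   # metrics exposed at http://localhost:8000/
    while True:
        predict({"x": 1.0})
        time.sleep(1)
```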
What We Offer:
- The opportunity to work on cutting-edge projects and technologies in the aerospace industry, with a focus on AI/ML and Generative AI/LLM models, and a strong commitment to security and sovereignty.
- A collaborative and innovative work environment that encourages professional growth and development, and values and upholds our security and sovereignty requirements.
- Competitive salary and benefits package.
- The chance to make a real impact on the future of Airbus Defence and Space, and to contribute to the security and sovereignty of our national and European digital landscape.
Join Us:
If you are a passionate and experienced Lead DevOps Engineer looking for an exciting opportunity to drive operational innovation in the aerospace industry, particularly in the context of AI/ML and Generative AI/LLM models, and with a strong commitment to security and sovereignty, we would love to hear from you! Apply now to join our dynamic team and contribute to the success of the DAISEI project.
Airbus Defence and Space is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees, while upholding the highest standards of security and sovereignty.
WHICH BENEFITS WILL YOU HAVE AS AN AIRBUS EMPLOYEE?
At Airbus we are focused on our employees and their welfare. Take a look at some of our social benefits:
- Vacation days and additional days off throughout the year (35+ working days off in total).
- Attractive salary and compensation package.
- Hybrid model of working when possible, promoting the work-life balance (40% remote work).
- Collective transport service in some sites.
- Benefits such as health insurance, employee stock options, retirement plan, or study grants.
- On-site facilities (among others): free canteen, kindergarten, medical office.
- Possibility to collaborate in different social and corporate social responsibility initiatives.
- Excellent upskilling opportunities and great development prospects in a multicultural environment.
- Special rates in products & benefits.
This job requires an awareness of any potential compliance risks and a commitment to act with integrity, as the foundation for the Company's success, reputation and sustainable growth.
Automation DevOps Ansible
Serem · Madrid, ES
Nov 20
Remote work Linux Ansible Git Oracle DevOps PostgreSQL
At SEREM we are committed to a wide range of projects and want to count on the best professionals in the sector.
We are currently looking for an Automation DevOps engineer for the design, development and maintenance of solutions for the automatic execution of tasks and the resolution of incidents.
Key responsibilities:
• Identify automation opportunities and improve system efficiency.
• Develop and maintain automated workflows to reduce manual intervention.
• Guarantee the reliability and scalability of automation solutions across all systems.
• Work with technical teams to integrate automation into existing processes.
• Monitor performance and continuously optimise automated solutions.
• Document analysis results and work closely with automation designers to guarantee consistent design and development for all clients.
• Develop and maintain Ansible playbooks and roles for automated IT processes; see the sketch after this list.
• Guarantee the integrity and performance of the IT infrastructure through effective automation solutions.
• Document processes and best practices for Ansible development.
• Monitor and troubleshoot issues related to Ansible automation deployments.
• Take part in code reviews and in maintaining the Ansible script repository.
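A hedged sketch of running a playbook through the ansible-runner Python library, as one way to wire Ansible into an automated incident workflow; the directory layout and playbook name are placeholders, not part of the listing.

```python
# Run a playbook via ansible-runner's conventional directory layout
# (private_data_dir with a project/ folder holding the playbook).
# Paths are placeholders for illustration only.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="/opt/automation",   # assumed runner directory
    playbook="site.yml",                  # assumed playbook under project/
)

print(f"status: {result.status}, return code: {result.rc}")
if result.status != "successful":
    # Hook for the incident workflow: open a ticket, notify, retry, etc.
    raise SystemExit(1)
```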
Requirements:
• More than 2 years of experience in IT automation (including the following platforms/systems: Windows, Linux, databases (MSSQL, PostgreSQL, Oracle) and system integrations with Ansible).
• Solid knowledge of IT infrastructure and system configuration.
• Experience with version control systems such as Git.
• Excellent problem-solving skills and attention to detail.
• Good communication and collaboration skills.
• Spoken English at B2+ level (French is a plus).
Position details:
• Working hours: 9:00 - 18:00 / 8:00 - 17:00
• 2 days on-site at the Las Tablas offices (Madrid). Remote work for other provinces.
• Permanent contract, stable position.
JUNIOR DATA ENGINEER
Inetum · Madrid, ES
Nov 18
Remote work Python TSQL Azure Cloud Computing PowerShell ITIL Power BI
Company Description
🚀 Join Inetum – We're Hiring a DATA ENGINEER! 🚀
At Inetum, a leading international digital consultancy, we empower 27,000 professionals across 27 countries to shape their careers, foster innovation, and achieve work-life balance. Proudly certified as a Top Employer Europe 2024, we’re passionate about creating positive and impactful digital solutions.
Job Description
We are seeking a highly motivated and technically proficient Data Platform Support Specialist to provide operational support for a data solution built on Azure Data Factory, Snowflake, and Power BI. This is not a development role, but rather a hands-on support position focused on ensuring the reliability, performance, and smooth operation of the data platform.
Qualifications
Key Responsibilities:
- Monitor and maintain data pipelines in Azure Data Factory, ensuring timely and accurate data movement.
- Provide support for Snowflake data warehouse operations, including troubleshooting queries, managing access, and optimizing performance.
- Assist business users with Power BI dashboards and reports, resolving issues related to data refreshes, connectivity, and visualization errors.
- Collaborate with data engineers and analysts to ensure platform stability and data integrity.
- Document support procedures and maintain knowledge base articles for recurring issues.
- Communicate effectively in English with internal teams and stakeholders.
- Advanced English (B2 or above).
- Hands-on experience with:
- Azure Data Factory (monitoring, troubleshooting, pipeline execution).
- Snowflake (SQL, performance tuning, user management).
- Power BI (report troubleshooting, data sources, refresh schedules).
- Strong analytical and problem-solving skills.
- Ability to work independently and manage multiple support tasks.
- Familiarity with ITIL or similar support frameworks is a plus.
- Experience in data governance or data quality monitoring.
- Basic understanding of cloud infrastructure (Azure).
- Knowledge of scripting (e.g., PowerShell or Python) for automation (see the sketch below).
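A hedged example of the kind of Python scripting the last point hints at: pulling recent failed queries out of Snowflake so a support engineer can triage them. It assumes the snowflake-connector-python package and credentials in environment variables, neither of which is specified in the listing.

```python
# Triage helper: list recent Snowflake queries that ended with an error.
# Connection details come from environment variables (assumed setup).
import os

import snowflake.connector

conn = snowflake.connector.connect(
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    account=os.environ["SNOWFLAKE_ACCOUNT"],
)

try:
    cur = conn.cursor()
    # QUERY_HISTORY is a Snowflake information-schema table function.
    cur.execute(
        """
        SELECT query_id, user_name, error_message, total_elapsed_time
        FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(RESULT_LIMIT => 200))
        WHERE error_code IS NOT NULL
        ORDER BY start_time DESC
        """
    )
    for query_id, user_name, error_message, elapsed_ms in cur.fetchall():
        print(f"{query_id} ({user_name}, {elapsed_ms} ms): {error_message}")
finally:
    conn.close()
```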
Senior Data Engineer 100 (m/w/d)
Julius Baer · Madrid, ES
Nov 17
Python Agile TSQL Jenkins Linux Docker Cloud Computing Kubernetes Microservices Git Oracle DevOps Machine Learning
At Julius Baer, we celebrate and value the individual qualities you bring, enabling you to be impactful, to be entrepreneurial, to be empowered, and to create value beyond wealth. Let’s shape the future of wealth management together. Support the development of a Python-based enterprise data hub (integrated with Oracle) and advance the MLOps infrastructure. This role combines DevOps excellence with hands-on machine learning engineering to deliver scalable, reliable, and auditable ML solutions. Key objectives include automating CI/CD pipelines for data and ML workloads, accelerating model deployment, ensuring system stability, enforcing infrastructure-as-code, and maintaining secure, compliant operations.
YOUR CHALLENGE
- Design and maintain CI/CD pipelines for Python applications and machine learning models using GitLab CI/Jenkins, Docker, and Kubernetes
- Develop, train, and evaluate machine learning models (e.g., using scikit-learn, XGBoost, PyTorch) in close collaboration with data scientists
- Orchestrate end-to-end ML workflows including pre-processing, training, hyperparameter tuning, and model validation
- Deploy and serve models in production using containerised microservices (Docker/K8s) and REST/gRPC APIs
- Manage the MLOps lifecycle via tools like MLflow (experiment tracking, model registry) and implement monitoring for drift, degradation, and performance (see the sketch after this list)
- Refactor exploratory code (e.g., Jupyter notebooks) into robust, testable, and version-controlled production pipelines
- Collaborate with data engineers to deploy and optimise the data hub, ensuring reliable data flows for training and inference
- Troubleshoot operational issues across infrastructure, data, and model layers; participate in incident response and root cause analysis
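A minimal sketch of the experiment-tracking slice of the MLOps lifecycle mentioned above, using MLflow with scikit-learn; the experiment name, model choice and metric are illustrative, as the listing names the tools rather than this exact workflow.

```python
# Train a toy classifier and track parameters, a metric and the model
# artifact with MLflow. Experiment name and hyperparameters are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("data-hub-demo")        # assumed experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestClassifier(**params, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Record parameters, the metric and the trained model for later review.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")

    print(f"logged run with accuracy={accuracy:.3f}")
```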
YOUR PROFILE
- Technical Proficiency: Strong skills in Python, Linux, CI/CD, Docker, Kubernetes, and MLOps tools (e.g., MLflow). Practical experience with Oracle databases, SQL, and ML frameworks
- ML Engineering Aptitude: Ability to own the full ML lifecycle—from training and evaluation to deployment and monitoring—with attention to reproducibility and compliance
- Automation & Reliability: Committed to building stable, self-healing systems with proactive monitoring and automated recovery
- Collaboration & Communication: Effective team player in agile, cross-functional settings; able to communicate clearly across technical and non-technical audiences
Education and Skills Requirements
- Education: Bachelor of Science (BS) in Computer Science, Engineering, Data Science, or a related field. Certifications such as CKA, AWS/Azure DevOps Engineer, or Google Cloud Professional DevOps Engineer are a plus
Technical Skills:
- Proficient in Python, Git, and shell scripting
- Experienced with CI/CD pipelines (GitLab, Jenkins), Docker, and Kubernetes
- Skilled in SQL and Oracle database interactions
- Hands-on with MLOps frameworks (e.g., MLflow), model deployment, and monitoring
- Familiarity with microservices, REST/gRPC, and basic ML model evaluation techniques
Experience:
- Minimum 5 years in DevOps, SRE, or ML Engineering roles, with at least 2–3 years focused on data-intensive or machine learning systems
- Experience in financial services or regulated environments is highly valued
Languages:
- English is a must
We are looking forward to receiving your full job application through our online application tool. Further interesting job opportunities can be found on our Career site. Is this not quite what you are looking for? Set up a job alert by creating a candidate account here.