UST
Senior Application Owner (DevOps)
UST · Madrid, ES
Remote work .Net Agile Docker Cloud Computing Kubernetes Jira DevOps ITIL .Net Core SQL Office
Role description
We are looking for the very top talent... and we would be delighted if you were to join our team!
In more detail, UST is a multinational company based in North America, certified as a Top Employer and Great Place to Work company, with over 35,000 employees all over the world and a presence in more than 35 countries. We are leaders in digital technology services, and we provide large-scale technological solutions to big companies.
What are we looking for?
We are looking for a Senior Application Owner to work closely with one of our main clients in the banking sector.
Your primary objective will be to ensure application compliance with IT governance in your DevOps team. Additionally, you will provide guidance and support to the team on application strategy, and plan, coordinate and execute application lifecycles (e.g. upgrading end-of-life technologies).
Main tasks and accountabilities will be:
- You are technologically versatile and familiar with the .NET Core tech stack, T-SQL and GitLab CI/CD, and you understand containerization & orchestration (Docker & Kubernetes) and how services run
- You are also well familiar with application monitoring tools such as ELK, Splunk and Dynatrace, and can use this knowledge to quickly identify and resolve production issues - either independently or by supporting the team
- You understand standard IT governance processes and take ownership of your team's applications
- You proactively plan, coordinate, and carry out actions to address risks, security issues, vulnerabilities, upgrades, and other required improvements
- You understand and can demonstrate IT service management principles, preferably based on ITIL 4, applied in agile and DevOps contexts
- You support your team by coordinating with Release Management on creating RFCs with properly documented dependencies, deployment windows and rollback procedures
- You have experience in DevOps teams in agile environments (SAFe) working with tools such as JIRA, Confluence, MIRO, etc.
- You ensure the minimum compliance of your team's applications while keeping the relevant change documentation artifacts up to date (Solution Design, Security Concept, Service Continuity (BCM), Operational Manual)
- You also take care that non-functional requirements are tailored, defined and validated (e.g. performance, capacity, availability, and disaster recovery)
- Additionally, you play a vital role in advising your team in release go/no-go situations; you oversee the burn-in phase and ensure stability before full handover to "business-as-usual" operations (DevOps)
What does UST expect from you?
- 5+ years of experience in application ownership or similar roles within DevOps / Agile environments (SAFe is a plus).
- Strong technical background with .NET Core, T-SQL and GitLab CI/CD, including pipeline design, security scanning and controlled multi-environment deployments.
- Solid understanding of containerization and orchestration using Docker and Kubernetes.
- Hands-on experience with monitoring and observability tools such as ELK, Splunk or Dynatrace, and the ability to troubleshoot and resolve production issues.
- Good knowledge of IT Governance and IT Service Management, preferably ITIL 4, applied in Agile/DevOps contexts.
- Experience coordinating RFCs with Release Management, including dependencies, deployment windows and rollback procedures.
- Ability to ensure application compliance and keep key documentation up to date (Solution Design, Security Concept, BCM, Operational Manuals).
- Experience defining and validating non-functional requirements (performance, capacity, availability, disaster recovery).
- Involvement in release go/no-go decisions, burn-in phases and stabilization before handover to BAU operations.
- BSc/MSc in Computer Science, Engineering or a related field, or equivalent experience.
- Good English level (C1); you will be working with international teams.
Nice to have:
- Experience with cloud concepts and technologies.
- Familiarity with deployment tools such as Octopus Deploy.
- Banking or financial services domain knowledge.
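Several of the responsibilities above concern release go/no-go decisions and the burn-in phase. As a rough illustration only (the function name, thresholds and sampled error rates are invented for this sketch, not UST's actual process), the core decision can be expressed in a few lines of Python:

```python
# Illustrative sketch of a release burn-in check: during the burn-in
# window, sample error rates and decide go/no-go before handing the
# release over to business-as-usual operations. Thresholds and the
# metric samples below are hypothetical.

def burn_in_verdict(error_rates, max_error_rate=0.01, max_breaches=2):
    """Return 'go' if sampled error rates stayed within budget during
    burn-in, 'no-go' (i.e. trigger the rollback procedure) otherwise."""
    breaches = sum(1 for r in error_rates if r > max_error_rate)
    return "go" if breaches <= max_breaches else "no-go"

# A stable burn-in: an occasional blip is tolerated.
stable = [0.001, 0.002, 0.015, 0.001]
# An unstable one: sustained breaches mean rollback.
unstable = [0.02, 0.03, 0.001, 0.05]
```

In practice the samples would come from a monitoring tool such as ELK, Splunk or Dynatrace, and the verdict would feed the documented rollback procedure in the RFC.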
Work Location
Hybrid. Madrid city center (Sol area). 3 days a week in the office + 2 days remote.
Work schedule
Business hours. No intensive (shortened) working days on Fridays or in summer.
What can we offer?
- 23 days of Annual Leave plus the 24th and 31st of December as discretionary days!
- Numerous benefits (Health Care Plan, teleworking compensation, Life and Accident Insurances).
- 'Retribución Flexible' Program (meals, kindergarten, transport, online English lessons, Health Care Plan...)
- Free access to several training platforms
- Professional stability and career plans
- UST also compensates referrals, from which you can benefit when you refer professionals.
- The option to choose between 12 or 14 payments throughout the year.
- Real Work Life Balance measures (flexibility, WFH or remote work policy, compacted hours during summertime...)
- UST Club Platform discounts and gym Access discounts
If you would like to know more, don't hesitate to apply and we'll get in touch to fill you in on the details. We are waiting for you!
At UST we are committed to equal opportunities in our selection processes and do not discriminate based on race, gender, disability, age, religion, sexual orientation or nationality. We have a special commitment to Disability & Inclusion, so we are particularly interested in hiring people with a disability certificate.
DevOps Senior - Cyberdefence
12 Feb · CMV Consultores
CMV Consultores · Madrid, ES
Remote work TSQL Jenkins Docker Kubernetes DevOps
Do you want to be part of the technological core that protects critical infrastructure? Join one of our leading clients in the Cyberdefence sector as a Senior DevOps engineer. We are looking for a professional with strong technical skills to design and maintain robust, secure architectures, working with cutting-edge technologies in an environment where security is the top priority.
Responsibilities:
Deployment and orchestration: administration and configuration of container platforms based on Docker and Kubernetes.
Automation: maintenance and evolution of CI/CD pipelines (Jenkins, GitLab CI) to optimise service delivery.
Architecture: design of high-availability infrastructure and observability systems (ELK, Grafana, Prometheus).
Security: collaboration with cybersecurity teams to ensure compliance with standards and best practices in every deployment.
Identity and API management: configuration of systems such as Keycloak and Apisix to protect access.
Requirements
Experience: at least 4-6 years with Kubernetes and Docker; 4-5 years with Helm.
Pipelines: proven experience developing CI/CD (Jenkins/GitLab).
Education: degree in Computer Science, Telecommunications or a similar technical field.
Languages: advanced English (C1), spoken and written.
Nice to have: knowledge of scripting, SQL, OpenStack and observability tools.
Soft skills: analytical, solution-oriented and proactive profile, with a focus on risk mitigation.
Benefits and conditions:
Hybrid model: 60% remote (40% on-site in Madrid - Vega 5 or Barcelona).
Stability: no shifts or scheduled on-call duties.
Salary according to skills and experience (up to €28K).
Environment: a leading-edge project in the Defence sector with exposure to state-of-the-art technologies.
Protect the digital future from the front line! If you are a Kubernetes expert looking for a maximum-security environment, this is the place for you.
DevOps
Arelance · Madrid, ES
Python Kubernetes DevOps
At Arelance we know that people are a company's most important asset, so we put a great deal of effort into finding the best professionals for our clients and offering our candidates the best projects.
We currently have a vacancy for a DevOps Engineer, for which we are looking for a profile specialised in the automation, deployment and operation of production systems.
Requirements:
- At least 3 years of experience in similar positions
- Solid knowledge of Python and programming in production environments
- Experience deploying and operating pipelines with hands-on Kubernetes practice
- Availability to attend the client's office in Madrid 2 days a week, every 2 weeks
Your responsibilities:
- Development and maintenance of deployment pipelines.
- Operation and supervision of production environments.
- Management and administration of Kubernetes-based infrastructure.
- Process automation with Python.
- Technical support in production environments.
What do we offer in this position?
- Permanent contract with Arelance
- Access to training.
- Work model: hybrid in Madrid - 2 days a week every 2 weeks, the rest remote.
- Gross annual salary negotiable according to profile and experience: €37-45K
If you are interested in a great opportunity like this, apply! We want to meet you!
*** Only candidates with a work and residence permit in Spain will be considered ***
Barcelona Supercomputing Center
Madrid, ES
Machine Learning Engineer (RE1-2) – AI Factory (Earth Sciences Department)
Barcelona Supercomputing Center · Madrid, ES
Python Machine Learning
Overview
Job Reference: 406_25_ES_CES_RE1
Position: Machine Learning Engineer (RE1-2) – AI Factory (Earth Sciences Department)
Closing Date: Sunday, 01 March, ****
About BSC: The Barcelona Supercomputing Center - Centro Nacional de Supercomputación (BSC-CNS) is the leading supercomputing center in Spain.
It houses MareNostrum, one of the most powerful supercomputers in Europe, and hosts the European HPC ecosystem.
The mission of BSC is to research, develop and manage information technologies to facilitate scientific progress.
BSC combines HPC service provision and R&D into computer and computational science under one roof, with over **** staff from 60 countries.
Context and Mission: The Barcelona Supercomputing Center (BSC) is seeking a Machine Learning Engineer to join the Earth Sciences department within the AI Factory initiative.
The AI Factory accelerates adoption and development of AI across industry sectors, deploying AI-focused services including training, networking, and innovation support.
The MareNostrum5 AI partition provides the computing backbone for these services.
The selected candidate will support AI services related to climate change use cases, coordinate availability and integration of AI software on MareNostrum5, and ensure smooth support for the AI Factory user community.
Responsibilities
Support the documentation and curation of AI software employed in the AI Factory
Support the deployment and maintenance of AI software on the MareNostrum5 AI partition
Design, implement, and optimize machine learning pipelines for environmental-related applications
Collaborate with domain scientists and external users to develop AI solutions
Support users from the AI Factory in accessing and utilizing AI tools and services
Participate in collaborative development within the AI Factory consortium
Participate in technical reporting and scientific publications contributing to the documentation of the AI Factory software, with opportunities to be involved in academic publications and project reporting
Requirements
Education
Bachelor's or Master's in Computer Science, Machine Learning, Data Science, Environmental Sciences, or a related field
Essential Knowledge And Professional Experience
Strong programming skills in Python, with experience in machine learning libraries such as PyTorch, TensorFlow, and Scikit-learn
Proven experience in developing and training machine learning models, particularly deep learning architectures
Strong background in handling, analyzing, and validating large-scale datasets
Experience working in a UNIX-based computational environment
Additional Knowledge And Professional Experience
Familiarity with climate, weather, and Earth system datasets (NetCDF, Zarr)
Experience in high-performance computing (HPC) and parallelized machine learning workflows
Proficiency in GPU-accelerated machine learning frameworks such as TensorFlow, RAPIDS, JAX, and/or distributed training using Dask
Understanding of climate and weather models
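The essential requirement above is proven experience developing and training machine learning models. Stripped to its bare minimum (real work would use PyTorch or TensorFlow on large datasets and the MareNostrum5 AI partition; this pure-Python gradient-descent fit of a linear model is only an illustration):

```python
# Minimal illustration of model training by gradient descent:
# fit y = w*x + b to data generated from y = 2x + 1.
# At research scale this would be PyTorch/TensorFlow; this is a sketch.

def train_linear(xs, ys, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]  # noiseless targets: true w=2, b=1
w, b = train_linear(xs, ys)
```

The same loop, with autograd replacing the hand-written gradients and batches replacing the full dataset, is what deep learning frameworks scale up.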
Competences
Strong problem-solving and analytical skills, with the ability to optimize computational workflows
Ability to work independently while collaborating effectively in a research environment
Excellent communication skills, with a strong ability to document and present research findings
Proficiency in written and spoken English
Conditions
The position will be located at BSC within the Earth Sciences Department
We offer a full-time contract (37.5h/week), a good working environment, state-of-the-art infrastructure, flexible working hours, extensive training plan, restaurant tickets, private health insurance, and support for relocation procedures
Duration: Open-ended contract due to project and budget considerations
Holidays: 22 days of holidays + 6 personal days + 24th and 31st of December
Salary: Competitive salary commensurate with qualifications and experience, aligned with Barcelona cost of living
Starting date: As soon as possible
Applications procedure and process
All applications must be submitted via the BSC website and contain:
A full CV in English including contact details
A cover/motivation letter with a statement of interest in English, clearly specifying the area and topics of interest; two references for further contact must be included
Recruitment process and equal opportunity
The selection will be carried out through a competitive examination system (Concurso-Oposición).
The process consists of two phases: Curriculum Analysis (40 points) and Interview phase (60 points).
A minimum of 30 points in the interview is required.
The recruitment panel will include at least three people with gender representation.
BSC adheres to OTM-R principles and promotes gender-balanced recruitment panels.
All participants will receive feedback after interviews.
For suggestions or complaints about recruitment processes, please contact ******.
Deadline: The vacancy remains open until a suitable candidate is hired; applications are regularly reviewed.
OTM-R and equal opportunity
BSC-CNS is committed to the Code of Conduct for the Recruitment of Researchers and Open, Transparent and Merit-based Recruitment (OTM-R).
We are an equal opportunity employer and consider all qualified applicants regardless of protected characteristics.
Senior Data Engineer
10 Feb · Ebury
Ebury · Madrid, ES
Python Agile TSQL Docker Cloud Computing DevOps Fintech Machine Learning Office
Ebury is a global fintech firm dedicated to empowering businesses to expand internationally through tailored and forward-thinking financial solutions. Since our founding in 2009, we've grown to a diverse team of over 1,700 professionals across 40+ offices and 29+ markets worldwide. Joining Ebury means becoming part of a collaborative and innovative environment where your contributions are valued. You'll play a key role in shaping the future of cross-border finance, while advancing your own career in a dynamic, high-growth industry.
Senior Data Engineer: Drive our AI Revolution & Data Ecosystem
Location: Madrid (Hybrid: 4 days office / 1 day WFH)
Stack: Python, SQL, Airflow, dbt, DuckDB, GCP, DLT, Docker.
Why Ebury?
At Ebury, we don't just move money; we move the boundaries of what Fintech can achieve. Our strategic growth is powered by Data, and we are looking for a Senior Data Engineer who wants to be at the heart of this transformation.
We aren't looking for "Jira ticket closers." We want problem solvers, tech enthusiasts, and proactive engineers who want to understand the why and the what for behind the data and propose high-value projects that move the needle for the business.
The Challenge: Beyond Pipelines to AI-Ready Systems
We are at an inflection point. Beyond maintaining a robust platform, we have a massive appetite for Machine Learning, LLMs, and Agents. Your mission won't just be moving bytes; it will be building the infrastructure that enables Ebury to lead the Gen-AI revolution in the financial sector.
Your Cutting-Edge Toolkit:
- Orchestration: Airflow (Cloud Composer) is our backbone.
- Cloud Power: The full GCP suite (GKE, Cloud Run, Functions, Artifact Registry).
- Smart Ingestion: We leverage DLT (Data Load Tool) to build seamless and scalable data pipelines.
- Modeling & Quality: We use dbt at scale, integrating DuckDB to enforce bulletproof Data Contracts.
- DevOps Culture: Dockerized environments and solid CI/CD in GitHub Actions are our standard.
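The "bulletproof Data Contracts" mentioned above are enforced with dbt and DuckDB in practice; as a rough sketch of the idea only (the contract fields and example rows are hypothetical), a contract check reduces to validating each record against a declared schema:

```python
# Plain-Python sketch of the data-contract checks that dbt tests
# enforce in practice. The contract and the example rows below are
# hypothetical, not Ebury's actual schema.

CONTRACT = {
    "trade_id": int,      # must be present and an int
    "currency": str,      # ISO code
    "amount": float,      # strictly positive
}

def violations(row):
    """Return a list of contract violations for one record."""
    problems = []
    for field, ftype in CONTRACT.items():
        if field not in row:
            problems.append(f"missing:{field}")
        elif not isinstance(row[field], ftype):
            problems.append(f"type:{field}")
    if isinstance(row.get("amount"), float) and row["amount"] <= 0:
        problems.append("range:amount")
    return problems

good = {"trade_id": 1, "currency": "EUR", "amount": 10.5}
bad = {"trade_id": "1", "currency": "EUR"}  # wrong type, missing amount
```

In a dbt project the same rules would live as schema tests so a pipeline run fails fast when an upstream source drifts.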
What You'll Do
- Architect & Innovate: Design, deploy, and evolve ELT/ETL pipelines from diverse sources (APIs, transactional DBs, file-based endpoints).
- Champion AI/ML: Actively contribute to building the infrastructure for ML models and LLM-powered Agents.
- Business Partnership: Work closely with stakeholders to identify high-impact opportunities. If you see a better way to do things, you have the autonomy to lead it.
- Engineering Excellence: Apply top-tier software practices (SDLC, RFCs, unit testing) to ensure our platform is a benchmark for scalability.
- Growth & Mentorship: Benefit from a personalized 30/60/90 day plan designed to help you thrive from day one.
About You
You'll be a great fit if:
- Python & SQL Mastery: You have a deep command of both (5+ years of experience).
- Agile Problem Solver: You thrive on unexpected daily challenges, delivering smart technical solutions at speed and iterating them into robust, production-grade systems.
- Business Partner: You are proactive and "business-curious", you want to understand the impact of your code on the bottom line.
- You are fluent in English (we are a global team!).
Bonus Points:
- Hands-on experience with LLMs, Vector Databases, or AI Agents.
- Deep knowledge of DuckDB or DLT.
- Strong foundations in Dimensional Modeling and Data Warehousing.
- Spanish language skills (a plus for office life in Madrid!).
What We Offer
- Competitive Salary + Performance-based discretionary bonus.
- Vibrant Culture: Join a collaborative environment where Data Engineers, Data Scientists, Analytics Engineers, and Analysts work as one.
- Work-Life Balance: A modern hybrid model in our Madrid office to keep the team spirit alive while respecting your focus time.
- Open Source DNA: We follow Open Source principles internally and encourage you to contribute to external projects.
- Continuous Learning: Personal development through certifications and specialized training.
Excited about the role but don't tick every single box? Apply anyway! At Ebury, we value potential, proactivity, and a growth mindset over a perfect checklist.
Grupo NS
Data Engineer with advanced English and Java
Grupo NS · Madrid, ES
Remote work Java Hadoop Kafka Spark
At Grupo NS we are looking for a Data Engineer profile with an advanced level of English.
We are looking for:
Deep knowledge of data:
Tool mastery: years of experience with open-source data tools such as Kafka, the Hadoop ecosystem, etc.
Flow skills: proven ability to build real-time data flows using technologies such as Flink, Kafka, Spark and dbt.
Language skills: fluency in English.
Mindset: agile, proactive and passionate about data-driven success.
Responsibilities
Analyse and strategise: dig into our clients' IT processes to identify the best methods for generating first-class data assets.
Advise and lead: advise clients on data architecture, data management, data warehousing and the best use of data lakes and data warehouses.
Design and develop: build modern data solutions using tools such as Hadoop, Exasol, Snowflake and others.
Optimise workflows: create, develop and refine data pipelines to ensure the data is always ready for use.
Stay innovative: keep up to date with the latest trends and integrate data engineering best practices into our projects.
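The real-time data flows built with Flink, Kafka or Spark ultimately come down to windowed aggregation over an unbounded stream. A pure-Python sketch of a tumbling-window count (the event shape and the 10-second window size are illustrative, not from the job description):

```python
# Pure-Python sketch of the tumbling-window aggregation that stream
# engines such as Flink or Kafka Streams perform at scale. The events
# and the window size below are made-up examples.

def tumbling_window_counts(events, window_seconds=10):
    """Group (timestamp, key) events into fixed windows and count per key.
    Returns {window_start: {key: count}}."""
    windows = {}
    for ts, key in events:
        start = (ts // window_seconds) * window_seconds
        bucket = windows.setdefault(start, {})
        bucket[key] = bucket.get(key, 0) + 1
    return windows

events = [(1, "click"), (4, "view"), (9, "click"), (12, "click")]
counts = tumbling_window_counts(events)
# window [0,10): 2 clicks, 1 view; window [10,20): 1 click
```

Real engines add what this sketch omits: event-time watermarks, late-data handling and fault-tolerant state.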
NS is a company that values both the technological profile of its employees and the interest and aptitude they show when developing new projects.
That is why we look for consistent people who are eager to grow and learn.
DevOps Engineer - REMOTE
10 Feb · Michael Page
Michael Page · Madrid, ES
Remote work TSQL Azure NoSQL Jenkins Cloud Computing Kubernetes Git DevOps
- Lead the cloud evolution in an international tech environment
- Real impact on architecture, scalability and security
Where will you work?
A technology company with an international presence, specialised in digital solutions for the financial sector, in the midst of evolving its cloud architecture and DevOps processes.
Role description
- Design and development of scalable, secure and reliable solutions on Microsoft Azure.
- Continuous improvement of the infrastructure together with the DevOps team.
- Administration and incident resolution across platforms (apps, Kubernetes, databases, web servers, load balancers...).
- Optimisation of availability, security, observability and efficiency.
- Active participation in technical decisions and the evolution of the architecture.
- Knowledge sharing with the engineering team.
Who are we looking for (M/F/D)?
- 6+ years of experience as a DevOps / Cloud / Systems Engineer.
- Solid experience with Microsoft Azure.
- Experience with Kubernetes in production environments.
- Knowledge of CI/CD and automation (Git, Jenkins, Azure DevOps, Terraform/Tofu...).
- Experience with SQL and NoSQL databases.
- Analytical and problem-solving skills in distributed systems.
- High level of English.
What are your benefits?
- A strategic project with real impact on the global infrastructure.
- 100% remote position, from Spain.
- Salary according to experience.
- Stability and growth within an international company.
Internship - Data Engineer
9 Feb · Astrafy
Astrafy · Madrid, ES
Remote work Cloud Computing Terraform Word
Your mission
You will work on different projects to help Astrafy customers get the most out of their data. You will work on the business and technical sides of the customer use case.
- Design and maintain scalable data pipelines leveraging technologies such as Airflow, dbt, BigQuery, Pubsub, and Snowflake, ensuring efficient and reliable data ingestion, transformation, and delivery.
- Develop and optimize data infrastructure in the Google Cloud environment, implementing best practices for performance, cost, and security.
- Use Terraform to automate infrastructure provisioning and manage containerized workloads, promoting agility and repeatability across environments.
- Implement robust data governance, security, and quality measures, ensuring accuracy, consistency, and compliance throughout the data lifecycle.
- Collaborate with cross-functional teams to design and deploy BI dashboards and other analytics solutions that empower stakeholders with actionable insights.
- Continuously refine data architecture to accommodate changing business needs, scaling solutions to handle increased data volume and complexity.
- Champion a culture of innovation by researching, evaluating, and recommending emerging data technologies and industry best practices.
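Pipeline orchestrators like Airflow, mentioned above, fundamentally resolve task dependencies into a valid execution order. A minimal sketch using Kahn's topological sort (the task names are made-up examples, not a real Astrafy pipeline):

```python
# Minimal sketch of the dependency resolution done by orchestrators
# such as Airflow: order tasks so that every task runs only after all
# of its upstream dependencies. The task graph is a made-up example.

def execution_order(deps):
    """Kahn's algorithm: deps maps task -> set of upstream tasks."""
    deps = {t: set(u) for t, u in deps.items()}  # defensive copy
    order = []
    ready = sorted(t for t, u in deps.items() if not u)
    while ready:
        task = ready.pop(0)
        order.append(task)
        for t, u in deps.items():
            if task in u:
                u.remove(task)
                if not u:
                    ready.append(t)
    if len(order) != len(deps):
        raise ValueError("cycle detected in task graph")
    return order

pipeline = {
    "ingest": set(),
    "transform": {"ingest"},
    "load_bq": {"transform"},
    "dashboard": {"load_bq", "transform"},
}
```

Airflow adds scheduling, retries and distributed execution on top, but the DAG ordering above is the core idea.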
Your profile
- Technical expertise: Knowledge and experience with the tools mentioned above or similar ones are valued.
- Continuous Learning: Curious to stay updated on emerging data technologies, practices, and frameworks, and to share knowledge across the organization.
- Team Player: Strong communication and collaboration skills, with the ability to work effectively within cross-functional teams to deliver impactful data solutions.
- You speak English fluently; some French and/or Spanish is a plus
What we offer
- Attractive Salary Package: No blurry or hidden clauses. Everything is transparently outlined in our Gitbook (https://astrafy.gitbook.io/handbook/its-all-about-people/compensation)
- Genuine Innovation: An exciting role where technology innovation is our daily job. We don’t get stuck with old technologies and practices. We encourage learning, testing, and taking initiative.
- Strong Values & Culture: Become part of a dynamic team that lives by solid values. Learn more in our “Culture and Values” chart (https://docs.astrafy.io/handbook/the-company/culture-and-values)
- Continuous Learning: We offer ongoing training and development for both soft and hard skills. Check out our training policy (https://docs.astrafy.io/handbook/its-all-about-people/training)
- Flexible Work Environment: Enjoy flexible hours and remote work options. We want the job to adapt to your circumstances. Discover more here. (https://docs.astrafy.io/handbook/the-company/remote-work)
- Team-Building & Retreats: We organise two team-building retreats per year in an exciting European location, along with several activities during the year. Learn more here (https://docs.astrafy.io/handbook/its-all-about-people/team-building-and-retreat).
- And Much More: Our handbook (https://docs.astrafy.io/handbook/) covers all you need to know about who we are and how we work.
Our mission is to help companies and individuals solve data analytics challenges across the full data journey—from ingestion to transformation to distribution—leveraging a Modern Data Stack. We find that many organizations lack the internal expertise to harness emerging data tools, and we aim to educate them while implementing powerful solutions.
We achieve this by creating a new kind of consulting company that:
- Fosters Creativity & Strategic Thinking: We embrace out-of-the-box solutions tailored to each client’s needs.
- Prioritizes Education: We ensure every project delivers both technical solutions and the knowledge to sustain them.
- Invests in People: We nurture our employees’ growth and well-being, recognizing they are our most important asset.
DevOps Engineer
9 Feb · WayOps
WayOps · Madrid, ES
Azure Scrum Cloud Computing Kubernetes AWS DevOps Agile PostgreSQL Microservices
At WayOps we are looking for a DevOps Engineer who wants to keep growing professionally on a project in a demanding pharmaceutical-sector environment, with a solid technical foundation in container orchestration and observability, capable of managing the hybrid infrastructure that supports Artificial Intelligence platforms.
PROJECT & TEAM
You will take part in the development of strategic corporate applications for one of our main international clients in the pharmaceutical sector. The project focuses on orchestrating the platform that supports AI agents, where low latency and high availability are critical.
You will join the platform engineering team, supporting strategic Generative AI initiatives. The infrastructure is the pillar that sustains high-performance microservices and the vector databases critical to the operation of intelligent agents.
You will join a highly technical team, collaborating with backend and frontend developers, DevOps and AI engineers of different seniority levels, coordinated by a technical lead. The dynamic is highly collaborative, using modern methodologies and with a strong focus on building reusable components for AI products.
Your role will be key to ensuring the stability, scalability and observability of the platform, implementing the tools needed to monitor real-time data flows and managing the deployment of innovative components such as vector databases.
DUTIES & RESPONSIBILITIES
Day to day you will be the reference for infrastructure and operations, actively involved in tasks such as:
- Management, administration and orchestration of Kubernetes clusters in production environments
- Implementation of full observability strategies using the Grafana, Prometheus and Loki stack
- Deployment and maintenance of modern data infrastructure: PostgreSQL, distributed caches (Redis/Valkey) and vector databases (Qdrant)
- Management of hybrid cloud infrastructure: core services on AWS (SQS, S3, networking) and support for AI services on Azure
- Automation of CI/CD pipelines and infrastructure-as-code (IaC) management
- Ensuring high availability of messaging services (Kafka/events)
REQUIREMENTS & EXPERIENCE
For your profile to be considered, you will need demonstrable experience managing critical production environments.
Essential skills and knowledge include:
- Deep command of Kubernetes
- Solid experience with AWS and knowledge of Azure
- Experience implementing monitoring stacks with Grafana, Prometheus and Loki
- Ability to deploy and maintain databases (PostgreSQL) and cache systems (Redis/Valkey)
- Scripting knowledge (Bash/Python)
The following will also be valued:
- Previous experience deploying infrastructure for AI (vector databases such as Qdrant)
- Knowledge of GitOps tools
- Official AWS or Kubernetes certifications (CKA)
- Used to working on projects with agile methodologies such as Scrum
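The Grafana/Prometheus/Loki stack required above works by scraping metrics in Prometheus's text exposition format. A minimal sketch of rendering one counter in that format (the metric name and label values are invented examples):

```python
# Sketch of the Prometheus text exposition format that a service
# exposes on /metrics for the Prometheus server to scrape and Grafana
# to chart. The metric name and labels below are invented examples;
# real services use a client library rather than hand-rolled strings.

def render_counter(name, help_text, samples):
    """Render one counter metric with labeled samples in Prometheus
    text exposition format. samples: {(label_key, label_val): value}."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} counter"]
    for (key, val), value in sorted(samples.items()):
        lines.append(f'{name}{{{key}="{val}"}} {value}')
    return "\n".join(lines) + "\n"

page = render_counter(
    "http_requests_total",
    "Total HTTP requests served.",
    {("status", "200"): 1024, ("status", "500"): 3},
)
```

In production the official prometheus_client library would produce this page; the sketch only shows the wire format that ties the stack together.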
HIRING & LOCATION
The collaboration will preferably be as a freelance professional, with annual contracts and tacit renewal, within a long-term project designed to offer stability and continuity. For profiles with a particularly strong technical or cultural fit, salaried employment with a permanent contract will be considered.
The position is full-time (40 h/week). The project requires close collaboration with the team, so a hybrid model is necessary (3 days per week at the client's offices in Madrid/Sanchinarro) to facilitate communication and keep the project moving.
ABOUT WAYOPS
WayOps is a technology consultancy specialised in digital, data-driven and cognitive transformation. We are excited to work on innovative projects with the latest technologies, always in cloud environments and following clean-code best practices.
All our professionals, whether freelance collaborators or employees, have a personalised career plan, as we are committed to each person's development, investing in training, continuous learning and opportunities to take on new challenges within real, disruptive projects.
At WayOps we believe in an accountability model: responsibility and reward. Our work is based on competence and trust, facing challenges with leadership, pursuing excellence, adapting to the pace of the business and generating real impact.