Data Engineer
B. Braun Group · Barcelona, ES
Python · Agile · TSQL · Azure
We are seeking a Senior T-shaped Data Engineer to join our team, focusing on the crucial intersection between deep SAP knowledge and expert modern Azure Data Engineering skills. This specialized role is paramount for developing, maintaining, and optimizing robust data pipelines on our Azure-based Data Analytics Platform, specifically handling complex financial data extracted from SAP systems. The ideal candidate possesses the technical proficiency to utilize tools like Databricks and Azure Data Factory, combined with the business maturity and consulting background necessary to understand and interpret SAP financial data structures.
This position requires a professional who can not only build the technical solutions but also act as a consultant in the domain, translating business needs from the Finance department into effective and scalable data strategies. You will be instrumental in ensuring high data quality, governance, and availability for critical business intelligence and analytical dashboards. We are looking for a proactive, solution-oriented individual with high seniority, eager to contribute to a multidisciplinary, agile, and international environment.
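As a rough illustration of the kind of pipeline work described above (not taken from the posting), the sketch below shows a minimal Databricks/PySpark transformation over SAP FI/CO line items already landed in the Azure data lake; the table, column, and path names are hypothetical placeholders.

```python
# Minimal sketch of a Databricks/PySpark job over SAP FI/CO extracts.
# Table, column, and path names (fi_line_items, amount_lc, ...) are
# hypothetical placeholders, not the actual data model.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sap-fico-monthly-agg").getOrCreate()

# Raw FI/CO line items previously extracted from SAP into the lake (e.g. by ADF).
line_items = spark.read.format("delta").load("/mnt/raw/sap/fi_line_items")

# Basic cleansing and a monthly aggregate per company code / GL account.
monthly = (
    line_items
    .filter(F.col("amount_lc").isNotNull())
    .withColumn("posting_month", F.date_trunc("month", F.col("posting_date")))
    .groupBy("company_code", "gl_account", "posting_month")
    .agg(F.sum("amount_lc").alias("total_amount_lc"))
)

# Write a curated table for Finance dashboards.
monthly.write.format("delta").mode("overwrite").save("/mnt/curated/finance/monthly_gl_totals")
```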
Your Tasks in the Team
- Develop, maintain, and optimize data pipelines on the Azure-based Data Analytics Platform, primarily leveraging Databricks, Azure Data Factory (ADF), and related tools.
- Integrate and manage data from internal APIs, streaming sources, and especially from SAP technical structures (FI/CO), ensuring seamless data transformation and availability.
- Support event-driven analytics and provide real-time data for business and clinical dashboards.
- Collaborate closely with fellow data engineers, data scientists, and business stakeholders, especially those in the Finance department, to implement secure and scalable data solutions.
- Contribute to technical documentation, participate in code reviews, and drive the continuous improvement of data engineering practices.
- Ensure full compliance with data governance, security, and regulatory requirements.
- Apply your functional SAP knowledge to design more effective data solutions, serving as a liaison between the technical data platform and business requirements.
- Profound and practical experience with SAP. The financial domain (FI/CO) is prioritized, including knowledge of SAP data structures and how to extract and understand financial data.
- A background that includes consulting experience or high functional maturity within the SAP domain, enabling you to act as an expert and bridge business understanding gaps.
- Proven experience building and maintaining cloud-based data pipelines, ideally on Microsoft Azure.
- Practical, hands-on knowledge of modern data processing platforms, specifically Databricks and Azure Data Factory.
- Proficiency with Python for data processing, including experience with PySpark and/or Pandas.
- Strong SQL skills for data manipulation and analysis.
- Familiarity with event-driven architectures and real-time data streaming.
- Experience working in Agile/Scrum environments.
- Demonstrated strong analytical skills, attention to detail, and a solution-oriented mindset.
- Fluent in English (written and spoken).
Data Engineer (m/w/d) GCP & SAP
Deutsche Telekom · Granada, La, ES
Cloud Computing · Terraform
Your role
In this role you will join a small team of data scientists working on the "Energy Cloud" project, a company-wide Google Cloud Platform that serves as the central data hub for digitalization initiatives. You will be responsible for helping shape sustainable solutions for data structures, data pipelines, and data products. The position is mostly virtual, with fixed on-site dates in Germany.
Tasks
- Implement solid data structures and use cases across all business areas, driven by business value
- Work on energy procurement, energy consumption management, battery storage, and optimization of technical facilities
- Work directly in a virtual team, collaborating closely with Germany-based data scientists and business owners
- Focus on 1-2 business areas within the organization
- Acquire business knowledge and terminology in the energy sector
- Build an understanding of the relevant business data
- Understand and address the needs of business owners
- Contribute to the automation of end-to-end business processes
- Enable data-driven decision-making through your work
We are looking for an expert Data Engineer with extensive hands-on experience on Google Cloud Platform (GCP), focused on building data pipelines and data products. The ideal candidate has expertise in ETL/ELT concepts, SQL and Python development, and the ability to collaborate with data scientists and business stakeholders.
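As a loose illustration of the ELT work described above (not part of the posting), the sketch below runs a simple BigQuery transformation from Python; the project, dataset, table, and column names are invented.

```python
# Minimal sketch of an ELT step on GCP using the BigQuery client library.
# Project, dataset, table, and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="energy-cloud-demo")

# Aggregate raw meter readings into a daily consumption table.
sql = """
CREATE OR REPLACE TABLE analytics.daily_consumption AS
SELECT
  meter_id,
  DATE(reading_ts) AS reading_date,
  SUM(kwh) AS total_kwh
FROM raw.meter_readings
GROUP BY meter_id, reading_date
"""

job = client.query(sql)   # starts the query job
job.result()              # waits for completion, raises on error
print(f"Job {job.job_id} finished, {job.total_bytes_processed} bytes processed")
```

In practice a step like this would typically be scheduled and orchestrated with a tool such as Cloud Composer/Airflow, as listed in the criteria below.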
Must-have criteria
- Extensive hands-on experience with GCP (BigQuery, Cloud Storage, Dataflow, Pub/Sub)
- Expert-level SQL skills
- Solid Python development experience
- Deep understanding of ETL/ELT concepts, data modeling, and pipeline orchestration
- Experience with workflow orchestration tools (e.g. Cloud Composer/Airflow)
- Ability to write clean, maintainable, production-ready code
- Familiarity with CI/CD, version control, and testing best practices for data pipelines
- Knowledge of Infrastructure as Code (Terraform)
- Ability to acquire business knowledge and terminology
- Ability to build an understanding of the relevant business data
- Ability to understand the needs of business owners
- Basic solution design or architecture skills
- Experience with common SAP modules
- Familiarity with BI tools or data product concepts
- Experience with dbt
At T-Systems you will find groundbreaking projects that contribute to social and environmental well-being. We want to welcome new talent like you, who bring fresh ideas and different perspectives and embrace challenges and continuous learning in order to grow and make an impact on society... and all of it in a fun way!
It does not matter when or where you work. What matters is doing work that moves society forward. That is why we will do everything we can to give you every opportunity to develop, offering you a support network, excellent technology, a new work environment, and the freedom to work independently. We support you in continuously developing both personally and professionally, so that you can leave a remarkable mark on society.
T-Systems is a team of around 28,000 people employed worldwide, which makes us one of the world's leading providers of integrated end-to-end solutions. We develop hybrid cloud and artificial intelligence solutions and drive the digital transformation of companies, industry, the public sector, and ultimately society as a whole.
- International, positive, dynamic, and motivated work environment
- Hybrid working model (remote/on-site)
- Flexible working hours
- Continuous training: certification preparation, access to Coursera, weekly English and German lessons
- Flexible compensation plan: health insurance, meal vouchers, childcare, transport support
- Life and accident insurance
- More than 26 working days of vacation per year
- Social fund
- Free access to professionals (doctors, physiotherapists, nutritionists, psychologists, lawyers)
- 100% of salary in case of sick leave
If you are looking for a new challenge, do not hesitate to send us your CV. Join our team!
T-Systems Iberia will only process the CVs of candidates who meet the requirements specified for each offer.
Additional details
The work will take place mostly virtually, with specific on-site dates in Germany.
DevOps Engineer (m/w/d)
Deutsche Telekom · Granada, La, ES · posted 1 Feb
Python · Azure · Cloud Computing · Kubernetes · Jira · Bash · DevOps
Your role
We are looking for a motivated DevOps Engineer to support our team and drive the operation and further development of our T-GAIA chatbot framework and the chatbots built on it. The role combines administrative and technical responsibilities, with a focus on smooth operations, monitoring system performance, and implementing improvements to optimize application reliability and accuracy.
Tasks
- Set up, troubleshoot, and deploy T-GAIA chatbots as a cloud service, as well as other GenAI products and enablers
- Maintain and monitor the chatbots, the underlying framework, and the pipelines
- Monitor and ensure the SLA levels of the chatbots, especially capacity and performance
- Monitor and ensure the SLA levels of the underlying T-GAIA platform
- Keep all documentation up to date
- Ensure operations by supporting the incident, problem, and change management processes
- Continuously improve the monitoring and alerting capabilities of the T-GAIA framework and the hosted chatbots
- Help resolve service desk tickets from our internal customers
- Deploy and manage use cases via GitLab pipelines
- Carry out administrative tasks such as user management, and technical activities such as network configuration, IaC deployment, and service monitoring
- Set up, maintain, and administer Jira Service Desk and internal wiki/documentation systems
- Manage internal customer tickets (own organization + CCOE)
- Internal customer communication: updates, roadmap communication, etc.
- Support MS Azure cloud tools and pipelines (deployment, monitoring, logging, security, IAM)
- Develop basic operational standards for hosting/support (e.g. SLAs, alerts, support coverage)
- Perform quality assurance checks for billing accuracy, security measures, and compliance
- Gather internal requirements for platform and chatbot operations, and coordinate discussions with the CCOE
We are looking for a motivated DevOps Engineer with extensive cloud platform experience, Infrastructure-as-Code knowledge, and technical administration skills to support the operation of our T-GAIA chatbot framework.
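As a rough sketch of the kind of SLA monitoring mentioned above (not taken from the posting), the script below probes a chatbot health endpoint and flags responses that exceed a latency budget; the endpoint URL and threshold are made-up examples.

```python
# Minimal availability/latency probe for a chatbot endpoint.
# The endpoint URL and SLA threshold are hypothetical examples.
import time
import requests

ENDPOINT = "https://chatbot.example.internal/healthz"   # placeholder
LATENCY_SLA_SECONDS = 2.0

def probe() -> None:
    start = time.monotonic()
    try:
        response = requests.get(ENDPOINT, timeout=10)
        elapsed = time.monotonic() - start
        if response.status_code != 200:
            print(f"ALERT: status {response.status_code} from {ENDPOINT}")
        elif elapsed > LATENCY_SLA_SECONDS:
            print(f"ALERT: latency {elapsed:.2f}s exceeds SLA of {LATENCY_SLA_SECONDS}s")
        else:
            print(f"OK: {elapsed:.2f}s")
    except requests.RequestException as exc:
        print(f"ALERT: probe failed: {exc}")

if __name__ == "__main__":
    probe()
```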
Must-have criteria
- Experience with Terraform/Infrastructure as Code
- Experience with GitLab and Git version control
- Experience with CI/CD pipelines (e.g. GitLab CI, Cloud Build)
- Experience with Kubernetes
- Hands-on experience with Google Cloud Platform (GCP) and/or Microsoft Azure
- Experience with IAM, monitoring, billing, and landing zones in cloud environments
- Scripting skills (Bash, Python, or similar)
- Experience with Jira (Service Desk/Projects/Workflows)
- Experience with Confluence/wiki documentation systems
- Experience with API integration and the use of API gateways
- Google GCP DevOps Engineer-level certification or demonstrably equivalent experience
- Equivalent Microsoft Azure qualifications
- Experience working with or within a Cloud Center of Excellence (CCOE)
- Project management experience
- User management experience
- Experience with reporting and monitoring
- Experience with public cloud billing
- Experience automating public cloud processes
- Experience handling tickets within an IT organization
- Experience with containers and Kubernetes
At T-Systems you will find groundbreaking projects that contribute to social and environmental well-being. We want to welcome new talent like you, who bring fresh ideas and different perspectives and embrace challenges and continuous learning in order to grow and make an impact on society... and all of it in a fun way!
It does not matter when or where you work. What matters is doing work that moves society forward. That is why we will do everything we can to give you every opportunity to develop, offering you a support network, excellent technology, a new work environment, and the freedom to work independently. We support you in continuously developing both personally and professionally, so that you can make a remarkable impact on society.
T-Systems is a team of around 28,000 employees worldwide, which makes us one of the world's leading providers of integrated end-to-end solutions. We develop hybrid cloud and artificial intelligence solutions and drive the digital transformation of companies, industry, the public sector, and ultimately society as a whole.
- International, positive, dynamic, and motivated work environment
- Hybrid working model (remote/on-site)
- Flexible working hours
- Continuous training: certification preparation, access to Coursera, weekly English and German lessons
- Flexible compensation plan: health insurance, meal vouchers, childcare, transport support
- Life and accident insurance
- More than 26 working days of vacation per year
- Social fund
- Free access to professionals (doctors, physiotherapists, nutritionists, psychologists, lawyers...)
- 100% of salary in case of sick leave
If you are looking for a new challenge, do not hesitate to send us your CV. Join our team!
T-Systems Iberia will only process the CVs of candidates who meet the requirements specified for each offer.
Machine Learning Engineer (RE1-2) – AI Factory (Earth Sciences Department)
Barcelona Supercomputing Center · Barcelona, ES
Python · LESS · Machine Learning
Reference: 406_25_ES_CES_RE1
Closing Date: Sunday, 01 March, 2026
About BSC
The Barcelona Supercomputing Center - Centro Nacional de Supercomputación (BSC-CNS) is the leading supercomputing center in Spain. It houses MareNostrum, one of the most powerful supercomputers in Europe, was a founding and hosting member of the former European HPC infrastructure PRACE (Partnership for Advanced Computing in Europe), and is now hosting entity for EuroHPC JU, the Joint Undertaking that leads large-scale investments and HPC provision in Europe. The mission of BSC is to research, develop and manage information technologies in order to facilitate scientific progress. BSC combines HPC service provision and R&D into both computer and computational science (life, earth and engineering sciences) under one roof, and currently has over 1000 staff from 60 countries.
Look At The BSC Experience
BSC-CNS YouTube Channel
Let's stay connected with BSC Folks!
For this role, we are particularly interested in the strengths and lived experiences of women and underrepresented groups, to help us avoid perpetuating biases and oversights in science and IT research. In instances of equal merit, the incorporation of the under-represented sex will be favoured.
We promote Equity, Diversity and Inclusion, fostering an environment where each and every one of us is appreciated for who we are, regardless of our differences.
If you consider that you do not meet all the requirements, we encourage you to continue applying for the job offer. We value diversity of experiences and skills, and you could bring unique perspectives to our team.
Context And Mission
The Barcelona Supercomputing Center (BSC) is seeking a Machine Learning Engineer to join the Earth Sciences department within the framework of the AI Factory initiative.
The AI Factory is a major European project aimed at accelerating the adoption and development of artificial intelligence across industry sectors. It is deploying a comprehensive set of AI-focused services, including robust training, networking, and innovation support structures. Its mission is to foster the uptake and effective use of AI, particularly among SMEs and startups across participating countries, and to strengthen the European innovation ecosystem. These services are powered by the AI-specific partition of the MareNostrum5 supercomputer, one of the most advanced infrastructures in Europe, designed to meet evolving AI needs by harnessing the latest AI-oriented computing technologies.
Within the Earth Sciences department, the selected candidate will support the development and deployment of AI services related to climate change use and test cases, contributing to real-world applications and scientific advancements. The selected candidate will also coordinate the availability and integration of AI software, developed internally by consortium members or externally by third parties, on the MareNostrum5 AI partition, ensuring smooth support for the wider AI Factory user community.
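By way of illustration only (this is not from the posting), climate-related pipeline work of the kind described here often starts from gridded Earth-system data such as the NetCDF/Zarr datasets listed under Additional Knowledge below. The sketch assumes the xarray stack and uses a hypothetical store path and variable name.

```python
# Minimal sketch: load a (hypothetical) Zarr climate store with xarray and
# build a training array for an ML model. Paths and variable names are invented.
import numpy as np
import xarray as xr

# Open a Zarr store with 2 m temperature on a lat/lon grid.
ds = xr.open_zarr("/gpfs/projects/example/era5_t2m.zarr")

# Compute monthly anomalies relative to the climatological mean.
monthly = ds["t2m"].resample(time="1MS").mean()
climatology = monthly.groupby("time.month").mean("time")
anomalies = monthly.groupby("time.month") - climatology

# Flatten to (samples, features) for a downstream scikit-learn / PyTorch model.
X = anomalies.stack(sample=("time",), feature=("latitude", "longitude")).values
print(X.shape, np.isnan(X).mean())
```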
Key Duties
- Support the documentation and curation of AI software employed in the AI Factory
- Support the deployment and maintenance of AI software on MareNostrum5 AI partition
- Design, implement, and optimize machine learning pipelines for environmental-related applications.
- Collaborate with domain scientists and external users to develop AI solutions
- Support users from the AI Factory in accessing and utilizing AI tools and services.
- Participate in collaborative development within the AI Factory consortium.
- Participate in technical reporting and scientific publications contributing to the documentation of the AI Factory software, with opportunities to be involved in academic publications and project reporting.
- Education
- Bachelor's or Master’s in Computer Science, Machine Learning, Data Science, Environmental Sciences, or a related field.
- Essential Knowledge and Professional Experience
- Strong programming skills in Python, with experience in machine learning libraries such as PyTorch, Tensorflow, and Scikit-learn.
- Proven experience in developing and training machine learning models, particularly deep learning architectures.
- Strong background in handling, analyzing, and validating large-scale datasets.
- Experience working in a UNIX-based computational environment.
- Additional Knowledge and Professional Experience
- Familiarity with climate, weather, and Earth system datasets (NetCDF, Zarr).
- Experience in high-performance computing (HPC) and parallelized machine learning workflows.
- Proficiency in GPU-accelerated machine learning frameworks such as TensorFlow, RAPIDS, JAX, and/or distributed training using Dask.
- Understanding of climate and weather models.
- Competences
- Strong problem-solving and analytical skills, with the ability to optimize computational workflows.
- Ability to work independently while collaborating effectively in a research environment.
- Excellent communication skills, with a strong ability to document and present research findings.
- Proficiency in written and spoken English.
- The position will be located at BSC within the Earth Sciences Department
- We offer a full-time contract (37.5h/week), a good working environment, a highly stimulating setting with state-of-the-art infrastructure, flexible working hours, an extensive training plan, restaurant tickets, private health insurance, and support with relocation procedures
- Duration: Open-ended contract due to technical and scientific activities linked to the project and budget duration
- Holidays: 22 days of holidays + 6 personal days + 24th and 31st of December per our collective agreement
- Salary: we offer a competitive salary commensurate with the qualifications and experience of the candidate and according to the cost of living in Barcelona
- Starting date: As soon as possible
All applications must be submitted via the BSC website and contain:
- A full CV in English including contact details
- A cover/motivation letter with a statement of interest in English, clearly specifying for which specific area and topics the applicant wishes to be considered. Additionally, two references for further contacts must be included. Applications without this document will not be considered.
The selection will be carried out through a competitive examination system ("Concurso-Oposición"). The recruitment process consists of two phases:
- Curriculum Analysis: Evaluation of previous experience and/or scientific history, degree, training, and other professional information relevant to the position. - 40 points
- Interview phase: The highest-rated candidates at the curriculum level will be invited to the interview phase, conducted by the corresponding department and Human Resources. In this phase, technical competencies, knowledge, skills, and professional experience related to the position, as well as the required personal competencies, will be evaluated. - 60 points. A minimum of 30 points out of 60 must be obtained to be eligible for the position.
In accordance with OTM-R principles, a gender-balanced recruitment panel is formed for each vacancy at the beginning of the process. After reviewing the content of the applications, the panel will begin the interviews, with at least one technical and one administrative interview. At a minimum, a personality questionnaire as well as a technical exercise will be conducted during the process.
The panel will make a final decision, and all individuals who participated in the interview phase will receive feedback with details on the acceptance or rejection of their profile.
At BSC, we seek continuous improvement in our recruitment processes. For any suggestions or comments/complaints about our recruitment processes, please contact [email protected].
For more information, please follow this link.
Deadline
The vacancy will remain open until a suitable candidate has been hired. Applications will be regularly reviewed and potential candidates will be contacted.
OTM-R principles for selection processes
BSC-CNS is committed to the principles of the Code of Conduct for the Recruitment of Researchers of the European Commission and the Open, Transparent and Merit-based Recruitment principles (OTM-R). This is applied for any potential candidate in all our processes, for example by creating gender-balanced recruitment panels and recognizing career breaks etc.
BSC-CNS is an equal opportunity employer committed to diversity and inclusion. We are pleased to consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or any other basis protected by applicable state or local law.
For more information follow this link
Devops Azure/AWS
Grupo NS · Barcelona, ES · posted 28 Jan
Remote work · Azure · AWS · DevOps · Terraform
At Grupo NS we need to hire, for a project in hybrid mode in Viladecans (2 days on-site in Viladecans and 3 remote), a DevOps profile with at least 3 years of experience in Azure/AWS.
Requirements
DevOps engineer: infrastructure automation and orchestration C
Solid hands-on experience with Microsoft Azure services (virtual machines, networking, storage, Azure AD, app services, AKS, etc.).
Use Terraform to design and manage infrastructure as code (IaC) on AWS and Azure.
Maintain the consistency and reliability of the cloud infrastructure.
General knowledge of Azure services, AWS modules, configuration, and architecture.
Create and manage landing zones, virtual networks, firewalls, private endpoints, and hybrid connectivity (VPN/ExpressRoute).
Development of CI/CD pipelines
At least 3 years of experience as a DevOps Engineer with AWS/Azure
Experience in orchestration C
CI/CD
Platform Engineer / DevOps (Python/Go Linux)
CDmon · Barcelona, ES
API · Python · Linux · Cloud Computing · REST · AWS · Bash · DevOps
### Your Mission:
Join the _architects_ team. At cdmon.com we build the tools and infrastructure that automate our cloud. We are looking for someone with the soul of a sysadmin who would rather write code to automate tasks than do them by hand.
### What you will do (and learn):
* Help build internal APIs (Python or Go) to manage our clusters (OpenStack, Proxmox); see the sketch after this list.
* Automate the creation of networks and virtual servers.
* Learn and apply Infrastructure as Code (IaC) concepts.
* Work at the intersection of hardware and software.
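A rough, hypothetical sketch of the kind of automation meant above, using the openstacksdk Python client; the cloud name, image, flavor, and network names are invented placeholders.

```python
# Hypothetical sketch: create a small virtual server through the OpenStack SDK.
# The cloud name, image, flavor, and network are placeholders.
import openstack

# Reads credentials for the named cloud from clouds.yaml.
conn = openstack.connect(cloud="cdmon-lab")

image = conn.compute.find_image("ubuntu-24.04")
flavor = conn.compute.find_flavor("small")
network = conn.network.find_network("internal")

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the server reaches ACTIVE, then print its ID and status.
server = conn.compute.wait_for_server(server)
print(server.id, server.status)
```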
## What we are looking for in you:
- A passion for Linux (you know what a kernel, a process, and permissions are).
- You can write scripts to automate things (Python, Bash, or Go).
- You understand basic networking concepts (IPs, subnets, what a REST API is).
- **Attitude:** You hate doing the same task twice by hand; you would rather spend 3 hours writing a script that does it in 1 second forever. You are curious about how the "Cloud" really works (not just the AWS console).
### Why us?
- You will work with virtualization and hardware technologies that at other companies are reserved for seniors.
- Career plan for junior positions with a competitive salary + profit-based bonus.
- Flexible hours.
- Remote work: **1 day in the office (for people living in the area), 4 remote**.
- Ongoing training: budget for certifications, courses, and conferences.
- Private health insurance + meal vouchers.
- A technical, collaborative, innovation-oriented environment.
- Access to state-of-the-art cloud infrastructure.
- Real opportunities for professional growth.
**Compensation (gross annual salary range):**
* **Junior:** €30,000 - €40,000
* **Senior:** €40,000 - €55,000
Senior Site Reliability Engineer
Trust In SODA · Barcelona, ES · posted 25 Jan
Node.js · Python · Cloud Computing · Kubernetes · Ruby · AWS · Terraform · Docker
Senior Site Reliability Engineer | Spain (Hybrid)
An opportunity to join a high-growth, late-stage technology company operating at significant scale. The business supports thousands of customers globally and is investing heavily in reliability, platform maturity and engineering quality as it continues to grow.
This is a true senior SRE role for someone who has been through scaling systems and teams, and is ready to lead reliability initiatives that span services, squads and stakeholders.
The role
You will operate at the intersection of software engineering, cloud infrastructure and reliability engineering. This role goes beyond execution and delivery. You will be expected to design, plan and lead initiatives, shaping how reliability, observability and incident management are implemented across the organisation.
You will partner closely with engineering teams, influence architectural decisions early, and help define how reliability is measured and improved as the platform scales.
Your soft skills
We are looking for engineers who have:
• Led initiatives across multiple teams or domains rather than working solely within one squad
• Designed and evolved systems with clear reasoning around trade-offs, failure modes and long-term impact
• Strong communication skills and confidence presenting technical decisions in larger group settings
• Experience in scale-ups or mid-sized tech environments where structure is still evolving and ownership is high
Technical background
You bring strong depth across:
• Cloud infrastructure, ideally AWS, with solid networking and service level understanding
• Containers and orchestration such as Kubernetes, ECS or similar
• Infrastructure as Code using tools like Terraform, Pulumi or CloudFormation
• Observability and monitoring including metrics, logging and alerting using tools such as Prometheus, Grafana, DataDog or CloudWatch
• CI/CD and automation practices with a focus on reliability and safety
You also have a strong software engineering background, with experience building and operating systems in languages such as Python, Node.js, Ruby or similar, not just scripting.
Reliability mindset
You are comfortable with:
• Defining and using SLOs and SLIs to make reliability measurable (see the sketch after this list)
• Using error budgets to guide engineering priorities
• Leading or participating in incident response and post-incident improvement
• Improving production readiness, on-call quality and reducing recurring failure patterns
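For concreteness (not from the posting), a minimal error-budget calculation under an assumed 99.9% availability SLO over a 30-day window:

```python
# Minimal error-budget arithmetic for an assumed 99.9% availability SLO
# over a 30-day window; the numbers are illustrative only.
SLO = 0.999
WINDOW_MINUTES = 30 * 24 * 60          # 43,200 minutes in 30 days

error_budget_minutes = WINDOW_MINUTES * (1 - SLO)   # ~43.2 minutes of allowed downtime

observed_downtime_minutes = 12.0        # e.g. taken from incident records
budget_remaining = error_budget_minutes - observed_downtime_minutes
burn_rate = observed_downtime_minutes / error_budget_minutes

print(f"Budget: {error_budget_minutes:.1f} min, remaining: {budget_remaining:.1f} min, "
      f"burned: {burn_rate:.0%}")
```

The point of tracking a budget like this is that remaining budget, not gut feeling, guides whether the team prioritizes new features or reliability work.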
Why this role stands out
• High impact senior role with real ownership and influence
• Opportunity to shape reliability practices in a growing engineering organisation
• Strong engineering culture with an emphasis on autonomy and trust
• Competitive salary, equity and a flexible hybrid working model
If you are a senior engineer who enjoys designing systems, leading initiatives and improving reliability at scale, this role offers the scope and autonomy to make a real impact.
IoT Data Engineer
Canonical · Granada, La, ES · posted 25 Jan
Remote work · Python · Linux · Cloud Computing · REST · SaaS · IoT · Modbus · RabbitMQ · Go · Kafka
Canonical is a leading provider of open source software and operating systems to the global enterprise and technology markets. Our platform, Ubuntu, is very widely used in breakthrough enterprise initiatives such as public cloud, data science, AI, engineering innovation, and IoT. Our customers include the world's leading public cloud and silicon providers, and industry leaders in many sectors. The company is a pioneer of global distributed collaboration, with 1200+ colleagues in 75+ countries and very few office-based roles. Teams meet two to four times yearly in person, in interesting locations around the world, to align on strategy and execution.
The company is founder-led, profitable, and growing.
This is an exciting opportunity for a software engineer passionate about open source software, Linux, and Web Services at scale. Come build a rewarding, meaningful career working with the best and brightest people in technology at Canonical, a growing pre-IPO international software company.
Canonical's engineering team is at the forefront of the IoT revolution and aims to strengthen this position by developing cutting-edge telemetry and connectivity solutions. By integrating reliable, secure, and robust data streaming capabilities into the Snappy ecosystem, we are setting new standards in the industry for ease of development, implementation, management and security.
We are seeking talented individuals to help us enhance our global SaaS services, providing customers with the essential data services needed to build the next generation of IoT devices effortlessly. Our commitment to data governance, ownership, and confidentiality is unparalleled, ensuring our customers can innovate with confidence on top of the globally trusted Ubuntu platform.
Location: This role will be based remotely in the EMEA region.
What your day will look like
- Work remotely with a globally distributed team, driving technical excellence and fostering innovation across diverse engineering environments.
- Design and architect high-performance service APIs to power streaming data services, ensuring seamless integration across teams and products using Python and Golang.
- Develop robust governance, auditing, and management systems within our advanced telemetry platform, ensuring security, compliance, and operational integrity.
- Partner with our infrastructure team to build scalable cloud-based SaaS solutions while also delivering containerized on-prem deployments for enterprise customers.
- Lead the design, implementation, and optimization of new features—taking projects from spec to production, ensuring operational excellence at scale.
- Provide technical oversight, review code and designs, and set best practices to maintain engineering excellence.
- Engage in high-level technical discussions, collaborating on optimal solutions with engineers, product teams, and stakeholders.
- Work remotely with occasional global travel (2-4 weeks per year) for internal and external events, fostering deeper collaboration and knowledge-sharing.
- You design and architect scalable backend services, messaging/data pipelines, and REST APIs using Golang or Python, guiding best practices, technical direction, and system scalability.
- You possess deep expertise in cybersecurity principles and proactively address the complex challenges of IoT environments—secure connectivity, data streaming, governance, and compliance.
- You bring proven expertise in designing and optimizing systems using:
- IAM models, encryption, access control, and compliance frameworks (GDPR, HIPAA) to ensure secure and compliant data handling.
- Ability to design decentralized data ownership models, ensuring interoperability and governance across domains.
- Designing high-throughput, low-latency systems for IoT data processing.
- Data streaming technologies (MQTT, Kafka, RabbitMQ); see the sketch after this list
- Observability tools (OpenTelemetry)
- Industrial/engineering data exchange protocols (OPC-UA, ModBus)
- You thrive in cross-functional environments, partnering with product teams, engineers, and stakeholders to drive high-impact technical solutions that align with business objectives.
- You mentor junior engineers, foster technical excellence, and contribute to a culture of innovation, continuous improvement, and knowledge sharing.
- You embrace challenges with an open mind, continuously seeking opportunities to learn, improve, and innovate in a rapidly evolving IoT landscape.
- You are familiar with Ubuntu as a development and deployment platform.
- You hold a Bachelor's degree or equivalent in Computer Science, STEM, or a related field.
- Willingness to travel up to 4 times a year for internal events.
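As a loose sketch of the telemetry-ingestion side referenced in the list above (not Canonical's actual stack), the example below subscribes to device telemetry with the paho-mqtt client; the broker address and topic are invented placeholders.

```python
# Hypothetical sketch: subscribe to device telemetry over MQTT.
# Broker address and topic are placeholders, not real infrastructure.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.internal"
TOPIC = "devices/+/telemetry"

def on_connect(client, userdata, flags, reason_code, properties):
    print(f"Connected with result {reason_code}")
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    # In a real service this would be validated, authorized, and pushed
    # into a stream (e.g. Kafka) rather than printed.
    print(msg.topic, payload)

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
```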
We consider geographical location, experience, and performance in shaping compensation worldwide. We revisit compensation annually (and more often for graduates and associates) to ensure we recognize outstanding performance. In addition to base pay, we offer a performance-driven annual bonus or commission. We provide all team members with additional benefits which reflect our values and ideals. We balance our programs to meet local needs and ensure fairness globally.
- Distributed work environment with twice-yearly team sprints in person
- Personal learning and development budget of USD 2,000 per year
- Annual compensation review
- Recognition rewards
- Annual holiday leave
- Maternity and paternity leave
- Team Member Assistance Program & Wellness Platform
- Opportunity to travel to new locations to meet colleagues
- Priority Pass and travel upgrades for long-haul company events
Canonical is a pioneering tech firm at the forefront of the global move to open source. As the company that publishes Ubuntu, one of the most important open-source projects and the platform for AI, IoT, and the cloud, we are changing the world of software. We recruit on a global basis and set a very high standard for people joining the company. We expect excellence; in order to succeed, we need to be the best at what we do. Most colleagues at Canonical have worked from home since our inception in 2004. Working here is a step into the future and will challenge you to think differently, work smarter, learn new skills, and raise your game.
Canonical is an equal opportunity employer
We are proud to foster a workplace free from discrimination. Diversity of experience, perspectives, and background create a better work environment and better products. Whatever your identity, we will give your application fair consideration.
DevOps Engineer
Seidorcons · Barcelona, ES · posted 23 Jan
Jenkins · Cloud Computing · Git · DevOps
ARE YOU UP FOR THE CHALLENGE?
As a DevOps Engineer you will join a Cloud Operations / DevOps team responsible for providing reactive application support and coordinating with infrastructure and development teams. You will help provide operational support and reactive monitoring for applications hosted on IBM Cloud on OpenShift/Kubernetes clusters, handling queries, requests, and incidents as they arise, or supporting third-party teams making infrastructure changes. Application support may include assisting with deployments managed via Git and Argo CD and builds with Jenkins or Tekton.
WHAT WILL YOU DO DAY TO DAY?
- Review IBM Cloud logs and metrics to detect incidents.
- Monitor pods, services, and resources in OpenShift/Kubernetes (see the sketch after this list).
- Supervise traffic and errors in Istio.
- Use third-party tools to detect and report issues.
- Provide occasional support for build pipelines (Jenkins/Tekton) and deployments (Git, Argo CD) when required.
- Know the ecosystem and its communication layers in order to pinpoint points of failure.
- Escalate incidents to the responsible teams.
- Document incidents and produce operational reports.
- Migrate applications within the existing clusters.
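A rough, hypothetical sketch of the kind of reactive pod monitoring described above, using the official Kubernetes Python client (the namespace name is a placeholder):

```python
# Hypothetical sketch: list pods in a namespace and report any that are not Running.
# The namespace is a placeholder; on OpenShift the same API applies to projects.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

namespace = "demo-apps"
for pod in v1.list_namespaced_pod(namespace).items:
    phase = pod.status.phase
    if phase != "Running":
        print(f"ALERT: {pod.metadata.name} is {phase}")
        # Show the last few log lines to help triage before escalating.
        try:
            logs = v1.read_namespaced_pod_log(
                name=pod.metadata.name, namespace=namespace, tail_lines=20
            )
            print(logs)
        except client.exceptions.ApiException as exc:
            print(f"could not fetch logs: {exc.reason}")
```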
WHAT DO WE EXPECT FROM YOU?
- A higher vocational training qualification or an engineering degree in computer science or a related field.
- Experience with Kubernetes/OpenShift.
- Knowledge of logging tools.
- Familiarity with Istio at an operational level.
- Familiarity with networking and communications environments.
- Experience with observability and monitoring tools (Grafana, Elastic, etc.).
- Knowledge of Jenkins, Tekton, Git, and Argo CD for support purposes.