Data Engineer specialized in AI
NEORIS · Madrid, ES
Azure · Cloud Computing · AWS
NEORIS, now part of EPAM, is a digital accelerator that helps companies step into the future, with more than 20 years of experience as Digital Partners of some of the world's leading companies. We are more than 4,000 professionals across 11 countries, with a multicultural, startup-style culture where we foster innovation, continuous learning, and high-impact solutions for our clients.
We are looking for: Data Engineer specialized in AI
Main Responsibilities
- Design, implement, and maintain data products that integrate artificial intelligence models and capabilities.
- Ensure the quality, efficiency, and reliability of data flows in cloud environments (AWS or other platforms).
- Integrate AI models into business-oriented solutions, guaranteeing correct deployment, scalability, and operation.
- Automate ingestion, transformation, operation, and monitoring processes for pipelines and models.
- Collaborate with multidisciplinary teams (Data Science, Engineering, Architecture) to ensure data products run optimally.
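As a rough, hypothetical sketch of the ingest, transform, and monitoring automation these responsibilities describe (all names invented; a real deployment would run under an orchestrator such as Airflow against cloud storage):

```python
# Minimal ingest -> transform -> monitor pipeline sketch (illustrative only).

def ingest(raw_rows: list[dict]) -> list[dict]:
    """Keep only rows carrying the fields downstream stages need."""
    return [r for r in raw_rows if "id" in r and "text" in r]

def transform(rows: list[dict]) -> list[dict]:
    """Normalize text before it is handed to an AI model."""
    return [{**r, "text": r["text"].strip().lower()} for r in rows]

def monitor(stage: str, rows: list[dict], log: list[str]) -> list[dict]:
    """Record per-stage row counts so failures are visible."""
    log.append(f"{stage}: {len(rows)} rows")
    return rows

def run_pipeline(raw_rows: list[dict], log: list[str]) -> list[dict]:
    rows = monitor("ingest", ingest(raw_rows), log)
    return monitor("transform", transform(rows), log)

log: list[str] = []
out = run_pipeline([{"id": 1, "text": "  Hello "}, {"text": "no id"}], log)
print(out)  # [{'id': 1, 'text': 'hello'}]
```

Each stage stays a pure function, which is what makes code like this easy to schedule, retry, and monitor from an orchestrator.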
Must-have requirements:
- Solid experience in data engineering and in building production data pipelines.
- Experience working with AI-based models or solutions in production environments.
- Cloud knowledge (AWS, GCP, or Azure), especially deploying and operating data-driven solutions.
- Experience automating pipelines and serverless or batch services.
- Intermediate-to-advanced English to collaborate with global teams.
- Experience in AI/MLOps Engineering: Airflow, AWS Lambda, AWS Batch, S3, Athena.
- Experience with generative AI platforms or model services: Azure OpenAI, Gemini on GCP, Amazon Bedrock.
- Knowledge of monitoring and orchestration tools and observability best practices.
- Experience industrializing and operating ML/AI models.
- Degree in Engineering, Computer Science, Data Science, or related fields.
- Permanent contract with a competitive salary.
- Flexible model with remote work options.
- Personalized career plan and continuous training (certifications, English, etc.).
- Participation in stable projects with a strong technical component.
- Flexible hours and a focus on work-life balance.
- Social benefits tailored to your needs.
Software Dev Engineer Internship - AI/ML
Amazon · Madrid, ES
C# · Java · Python · Agile · TSQL · NoSQL · C++ · Cloud Computing · Microservices · TypeScript · AWS · Office
Description
Do you want to solve real customer problems through innovative technology? Do you enjoy working on scalable services in a collaborative team environment? Do you want to see your code directly impact millions of customers worldwide?
At Amazon, we hire the best minds in technology to innovate and build on behalf of our customers. Customer obsession is part of our company DNA, which has made us one of the world's most beloved brands.
Our Software Development Engineer (SDE) interns use modern technology to solve complex problems while seeing their work's impact first-hand. The challenges SDE interns solve at Amazon are meaningful and influence millions of customers, sellers, and products globally. We seek individuals passionate about creating new products, features, and services while managing ambiguity in an environment where development cycles are measured in weeks, not years.
At Amazon, we believe in ownership at every level. As an SDE intern, you'll own the entire lifecycle of your code - from design through deployment and ongoing operations. This ownership mindset, combined with our commitment to operational excellence, ensures we deliver the highest quality solutions for our customers.
We're looking for curious minds who think big and want to define tomorrow's technology. At Amazon, you'll grow into the high-impact engineer you know you can be, supported by a culture of learning and mentorship. Every day brings exciting new challenges and opportunities for personal growth.
Amazon internships across all seasons are full-time positions, and interns should expect to work in office, Monday-Friday, up to 40 hours per week typically between 8am-5pm. Specific team norms around working hours will be communicated by your manager. Interns should not have conflicts such as classes or other employment during the Amazon work-day. Applicants should have a minimum of one quarter/semester/trimester remaining in their studies after their internship concludes.
Key job responsibilities
- Collaborate and communicate effectively with experienced cross-disciplinary Amazonians to design, build, and operate innovative products and services that delight our customers, while participating in technical discussions to drive solutions forward.
- Design and develop scalable solutions using cloud-native architectures and microservices in a large distributed computing environment.
- Participate in code reviews and contribute to technical documentation.
- Build and maintain resilient distributed systems that are scalable, fault-tolerant, and cost-effective.
- Leverage and contribute to the development of GenAI and AI-powered tools to enhance development productivity while staying current with emerging technologies.
- Write clean, maintainable code following best practices and design patterns.
- Work in an agile environment practicing CI/CD principles while participating in operational responsibilities including on-call duties.
- Demonstrate operational excellence through monitoring, troubleshooting, and resolving production issues.
As an intern, you will be matched to a manager and a mentor and will have the opportunity to influence the evolution of Amazon technology and lead critical projects early in your career.
In addition to working on an impactful project, you will have the opportunity to engage with Amazonians for both personal and professional development, expand your network, and participate in activities with other interns throughout your internship. No matter the location of your internship, we give you the tools to own your project and learn in a real-world setting.
Basic Qualifications
- Must be 18 years of age or older
- Education Requirements (must meet one):
  - Currently enrolled in a Bachelor's degree or above in Computer Science, Computer Engineering, Data Science, Information Systems, or related STEM fields
  - Completed Bachelor's or Graduate degree in specified fields
- Expected graduation between October 2026 - September 2029
- Demonstrated experience with at least one general-purpose programming language such as Java, Python, C++, C#, Go, Rust, or TypeScript
- Demonstrated experience with one or more of the following:
  - Data structures implementation
  - Basic algorithm development
  - Object-oriented design principles
- Previous technical internship(s) or demonstrated project experience
- Experience with one or more of the following:
  - AI tools for development productivity
  - Cloud platforms (preferably AWS)
  - Database systems (SQL and NoSQL)
  - Contributing to open-source projects
  - Version control systems
  - Debugging and troubleshooting complex systems
- Strong problem-solving and analytical skills
- Excellent written and verbal communication skills
- Demonstrated ability to learn and adapt to new technologies quickly
- Basic understanding of software development lifecycle (SDLC)
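A tiny, generic illustration of the fundamentals those bullets name — a data structure (a stack) used with object-oriented design by a basic algorithm (balanced-bracket checking):

```python
# Stack (data structure) driving a balanced-brackets check (basic algorithm).
class Stack:
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

    def empty(self):
        return not self._items

def balanced(s: str) -> bool:
    """Return True if every bracket in s is closed in the right order."""
    pairs = {")": "(", "]": "[", "}": "{"}
    st = Stack()
    for ch in s:
        if ch in "([{":
            st.push(ch)
        elif ch in pairs:
            if st.empty() or st.pop() != pairs[ch]:
                return False
    return st.empty()

print(balanced("([]{})"))  # True
print(balanced("(]"))      # False
```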
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - Amazon Spain Services, S.L.U.
Job ID: A3160089
Backend Intern
DashBook · Barcelona, ES
Remote · PHP · MySQL · Docker · Cloud Computing · AWS · Symfony · MariaDB
Your mission
As a Backend Intern, you’ll join our backend team and work under the guidance of the Lead Backend Developer and CTO. You’ll get hands-on experience with real projects and learn how to ship reliable, scalable features.
You Will
- Contribute to the development of our backend services and APIs
- Help maintain and improve both the public platform and admin tools
- Write clean, maintainable code (and learn how to test it)
- Collaborate with the Frontend and Infra teams on cross-functional features
- Learn about software architecture, dev workflows, and cloud infrastructure
We’re Currently Using
- Symfony (PHP)
- MariaDB (MySQL) on AWS RDS
- S3, Docker, GitHub Actions, AWS EC2

We don't expect you to know everything, but you should be eager to learn and comfortable working in a technical environment.
We're Looking For Someone Who
- Is studying Computer Science
- Has some experience (even academic) with Symfony / PHP or similar backend tech
- Understand the basics of APIs, databases, and server-side logic
- Are curious, organized, and love solving technical problems
- Feels comfortable communicating in English in an international team (bonus: you also speak Spanish and/or French)
- Wants to learn fast and contribute to real production projects
What We Offer
- Hands-on experience in a real-world product team
- A remote-friendly and flexible working culture
- An international environment based in Barcelona
- Flexible hours and async-friendly collaboration
- Mentorship and regular feedback to help you grow
- The opportunity to contribute to tools used by real creators
Step Up! Program - Data Internship
Aegon · Madrid, ES
TSQL · Azure · Cloud Computing · AWS · Big Data
Hi! We're Aegon, and we think that once you get to know our Step Up! Program, you'll fall for it. 🙏 You're probably looking for your first work experience, and an insurance company may not sound like the most appealing option, we know… 👾 But appearances can be deceiving: let's just say that last week our fax machine and last-century typewriter broke down 😂, so we've been busy modernizing and have gone all out. Two quick examples: we have completely renovated our offices to adapt them to a hybrid working model (working two days a week from the office to share time with your team, and the rest from wherever you like 🏝️). And, as we have evolved our collaborative culture, the Forbes List has recognized us as one of the 100 Best Companies to Work For in Spain 👑.

If you've read this far, you're probably at least a little interested in us, so we're interested in you too! You'll be part of the Step Up! Program intern community. During your onboarding, you'll get to know Aegon and the company's different areas, you'll be assigned a Buddy (a current employee who helps you while you're new), and of course you'll have the chance to meet your team and the CEO. Still no match? Keep reading to see if you'd like to get a little closer to the Aegon team:
- 💰 We value your wanting to learn while helping us in our day-to-day work, which is why our internship is paid
- ⚖️ We share very clear values: here even the directors roll up their sleeves and bring you coffee if needed. We're all equals
- 💪 You'll take an active part in real projects and your opinion will count as much as anyone else's. You'll also have the chance to learn alongside industry professionals, with a training plan to support your first steps
- 🤸♂️ We embrace flexibility through our hybrid work model: you can work remotely at least three days a week and share time in offices adapted to agile methodology.
- ✨ If streaming platforms already bore you, don't worry: we've built our own learning platform to support your development plan
- 🤟 We have no dress code; come to our offices as you are
So what will you do with us?
- Data extraction and transformation, using SQL to pull data according to project requirements.
- Development of data integration solutions, relying on ETL tools to automate and optimize information flows.
- Identifying and correcting inconsistent data, followed by exploratory analysis to find patterns and trends.
- Building interactive visualizations and dashboards to communicate results.
- Teamwork with other professionals to develop effective data analysis solutions.
- Collaboration in cloud environments (AWS and Azure) to deploy or support data processes and solutions.
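A minimal sketch of the extract-and-clean tasks above, using Python's built-in sqlite3 as a stand-in for the real database (table name and values are invented for illustration):

```python
# Extract, clean, and summarize rows from an in-memory SQLite table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policies (id INTEGER, premium REAL)")
conn.executemany("INSERT INTO policies VALUES (?, ?)",
                 [(1, 120.0), (2, -5.0), (3, None), (4, 80.0)])

# Extraction: pull rows per the (hypothetical) project requirement.
rows = conn.execute("SELECT id, premium FROM policies").fetchall()

# Cleaning: drop inconsistent records (missing or negative premiums).
clean = [(i, p) for i, p in rows if p is not None and p >= 0]

# Simple exploratory step: average premium over the clean data.
avg = sum(p for _, p in clean) / len(clean)
print(clean)  # [(1, 120.0), (4, 80.0)]
print(avg)    # 100.0
```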
Data Scientist Gen AI
Capgemini · Madrid, ES · 27 Feb
Python · Azure · Docker · Cloud Computing · Kubernetes · SaaS · AWS · R · Power BI · Spark · Machine Learning
Job Description
Hello! We are CAPGEMINI, a TOP EMPLOYER company with 360,000 PEOPLE around the world united by the same passion: leading the evolution towards a sustainable and inclusive future through TECHNOLOGY.
In our global Insights & Data business unit we are growing and we are looking for professionals with Generative AI profile.
What will be your mission?
Your mission will be to lead, drive, and technically develop initiatives in the field of generative AI and ML autonomously. In addition:
- You will design, develop, and deploy solutions using Machine Learning, Deep Learning, and Computer Vision models.
- You will develop projects with NLP and LLMs.
- You will manage and manipulate data sets: Definition and identification of learning variables, generation of new sets, transformation, and cleaning.
- You will define End-to-End Machine Learning processes.
- You will develop training processes using different frameworks (TF, PyTorch, Scikit-learn, etc).
- You will develop inference processes in On-premise or Cloud environments.
- You will maintain existing Machine Learning models.
- You will lead project technical teams.
- You will mentor Junior or trainee Data Scientists.
- You will participate in technical proposals at national and international levels.
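As a toy, framework-free illustration of an end-to-end training process like those described above: fitting y = w·x with gradient descent on made-up data.

```python
# Gradient descent on a one-parameter linear model (illustrative only).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # invented samples of y = 2x

w = 0.0    # model parameter
lr = 0.05  # learning rate

for _ in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # 2.0
```

A real project would delegate this loop to TensorFlow, PyTorch, or Scikit-learn, but the shape — data, loss, gradient, update — is the same.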
What are we looking for?
To feel comfortable in the position, you need a degree in Computer Science, Mathematics, Physics, or a similar engineering field, and to handle the following well:
- Experience developing projects in the field of generative AI, working with NLP and LLMs.
- Proven experience of more than 3 years in projects applying ML and DL techniques in productive environments.
- Experience in defining, creating, deploying, and maintaining ML and DL models in productive environments.
- Knowledge of Python and/or R in the field of data science.
- Knowledge of Databricks.
- Advanced knowledge of Machine Learning frameworks (TF, PyTorch, Scikit-learn, etc).
- Advanced knowledge of cloud analytics services on AWS, GCP, or Azure.
- Ability to work effectively in a team.
- Proactivity in building solutions and applying new functionalities related to the technology in use.
- Fluent English and Spanish
- PhD or Master's in computer science, data science, artificial intelligence, mathematics, or statistics.
- Advanced knowledge of Apache Spark in the field of data science (PySpark).
- Knowledge of Docker and/or Kubernetes.
- Knowledge of data visualization tools (PowerBI or similar).
- Certifications in AWS, GCP, or Azure (or other equivalent cloud platforms for data processing).
- Experience developing and using SaaS applications.
- Knowledge and application of data science in different sectors and industries: banking, insurance, telco, retail, etc.
You will belong to a global department and will be able to collaborate with engineers from different countries.
CLOUD projects (AWS, Azure, GCP) and/or enterprise projects.
Open door to international projects without needing to go abroad, as all projects are done from Spain.
Specialization tribes by technology and camaraderie in communities.
Descripción breve
We will evaluate all applications. At Capgemini, we have a wide range of training, classroom, online, certifications, etc. Even if you don't have 100% of the required skills, we'd love to meet you!
Do you know what we can offer you?
- PERMANENT CONTRACT.
- Continuous TRAINING and certifications with official partners.
- Remote work options: national (80%) and international (FlexAbroad, 45 days).
- Access to offices with COLLABORATIVE ENVIRONMENTS in multifunctional spaces.
We are DIVERSE in age, gender, ancestry, family... we have been certified in Diversity and as an ETHICAL company for more than 11 years in a row!
We appreciate the FLEXIBILITY, the CONCILIATION and the fiscal and social BENEFITS to combine our personal and professional life. We have a very complete CATALOGUE OF MEASURES for development and conciliation (family allowances, insurance, restaurant and nursery tickets, additional holidays for you and your family...).
Access to the FLEXIBLE RETRIBUTION system.
DISCOUNTS and promotions through our Capgemini Club (sports, travel,...).
Collaboration is key in developing new ideas, new solutions, and professional and personal growth, which is why the team and good atmosphere are vital for us. We work and promote our COMMUNITIES within the company.
Today you have learned more about us and our growing project. Tomorrow other projects will come because that’s how we work, generating solutions as needed.
Job Description - Grade Specific
The role combines advanced technical expertise in data science with consulting skills to provide strategic guidance and solutions to clients.
AI/BI Analyst Programmer
Incoming Domain · Madrid, ES · 27 Feb
Remote · React · API · Java · Python · CSS · TSQL · HTML · Azure · NoSQL · Maven · Linux · Docker · Cloud Computing · Kubernetes · Git · REST · Oracle · Jira · OpenShift · AWS · PowerShell · Sass · Bash · DevOps · JUnit · MVC · Spring · Eclipse · Couchbase
REMOTE WORK for developers with more than three years of experience.
* DEVOPS CYBERSECURITY:
1.- BACKEND DEVELOPMENT: Cybersecurity specialist in the Quantum Cryptography area
2.- PROJECT GOVERNANCE - TECHNICAL MANAGER
- Cybersecurity in the Quantum Cryptography area
- Responsible for the QTO service
- Expert in metrics and continuous improvement
3.- BACKEND DEVELOPMENT: Cybersecurity specialist
- Tracking and collecting data for service management and measurement.
* DATA BI: BI Analyst
- Gathering and analyzing business requirements
- Designing solutions on relational databases (Oracle, Databricks, StarRocks)
- Developing ETL processes for data loading (Pentaho, Kettle, PL/SQL)
- Data modeling in BI tools: defining dimensions, facts, and hierarchies
- Designing and developing KPIs, reports, and interactive dashboards
- Advanced experience with SQL and PL/SQL.
* DATA / DATABASE:
- Strong knowledge of non-relational databases.
- Specifically Couchbase, including banking data modeling with Power Designer.
- Experience with Control-M and DataX.
* DATA SCIENTIST:
- Experts required in any of the AI verticals
- Experience with Vertex AI is valued.
- Extract and analyze data from the company's databases to drive optimization and improvement of Second Digital Wave product development.
- Evaluate the effectiveness and accuracy of new data sources and apply data collection techniques.
- Using the available data and algorithms or RAIP components, develop custom models to address existing AI initiatives.
- Use predictive modeling to improve and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
- Design A/B testing frameworks and always define a metric to evaluate models.
- Develop processes and tools to analyze model performance and data accuracy.
- Strong problem-solving skills with an emphasis on product development.
- Experience with Python.
* CLOUD ENGINEER / AMAZON WEB SERVICES:
- Development and support of a Gen AI architecture for a banking client.
- Prior experience in IT support, DevOps, or systems administration in cloud environments.
- Knowledge of AWS (IAM, EC2, S3, Lambda, CloudWatch, etc.)
- Familiarity with Azure and GCP (nice to have)
- Working with SQL and NoSQL databases
- Experience with cloud monitoring and logging tools
- Ability to run and debug scripts in Python, Bash, or PowerShell
- Familiarity with REST APIs and SaaS integration tools.
* MIGRATION: COBOL, CICS, DB2, JCL / Micro Focus Enterprise Developer / Server; knowledge of Eclipse, Linux (bash scripting), Python
* CORE BANKING: general banking knowledge
1.- Requirements, functional, and user acceptance testing analysts
2.- Regulatory core banking analysts (financial markets and payment methods) (Business Analysts with extensive CIRBE knowledge)
* HOST: maintenance analyst programmers: COBOL, CICS, DB2, JCL, AS400
DWH / ETL: Informatica, PL/SQL
Frontend: HTML5, HTML, CSS3, Appverse, QWT, JavaScript, React
Backend (required): APIs, Spring MVC, API, Maven, Git, Jira, Docker, Kubernetes, OpenShift, JUnit, Nexus, SonarQube, Spring Data, Swagger, Agile
.NET Architect
Michael Page · Madrid, ES · 27 Feb
Remote · .Net · C# · Node.js · Python · Azure · Cloud Computing · AWS
- Possibility of 100% remote work
- End client
Where will you work?
Our client is a large organization in the Technology & Telecoms sector, recognized for its innovative approach to developing its own technology products and services for the hospitality industry.
Description
- Design and lead the architecture of advanced technology solutions.
- Supervise and coordinate technical development teams.
- Collaborate with other departments to ensure solutions are properly integrated.
- Evaluate and select the right technologies for each project.
- Guarantee the scalability and security of the proposed solutions.
- Document architectures and technical processes.
- Solve complex technical problems and provide advanced technical support.
- Take part in defining the company's technology strategy.
Who are we looking for (M/F/D)?
The selected candidate must meet the following requirements:
- Solid experience building distributed systems in .NET / C#.
- Strong architectural thinking spanning services, data, and messaging.
- Experience working in cloud environments (Azure, AWS, or GCP).
- Deep understanding of event-driven architectures and messaging semantics.
- Ability to work autonomously while collaborating with distributed teams.
- Practical database knowledge.
- Nice to have: hands-on experience with Node.js and/or Python in production systems.
- Advanced English, both written and spoken.
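One messaging semantic the requirements refer to, at-least-once delivery, can be sketched with an idempotent consumer (hypothetical event shape and names; Python is used here for brevity, with no real broker involved):

```python
# Idempotent, event-driven consumer under at-least-once delivery.
# A real broker may redeliver an event; deduplicating by event_id
# keeps the effect exactly-once.
processed_ids: set[str] = set()
balance = {"account-1": 0}

def handle(event: dict) -> bool:
    """Apply an event once, even if the broker redelivers it."""
    if event["event_id"] in processed_ids:  # duplicate delivery
        return False
    processed_ids.add(event["event_id"])
    balance[event["account"]] += event["amount"]
    return True

deposit = {"event_id": "e-42", "account": "account-1", "amount": 100}
handle(deposit)
handle(deposit)  # redelivered: ignored, balance unchanged
print(balance)   # {'account-1': 100}
```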
What are your benefits?
- Permanent contract.
- Competitive salary between €67,500 and €75,000 per year.
- Professional development opportunities in the Technology & Telecoms sector.
- A collaborative, innovation-focused work environment.
- Possibility of 100% remote work.
Cyber Defense Engineer - Evinova
AstraZeneca · Barcelona, ES · 27 Feb
API · Cloud Computing · Kubernetes · SaaS · AWS · Office
Role based in Barcelona - 3 days onsite office / 2 days at home
This role operates as the primary technical escalation point for all cyber threats identified by our Security Operations Center (SOC) and is responsible for validating, investigating, and directing responses to escalated security incidents. This role provides a unique blend of technical detection engineering with threat-informed cyber defense strategy ownership.
This position is ideal for technically skilled cybersecurity professionals who thrive in fast-paced global organizations and enjoy solving complex operational challenges with innovative approaches. In addition to supporting the Cyber Defense pillar, this role will have daily exposure across our entire cybersecurity function, working collaboratively to secure Evinova's Digital Health Suite.
This position will report directly to the Evinova Head of Cybersecurity, with a dotted line to the Head of Cybersecurity Engineering, and will have several peers to collaborate with, ensuring adequate leadership visibility and cross-functional exposure across adjacent cyber domains. If you are a cyber defense pro looking to gain cyber leadership experience, this is the perfect role for you.
Due to the business critical nature of this role, there may be times where after-hours support is needed to address cybersecurity incidents. Evinova cybersecurity is a globally distributed team with team members located in both the United States and Spain.
Key Responsibilities:
SIEM Platform Management (Splunk Focus)
- Oversee the work of our outsourced service provider who provides SIEM maintenance support
- Provide architectural and operational ownership of Splunk ES as the enterprise detection platform
- Design data ingestion strategies covering cloud telemetry, identities, SaaS services, and system audit logs
- Engineer compliant data models to normalize security telemetry and enable scalable detection use case development
- Build operational dashboards supporting SOC monitoring, incident tracking, regulatory reporting, and executive cyber risk metrics
- Optimize search performance, indexing strategies, and storage utilization to balance detection depth with cost efficiency
- Integrate third-party and native security tooling into Splunk via APIs, forwarders, and data pipeline engineering
Cloud Detection and Response Architectures (AWS-focused)
- Provide cyber defense telemetry requirements into security architecture reviews for new platforms, applications, and cloud services
- Engineer and operationalize detections leveraging native AWS telemetry sources such as CloudTrail, GuardDuty, Security Lake, VPC Flow Logs, CloudWatch, EKS logs, and others
- Develop detection use cases for IAM privilege escalation, federated identity abuse, cross-account compromise, API misuse, and serverless exploitation
- Monitor containerized and Kubernetes workloads for runtime threats, suspicious process execution, and anomalous network communication patterns
- Partner with Cloud Security peers to define cloud logging standards, retention requirements, and forensic readiness controls
Detection Engineering and Threat Analytics
- Architect, engineer, and operationalize advanced threat detections within Splunk Enterprise Security, including correlation searches, risk-based alerting frameworks, behavioral detections, and anomaly signals aligned to cloud computing threat scenarios
- Design detection logic mapped to the MITRE ATT&CK techniques, cloud threat kill chains, and identity compromise attack paths to ensure comprehensive adversary coverage
- Build security telemetry correlation across cloud control planes, SaaS platforms, and identity providers such as Microsoft Entra ID to detect multi-stage intrusion attempts
- Collaborate with our outsourced SOC to continuously tune log sources / detection content to reduce false positives, eliminate alert fatigue, and improve "signal-to-noise" ratios within the SOC escalation pipelines
- Utilize threat intelligence feeds to translate emerging adversary Tactics, Techniques, and Procedures (TTPs) into actionable detection use cases and SIEM content updates
- Establish detection lifecycle governance including use case design documentation, testing validation, and performance monitoring
- Develop "detection-as-code" pipelines leveraging version control and CI/CD processes to ensure repeatable and auditable deployment of correlation logic
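A minimal, hypothetical sketch of that "detection-as-code" idea: correlation rules kept as versioned data structures and validated in a CI step before deployment (the rule fields and SPL-like search string below are illustrative, not real Splunk ES content):

```python
# Detection rules as versioned data, validated before deployment.
DETECTIONS = [
    {
        "id": "iam-priv-esc-001",
        "title": "IAM privilege escalation via AttachUserPolicy",
        "mitre_technique": "T1098",  # Account Manipulation
        "search": 'eventName="AttachUserPolicy" policyArn="*Admin*"',
        "severity": "high",
    },
]

REQUIRED_FIELDS = {"id", "title", "mitre_technique", "search", "severity"}

def validate(detection: dict) -> list[str]:
    """Return a list of problems; empty means the rule can be deployed."""
    problems = [f"missing field: {f}"
                for f in REQUIRED_FIELDS - detection.keys()]
    if detection.get("severity") not in {"low", "medium", "high", "critical"}:
        problems.append("severity must be low/medium/high/critical")
    return problems

# CI step: fail the pipeline if any rule is malformed.
errors = {d["id"]: validate(d) for d in DETECTIONS}
print(errors)  # {'iam-priv-esc-001': []}
```

Keeping rules in version control like this is what makes deployments of correlation logic repeatable and auditable.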
Threat Detection, Analysis, and Response
- Serve as the Tier 2 / Tier 3 escalation path for all relevant security alerts and suspicious activity escalated by our SOC
- Conduct deep technical investigations spanning SIEM telemetry, adjacent platforms, cloud logs, identity activity, audit trails, and other forensic artifacts
- Perform threat actor behavior analysis to determine initial access vectors, persistence mechanisms, privilege escalation paths, and lateral movement patterns
- Lead threat hunting initiatives leveraging hypothesis-driven and intelligence-driven methodologies to proactively identify hidden threats
- Function as a Technical Lead / Incident Responder for confirmed cybersecurity incidents and directing containment actions that are proportionate with the incident severity
- Coordinate cross-functional response activities across Product Engineering / Platform Operations and Cybersecurity stakeholders
- Maintain the Cybersecurity Incident Response Playbooks and developing new playbooks for emerging incident types / technologies
- Produce formal investigation reports documenting incident timelines, impacted assets, regulatory exposure risk, and remediation recommendations
- Provide incident briefings summarizing incident severity, business impact, and containment posture to the Head of Cybersecurity, Head of Cybersecurity Engineering, and other relevant leadership stakeholders (including the Evinova Chief Technology Officer)
- Collaborate with Cybersecurity Assurance to document incident root causes, specifically focusing on control failures, detection gaps, and posture improvement actions
- Lead cyber crisis simulations and tabletop exercises with adjacent teams in Product Engineering and Platform Operations to ensure operational readiness
Minimum Qualifications:
- University degree in Cybersecurity, Information Security, Computer Science, Information Systems, or a related technical discipline.
- 6-8+ years of progressive experience in Cybersecurity Operations, Detection Engineering, Cybersecurity Incident Response, or Threat Intelligence functions within global enterprises
- Demonstrated hands-on engineering and operational experience administering and developing detection use cases in Splunk Enterprise Security, including correlation searches, notable event frameworks, risk-based alerting, and data model utilization
- Hands on security monitoring and threat detection experience across Amazon Web Services (AWS) environments
- Operational familiarity with cloud native attack vectors including IAM privilege escalation, credential misuse, token compromise, API abuse, and cross-account persistence mechanisms
- Familiarity with SOAR platforms and automation engineering supporting incident response orchestration and alert enrichment
- Demonstrated experience leading or coordinating incident response activities, including containment execution, stakeholder coordination, forensic triage, and post-incident lessons learned
- Proficiency in SIEM query languages (e.g., SPL, KQL) and log analysis methodologies across various log sources
- Working knowledge of the MITRE ATT&CK framework and its application to detection engineering and threat actor simulation
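The risk-based alerting approach named in the qualifications above can be illustrated with a small, self-contained sketch. This is an illustrative toy, not Splunk's actual implementation: each detection contributes a risk score to an entity (user or host), and a notable event fires only once an entity's accumulated score crosses a threshold. All scores, names, and the threshold are made up for the example.

```python
from collections import defaultdict

# Toy risk-based alerting: detections add risk to an entity's running
# total; an alert fires once when the total crosses the threshold.
# The threshold and scores below are illustrative, not real tuning values.
RISK_THRESHOLD = 100

def aggregate_risk(events):
    """events: iterable of (entity, risk_score) observations."""
    totals = defaultdict(int)
    alerts = []
    for entity, score in events:
        totals[entity] += score
        if totals[entity] >= RISK_THRESHOLD and entity not in alerts:
            alerts.append(entity)  # a real SIEM would raise a notable event here
    return dict(totals), alerts

observations = [
    ("alice", 30),  # e.g. anomalous login geo
    ("bob", 20),    # e.g. rare process execution
    ("alice", 50),  # e.g. IAM privilege escalation attempt
    ("alice", 40),  # e.g. token misuse
]
totals, alerts = aggregate_risk(observations)
print(totals)  # alice accumulates 120, bob stays at 20
print(alerts)  # only "alice" crosses the threshold
```

The point of the pattern is that no single low-severity detection alerts on its own; correlated low-confidence signals against the same entity do.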
Desired Qualifications:
- Professional certifications in Cybersecurity, Digital Forensics, Information Assurance or related technical field (e.g., CISSP, CCSP, Splunk Certified, GIAC)
- Proven experience operating as an escalation path within a Security Operations or Incident Response function, including leading technical investigations into advanced threats, account compromise, malware intrusions, and cloud security incidents
- Experience operating within hybrid SOC delivery models that include managed service providers or outsourced Tier 1 monitoring functions
- Deep engineering expertise within Splunk Enterprise Security, including detection-as-code pipelines, SIEM optimization, data onboarding, and search performance tuning
- Experience conducting proactive threat hunting operations
- Experience presenting incident findings and detection maturity metrics to security leadership, auditors, and other interested stakeholders
- Experience working within regulated environments such as Financial Services, Life Sciences / Pharmaceutical, and Healthcare
- Prior experience with the Microsoft security ecosystem (e.g., Purview, Sentinel, Defender) is a plus
Senior DevOps Engineer
27 feb.EPAM
Madrid, ES
Senior DevOps Engineer
EPAM · Madrid, ES
.Net C# Java Python Agile TSQL Azure Jenkins Docker Cloud Computing Kubernetes Git SQL Server Oracle AWS PowerShell Bash DevOps Terraform Office
Are you a forward-thinking professional with a strong background in DevOps and Backend Engineering and an interest in financial services? Join EPAM in Madrid as a Senior DevOps Engineer in the private banking sector and accelerate your career in financial services technology. We're looking for a team player with excellent communication skills, engineering mastery and a B2+/C1 English level for effective stakeholder interactions.
This is a hybrid role based in Madrid's city center, ideal for those eager to thrive in a dynamic environment and make a significant impact in private banking technology. Join EPAM and contribute to shaping the future of financial services in Spain!
Responsibilities
- Design, implement, and optimize CI/CD pipelines for continuous integration and delivery
- Ensure efficient build, test, and deployment workflows for multiple development teams
- Design, deploy, and manage applications using Kubernetes, Docker, Helm, and related technologies
- Manage containerized applications in a cloud-native environment (Azure, etc)
- Implement best practices for ingress, networking, and TLS termination
- Act as feature owners, coordinating with both internal and external stakeholders to fulfill the feature lifecycle
- Resolve bugs and work on stories within sprints, ensuring delivery of high-quality solutions
- Foster innovation, modernization, and the reduction of complexity
- Conduct Proofs of Concept (POC) to explore and pioneer new technologies for the bank
- Serve as pioneers within the bank for adopting and adapting to new systems and tools
- Embrace automation over manual processes to improve team productivity and efficiency
- Advocate for an "everything as code" approach
- Implement observability tools to monitor system performance and ensure uptime (Prometheus, Grafana, Dynatrace, etc)
- Set up logging, tracing, and alerting to proactively identify and resolve issues
- Use deployment tools such as Octopus Deploy, Jenkins, etc to manage releases
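The observability bullets above (monitoring, alerting, proactive issue detection) can be boiled down to a toy threshold rule in plain Python. This is a sketch in the spirit of Prometheus-style alerting rules, not any real tool's API; the 5% threshold is an illustrative assumption.

```python
# Toy alerting rule: fire when the error rate over a window exceeds
# a threshold. Names and the 0.05 threshold are illustrative only.

def error_rate(errors: int, total: int) -> float:
    """Fraction of failed requests in the window; 0.0 for an empty window."""
    return 0.0 if total == 0 else errors / total

def should_alert(errors: int, total: int, threshold: float = 0.05) -> bool:
    """Alert when more than `threshold` of requests in the window failed."""
    return error_rate(errors, total) > threshold

print(should_alert(3, 1000))   # 0.3% error rate -> False, no alert
print(should_alert(80, 1000))  # 8% error rate -> True, page someone
```

In a real stack this logic lives in the monitoring system's rule engine (e.g. a Prometheus alerting rule evaluated over scraped metrics) rather than in application code.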
Requirements
- Master's or Bachelor's degree in Computer Science or a related area, or equivalent relevant work experience
- Strong knowledge of CI/CD (Continuous Integration/Continuous Delivery) pipelines, cloud environments, containerization (Docker/K8s), and APIs
- Proven experience and strong knowledge of DevOps, with a strong understanding of CI/CD, containerization, and cloud platforms
- Proficiency in .NET (C#) or Java
- CI/CD Tools: GitLab, Jenkins, Azure DevOps, or equivalent
- Containerization & Orchestration: Kubernetes, Docker, Helm
- Databases: Microsoft SQL Server, Oracle, etc
- Observability & Monitoring: Prometheus, Grafana, Dynatrace, ELK Stack, Splunk, etc
- Cloud Platforms: Experience with Azure, AWS, or Google Cloud Platform
- Deployment Tools: Octopus Deploy, Terraform, or similar
- Version Control: Strong knowledge of Git for version control and collaboration
- You take pride in your work and strive to lead by example
- Comfortable working in agile and cross-functional teams, with excellent communication and collaboration skills
- You have a can-do attitude, are pragmatic, and open-minded
- Very good English level
Nice to have
- Strong experience with PowerShell, Bash, or Python for automation
- Networking: Understanding of ingress controllers, load balancers, and basic networking principles
We offer/Benefits
- Private health insurance
- EPAM Employees Stock Purchase Plan
- 100% paid sick leave
- Referral Program
- Professional certification
- Language courses
EPAM is a leading digital transformation services and product engineering company with 61,700+ EPAMers in 55+ countries and regions. Since 1993, our multidisciplinary teams have been helping make the future real for our clients and communities around the world. In 2018, we opened an office in Spain that quickly grew to over 1,450 EPAMers distributed between the offices in Málaga, Madrid and Cáceres as well as remotely across the country. Here you will collaborate with multinational teams, contribute to numerous innovative projects, and have an opportunity to learn and grow continuously.
- Why Join EPAM
- WORK AND LIFE BALANCE. Enjoy more of your personal time with flexible work options, 24 working days of annual leave and paid time off for numerous public holidays.
- CONTINUOUS LEARNING CULTURE. Craft your personal Career Development Plan to align with your learning objectives. Take advantage of internal training, mentorship, sponsored certifications and LinkedIn courses.
- CLEAR AND DIFFERENT CAREER PATHS. Grow in engineering or managerial direction to become a People Manager, in-depth technical specialist, Solution Architect, or Project/Delivery Manager.
- STRONG PROFESSIONAL COMMUNITY. Join a global EPAM community of highly skilled experts and connect with them to solve challenges, exchange ideas, share expertise and make friends.