Cloud Engineer / Amazon Web Services
Incoming Domain · Barcelona, ES
Remote work React API .Net C# Java Python Agile CSS TSQL HTML Azure NoSQL Scrum Maven Jenkins Linux Angular Docker Cloud Computing Kubernetes Ansible Git Android REST Jira Groovy OpenShift AWS Spring iOS PowerShell Sass Bash DevOps jUnit QA MVC Gradle Eclipse Microservices Perl SQL Server
REMOTE WORK for developers (AF, AT, AP, PS) on any of the following openings.
ACTIVE OPENINGS (*): * DATA SCIENTIST / * CLOUD ENGINEER - AMAZON WEB SERVICES
* DATA SCIENTIST:
- Experts in any of the AI verticals are required.
- Extract and analyze data from the company's databases to drive optimization and improve Segunda Ola Digital product development.
- Evaluate the effectiveness and accuracy of new data sources and apply data-gathering techniques.
- Using the available data and either algorithms or RAIP components, develop custom models to address the existing AI initiatives.
- Use predictive modeling to enhance and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
- Design A/B testing frameworks and always define a metric to evaluate the models (see the sketch after this list).
- Develop processes and tools to analyze model performance and data accuracy.
- Strong problem-solving skills with an emphasis on product development.
- Experience using Python.
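Since the role asks for A/B testing frameworks with a pre-agreed evaluation metric, here is a minimal, hedged sketch of that step in Python: a two-proportion z-test on conversion counts. The metric (conversion rate) and the numbers are illustrative assumptions, not details from the posting.

```python
# Minimal A/B evaluation sketch: two-proportion z-test on conversion counts.
# Hypothetical numbers; the real metric and data source come from the project.
from math import sqrt, erf

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for conversion rates A vs B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

z, p = z_test_two_proportions(conv_a=480, n_a=10_000, conv_b=530, n_b=10_000)
print(f"z={z:.2f}, p={p:.3f}")  # compare p against the pre-agreed significance level
```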
* CLOUD ENGINEER / AMAZON WEB SERVICES:
- Development and support of a Gen AI architecture for a banking client.
- Previous experience in IT support, DevOps, or systems administration in cloud environments.
- Knowledge of AWS (IAM, EC2, S3, Lambda, CloudWatch, etc.).
- Familiarity with Azure and GCP (nice to have).
- Experience with SQL and NoSQL databases.
- Experience with cloud monitoring and logging tools.
- Ability to run and debug scripts in Python, Bash, or PowerShell (see the sketch after this list).
- Familiarity with REST APIs and SaaS integration tools.
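As a hedged illustration of the Python-plus-AWS scripting this opening mentions, the snippet below takes a quick inventory of S3 buckets and Lambda functions with boto3. It assumes AWS credentials are already configured in the environment and is not specific to any client project.

```python
# Minimal AWS inventory sketch with boto3 (credentials assumed to be configured).
import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

print("S3 buckets:")
for bucket in s3.list_buckets()["Buckets"]:
    print(" -", bucket["Name"])

print("Lambda functions:")
for fn in lam.list_functions()["Functions"]:
    print(" -", fn["FunctionName"], "| runtime:", fn["Runtime"])
```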
* MIGRATION: COBOL, CICS, DB2, JCL / Micro Focus Enterprise Developer / Server; knowledge of Eclipse, Linux (bash scripting), Python
* CORE BANKING: general banking knowledge
  1. Requirements, functional, and user acceptance testing analysts
  2. Regulatory core banking analysts (financial markets and payment services) (Business Analysts with broad CIRBE knowledge)
* HOST: maintenance analyst programmers: COBOL, CICS, DB2, JCL, AS400 (1 AP with C1 German; 1 AP with B2 English; 1 senior programmer)
* DWH / ETL: Informatica, PL/SQL
* Front end: HTML5, HTML, CSS3, Appverse, QWT, JavaScript, React
* Back end (required): APIs, Spring MVC, Maven, Git, Jira, Docker, Kubernetes, OpenShift, JUnit, Nexus, SonarQube, Spring Data, Swagger, Agile
* Java: Java 8+, J2EE, J2SE, scripting (Shell, Python, Perl), microservices, Spring Boot, (SVN, Git), (REST APIs, RAML, Spring Boot), (Jira, Remedy, GitLab)
* Microsoft: .NET, C#, Visual Basic, Angular, SQL Server
* Mobility: iOS, Android
* Other: C/C++, Visual Basic
* Testing: QA, Scrum, automation, manual...
* DBA: Linux, Docker, Kubernetes, Jenkins, Ansible, (Maven, Gradle), (JUnit, Karma, Jasmine), (Python, Java, Groovy), network administration
* Database: SQL Server, ETLs, DTSX, Visual Basic, .NET, SAS
* TIBCO: BPM, Linux
Middle/Senior Machine Learning Engineer (GenAI)
Provectus · Madrid, ES
Remote work Python Docker Cloud Computing AWS Machine Learning
Join us at Provectus to be a part of a team that is dedicated to building cutting-edge technology solutions that have a positive impact on society. Our company specializes in AI and ML technologies, cloud services, and data engineering, and we take pride in our ability to innovate and push the boundaries of what's possible.
As an ML Engineer, you’ll be provided with all opportunities for development and growth.
Let's work together to build a better future for everyone!
Requirements:
- Comfortable with standard ML algorithms and underlying math
- Strong hands-on experience with LLMs in production, RAG architecture, and agentic systems (see the sketch after this list)
- AWS Bedrock experience strongly preferred
- Practical experience with solving classification and regression tasks in general, feature engineering
- Practical experience with ML models in production
- Practical experience with one or more use cases from the following: NLP, LLMs, and Recommendation engines
- Solid software engineering skills (i.e., ability to produce well-structured modules, not only notebook scripts)
- Python expertise, Docker
- English level - strong Intermediate
- Excellent communication and problem-solving skills
- Practical experience with cloud platforms (AWS stack is preferred, e.g. Amazon SageMaker, ECR, EMR, S3, AWS Lambda)
- Practical experience with deep learning models
- Experience with taxonomies or ontologies
- Practical experience with machine learning pipelines to orchestrate complicated workflows
- Practical experience with Spark/Dask, Great Expectations
Responsibilities:
- Create ML models from scratch or improve existing models.
- Collaborate with the engineering team, data scientists, and product managers on production models
- Develop experimentation roadmap.
- Set up a reproducible experimentation environment and maintain experimentation pipelines
- Monitor and maintain ML models in production to ensure optimal performance
- Write clear and comprehensive documentation for ML models, processes, and pipelines
- Stay updated with the latest developments in ML and AI and propose innovative solutions
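A hedged sketch of the retrieve-then-generate (RAG) pattern named in the requirements above, using the Bedrock Converse API through boto3. The in-memory document list, the keyword-overlap retrieval, and the model ID are illustrative assumptions; a production system would use a proper vector store and the team's chosen model.

```python
# Naive RAG sketch: retrieve one document, then ask a Bedrock model with that context.
import boto3

docs = [
    "Refunds are processed within 5 business days.",
    "Premium plans include 24/7 support.",
]

def retrieve(question: str) -> str:
    """Pick the document sharing the most words with the question (toy retrieval)."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def answer(question: str) -> str:
    context = retrieve(question)
    bedrock = boto3.client("bedrock-runtime")
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        messages=[{
            "role": "user",
            "content": [{"text": f"Context: {context}\n\nQuestion: {question}"}],
        }],
    )
    return response["output"]["message"]["content"][0]["text"]

print(answer("How long do refunds take?"))
```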
Data Engineer
3 Jun · Factorial · Barcelona, ES
MySQL Python TSQL SaaS R PostgreSQL Office
Hello!
The Data Entry Team Data Engineer plays a critical role in managing and optimizing the flow of data from various sources into the SaaS platform. This position is responsible for data extraction, transformation, and loading (ETL), working closely with other data and configuration teams to ensure seamless integration of operational and historical data.
Key Responsibilities:
- Design and implement data pipelines to efficiently extract, transform, and load (ETL) data into the SaaS platform (a minimal sketch follows this list).
- Collaborate with the Data Analysts and Business Analysts to understand data requirements and ensure the proper structuring of data.
- Manage the integration of multiple data sources, ensuring consistency and accuracy during the data loading process.
- Develop and maintain scripts or tools for automating data entry processes, improving speed and accuracy.
- Handle data migrations from legacy systems, ensuring the integrity and compatibility of historical data.
- Troubleshoot and resolve data discrepancies, errors, and issues that arise during data loading.
- Monitor data performance and troubleshoot slow-running queries or processes.
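To make the ETL responsibility above concrete, here is a minimal, hedged sketch in Python using pandas and SQLAlchemy: extract a CSV export, normalize a couple of fields, and load the rows into a PostgreSQL staging table. The file name, table name, and columns (employee_id, hire_date) are hypothetical; a real pipeline would add validation, logging, and idempotency.

```python
# Minimal ETL sketch: CSV -> light transforms -> PostgreSQL staging table.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/hr")

def run_pipeline(path: str) -> None:
    df = pd.read_csv(path)                                # extract
    df.columns = [c.strip().lower() for c in df.columns]  # transform: normalize headers
    df["hire_date"] = pd.to_datetime(df["hire_date"])     # transform: parse dates
    df = df.drop_duplicates(subset=["employee_id"])       # avoid double-loading rows
    df.to_sql("employees_staging", engine, if_exists="append", index=False)  # load

run_pipeline("legacy_export.csv")
```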
Key Skills:
- Proficient in SQL, ETL tools, and programming languages (e.g., Python, R) for data manipulation.
- Experience with database management systems like MySQL or PostgreSQL.
- Strong understanding of data integration and transformation methodologies.
- Familiarity with SaaS platforms and their data architectures.
- Ability to handle complex data migration tasks effectively.
- Exceptional verbal and written communication skills in English.
Qualifications:
- Bachelor’s degree in Computer Science or a related discipline.
- 3+ years of experience in data engineering, data integration, or database management.
- Experience with ETL tools and data migration processes.
- Strong problem-solving skills and attention to detail.
- Experience with SaaS data structures is highly desirable.
A more detailed look at your responsibilities:
Data Pipeline Development:
- Design, build, and manage data pipelines that support the extraction, transformation, and loading (ETL) of data from various sources into the SaaS platform.
- Ensure that data flows efficiently and securely between the client’s systems and the SaaS platform.
Data Integration:
- Handle the integration of different data sources, ensuring data consistency and accuracy during data transfer and loading.
- Develop scripts or automated processes to streamline data entry and reduce manual effort.
Data Transformation and Loading:
- Transform raw data into formats suitable for the SaaS platform, mapping data fields and ensuring compatibility with system configurations.
- Oversee the loading of data, ensuring that all data is accurately imported into the platform without loss or corruption.
Troubleshooting and Debugging:
- Identify, troubleshoot, and resolve any technical issues related to data migration or integration.
- Address any errors or bottlenecks in the data flow that could affect system performance or data accuracy.
Automation and Process Optimization:
- Develop automated solutions for recurring data entry tasks to reduce manual errors and improve efficiency.
- Continuously optimize data handling processes to enhance speed and accuracy.
Collaboration with Stakeholders:
- Work closely with the Data Analysts and Business Analysts to understand data requirements and business rules.
- Coordinate with the SaaS Implementation Team to ensure that data configurations align with system performance needs.
About us
Factorial is a fast-growing, all-in-one HR software company founded in 2016. Our mission is to help SMEs automate HR workflows, centralize people data, and make better business decisions. Currently, we serve thousands of customers in over 60 countries worldwide and across industries, and we have built a diverse and multicultural team of over 900 people in our Barcelona, Brazil, Mexico, and US offices.
Our Values
- We own it: We take responsibility for every project. We make decisions, not excuses.
- We learn and teach: We're dedicated to learning something new every day and, above all, sharing it.
- We partner: Every decision is a team decision. We trust each other.
- We grow fast: We act fast. We believe the worst mistake is not learning from your mistakes.
Benefits
We care about people and offer many benefits for employees:
- High growth, multicultural, and friendly environment
- Continuous training and learning based on your needs
- Alan private health insurance
- Healthy life with Wellhub (gyms, pools, outdoor classes)
- Save expenses with Cobee
- Language classes with Preply
- Get the most out of your salary with Payflow
And when at the office...
- Breakfast in the office and organic fruit
- Nora and Apeteat discounts
- Pet Friendly
Wanna learn more about us? Check our website!
Senior Data Engineer
2 Jun · Capgemini · Sevilla, ES
Python TSQL AWS
Choosing Capgemini means choosing the chance to shape your career the way you want. You will be supported and inspired by a collaborative community of colleagues from around the world and will be able to reimagine what is possible. Join our team and help the world's leading organizations unlock the value of technology and build a more sustainable and inclusive world.
Would you like to join us and take part in multi-sector projects on a team of data professionals such as Data Scientists, Data Engineers, and Data Analysts? Our goal is to help our clients on the road to continuous innovation.
What will you do on the project? What will your role be?
In your first project, as a Data Engineer, you will join the global data platform team of one of our main clients.
You will lead and make decisions to turn incremental data loads (which will arrive unordered) into clean, reliable data, performing manual interventions to remove duplicate records (see the sketch below).
You will work with AWS Redshift, SQL, and Data Warehouses/Data Marts, as well as Python and DynamoDB.
You will work in an international environment with a high level of interaction in English.
To do well in this position you will need between 6 and 9 years of experience, as well as knowledge of:
SQL and Data Warehousing.
Python and DynamoDB.
AWS Redshift.
Very fluent English (B2/C1).
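A hedged sketch of the deduplication step described above: keep only the latest record per business key when promoting rows from an incremental staging table, executed from Python against Redshift over its PostgreSQL-compatible protocol. Table names, columns, and connection details are hypothetical.

```python
# Promote the most recent row per order_id from a staging table into a clean table.
import psycopg2

DEDUP_SQL = """
INSERT INTO sales_clean
SELECT order_id, customer_id, amount, load_ts
FROM (
    SELECT s.*,
           ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY load_ts DESC) AS rn
    FROM sales_staging s
) ranked
WHERE rn = 1;
"""

with psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",
                      port=5439, dbname="analytics",
                      user="etl_user", password="***") as conn:
    with conn.cursor() as cur:
        cur.execute(DEDUP_SQL)
    conn.commit()
```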
Holding a disability certificate will be viewed positively, as part of our inclusion and diversity policy.
We will consider every application. We offer an extensive training catalogue, in person and online, certifications, and more. Even if you don't have 100% of the skills listed, we'd love to meet you!
Our commitment to inclusion and equal opportunity means we have an Equality Plan and a Code of Ethics that guarantee the professional development of our workforce and equal opportunities in recruitment, within an environment free of discrimination on the grounds of ethnicity, nationality, social origin, age, sexual orientation, gender expression, religion, or any other personal, physical, or social circumstance.
What will you like about working here?
We have a very complete catalogue of development and work-life balance measures, for example:
- A unique working environment, highly rated by our professionals in periodic surveys.
- Wellbeing HUB, including policies and actions for physical (Wellhub) and mental health.
- 24 days of holiday + 2 personal days + 24 and 31 December + the option to buy up to 7 extra days of holiday per year.
- FlexAbroad: the option to work remotely from another country for 45 days.
- Flexible Compensation Plan (health insurance, transport, training, restaurant card or meal subsidy, nursery...).
- Continuous training: you can enjoy MyLearning, Capgemini University, and our Digital Campuses and professional communities. You will have access to platforms such as Coursera, Udemy, Pluralsight, Harvard ManageMentor, and Education First for languages (English, French, German...), among others!
- Participation in volunteering and social action initiatives with our Sustainability, Inclusion, and Equality groups.
- Support during your first steps with the Buddies programme.
- Life and accident insurance.
Capgemini is a global leader in transforming clients' businesses by harnessing the full power of technology. We are guided by the purpose of achieving an inclusive and sustainable future through technology and the energy of the people who develop it. We are a responsible and diverse company and an international leader in IT and engineering services, with more than 360,000 professionals in over 50 countries. With a solid 55-year heritage and deep industry expertise, clients trust Capgemini to address the full range of their business needs, from strategy and design to operations, powered by the fast-moving, innovative world of cloud, data, AI, connectivity, software, and digital platforms and engineering. The Group reported global revenues of €22 billion in 2022.
Rewrite your future. Join the team!
www.capgemini.com/es-es
DevOps Engineer
2 Jun · Pluxee · Madrid, ES
API Python Azure Cloud Computing Ansible Git PowerShell Bash DevOps Terraform
Pluxee is a global player in employee benefits and engagement that operates in 31 countries. Pluxee helps companies attract, engage, and retain talent thanks to a broad range of solutions across Meal & Food, Wellbeing, Lifestyle, Reward & Recognition, and Public Benefits.
Powered by leading technology and more than 5,000 engaged team members, Pluxee acts as a trusted partner within a highly interconnected B2B2C ecosystem made up of more than 500,000 clients, 36 million consumers and 1.7 million merchants.
Conducting its business as a trusted partner for more than 45 years, Pluxee is committed to creating a positive impact on all its stakeholders, from driving business to local communities, to supporting wellbeing at work for employees while protecting the planet.
🚀 Your next challenge
We’re seeking a hands-on, automation-first DevOps Engineer to join our global engineering team. You’ll lead the design and implementation of secure and scalable infrastructure in Azure, focusing on robust CI/CD pipelines, Infrastructure as Code, and GitOps. You'll be responsible for optimizing deployments, managing AKS clusters, and automating with tools like Terraform, Ansible, and Python, all within the framework of Microsoft’s Cloud Adoption Framework (CAF).
🛠️ Key Responsibilities:
- Build and maintain efficient CI/CD pipelines with Azure DevOps (YAML-based), enabling secure, repeatable, and fast delivery.
- Define, provision, and manage infrastructure using Terraform and Ansible, ensuring modular, reusable, and compliant configurations.
- Operate and optimize AKS clusters, including autoscaling, ingress controllers, namespaces, and RBAC.
- Apply GitOps principles using Flux v2 (or ArgoCD), managing cluster state declaratively from Git.
- Automate operational tasks and platform workflows using Python, Bash, or PowerShell (a minimal sketch follows this list).
- Integrate Azure-native services (App Services, Functions, API Management, Gateways) in deployment and infra flows.
- Collaborate with development and security teams to align infrastructure and deployment workflows with the Microsoft Cloud Adoption Framework (CAF).
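As a hedged example of the operational automation listed above, this Python snippet flags pods with elevated container restart counts through the official Kubernetes client. It assumes a kubeconfig already pointing at the AKS cluster (for example after running az aks get-credentials); the restart threshold is arbitrary.

```python
# Flag pods whose containers keep restarting - a routine AKS health check in Python.
from kubernetes import client, config

config.load_kube_config()          # use the current kubeconfig context
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
    if restarts > 5:               # arbitrary threshold for the example
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {restarts} restarts")
```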
🧰 Required Skills & Experience:
- Proven experience designing and managing CI/CD pipelines in Azure DevOps (multi-stage, templates, approvals).
- Expert-level knowledge in Terraform (remote state, workspaces, modules) and IaC best practices.
- Experience with Ansible for configuration management and provisioning.
- Proficiency in scripting with Python, Bash, or PowerShell.
- Strong background in container orchestration with AKS/Kubernetes, including security and autoscaling practices.
- Familiarity with GitOps tooling such as Flux or ArgoCD.
- Experience with observability platforms like Azure Monitor, Prometheus, or Grafana.
- Understanding of cloud security: secrets management, policy enforcement, image scanning, etc.
- Working knowledge of Azure services: App Services, Functions, API Management, Gateways.
- Professional working proficiency in English (Spanish is a plus).
- AZ-400, CKA/CKAD, or Terraform Associate certification is a plus.
- Experience with Azure Policy, Landing Zones, and Blueprints is a plus.
- Knowledge of tools such as Helm, Vault, ArgoCD, and Kustomize is a plus.
- Knowledge of FinOps practices and cloud cost optimization is a plus.
☀️ Happy at work
1) A meaningful job: Be the change! Help us build the future of employee benefits by bringing to life sustainable and personalized experiences and contribute to make a real impact on millions of lives. Our business model delivers not just for individuals but their communities too, by supporting local businesses and economies.
2) A great culture: People matter – a lot! Be part of a multicultural team that moves as one in a fast-paced, innovative environment. We respect and genuinely care about our people; we embrace wellbeing, work-life balance, and new ideas, and we have a lot of fun!
3) An empowering environment: Be yourself! At Pluxee we proudly embrace diversity and value the uniqueness of our talents, fostering an inclusive workplace where all abilities are celebrated and equal learning and growth opportunities are a given.
DevOps Engineer AWS
2 Jun · Plexus · Madrid, ES
Python Docker Cloud Computing Microservices AWS PowerShell Bash DevOps
Join Plexus Tech. We're looking for an AWS DevOps Engineer to join a major, established project in the banking sector.
Requirements:
- Basic AWS Services: Familiarity with key AWS services such as EC2 (Elastic Compute Cloud), S3 (Simple Storage Service), RDS (Relational Database Service), VPC (Virtual Private Cloud), IAM (Identity and Access Management), and Lambda.
- AWS Architecture: Infrastructure as Code (IaC).
- Familiarity with AWS Systems Manager for automating tasks, managing configurations, and administering instances.
- Automation Scripts: Ability to write scripts in Bash, Python, or PowerShell to automate administrative and configuration tasks.
- Networking Fundamentals: Understanding basic networking concepts such as subnets, IP addresses, routing, and gateways, as well as configuring VPCs and subnets in AWS.
- AWS Security: Knowledge of identity and access management (IAM) and best security practices to protect infrastructure and data. This includes using policies, roles, and groups.
- Deployment and Operation of Microservices using Amazon ECS:
- Familiarity with Amazon ECS: Knowledge of creating and managing container clusters using ECS, as well as defining tasks and services to deploy microservices-based applications.
- Integration with Docker:
- AWS CloudWatch: Ability to configure and use Amazon CloudWatch to monitor resources, create alarms, and generate logs.
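As a hedged illustration of the CloudWatch point above, the snippet below creates a CPU alarm for an ECS service with boto3. The cluster name, service name, and threshold are illustrative assumptions, not details from the project.

```python
# Create a CloudWatch alarm on ECS service CPU utilization.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="orders-service-high-cpu",          # hypothetical service
    Namespace="AWS/ECS",
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "prod-cluster"},
        {"Name": "ServiceName", "Value": "orders-service"},
    ],
    Statistic="Average",
    Period=300,                                   # 5-minute windows
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmDescription="ECS service CPU above 80% for 10 minutes",
)
```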
With our hybrid model, Flexology allows you to work from wherever your talent flows best: from any of our 24 work centers in Spain, from your home, or a combination of both. The Plexus Tech work ecosystem allows for a collaborative environment within the company.
- Work with leading professionals
- Access to continuing education
- Professional advancement
- Flexible compensation with health insurance, meal vouchers, childcare, and transportation
Cloud Engineer
2 Jun · CAS TRAINING · Madrid, ES
Linux Cloud Computing AWS
For a hybrid project in Madrid, we are looking for a professional with at least 3 years of experience as a Cloud Engineer administering Linux (RHEL) and Windows systems in AWS environments.
Project location: Ciudad Lineal area (Madrid)
We are CAS Training, a company specializing in IT training, consulting, and outsourcing.
Senior Site Reliability Engineer
Swiss Re · Madrid, ES
Java Python Azure Cloud Computing Kubernetes PowerShell
About the role
You would play a key role in ensuring the reliability, stability, scalability, and security of our Logging & Monitoring cloud systems and infrastructure. You will design, implement, and test highly automated solutions to ensure the technology platform fulfils our business and product vision, ultimately bringing value to our customers through positive user experiences.
Key Responsibilities:
- End-to-end responsibility, from development to production, in designing, deploying, operating, and continuously improving performance and fault-tolerance of large-scale multi-cloud solutions.
- Ensure system security, data integrity, and high availability of the platform.
- Establish and improve monitoring, logging, and alerting frameworks to detect and resolve issues promptly (a minimal sketch follows this list).
- Keep up with technology trends and identify promising new solutions that meet our requirements.
- Create technical support documentation and provide hands-on troubleshooting and consulting to our customers.
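To ground the monitoring and alerting responsibility above, here is a minimal, hedged sketch using the Python prometheus_client library to expose a custom metric that Prometheus can scrape and alert on. The metric name, port, and the random stand-in value are illustrative assumptions.

```python
# Expose a custom gauge for Prometheus to scrape at :8000/metrics.
import random
import time

from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("log_pipeline_queue_depth",
                    "Pending events in the ingestion queue")

if __name__ == "__main__":
    start_http_server(8000)                      # metrics endpoint
    while True:
        queue_depth.set(random.randint(0, 100))  # stand-in for a real measurement
        time.sleep(15)
```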
About the team
Our Logging & Monitoring squad develops and operates state-of-the-art logging, monitoring and event management platforms to collect application behaviour information, detect / limit service disruption and provide the associated reporting capabilities. Our ambition is to help empower the developers, application and platform owners identify any growing risks, have a clear understanding of their SLAs, reduce the mean time to resolution and be ahead of the curve with regards to long term trends.
About you
We are happy to meet you if you possess:
- Hands on expertise in container orchestration system such as Kubernetes running in a hybrid cloud environment such as Azure.
- Experience in continuous integration/deployment, and system engineering experience in large-scale, distributed cloud solutions.
- Experience programming in one or more languages such as Go, Java, or Python, and in scripting languages (Shell or PowerShell).
- Hands on expertise in open-source application and infrastructure monitoring tools, e.g., ELK and/or TICK stack, Prometheus and Grafana.
- Passion for sharing knowledge, through interactive sessions as well as documentation.
- Strong analytical and problem-solving skills, as well as the ability to focus on details without losing track of the bigger picture.
- Excellent oral and written English skills, additional language skills are a plus.
Nobody is perfect and meets 100% of our requirements. If you meet some of the criteria above, however, and are curious about the world of observability, we'll be more than happy to meet you!
We provide feedback to all candidates via email. If you have not heard back from us, please check your spam folder.
For Spain, the base salary range for this position is between EUR 60,000 and EUR 100,000 (for a full-time role). The specific salary offered considers:
- the requirements, scope, complexity and responsibilities of the role,
- the applicant's own profile including education/qualifications, expertise, specialization, skills and experience.
In addition to your base salary, Swiss Re offers an attractive performance-based variable compensation component, designed to recognize your achievements. Further you will enjoy a variety of global and location specific benefits.
Eligibility may vary depending on the terms of Swiss Re policies and your employment contract.
About Swiss Re
Swiss Re is one of the world's leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world.
Our success depends on our ability to build an inclusive culture encouraging fresh perspectives and innovative thinking. We embrace a workplace where everyone has equal opportunities to thrive and develop professionally regardless of their age, gender, race, ethnicity, gender identity and/or expression, sexual orientation, physical or mental ability, skillset, thought or other characteristics. In our inclusive and flexible environment everyone can bring their authentic selves to work and their passion for sustainability.
If you are an experienced professional returning to the workforce after a career break, we encourage you to apply for open positions that match your skills and experience.
Backend Data Engineer
2 Jun · Social You · Madrid, ES
Java Node.js Python TSQL Cloud Computing Git AWS PostgreSQL Spring
We are looking for a Backend Developer with experience in distributed environments to join a strategic project in the financial sector, focused on private banking and investment solutions.
We are looking for someone with at least 3 years of experience, passionate about software engineering and driven by technical excellence, who wants to grow within a constantly evolving multidisciplinary team.
Requirements:
Advanced experience in backend development with Java (Spring Boot), Python, or Node.js.
Solid knowledge of designing and consuming RESTful APIs.
Experience in data transformation and pipeline development with relational databases such as PostgreSQL, including data modeling and SQL query optimization (see the sketch after this list).
Familiarity with AWS environments and cloud design best practices.
Experience with version control tools (Git) and CI/CD workflows.
AWS-related certifications (nice to have).
Knowledge of the financial sector, especially investments and private banking data models (nice to have).
Good level of English (nice to have).
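As a hedged sketch of the backend profile above (REST API plus PostgreSQL), the snippet below exposes a small FastAPI endpoint that reads investment positions from a relational table. The endpoint path, table, and columns are hypothetical; a real service would add authentication, connection pooling, and error handling.

```python
# Small read-only REST endpoint backed by PostgreSQL (run with: uvicorn app:app).
import psycopg2
from fastapi import FastAPI

app = FastAPI()

@app.get("/clients/{client_id}/positions")
def get_positions(client_id: int):
    with psycopg2.connect("dbname=banking user=api password=***") as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT isin, quantity, market_value FROM positions WHERE client_id = %s",
                (client_id,),
            )
            rows = cur.fetchall()
    return [{"isin": r[0], "quantity": r[1], "market_value": float(r[2])} for r in rows]
```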
We offer:
Competitive compensation + social benefits
Hybrid work model in Madrid