VeraContent S.L.
Freelance Native-Level English Sci-Tech Technical Writer/Journalist
VeraContent S.L.
Remote Cloud Computing Fintech Machine Learning Word
Location: Remote (EU-based preferred)
Hours: Approx. 10–15 hours/week, starting immediately
VeraContent is a multilingual content agency that helps global brands and institutions craft high-quality content across languages and markets. Our team of language professionals, editors and strategists works together to create meaningful stories—especially when those stories involve complex topics and specialized audiences.
We’re currently partnering with a client in the higher education sector to strengthen its academic presence. Our goal is to help our client establish a leading position in research and innovation within the fields of Science & Technology. As part of this initiative, we’re looking for experienced technical writers with strong journalistic and editorial backgrounds to join our freelance team.
This freelance role is ideal for specialized writers and journalists who are passionate about science and technology, and adept at translating complex research into compelling, high-level content.
About the Project
This project is designed to highlight our client and its researchers, faculty and partners as thought leaders in academic research, transformative technologies and scientific breakthroughs. The project will generate content tied to the following pillars:
- Health & MedTech: Precision Medicine, Biotech, Digital Health, Drug Discovery and AI, Clinical Trials, Drug Approval
- Climate & Energy: AI-driven sustainability, Smart Grids, Impact of climate change, Clean Energy
- Deep Tech / Robotics & AI: Supercomputing, Machine Learning, Cloud
- Financial Systems: AI in investments, Fintech, Blockchain, XRP, Quantum Finance, Virtual Asset Regulation, Insurance
Content format: a series of five 2,000-3,000-word papers on a specific topic from the research project mentioned above, published on a dedicated hub on the client's website.
Main Responsibilities
- Research and write five in-depth articles by end of August on one or several of the above-mentioned topics
- Tailor writing to fit institutional branding, editorial tone and academic publishing standards
- Collaborate closely with internal project managers, editors and university stakeholders (researchers and university management)
- Ensure accuracy, clarity and relevance for both academic and general audiences
Requirements
- Native English
- Proven experience writing about the above-mentioned topics
- Experience in journalism and editorial workflows
- Ability to brainstorm on possible angles and to synthesize complex material into engaging, accessible and high-quality content
- Strong interviewing and information-gathering skills
- Experience with institutional or research-based writing preferred (e.g. universities, think tanks, scientific journals)
- Self-motivated, detail-oriented and reliable under deadlines
- Familiarity with editorial workflows and cloud-based tools
We offer:
- Flexible, remote freelance setup
- Opportunity to work on a high-impact academic project
- Collaboration with a multicultural, multilingual team of content professionals
To apply:
1. Apply through our website (https://veracontent.com/job-application/)
2. When submitting your application, please choose “Freelance Native-Level English Sci-Tech Technical Writer/Journalist” as the desired position
3. Indicate your availability (how soon you can start and weekly capacity)
4. Share a short cover letter about why you’re a strong fit for this role
5. Include your CV (and LinkedIn profile if applicable)
6. Provide relevant writing samples—especially those that demonstrate academic or technical writing in the mentioned topics
Questions? Reach out to: [email protected]
Please note: Only shortlisted candidates will be contacted.
BackEnd Developer
AstraZeneca · Barcelona, ES (Jul 3)
API Java MongoDB Python Agile NoSQL Scrum Docker Cloud Computing Kubernetes Microservices Git Jira AWS Spring Machine Learning Office
Connected Insights is an internal, high-profile, bespoke data & AI service built in collaboration with our in-house competitive intelligence professionals. It is transforming how competitive intelligence is created, disseminated and consumed across AstraZeneca into a rich, digital, interconnected, on-demand experience.
Through the use of data pipelines, knowledge graphs, interactive visualisations and machine learning, it seeks to integrate key external and internal data sources, generate complex interactive information, and empower our teams of competitive intelligence professionals to provide a clear advantage for AstraZeneca. We value creating an excellent customer experience, reliable data, and effective engineering practices.
The Role
We are looking for a skilled back-end developer to join our team in building innovative applications that will revolutionise healthcare.
The ideal candidate will be an experienced Backend Software Engineer / Java Developer with a strong knowledge of best practice and the full software development lifecycle.
Responsibilities will include collaborating with other back-end and front-end engineers as part of a product delivery pod to build robust APIs and microservices, optimizing application performance, ensuring data integrity, and contributing to the overall system architecture. The successful candidate will have a passion for building scalable and efficient back-end systems, a strong attention to detail, and a collaborative mindset to work effectively within a diverse team.
Accountabilities
- Develop, maintain, and optimize APIs and microservices using Java/SpringBoot to support critical business functions.
- Participate in the full software development lifecycle, providing support and expertise across all stages from design and development to testing, deployment, and operational support.
- Collaborate with Ops, Dev, and Test Engineers to proactively identify and resolve operational issues related to performance, monitoring, alerting, design defects, and other factors at all stages of the product or service lifecycle.
- Partner with other engineers, scrum masters, business and product analysts, and stakeholders to understand requirements, contribute to architectural decisions, and drive the evolution of our software products.
Essential Skills/Experience
- Proven experience in engineering and delivering software products.
- Deep knowledge and extensive development experience with Java, Spring Boot, APIs, and microservices architectures.
- Strong understanding of cloud environments, particularly AWS, with experience in technologies such as Lambda, EKS, and ECS.
- Solid experience in designing complex data models for relational and/or NoSQL databases.
- Hands-on experience with NoSQL databases such as MongoDB or DocumentDB.
- Experience with build and infrastructure tools, including Docker and Kubernetes, for containerization and orchestration.
- Strong understanding of OAUTH, JWT, and other security implementations for securing APIs and microservices.
- Proficiency with Git for version control and collaborative development, including experience with CI/CD pipelines.
- Solid understanding of RESTful API principles and experience designing and implementing RESTful APIs.
- Familiarity with code quality tools like SonarQube for continuous code inspection and quality assurance.
- Demonstrated ability to collaborate effectively across multiple engineering teams in multiple geographies.
- Proven ability to influence stakeholders convincingly with well-considered logic and technical expertise.
- Strong advocate for code quality and a champion for writing testable and well-documented code.
- Working knowledge of agile project management methodologies and experience using Jira and Confluence for tracking and collaboration.
Desirable Skills/Experience
- While the core focus is on Java/SpringBoot back-end development, experience with Python is highly desirable.
When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.
AstraZeneca offers an environment where you can connect across the business to influence patient outcomes and improve lives. We drive disruptive transformation by unleashing the power of data, machine learning, and technology to turn complex information into practical insights. Collaborate with leading experts in specialist communities, access cutting-edge techniques, and be part of novel solutions. Our diversity is our strength, enabling us to decode business needs and apply technical know-how to add greater value. Here, innovation meets large-scale impact as we empower the business to run faster and better.
Ready to make a difference? Apply now to join our team!
Machine Learning/AI Engineer
AstraZeneca · Barcelona, ES (Jul 3)
Docker Cloud Computing Kubernetes SaaS AWS Machine Learning Office
Introduction to role:
Are you ready to be part of the future of healthcare? Can you think big, be bold, and harness the power of digital and AI to tackle longstanding life sciences challenges? Then Evinova, a new healthtech business within the AstraZeneca Group, might be for you! Transform billions of patients' lives through technology, data, and cutting-edge ways of working. You're disruptive, decisive, and transformative: someone who's excited to use technology to improve patients' health. We're building Evinova, a fully-owned subsidiary of AstraZeneca Group, to deliver market-leading digital health solutions that are science-based, evidence-led, and human experience-driven. Smart risks and quick decisions come together to accelerate innovation across the life sciences sector. Be part of a diverse team that pushes the boundaries of science by digitally empowering a deeper understanding of the patients we're helping. Launch game-changing digital solutions that improve the patient experience and deliver better health outcomes. Together, we have the opportunity to combine deep scientific expertise with digital and artificial intelligence to serve the wider healthcare community and create new standards across the sector.
Accountabilities:
The Machine Learning and Artificial Intelligence Operations team (ML/AI Ops) is newly formed to spearhead the design, creation, and operational excellence of our entire ML/AI data and computational AWS ecosystem to catalyze and accelerate science-led innovations.
This team is responsible for the design, implementation, deployment, health, and performance of all algorithms, models, ML/AI operations (MLOps, AIOps, and LLMOps), and the Data Science Platform. We manage ML/AI and broader cloud resources, automating operations through infrastructure-as-code and CI/CD pipelines, ensuring best-in-class operations, striving to push beyond mere compliance with industry standards such as Good Clinical Practices (GCP) and Good Machine Learning Practice (GMLP).
As a ML/AI Operations Engineer for clinical trial design, planning, and operational optimization on our team, you will lead the development and management of MLOps systems for our trial management and optimization SaaS product. You will collaborate closely with data scientists to transition projects from embryonic research into production-grade AI capabilities, utilizing advanced tools and frameworks to optimize model deployment, governance, and infrastructure performance. This position requires a deep understanding of cloud-native ML/AI Ops methodologies and technologies, AWS infrastructure, and the unique demands of regulated industries, making it a cornerstone of our success in delivering impactful solutions to the pharmaceutical industry.
Role & Team Key Responsibilities:
Operational Excellence
- Lead by example in creating high-performance, mission-focused and interdisciplinary teams/culture founded on trust, mutual respect, growth mindsets, and an obsession for building extraordinary products with extraordinary people.
- Drive the creation of proactive capability and process enhancements that ensure enduring value creation and analytic compounding interest.
- Design and implement resilient cloud ML/AI operational capabilities to maximize our system A-bilities (Learnability, Flexibility, Extendibility, Interoperability, Scalability).
- Drive precision and systemic cost efficiency, optimized system performance, and risk mitigation with a data-driven strategy, comprehensive analytics, and predictive capabilities at the tree-and-forest level of our ML/AI systems, workloads and processes.
ML/AI Cloud Operations and Engineering
- Develop and manage MLOps/AIOps/LLMOps systems for clinical trial design, planning and operational optimization.
- Partner closely with data scientists to shepherd projects from embryonic research stages into production-grade ML/AI capabilities.
- Leverage and teach modern tools, libraries, frameworks and best practices to design, validate, deploy and monitor data pipelines and models in production (examples include, but are not limited to AWS Sagemaker, MLflow, CML, Airflow, DVC, Weights and Biases, FastAPI, Litserve, Deepchecks, Evidently, Fiddler, Manifold).
- Establish systems and protocols for entire model development lifecycle across a diverse set of algorithms, conventional statistical models, ML and AI/GenAI models to ensure best-in-class Machine Learning Practice (MLP).
- Enhance system scalability, reliability, and performance through effective infrastructure and process management.
- Ensure that any prediction we make is backed by deep exploratory data analysis and evidence, interpretable, explainable, safe, and actionable.
Personal Attributes:
- Customer-obsessed and passionate about building products that solve real-world problems.
- Highly organized and detail-oriented, with the ability to manage multiple initiatives and deadlines.
- Collaborative and inclusive, fostering a positive team culture where creativity and innovation thrive.
Essential Skills/Experience:
- Deep understanding of the Data Science Lifecycle (DSLC) and the ability to shepherd data science projects from inception to production within the platform architecture.
- Expert in MLflow, SageMaker, Kubeflow or Argo, DVC, Weights and Biases, and other relevant platforms.
- Strong software engineering abilities in Python/JavaScript/TypeScript.
- Expert in AWS services and containerization technologies like Docker and Kubernetes.
- Experience with LLMOps frameworks such as LlamaIndex and LangChain.
- Ability to collaborate effectively with engineering, design, product, and science teams.
- Strong written and verbal communication skills for reporting and documentation.
- Minimum of 4 years in ML/AI operations engineering roles.
- Proven track record of deploying algorithms and machine learning models into production environments.
- Demonstrated ability to work closely with cross-functional teams, particularly data scientists.
When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.
AstraZeneca is where creativity meets critical thinking! We embrace technology to reimagine healthcare's future by predicting, preventing, and treating conditions more effectively. Our inclusive approach fosters collaboration internally and externally to share diverse perspectives. We empower our teams with trust and space to explore innovative solutions that redefine patient experiences across their journey. Join us as we drive change that benefits both business and patients.
Ready to make an impact? Apply now to join our journey towards transforming healthcare!
Computer Vision Engineer
Serem · Madrid, ES (Jul 2)
Remote Python C++ DevOps Machine Learning
At Serem we are looking for a COMPUTER VISION ENGINEER with at least 3 years of experience for a major project.
Responsibilities:
• Develop COMPUTER VISION and DEEP LEARNING applications for object detection, object segmentation and activity/action recognition.
• Apply scientific thinking and the ability to invent, implement and lead technological developments.
• Explore industrial and market machine learning capabilities, evaluating and leading feasibility studies and applications.
• Use existing hardware and imagery, along with new image data collection techniques, to produce innovative image analysis models and algorithms.
• Develop machine learning approaches that improve the speed and accuracy of image algorithm development for inspection and quality control methods.
• Lead the ideation, prototyping and development of artificial intelligence software.
• Demonstrate expertise in solving computer vision problems.
• Develop deep learning and traditional machine learning algorithms for the business.
• Design and develop scalable software architectures.
• Facilitate the design and implementation of the vision hardware needed for image data collection.
• Create and maintain the data pipeline architecture for machine learning algorithm development.
REQUIREMENTS:
• At least 2 years of industrial experience developing and deploying COMPUTER VISION and MACHINE LEARNING applications in large-scale production.
• Experience in software development, integration with deep learning algorithms and production deployment.
• Mastery of the scientific understanding and implementation of deep learning architectures in computer vision (image classification, object detection, segmentation, pose estimation), from conception to production deployment.
• Software development in Python, deep learning (TensorFlow and PyTorch), machine learning libraries (OpenCV, Scikit-learn, NumPy, Pandas) and data analysis/reporting.
• DevOps: code management, version control, code review, CI/CD, configuration management, monitoring and containerization.
• MLOps: experiment management (model versioning, parameter versioning, model performance metrics), integration, serving, deployment, testing and continuous monitoring.
• DataOps: data collection, annotation, quality, visualization, versioning and data engineering.
• Advanced English
DESIRABLE:
• Master's degree or PhD in Computer Science, Engineering, Mathematics or Statistics, specializing in computer vision and deep learning.
• Proficiency with deep learning frameworks (TensorFlow, Keras or PyTorch).
• Proficiency with computer vision and machine learning libraries (OpenCV, Scikit-learn, NumPy, Pandas).
• Experience developing multithreaded software applications.
• Proficiency in C++ programming.
• Experience with Python, C++, NVIDIA Jetson, ONNX, OpenVINO and TensorRT.
• Experience with 3D imaging algorithms.
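Object detection work of the kind described above is usually evaluated with intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal, generic sketch (the function name and the (x1, y1, x2, y2) box format are illustrative choices, not from the posting):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the overlap rectangle (may be empty).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is commonly counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.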
Graduate AI Engineer
HP · Barcelona, ES (Jul 1)
Python TSQL Machine Learning
At HP Industrial Print, we are building a unified and intelligent conversational insights platform powered by agentic AI workflows. This platform, developed on our PrintOS software ecosystem, will empower customers to interact naturally with end-to-end print workflow and device data.
This strategic initiative, led by the AI Center of Excellence, involves cross-functional collaboration across multiple business units. As part of this effort, we are partnering with the Graphics Experience Center (GEC) in Sant Cugat to showcase early prototypes in a dedicated AI innovation showroom.
We are looking for a Graduate-level AI Engineer who is passionate about applied machine learning and eager to contribute to the development of cutting-edge AI solutions. This role offers the opportunity to work on real-world challenges, influence product direction, and grow into a key contributor within our AI team.
Responsibilities
- Design, develop, and deploy AI-driven features for the conversational insights platform.
- Collaborate with cross-functional teams to integrate AI models into production workflows.
- Analyze print workflow and device data to extract actionable insights.
- Contribute to the evolution of our AI showroom at the GEC, including live demos and customer-facing prototypes.
- Engage with stakeholders to understand business needs and translate them into technical solutions.
Requirements
- Recent graduate (or final-year student) in Data Science, Computer Science, AI, or a related field.
- Strong programming skills in Python and SQL.
- Solid understanding of machine learning and AI concepts.
- Excellent communication skills in English and Spanish.
- Proactive, self-driven, and eager to learn in a fast-paced environment.
Being part of HP means access to an international community with lots of growth opportunities within the company, professional development resources and networking opportunities, all while enjoying a great atmosphere and making an impact.
- You will be able to choose to either work office-based or hybrid work style.
- Flexibility to keep a good work life balance.
- Health & Life insurance.
- Meal subsidy.
- HP product discounts.
- Flex optimization program: Kindergarten tickets, public transportation tickets.
- Diverse, continued internal growth and career opportunities. Including HP’s own learning platform and LinkedIn Learning.
- Women, Pride, Young employees, Multicultural, Sustainability and Disability! Just a few of our fantastic global business networks you can get involved with locally.
- We also dedicate time and resources to contribute with our community through Corporate Volunteering activities, including our onsite HP Charity day
- Our HP Barcelona campus in Sant Cugat del Vallès is an inspiring, diverse and inclusive venue to meet, engage and co-create with colleagues from all over the world: flexible work, collaborative spaces, and sports and leisure areas (gym, tennis court, ping-pong, soccer field and basketball court).
Data Analyst
Canarias7 · Las Palmas de Gran Canaria, ES (Jul 1)
TSQL Data Science Statistics Analytical Skills Data Analysis Data Analytics Microsoft Power BI Mathematics Ad Hoc Analysis Data Visualization UX/UI Machine Learning Power BI SEO
Turn the data generated by the outlet's activity into strategic information that optimizes content production, improves the user experience and increases audience impact, supporting decision-making in the newsroom, product, business and marketing.
- Monitor KPIs (page views, unique users, reading time, bounce rate, etc.)
- Identify consumption patterns and behavioral trends
- Segment users to design personalization and loyalty strategies
- Design interactive dashboards (Looker Studio, Power BI, etc.)
- Produce regular reports for the newsroom, management and commercial areas
- Advise on optimal article length, structure and home-page placement
- Apply machine learning techniques to forecast trends and churn
- Work with SEO, newsroom, marketing and product teams
- Take part in A/B tests to improve design and user experience
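The A/B testing duty above typically reduces to comparing conversion rates between two variants. A minimal sketch of a two-proportion z-test using only the Python standard library (the function name and inputs are illustrative, not from the posting):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, 120 conversions out of 1,000 versus 100 out of 1,000 gives z ≈ 1.43 and p ≈ 0.15, i.e. not significant at the usual 5% level.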
Machine Learning
Capitole · Barcelona, ES (Jul 1)
Python Machine Learning
At Capitole we keep growing, and we want to keep doing so with you. We are looking for Machine Learning Engineers with 3 to 5 years of professional experience building production systems or features that leverage Artificial Intelligence and Machine Learning techniques. We want people who have worked on end-to-end projects, from design to implementation, with a strong focus on code quality and system robustness.
Ideally, we are looking for profiles focused on Machine Learning Engineering (MLE) rather than traditional Data Science, although we will also consider candidates with a software background who have recently transitioned into AI/ML.
Key requirements:
- 3-5 years of experience in ML/AI roles in production environments.
- Experience deploying models to production, including data pipelines, training, validation and monitoring.
- Hands-on experience with Python and frameworks/libraries such as TensorFlow, PyTorch, scikit-learn and Databricks.
- Solid knowledge of topics such as data augmentation, bias and training pipelines.
- Good communication skills and the ability to explain technical decisions.
- Experience with end-to-end machine learning and data projects.
- Experience or knowledge of NLP, GenAI or LLMs (desirable).
- Recent hands-on work with an NLP/GenAI model that you can explain in detail is a plus.
What you'll get:
• A €1,200 individual training budget to spend however you like (tech events, books, courses, certifications, etc.).
• Monthly check-ins with your team for continuous feedback.
• Fully remote work.
• Flexible hours to help you balance your professional and family life.
• Private health insurance fully paid by Capitole.
• Flexible compensation (restaurant, transport and/or childcare vouchers).
• Wellhub (Gymforless).
• Discounts on major brands for employees (Club Capitole).
So you can get to know the whole family:
• Team-building events every two months. Don't miss the summer party or the Christmas dinner!
• A football team sponsored by Capitole.
• Tech communities where you can share knowledge and ideas with the other teams. Sharing internal knowledge is essential!
• And last but not least, a great team!
Don't know us yet? Discover us: https://capitole-consulting.com/
See what people say about us: https://www.glassdoor.es/Opiniones/Capitole-Consulting-Opiniones-E2060890.htm
Don't hesitate to send us your profile; we can't wait to meet you!
Data Scientist (Procurement)
Eroski · Elorrio, ES
Python TSQL R Machine Learning
Are you passionate about advanced analytics and its application to complex operational challenges? Would you like to be part of the technological evolution of a large organization like the Eroski Group?
At Eroski we are immersed in an ambitious digital transformation program and are looking to add people like you to the Procurement area, which is key to guaranteeing an efficient and sustainable supply chain. If you enjoy analyzing data to anticipate demand, optimize processes and add value through mathematical models, this is the place for you!
What will your mission be?
- Identify business opportunities within the Procurement area that can be addressed with advanced analytics.
- Develop, or contribute to building, predictive and prescriptive models applying statistics, econometrics and machine learning.
- Supervise and maintain the models already in production, ensuring their reliability and contribution to the business.
- Participate in defining data governance standards and operations.
- Collaborate with multidisciplinary teams to translate business needs into analytics solutions with real impact.
At Eroski, we believe in people and their capacity to transform their environment. That is why we offer you much more than a job: we invite you to join a shared project with purpose and a future.
- Contract as a member-partner (SDD). Our aim is to offer you job stability.
- Your time matters: flexible working hours, hybrid work two days a week, and a digital disconnection policy.
- Professional development tailored to you: you will take part in a cross-cutting project within the Eroski Group's systems, working with subject-matter experts and leading vendors. Through our tools and a training program you will be able to advance your professional development. We invest in internal talent, so you will have the chance to take on new roles in the future.
- We look after you: online doctor, free psychological support, a digital physiotherapy platform and an annual medical check-up, plus parking and a canteen serving healthy home-style food, also available to take away. As a member-partner, you will also have access to the medical network (Lagunaro) and banking benefits (Laboral Kutxa).
- Additional benefits: the MasXmenos program, which improves your compensation through tax advantages, and the option to join the Basque-language (euskera) plan.
- An environment where your voice counts: we believe in shared leadership and foster collaboration, participation and collective contribution. We promote an agile, flexible and inclusive environment with a firm commitment to equal opportunities, diversity and inclusion.
We are looking for people with…
- A university degree in Mathematics or Computer Engineering.
- Proven experience in statistics, econometrics and building predictive models.
- Advanced programming skills (Python, R, SQL) and database manipulation.
- Solid knowledge of machine learning, deep learning and artificial intelligence.
- Experience with data analysis and visualization tools.
- Experience with analytics environments and platforms in Logistics or Procurement is a plus.
Databricks Data Engineer
Axpo Group · Madrid, ES (Jul 1)
MongoDB Python TSQL Azure Linux Docker Cloud Computing DevOps PostgreSQL Terraform Spark Machine Learning Power BI
Who We Are
Axpo is driven by a single purpose - to enable a sustainable future through innovative energy solutions. As Switzerland's largest producer of renewable energy and a leading international energy trader, Axpo leverages cutting-edge technologies to serve customers in over 30 countries. We thrive on collaboration, innovation, and a passion for driving impactful change.
About the Team
You will report directly to our Head of Development and join a team of highly committed IT data platform engineers with a shared goal: unlocking data and enabling self-service data analytics capabilities across Axpo. Our decentralized approach means close collaboration with various business hubs across Europe, ensuring local needs shape our global platform. You'll find a mindset committed to innovation, collaboration, and excellence.
What You Will Do
As a Databricks Data Engineer, you will:
- Be a core contributor in Axpo´s data transformation journey by using Databricks as our primary data and analytics platform.
- Design, develop, and operate scalable data pipelines on Databricks, integrating data from a wide variety of sources (structured, semi-structured, unstructured).
- Leverage Apache Spark, Delta Lake, and Unity Catalog to ensure high-quality, secure, and reliable data operations.
- Apply best practices in CI/CD, DevOps, orchestration (e.g., Dagster, Airflow), and infrastructure-as-code (Terraform).
- Build re-usable frameworks and libraries to accelerate ingestion, transformation, and data serving across the business.
- Work closely with data scientists, analysts, and product teams to create performant and cost-efficient analytics solutions.
- Drive the adoption of Databricks Lakehouse architecture and help standardize data governance, access policies, and documentation.
- Ensure compliance with data privacy and protection standards (e.g., GDPR).
- Actively contribute to the continuous improvement of our platform in terms of scalability, performance, and usability.
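The pipeline duties above follow a common incremental-batch pattern: take records newer than a watermark, deduplicate per key, transform, and advance the watermark. A pure-Python stand-in for what Spark/Delta Lake handles at scale (all names and the record shape are illustrative, not Databricks API code):

```python
def run_batch(records, last_watermark):
    """One idempotent batch step: filter by watermark, dedupe by key, transform."""
    # Only process records newer than the last committed watermark.
    fresh = [r for r in records if r["ts"] > last_watermark]
    # Keep the newest record per key (last-write-wins dedupe).
    latest = {}
    for r in fresh:
        if r["id"] not in latest or r["ts"] > latest[r["id"]]["ts"]:
            latest[r["id"]] = r
    # Example transformation step (here: doubling the value).
    out = [{"id": r["id"], "value": r["value"] * 2, "ts": r["ts"]}
           for r in latest.values()]
    # Advance the watermark only if something was processed.
    new_watermark = max((r["ts"] for r in fresh), default=last_watermark)
    return out, new_watermark
```

Rerunning the step with the advanced watermark produces no new output, which is the idempotence property that makes incremental pipelines safe to retry.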
What You Bring & Who You Are
We´re looking for someone with:
- A university degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Strong experience with Databricks, Spark, Delta Lake, and SQL/Scala/Python.
- Proficiency in dbt, ideally with experience integrating it into Databricks workflows.
- Familiarity with Azure cloud services (Data Lake, Blob Storage, Synapse, etc.).
- Hands-on experience with Git-based workflows, CI/CD pipelines, and data orchestration tools like Dagster and Airflow.
- Deep understanding of data modeling, streaming & batch processing, and cost-efficient architecture.
- Ability to work with high-volume, heterogeneous data and APIs in production-grade environments.
- Knowledge of data governance frameworks, metadata management, and observability in modern data stacks.
- Strong interpersonal and communication skills, with a collaborative, solution-oriented mindset.
- Fluency in English.
Technologies You´ll Work With
- Core: Databricks, Spark, Delta Lake, Python, dbt, SQL
- Cloud: Microsoft Azure (Data Lake, Synapse, Storage)
- DevOps: Bitbucket/GitHub, Azure DevOps, CI/CD, Terraform
- Orchestration & Observability: Dagster, Airflow, Grafana, Datadog, New Relic
- Visualization: Power BI
- Other: Confluence, Docker, Linux
Nice to Have
- Experience with Unity Catalog and Databricks Governance Frameworks
- Exposure to Machine Learning workflows on Databricks (e.g., MLflow)
- Knowledge of Microsoft Fabric or Snowflake
- Experience with low-code analytics tools like Dataiku
- Familiarity with PostgreSQL or MongoDB
- Front-end development skills (e.g., for data product interfaces)
Department: Installation / Maintenance / Servicing / Craft · Location: Madrid · Remote status: Hybrid