European Tech Recruit
Zaragoza, ES
Backend Engineer - Python / Go / FTC
European Tech Recruit · Zaragoza, ES
API MySQL Python Docker Cloud Computing Kubernetes Microservices Git AWS PostgreSQL gRPC Kafka Machine Learning
Backend Engineer - Python / Go / FTC
We are currently partnered with a world-leading quantum AI company developing cutting-edge solutions that make artificial intelligence faster, greener, and more accessible. Their products bridge the gap between quantum computing and AI, helping global clients solve complex real-world problems at scale. The team is seeking a highly skilled Backend Engineer to build and maintain the core services behind these products.
This is a fixed-term contract with hybrid working from Zaragoza.
Key Responsibilities for this Backend Engineer position:
- Develop, scale, and maintain robust backend systems primarily using Python or Go.
- Design, implement, and optimize RESTful API or gRPC endpoints with focus on versioning and high performance.
- Convert R&D scripts and prototypes into stable, product-ready APIs or microservices with a strong product-oriented mindset.
- Manage and optimize relational databases (PostgreSQL/MySQL) including schema design and data modeling.
- Implement and maintain infrastructure-as-code principles using Docker and basic Kubernetes for scalable service deployment.
- Integrate and automate workflows across multiple systems, including data, CI/CD, and machine learning pipelines.
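As a rough illustration of the API-versioning responsibility above, here is a minimal, framework-free Python sketch. The route paths and handler names are invented for illustration and are not the company's actual stack:

```python
# Minimal sketch of versioned API routing, independent of any framework.
# Handler and route names (handle_v1, /v1/double) are illustrative only.

def handle_v1(payload):
    # v1 returns a flat result
    return {"result": payload["x"] * 2}

def handle_v2(payload):
    # v2 wraps the result with metadata, leaving v1 untouched for old clients
    return {"data": {"result": payload["x"] * 2}, "api_version": 2}

ROUTES = {
    "/v1/double": handle_v1,
    "/v2/double": handle_v2,
}

def dispatch(path, payload):
    handler = ROUTES.get(path)
    if handler is None:
        return {"error": "not found", "status": 404}
    return handler(payload)
```

The point of the sketch is that versioning lives in the route namespace, so old clients keep a stable contract while new behavior ships under a new prefix.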
Key Requirements:
- Strong proficiency in either Python or Go.
- Proven experience with API design (REST/gRPC) and performance optimization.
- Hands-on experience with Docker and fundamental understanding of Kubernetes.
- Solid experience with relational databases (PostgreSQL/MySQL) and effective schema design.
- Expertise with Git, testing frameworks, and CI/CD pipelines (e.g., GitLab, GitHub Actions).
- Experience with AWS or other major cloud-based deployment environments.
Keywords: Backend Engineer / Python / Go / RESTful API / gRPC / Microservices / Docker / Kubernetes / AWS / Cloud Deployment / PostgreSQL / MySQL / Relational Database / CI/CD / Git / LLM Deployment / MLOps / AI / Quantum Computing / Deep Tech / Fixed-Term / Observability / Prometheus / Grafana / Kafka
If you are interested in this Backend Engineer position, please send a copy of your CV to [email protected]
By applying to this role you understand that we may collect your personal data and store and process it on our systems. For more information please see our Privacy Notice https://eu-recruit.com/wp-content/uploads/2024/07/European-Tech-Recruit-Privacy-Notice-2024.pdf
Senior Data Engineer
Ultra Tendency
Senior Data Engineer
Ultra Tendency · Saiáns (San Salvador), ES
Remote · Python Azure Cloud Computing Ansible Scala Microservices AWS DevOps Terraform Kafka Spark Big Data Machine Learning Office
Our Engineering department is growing, and we're now looking for a Senior Data Engineer - Databricks (m/f/*) to join our team in Spain, supporting our global growth.
As Senior Data Engineer (m/f/*), you design and optimize data processing algorithms on a talented, cross-functional team.
You are familiar with the Apache open-source suite of technologies and want to contribute to the advancement of data engineering.
What We Offer
Flexible work options, including fully remote or hybrid arrangements (candidates must be located in Spain)
A chance to accelerate your career and work with outstanding colleagues in a supportive learning community split across 3 continents
Contribute your ideas to our unique projects and make an impact by turning them into reality
Balance your work and personal life through our workflow organization and decide yourself if you work at home, in the office, or on a hybrid setup
Annual performance review, and regular feedback cycles, generating distinct value by connecting colleagues through networks rather than hierarchies
Individual development plan, professional development opportunities
Educational resources such as paid certifications, unlimited access to Udemy Business, etc.
Local, virtual, and global team events, in which UT colleagues become acquainted with one another
What You'll Do
Design, implement, and maintain scalable data pipelines using Databricks Lakehouse Platform, with a strong focus on Apache Spark, Delta Lake, and Unity Catalog.
Lead the development of batch and streaming data workflows that power analytics, machine learning, and business intelligence use cases.
Collaborate with data scientists, architects, and business stakeholders to translate complex data requirements into robust, production-grade solutions.
Optimize performance and cost-efficiency of Databricks clusters and jobs, leveraging tools like Photon, Auto Loader, and Job Workflows.
Establish and enforce best practices for data quality, governance, and security within the Databricks environment.
Mentor junior engineers and contribute to the evolution of the team's Databricks expertise.
What You'll Bring
Deep hands-on experience with Databricks on Azure, AWS, or GCP, including Spark (PySpark/Scala), Delta Lake, and MLflow.
Strong programming skills in Python or Scala, and experience with CI/CD pipelines (e.g., GitHub Actions, Azure DevOps).
Solid understanding of distributed computing, data modeling, and performance tuning in cloud-native environments.
Familiarity with orchestration tools (e.g., Databricks Workflows, Airflow) and infrastructure-as-code (e.g., Terraform).
A proactive mindset, strong communication skills, and a passion for building scalable, reliable data systems.
Professional Spanish & English communication skills (C1-level, written and spoken).
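The batch/streaming workflow orchestration described above can be sketched as a job specification plus a dependency-ordering pass. The dict below mimics the shape of the Databricks Jobs API (field names such as `task_key` and `depends_on` follow the 2.1 schema to the best of my knowledge; all paths and cluster names are invented):

```python
# Hypothetical multi-task job specification in the style of the Databricks
# Jobs API. Notebook paths and cluster keys are illustrative assumptions.
job_spec = {
    "name": "nightly-lakehouse-refresh",
    "tasks": [
        {
            "task_key": "bronze_ingest",
            "notebook_task": {"notebook_path": "/Repos/demo/ingest"},
            "job_cluster_key": "etl_cluster",
        },
        {
            "task_key": "silver_transform",
            "depends_on": [{"task_key": "bronze_ingest"}],
            "notebook_task": {"notebook_path": "/Repos/demo/transform"},
            "job_cluster_key": "etl_cluster",
        },
    ],
}

def execution_order(spec):
    # Kahn-style topological ordering over the depends_on edges.
    remaining = {
        t["task_key"]: {d["task_key"] for d in t.get("depends_on", [])}
        for t in spec["tasks"]
    }
    order = []
    while remaining:
        ready = sorted(k for k, deps in remaining.items() if not deps)
        if not ready:
            raise ValueError("cycle in task dependencies")
        order.extend(ready)
        for k in ready:
            del remaining[k]
        for deps in remaining.values():
            deps.difference_update(ready)
    return order
```

In a real Workflows deployment the service resolves this ordering itself; the pass above just makes the implied DAG explicit.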
Did we pique your interest, or do you have any questions?
We want to hear from you: contact us at ******
About Us
Ultra Tendency is an international premier Data Engineering consultancy for Big Data, Cloud, Streaming, IIoT and Microservices.
We design, build, and operate large-scale data-driven applications for major enterprises such as the European Central Bank, HUK-Coburg, Deutsche Telekom, and Europe's largest car manufacturer.
Founded in Germany in ****, UT has developed a reliable client base and now runs 8 branches in 7 countries across 3 continents.
We do more than just leverage tech, we build it.
At Ultra Tendency we contribute source code to 20+ open-source projects including Ansible, Terraform, NiFi, and Kafka.
Our impact on tech and business is there for anyone to see.
Enterprises seek out Ultra Tendency because we solve the problems others cannot.
We love the challenge: together, we tackle diverse and unique projects you will find nowhere else.
In our knowledge community, you will be a part of a supportive network, not a hierarchy.
Constant learning and feedback are our drivers for stable development.
With us you can develop your individual career through work-life balance.
We evaluate your application based on your skills and corresponding business requirements.
Ultra Tendency welcomes applications from qualified candidates regardless of race, ethnicity, national or social origin, disability, sex, sexual orientation, or age.
Data privacy statement: Data Protection for Applicants – Ultra Tendency
Boehringer Ingelheim
Barcelona, ES
AI Content Technology Specialist
Boehringer Ingelheim · Barcelona, ES
Python TSQL Azure Cloud Computing AWS DevOps Machine Learning
This role focuses on using artificial intelligence to improve how digital content is created and delivered. The specialist will work with AI tools to automate content production and manage requests from different regions, mainly Europe and Asia-Pacific, with a focus on the Animal Health business.
Tasks And Responsibilities
- Develop, implement, and improve AI models for content creation (websites, emails, banners)
- Manage the technical production of digital assets and coordinate with external vendors
- Integrate systems used for content planning, review (MLR), asset management, and omnichannel distribution
- Handle content production requests from EU and APAC regions
- Act as a point of contact for issue resolution and stakeholder communication across markets
- AI & Machine Learning Expertise:
- Experience with large language models (LLMs), LangChain, LlamaIndex, and agent-based AI frameworks
- Knowledge of prompt engineering and the full AI model lifecycle (training, deployment, monitoring, retraining)
- Familiarity with multimodal AI models for text, image, and video content
- Cloud & DevOps Skills:
- Hands-on experience with AWS AI/ML services (e.g., SageMaker, Bedrock, Comprehend, Rekognition)
- Experience with Azure AI tools and hybrid cloud environments
- Proficiency in MLOps practices, containerization (Docker/Kubernetes), and CI/CD pipelines
- Strong programming skills in Python, JavaScript/TypeScript, and SQL
- Pharma & Content Domain Knowledge:
- Understanding of content production processes and pharmaceutical compliance standards
- Experience with omnichannel content strategies and tools used in pharma marketing and medical affairs
- Knowledge of content tagging, MLR workflows, metadata management, and personalization engines
- Fluent in English; other European languages are a plus.
We are continuously working to design the best experience for you. Here are some examples of how we will take care of you:
- Flexible working conditions
- Life and accident insurance
- Health insurance at a competitive price
- Investment in your learning and development
- Gym membership discounts
Our Company
Why Boehringer Ingelheim?
With us, you can develop your own path in a company with a culture that knows our differences are our strengths - and break new ground in the drive to make millions of lives better.
Here, your development is our priority. Supporting you to build a career as part of a workplace that is independent, authentic and bold, while tackling challenging work in a respectful and friendly environment where everyone is valued and welcomed.
Alongside, you have access to programs and groups that ensure your health and wellbeing are looked after - as we make major investments to drive global accessibility to healthcare. By being part of a team that is constantly innovating, you'll be helping to transform lives for generations.
Want to learn more? Visit https://www.boehringer-ingelheim.com
Senior Data Engineer
Ultra Tendency
Senior Data Engineer
Ultra Tendency · Ferrol, ES
Remote · Azure Cloud Computing AWS DevOps Terraform Spark Machine Learning Office
Our Engineering department is growing, and we're now looking for a Senior Data Engineer - Databricks (m/f/*) to join our team in Spain, supporting our global growth. As Senior Data Engineer (m/f/*), you design and optimize data processing algorithms on a talented, cross-functional team.
You are familiar with the Apache open-source suite of technologies and want to contribute to the advancement of data engineering.
WHAT WE OFFER
Flexible work options, including fully remote or hybrid arrangements (candidates must be located in Spain)
A chance to accelerate your career and work with outstanding colleagues in a supportive learning community split across 3 continents
Contribute your ideas to our unique projects and make an impact by turning them into reality
Balance your work and personal life through our workflow organization and decide yourself if you work at home, in the office, or on a hybrid setup
Annual performance review, and regular feedback cycles, generating distinct value by connecting colleagues through networks rather than hierarchies
Individual development plan, professional development opportunities
Educational resources such as paid certifications, unlimited access to Udemy Business, etc.
Local, virtual, and global team events, in which UT colleagues become acquainted with one another
WHAT YOU'LL DO
Design, implement, and maintain scalable data pipelines using Databricks Lakehouse Platform, with a strong focus on Apache Spark, Delta Lake, and Unity Catalog.
Lead the development of batch and streaming data workflows that power analytics, machine learning, and business intelligence use cases.
Collaborate with data scientists, architects, and business stakeholders to translate complex data requirements into robust, production-grade solutions.
Optimize performance and cost-efficiency of Databricks clusters and jobs, leveraging tools like Photon, Auto Loader, and Job Workflows.
Establish and enforce best practices for data quality, governance, and security within the Databricks environment.
Mentor junior engineers and contribute to the evolution of the team's Databricks expertise.
WHAT YOU'LL BRING
Deep hands-on experience with Databricks on Azure, AWS, or GCP, including Spark (PySpark/Scala), Delta Lake, and MLflow.
Strong programming skills in Python or Scala, and experience with CI/CD pipelines (e.g., GitHub Actions, Azure DevOps).
Solid understanding of distributed computing, data modeling, and performance tuning in cloud-native environments.
Familiarity with orchestration tools (e.g., Databricks Workflows, Airflow) and infrastructure-as-code (e.g., Terraform).
A proactive mindset, strong communication skills, and a passion for building scalable, reliable data systems.
Professional Spanish & English communication skills (C1-level, written and spoken).
Did we pique your interest, or do you have any questions?
We want to hear from you: contact us at ******
ABOUT US
Ultra Tendency is an international premier Data Engineering consultancy for Big Data, Cloud, Streaming, IIoT and Microservices.
We design, build, and operate large-scale data-driven applications for major enterprises such as the European Central Bank, HUK-Coburg, Deutsche Telekom, and Europe's largest car manufacturer.
Founded in Germany in ****, UT has developed a reliable client base and now runs 8 branches in 7 countries across 3 continents.
We do more than just leverage tech, we build it.
At Ultra Tendency we contribute source code to 20+ open-source projects including Ansible, Terraform, NiFi, and Kafka.
Our impact on tech and business is there for anyone to see.
Enterprises seek out Ultra Tendency because we solve the problems others cannot.
We love the challenge: together, we tackle diverse and unique projects you will find nowhere else.
In our knowledge community, you will be a part of a supportive network, not a hierarchy.
Constant learning and feedback are our drivers for stable development.
With us you can develop your individual career through work-life balance.
We evaluate your application based on your skills and corresponding business requirements.
Ultra Tendency welcomes applications from qualified candidates regardless of race, ethnicity, national or social origin, disability, sex, sexual orientation, or age.
Junior Data Analyst
30 Nov · Ribanco
Junior Data Analyst
Ribanco · Madrid, ES
Remote · JavaScript Java TSQL HTML XML Web Applications Software Development Back-End Android Development Python Agile Excel Machine Learning Power BI Tableau Office
Job Description
Join Ribanco Development Ltd as a Junior Data Analyst in Madrid, Spain. Analyze data to support business decisions, create reports, and assist in data visualization. Ideal for entry-level professionals with basic SQL and Excel skills, eager to grow in a dynamic tech environment. Hybrid work model offers flexibility while gaining hands-on experience in data analysis.
Full Description
At Ribanco Development Ltd, we are seeking a motivated Junior Data Analyst to join our innovative team in Madrid, Spain. This entry-level role is perfect for recent graduates or early-career professionals passionate about turning data into actionable insights. As a Junior Data Analyst, you will support our data-driven projects by collecting, cleaning, and analyzing datasets to help inform business strategies and operational efficiencies.
Your primary responsibilities will include assisting senior analysts in extracting data from various sources using tools like SQL and Python basics. You will perform exploratory data analysis to identify trends and patterns, and contribute to the creation of dashboards and reports using visualization software such as Tableau or Power BI. Collaborating with cross-functional teams, you will help translate complex data findings into simple, understandable narratives for non-technical stakeholders.
In this role, you will gain exposure to real-world data challenges in the tech industry, working with structured and unstructured data from customer interactions, sales metrics, and operational logs. We emphasize a supportive learning environment where you can develop foundational skills in data manipulation, statistical analysis, and basic machine learning concepts. Expect to participate in team meetings, contribute to ad-hoc queries, and support ongoing projects that drive company growth.
Technologies and tools you will work with include Microsoft Excel for initial data handling, SQL for database querying, and introductory Python scripting for automation. Methodologies like Agile will be part of your daily workflow, ensuring iterative progress and quick adaptations to new requirements. This position offers mentorship from experienced data professionals, regular training sessions on emerging tools, and opportunities to shadow advanced analyses.
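The SQL-in-Python workflow described above can be pictured with a self-contained sketch using Python's built-in sqlite3 module. The toy table and column names are invented for illustration and are not Ribanco's actual data:

```python
import sqlite3

# Illustrative only: an in-memory database standing in for the kinds of
# sales tables the posting mentions. Table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EU", 120.0), ("EU", 80.0), ("APAC", 50.0)],
)

# A basic aggregation of the sort a junior analyst runs daily before
# handing the result to Excel or a dashboard tool.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
```

Running the query yields one row per region with its summed amount, ready to export or chart.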
We value curiosity, attention to detail, and a proactive attitude. As part of a hybrid work setup, you will split time between our Madrid office and remote work, fostering both collaboration and work-life balance. Ribanco Development Ltd is committed to professional development, providing resources for certifications like Google Data Analytics or Microsoft Certified: Data Analyst Associate.
Over time, you will evolve from supporting tasks to owning smaller analysis projects, building a strong foundation for career advancement in data analytics. Join us to kickstart your journey in a company that values innovation and employee growth, contributing to impactful solutions in the European tech landscape.
Requirements
- Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field
- Basic proficiency in SQL and Microsoft Excel
- Familiarity with data visualization tools like Tableau or Power BI is a plus
- Strong analytical thinking and problem-solving skills
- Excellent communication abilities to present findings clearly
- Eagerness to learn and adapt in a fast-paced environment
- No prior professional experience required; internships welcome
Benefits
- Competitive salary with annual reviews
- Hybrid work model for flexibility
- Comprehensive health insurance coverage
- Paid time off including 25 vacation days
- Professional development budget for courses and certifications
- Team-building events and social activities
- Access to modern office facilities in Madrid
- Employee assistance program for well-being
Marketing Data Management intern
29 Nov · The Information Lab Italia
Madrid, ES
Marketing Data Management intern
The Information Lab Italia · Madrid, ES
Big Data Machine Learning
The Information Lab, a leading company in data analysis and visualization and in the implementation of business digitalization strategies, is looking for a Marketing Data Management intern to join our marketing department.
Activities:
- Prospecting: Prospecting is the search for people or companies that are potential targets for the products and services offered, but with whom no direct connection has yet been established. As the name suggests, such a person or company is a potential customer who will be offered a series of acquisition activities tailored to their segment. In marketing, prospecting classifies target groups in a qualified way, allowing a company to design its campaigns more efficiently and minimize losses from poor targeting.
- Data quality: the activities and processes aimed at organizing and improving the quality of data in a database, specifically in the sales or marketing CRM, which holds all the profile data of prospects, leads, and customers.
Skills acquired during the internship:
Through this curricular internship in The Information Lab's marketing department, you will have the opportunity to build a solid foundation in B2B market orientation and the different players within it.
You will also have the opportunity to collaborate on the strategic planning of marketing campaigns and on the first steps of the sales approach in a rapidly expanding sector.
In addition, you will be able to broaden your knowledge of the Business Intelligence world, picking up notions of various software tools related to Big Data, Machine Learning, and Data Analytics, among others.
Career opportunities
Prospecting and data quality will open the door to the Business Development Representative profession. These activities are crucial and form the foundation for anyone wishing to pursue a business development career, with eventual progression to Sales Account Manager.
Degrees of interest:
Business Administration and Marketing
Start date: 1/10/2025
Schedule: Monday to Friday, 9:00 to 13:00.
Who we are
The Information Lab is the best strategic partner for transforming companies into data-driven organizations. Our goal is to accompany them in introducing and developing a solid data culture, helping them make sense of their own data.
We believe in change and promote it with customized solutions and services, thanks to our team of certified professionals who support the market at every stage of the process.
We share information and experience through an internal network spanning eight European countries. To achieve this goal, the group's strategic choice is to focus on the platforms we consider most innovative in the analytics landscape and that we believe are the most mature expression of Self-Service Analytics.
Alongside a series of stimulating responsibilities, we offer room for your development and the development of your ideas. We work with both large international and national companies, from SMEs to corporations, covering every market sector.
Mindrift
Evaluation Scenario Writer - AI Agent Testing Specialist
Mindrift · Barcelona, ES
Remote · Python QA Machine Learning
This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates. Please submit your resume in English and indicate your level of English.
At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.
What We Do
The Mindrift platform, launched and powered by Toloka, connects domain experts with cutting-edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real-world expertise from across the globe.
About The Role
We're looking for someone who can design realistic and structured evaluation scenarios for LLM-based agents. You'll create test cases that simulate human-performed tasks and define gold-standard behavior to compare agent actions against. You'll work to ensure each scenario is clearly defined, well-scored, and easy to execute and reuse. You'll need a sharp analytical mindset, attention to detail, and an interest in how AI agents make decisions.
Although every project is unique, you might typically:
- Design structured test scenarios based on real-world tasks
- Define the golden path and acceptable agent behavior
- Annotate task steps, expected outputs, and edge cases
- Work with devs to test your scenarios and improve clarity
- Review agent outputs and adapt tests accordingly
Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring technology benefits everyone.
Requirements
- Bachelor's and/or Master's degree in Computer Science, Software Engineering, Data Science / Data Analytics, Artificial Intelligence / Machine Learning, Computational Linguistics / Natural Language Processing (NLP), Information Systems, or other related fields
- Background in QA, software testing, data analysis, or NLP annotation
- Good understanding of test design principles (e.g., reproducibility, coverage, edge cases)
- Strong written communication skills in English
- Comfortable with structured formats like JSON/YAML for scenario description
- Can define expected agent behaviors (gold paths) and scoring logic
- Basic experience with Python and JS
- Curious and open to working with AI-generated content, agent logs, and prompt-based behavior
- You are ready to learn new methods, able to switch between tasks and topics quickly and sometimes work with challenging, complex guidelines
- Our freelance role is fully remote, so you just need a laptop, an internet connection, available time, and enthusiasm to take on a challenge
- Experience in writing manual or automated test cases
- Familiarity with LLM capabilities and typical failure modes
- Understanding of scoring metrics (precision, recall, coverage, reward functions)
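The structured scenario formats and scoring metrics listed above can be illustrated with a small sketch. The scenario fields (`task`, `gold_steps`) and the scorer are illustrative assumptions, not Mindrift's actual schema:

```python
import json

# Hypothetical evaluation scenario: a JSON test case with a gold-standard
# ("golden path") action sequence. All field names and steps are invented.
scenario = json.loads("""
{
  "task": "book_meeting_room",
  "gold_steps": ["open_calendar", "select_room", "confirm_booking"],
  "max_score": 1.0
}
""")

def score(agent_steps, gold_steps):
    # Precision: fraction of the agent's actions that lie on the gold path.
    # Recall: fraction of gold-path actions the agent actually performed.
    hits = [s for s in agent_steps if s in gold_steps]
    precision = len(hits) / len(agent_steps) if agent_steps else 0.0
    recall = len(set(hits)) / len(gold_steps)
    return precision, recall
```

For example, an agent that performs one extraneous step but hits every gold step scores full recall with reduced precision, which is exactly the distinction the role's scoring metrics are meant to capture.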
Contribute on your own schedule, from anywhere in the world. This opportunity allows you to:
- Get paid for your expertise, with rates that can go up to $32/hour depending on your skills, experience, and project needs
- Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments
- Participate in an advanced AI project and gain valuable experience to enhance your portfolio
- Influence how future AI models understand and communicate in your field of expertise
Data Analytics & AI Specialist
28 Nov · SEA & PORTS
Data Analytics & AI Specialist
SEA & PORTS · Madrid, ES
Remote · Python TSQL R Machine Learning Power BI Tableau
Who are we?
Sea & Ports is a group of companies in the maritime-port logistics sector with a global presence, highly specialized in trade flows with the African continent, specifically West Africa. Headquartered in Madrid, with a current area of operations covering the Mediterranean, West Africa, Northern Europe, and Asia, Sea & Ports is the only Spanish maritime group with international operations that owns its own shipping line while covering every area of maritime transport.
About the Role
We are recruiting a Data Analytics & AI Specialist for our Madrid offices.
Responsibilities:
- Design and maintenance of data pipelines.
- Cleaning, analysis, and modeling of datasets.
- Building dashboards in Power BI or Tableau.
- Applying AI models for prediction and optimization.
- Collaborating with the product team.
Required education
- Computer Engineering, Mathematics, Statistics, or Physics.
- Economics or Business Administration with a specialization in data analysis.
Required skills
- At least 2 years of experience in data analysis, machine learning, and visualization.
- Proficiency in SQL and Python or R.
- Experience with Power BI or Tableau.
- Knowledge of applied AI.
- Analytical skills and business acumen.
- English at B2 level.
Compensation package
- Permanent contract.
- Fixed salary and benefits such as health insurance and a meal card.
- Flexible working hours.
- 1 remote-work day per week.
Equal Opportunity Statement
Sea & Ports is committed to diversity and inclusion in the workplace.
Spanish Linguistic Expert
27 Nov · Innodata Inc.
Marbella, ES
Spanish Linguistic Expert
Innodata Inc. · Marbella, ES
QA Machine Learning
Innodata (NASDAQ: INOD) is a leading data engineering company.
With more than 2,000 customers and operations in 13 cities around the world, we are an AI technology solutions provider-of-choice for 4 out of 5 of the world's biggest technology companies, as well as leading companies across financial services, insurance, technology, law, and medicine.
By combining advanced machine learning and artificial intelligence (ML / AI) technologies, a global workforce of subject matter experts, and a high-security infrastructure, we're helping usher in the promise of AI.
Innodata offers a powerful combination of both digital data solutions and easy-to-use, high-quality platforms.
Our global workforce includes over 5,000 employees in the United States, Canada, United Kingdom, the Philippines, India, Sri Lanka, Israel and Germany.
About The Role
We are seeking a highly analytical and detail-oriented linguist to support AI training initiatives and linguistic content creation.
This role is ideal for someone with a strong academic background in linguistics (syntax, semantics, pragmatics, morphology, phonology, sociolinguistics, etc.) and a passion for language, technology, and clear communication.
You will play a crucial role in shaping the capabilities of large language models (LLMs) and NLP-based systems through high-quality linguistic data curation, annotation, and evaluation.
Job Title
Linguistics Expert – AI Training & Content Writing
Experience Level
Master's or PhD in Linguistics or a related field
Key Responsibilities
Create or edit linguistically rich content, including grammar guides, syntactic analyses, usage explanations, or examples for NLP pipelines.
Identify and resolve issues related to ambiguity, bias, and grammaticality.
Perform quality assurance (QA) on model outputs for fluency, tone, factual accuracy, and language appropriateness.
Annotate linguistic datasets with syntactic, semantic, or pragmatic labels.
Support internal teams by conducting linguistic research and summarizing findings.
Apply linguistic knowledge to evaluate model behavior, error patterns, and generalization issues.
Qualifications
Master's or PhD in Linguistics, Applied Linguistics, Computational Linguistics, or a related field.
Deep understanding of linguistic theory and language structure.
Experience with one or more of the following is a plus: computational linguistics, corpus analysis, language data annotation, LLM training.
Strong writing, editing, and communication skills.
If interested, kindly share your resume at ******