Python Scala PySpark TSQL NoSQL Cloud Computing Hadoop AWS Kafka Spark Big Data

About this job

Job type: Full-time
Experience level: Mid-Level
Industry: Information Technology
Company size: 1k-5k people
Company type: Private



Technologies

Python, Scala, PySpark



Job description

We are looking for a person with a strong engineering background and solid business intelligence skills to join our Global Markets Data Team. Our team is cross-functional and works side by side with stakeholders to capture, protect and analyse reliable data that improves Adevinta's products.

You will work in close partnership with our Insights teams, providing high-level reporting and recommendations to help them make data-driven decisions, and you will also support all of our international Insights teams. You will make an analytical impact across Adevinta's marketplaces worldwide.

Responsibilities

  • Develop or modify datasets, or specify the additions and modifications needed to meet reporting requests from Data Analysts and Data Scientists.
  • Ensure the traceability of data from its origins through to the decisional and analytical data structures.
  • Collaborate with our Data Engineers to build highly scalable data integration (ETL) solutions on world-class cloud computing platforms, combining a large variety of data sources.
  • Develop comprehensive data capture, transformation and visualization solutions.
  • Collaborate with the Data Platform team to ensure correct use of the available data infrastructure, datasets and tooling.
  • Set and coordinate data-quality standards that cover stakeholder needs.

Qualifications

  • Substantial experience with large Data Warehouse environments and ETL processes/programming.
  • Experience translating business needs into technical requirements from a tracking, reporting and analysis perspective.
  • Strong analytical and problem-solving skills.
  • Strong SQL and analytical skills for working with both structured and unstructured data.
  • Successful design, implementation, and deployment of multi-dimensional data models for reporting and analytics.
  • Experience with batch and streaming data processing tools (Python, Scala, Spark, Kafka, Kinesis, etc.).
  • Working experience with object-oriented design and programming (functional programming is a plus) and development tools such as Git.
  • Experience with BI/Analytics on NoSQL or big data systems (Hadoop, S3, Redshift, etc.).
  • Experience with AWS or any other cloud platform.
  • Understanding of software development best practices.

Additional Information

Adevinta is an equal opportunity employer and values diversity in our company. We do not discriminate on the basis of race, religion, colour, national origin, gender, sexual orientation, age, marital status or disability status. If any of the above ticks your boxes, why not Apply Now to find out more?


