Position title
Senior Data Engineer
Description

Empowering our Clients with People-Driven Digital Innovation Across Europe
We are a group providing digital IT services and solutions, driven by people, innovation, agility, and deep industry insight. We work with the largest private and public institutions to deliver IT services and solutions.
As an entrepreneurial digital services group with a human-sized tech company culture, we are built by passionate experts and led by seasoned leaders in IT and digital transformation.

Context

We are looking for a skilled and pragmatic Data Engineer to join our team. You will design and maintain end-to-end data processing pipelines in both cloud (GCP) and on-premise environments, within a DevOps culture. Working closely with cross-functional teams, you'll ensure data quality, optimize pipeline performance, and manage resource usage efficiently. Solid experience with Big Data technologies in streaming mode (Beam/Dataflow, Spark, Java, Scala) and Agile delivery across domains like IoT, finance, manufacturing, and commerce is essential.

Responsibilities

o Act as a programming expert, with knowledge of communication protocols, algorithms, and big data processing

o Implement microservices that handle ETL needs and event-driven solutions, covering both streaming and batch processing as well as producing and processing large files

o Load-test your solutions and define KPIs

o Consume APIs, ensure OAuth2 compliance, and deal with performance issues

o Write technical specifications and API specifications (AsyncAPI)

o Bring a product vision, working closely with the Tech Lead during technical grooming to define the details of user stories (US) and identify limitations

Qualifications

o Mastery of Java (Functional, Lambda, Stream API)

o Expertise in the implementation of end-to-end data processing chains

o Mastery of distributed development

o Basic knowledge of, and interest in, the development of ML algorithms

o Knowledge of ingestion frameworks

o Knowledge of Beam and its different execution modes on Dataflow

o Knowledge of Spark and its different modules

o Knowledge of the GCP ecosystem: Dataproc, Dataflow, BigQuery, Pub/Sub, PostgreSQL/Composer, Cloud Functions, Stackdriver, Apache Beam

o Knowledge of the use of Solace

o Knowledge of Scala and Python

o Experience using generative AI tools (GitHub Copilot, GitLab Duo, etc.)

o Knowledge of Spotfire & Dynatrace

o Knowledge of the ecosystem of NoSQL databases

o Knowledge in building data product APIs

o Knowledge of Dataviz tools and libraries

o Comfort debugging Beam (and Spark) pipelines and distributed systems

o Ability to explain complex systems in accessible terms

o Proficiency with data notebooks

o Expertise in data testing strategies

o Strong problem-solving skills, initiative, and the ability to work well under pressure

o Excellent interpersonal and communication skills (including the ability to go into detail)
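As a flavor of the functional Java style listed above (lambdas, method references, the Stream API), here is a minimal illustrative sketch; the class and method names are hypothetical and not part of the role description:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class StreamDemo {
    // Group words by length using a lambda filter and a groupingBy collector.
    static Map<Integer, List<String>> byLength(List<String> words) {
        return words.stream()
                .filter(w -> !w.isEmpty())          // lambda predicate
                .collect(Collectors.groupingBy(String::length)); // method reference
    }

    public static void main(String[] args) {
        Map<Integer, List<String>> grouped =
                byLength(List.of("spark", "beam", "gcp", "java"));
        System.out.println(grouped);
    }
}
```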

Job Benefits

o Competitive salary and the opportunity to have a meaningful job where you can make a difference

o The chance to continuously evolve as a professional

o Medical insurance and meal tickets

o A variety of training opportunities

Contacts

Join Us at EASYDO
With a team of 250 dedicated professionals, we combine technological excellence with a people-first culture. We believe in empowering talent, nurturing careers, and building long-term trust with our clients and our teams.
📩 Contact our Talent Team by email to [email protected]
Or visit our careers portal: https://easydo.co/career/

Employment Type
Full-time
Job Location
Date posted
August 7, 2025