Empowering our Clients with People-Driven Digital Innovation Across Europe
We are a group delivering digital IT services and solutions, driven by people, innovation, agility, and deep industry insight. We work with some of the largest private and public institutions in Europe.
An entrepreneurial digital services group with the spirit of a human-sized tech company, we are built by passionate experts and led by seasoned leaders in IT and digital transformation.
About the Program
We’re looking for interns who are passionate about Machine Learning and engineering and want to learn how to take models to production on Google Cloud. You'll work on real use cases across various industries, contributing to reusable internal products that directly impact cost, SLAs, and reliability.
What You’ll Do (with a dedicated mentor)
- Contribute to end-to-end ML pipelines on Vertex AI (training, registry, deployment).
- Help expose models via Cloud Run or Vertex Endpoints.
- Work in BigQuery for feature engineering, batch scoring, and analytics.
- Automate workflows using Cloud Composer (Airflow), Dataflow (Beam), and Pub/Sub.
- Add observability: Cloud Logging/Monitoring, alerts, drift & cost tracking.
- Follow security best practices (IAM, Secret Manager) and CI/CD (Cloud Build, GitLab/GitHub).
What You’ll Learn (indicative curriculum)
- GCP MLOps lifecycle: Vertex AI (Experiments, Registry, Pipelines, Endpoints), GCS.
- Data fundamentals: BigQuery (partitioning, clustering), feature store design.
- Serving & latency: autoscaling, canary/blue-green deployments, caching, precomputation to meet SLAs.
- Infra & CI/CD: Docker, Artifact Registry, Cloud Build, quality gates.
- Observability & cost: logs, metrics, traces, SLOs, budgets & alerts.
- ML for supply chain: ETA prediction, time-dependent routing, demand forecasting, anomaly detection.
Your Profile
- Student or recent graduate (0–1 year experience or career switcher).
- Comfortable with Python, Git, Linux/CLI; basic knowledge of Docker and SQL.
- A personal or academic project in ML/DS/DevOps (public repo; even a simple one is fine).
- Curious, detail-oriented, clear communicator, focused on impact.
Nice to Have
- Any of the following: FastAPI, Airflow/Prefect, Beam, MLflow, Kubernetes, Vertex AI.
- Interest in ETA, forecasting, routing, or network optimization.
What We Offer
- Paid internship + benefits.
- 1:1 mentorship, code reviews, and pair programming.
- Access to real data & projects.
- Safe-to-learn culture: feedback, documentation, and clear standards.
- Potential to join us as a Junior MLOps Engineer after the internship.
Application Process
- Apply: Send your CV + repo link + 3–5 lines on your motivation.
- Technical interview (30–45 min): Python, Docker, ML basics, GCP, CI/CD.
- Offer & onboarding.
Apply Now
Send your application to [email protected] with the subject: “Internship MLOps GCP – [Your Name]”
Join Us at EASYDO
With a team of 250 dedicated professionals, we combine technological excellence with a people-first culture. We believe in empowering talent, nurturing careers, and building long-term trust with our clients and our teams.