Devapo

MLOps Engineer

No salary information provided
Mid · Full-time · Employment contract · B2B
#338870 · Added today
Source: theprotocol.it

Tech Stack / Keywords

Python, Docker, Kubernetes, MLflow, Kubeflow, Airflow, AWS SageMaker, Azure ML, GCP Vertex AI, GitHub Actions, GitLab CI, Jenkins, Azure DevOps, Terraform, Pulumi, CloudFormation, Prometheus, Grafana, Datadog, Databricks, Azure AI Foundry, AWS Bedrock, Qdrant, Weaviate, Pinecone, pgvector

Company and position

We are looking for an MLOps Engineer who knows that a model is only as good as the pipeline behind it — someone who has actually kept ML systems running in production, not just deployed a tutorial to a notebook. You will work on international projects for clients in banking, insurance, and telco (US, Netherlands, UK), building the infrastructure that makes AI reliable at scale.


Requirements

  • Proven experience running ML/AI systems in production — you’ve dealt with model drift, pipeline failures, and scaling issues in real environments
  • Strong Python skills and hands-on experience with MLOps tooling: MLflow, Kubeflow, Airflow, or similar
  • Solid experience with containerization (Docker) and orchestration (Kubernetes) in production settings
  • Working knowledge of at least one major cloud platform (AWS SageMaker, Azure ML, or GCP Vertex AI) and its ML services
  • Experience with CI/CD tools (GitHub Actions, GitLab CI, Jenkins, or Azure DevOps) applied to ML workflows
  • Infrastructure as Code experience (Terraform, Pulumi, or CloudFormation)
  • Understanding of ML fundamentals — you don’t need to build models, but you need to understand what makes them break in production
  • Experience with monitoring and observability tools (Prometheus, Grafana, Datadog, or similar)
  • English B2+ — client-facing role, calls and written communication included
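The "model drift" expectation above can be made concrete with a small sketch: a Population Stability Index (PSI) check comparing live scores against a training reference. This is one common drift signal among many; the function name, bin count, and thresholds here are illustrative, not part of the role's actual stack.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    A widely used rule of thumb: PSI < 0.1 is stable, 0.1-0.25 is
    moderate shift, > 0.25 is significant drift worth alerting on.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        n = len(sample)
        # small floor avoids log(0) when a bin is empty
        return [max(c / n, 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [i / 100 for i in range(100)]        # reference distribution
live_scores = [0.5 + i / 200 for i in range(100)]   # shifted production scores
print(psi(train_scores, live_scores) > 0.25)        # → True (drift flagged)
```

In production this kind of check would typically run per feature and per model output on a schedule, with the PSI value exported to whatever monitoring backend is in use.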

Nice to have:

  • Experience with LLM serving infrastructure (vLLM, TGI, Triton Inference Server)
  • Databricks, Azure AI Foundry, or AWS Bedrock
  • GPU infrastructure management and cost optimization
  • Kafka or streaming pipelines for real-time inference
  • Experience with vector databases (Qdrant, Weaviate, Pinecone, pgvector) in production RAG setups
  • Familiarity with AI governance and regulatory context (EU AI Act, GDPR)
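For the vector-database item above, the core operation a RAG setup delegates to Qdrant, Weaviate, Pinecone, or pgvector is nearest-neighbor search over embeddings. A toy pure-Python version shows the shape of it (document ids and 3-d vectors are invented for the example; real embeddings come from a model and real stores use approximate indexes):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, docs, k=2):
    """Return the k document ids most similar to the query vector."""
    ranked = sorted(docs.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# toy 3-d embeddings standing in for real model outputs
docs = {
    "refund-policy": [0.9, 0.1, 0.0],
    "api-reference": [0.1, 0.9, 0.2],
    "onboarding":    [0.2, 0.2, 0.9],
}
print(top_k([1.0, 0.0, 0.1], docs, k=1))  # → ['refund-policy']
```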

Responsibilities

  • Designing, building, and maintaining CI/CD pipelines for ML model training, evaluation, and deployment
  • Managing model lifecycle end-to-end — from experiment tracking and versioning to production serving and monitoring
  • Setting up and maintaining infrastructure for ML workloads on cloud platforms (AWS, Azure, or GCP)
  • Implementing monitoring, alerting, and observability for deployed models — detecting drift, latency issues, and quality degradation
  • Building and managing feature stores, data pipelines, and ETL processes that feed ML models
  • Containerizing and orchestrating ML services using Docker and Kubernetes
  • Collaborating with data scientists and ML engineers to streamline the path from experimentation to production
  • Implementing Infrastructure as Code (Terraform, Pulumi, or CloudFormation) for reproducible ML environments
  • Defining and enforcing MLOps best practices, standards, and documentation across teams
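To make the monitoring-and-alerting responsibility concrete, here is a minimal sketch of a p95-latency SLO check of the sort that would back a Prometheus/Grafana alert rule. The SLO threshold and function names are illustrative assumptions, not details from the posting:

```python
import math

def p95(latencies_ms):
    """95th-percentile latency using the nearest-rank method."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]

def should_alert(latencies_ms, slo_ms=250):
    """Fire when p95 latency breaches the SLO threshold."""
    return p95(latencies_ms) > slo_ms

healthy = [50] * 95 + [300] * 5     # 5% slow requests, p95 still at 50 ms
degraded = [50] * 80 + [400] * 20   # 20% slow requests push p95 to 400 ms
print(should_alert(healthy), should_alert(degraded))  # → False True
```

In practice the percentile would be computed by the metrics backend over a sliding window; the point of the sketch is the shape of the rule, not the plumbing.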

Offer

  • Certifications and training funded
  • Private medical care (Medicover)
  • Multisport card
  • English language classes
  • Flexible working hours
  • Team meetups and integration events
  • Referral bonus