Senior Software Data Platform Engineer
Company and role
Our Data Platform powers the Location Engine and predictive analytics pipelines — processing high-volume real-time IoT telemetry and transforming it into production-grade location intelligence for hospitals, clinics, and healthcare networks worldwide.
Requirements
- 6+ years of software engineering experience with distributed systems at scale.
- Proven experience leading architecture proposals and driving them to production.
- Deep Kafka expertise — Kafka Streams, event-driven architecture design, high-throughput pipeline operation.
- Strong Kubernetes skills — deploying, operating, and optimizing containerized workloads in production.
- Experience with Spring Boot or a comparable JVM microservices framework.
- Hands-on AWS experience including infrastructure as code (Terraform, CDK, or equivalent).
- Solid CI/CD, test automation, and version control best practices.
- Demonstrated use of AI coding tools in daily workflows (Claude, GitHub Copilot, or equivalent).
- Strong written and verbal communication in English.
Nice to have:
- Working knowledge of HIPAA compliance requirements in software systems.
- AWS EMR experience (Spark, Flink, or large-scale batch compute).
- Experience integrating with EHR systems (Epic, Cerner, HL7/FHIR).
- Background in healthcare IoT or real-time location systems.
Responsibilities
Platform architecture:
- Lead proposals and hands-on implementation of scalable, AI-augmented data platform architecture — from streaming pipelines to predictive analytics at scale.
Privacy & compliance:
- Design and maintain secure systems with encryption at rest and in transit, audit logging, access controls, and proactive observability — built for regulated healthcare environments.
Performance engineering:
- Identify and eliminate bottlenecks across streaming pipelines, microservices, and data stores.
- Own performance and load testing for compute-intensive paths.
Infrastructure as code:
- Build and maintain cloud-native AWS infrastructure using IaC tooling.
- Design containerized, Kubernetes-native workloads with a focus on cost, reliability, and zero-downtime deployments.
- Architect and implement streaming & batch data pipelines handling high-volume real-time location telemetry and predictive analytics workloads.
- Apply and champion AI-first engineering practices — from Claude- and Copilot-assisted development and code review to LLM-augmented observability and automated testing workflows.
- Define and evolve data models across PostgreSQL and other relational or columnar stores.
- Build and maintain event-driven systems using Kafka and Kafka Streams.
- Keep workloads running in Kubernetes low-latency and highly available.
- Integrate with third-party and custom in-house hospital systems — including proprietary location prediction engines, patient flow platforms, and clinical data feeds — via REST APIs, webhooks, and event-based architectures.
- Contribute to architectural decisions impacting scalability, resilience, and platform evolution.
- Write performance and load tests for compute-heavy services.
- Maintain and improve CI/CD workflows for reliable production deployments.
- Participate in a 24/7 on-call rotation for platform reliability.
What we're looking for:
- Can independently design and implement scalable data processing systems.
- Understands problems from a product and system architecture perspective.
- Proposes pragmatic, high-impact solutions and executes them end-to-end.
- Thinks in terms of performance, reliability, observability, and cost-efficiency.
Benefits
- Sport subscription
- Private healthcare
- Flat structure
- Small teams
- International projects
- Free coffee
- Free parking
- Bike parking
- Shower
- Free snacks
- In-house hack days
- No dress code
Kontakt.io