New
Data Engineer
140-170 PLN / month, B2B (net)
Senior · Full-time · B2B
#336968 · Added today
Source: theprotocol.it
Tech Stack / Keywords
AWS · Azure · SQL · Power Query · Python · PostgreSQL · Airflow · CI/CD · ETL/ELT · BigQuery · Apache Spark · Kafka · Power BI
Company and position
Pretius is a Polish software company founded in 2006 in Warsaw, with 19 years of experience in developing dedicated software systems. The company works with market leaders in need of enterprise solutions and has over 180 specialists providing services from business analysis, design, and development through long-term maintenance. In 2016, Pretius founded a sister company, IN Team, specializing in body leasing, and in 2021 it launched the Pretius Low-Code brand.
Requirements
- 8+ years of experience in data engineering, analytics engineering, or similar data-focused roles
- Expert-level proficiency in Python for data processing, pipeline development, and automation
- Advanced SQL skills, including query optimization and complex analytical transformations
- Strong experience with relational and analytical databases (e.g., PostgreSQL, Snowflake, BigQuery, Redshift, Synapse)
- Hands-on experience designing and implementing data warehouse architectures (ETL/ELT, batch, near-real-time)
- Proven experience with big data processing frameworks such as Apache Spark (PySpark, Spark SQL)
- Strong cloud experience across AWS, Azure, and/or GCP, including core data services
- Experience building and operating scalable data pipelines using orchestration tools (Airflow, ADF, Prefect, Dagster)
- Understanding of distributed systems principles and large-scale data processing challenges
- Strong knowledge of data quality, governance, security, and compliance best practices
- Experience with DevOps practices, including CI/CD, Git, and Infrastructure as Code (Terraform or equivalent)
- Ability to design scalable, production-grade data solutions in complex enterprise environments
Nice to have:
- Familiarity with streaming technologies (Kafka, Kinesis, Pub/Sub)
- Experience with dbt and BI tools (Power BI, Tableau, Looker)
Responsibilities
- Design, build, and maintain scalable, production-grade data pipelines using Python (ETL/ELT) and orchestration tools
- Write and optimize advanced SQL queries for efficient data extraction, transformation, and performance tuning
- Design and implement scalable data models (star/snowflake schema) for analytics and reporting
- Build and maintain end-to-end data warehouse solutions, including batch and near-real-time ingestion, data marts, and semantic layers
- Work with Apache Spark (PySpark, Spark SQL) for large-scale data processing and analytics
- Develop and operate cloud data solutions across AWS, Azure, and/or GCP (e.g., S3, Glue, EMR, Redshift, ADLS, Data Factory, Synapse, BigQuery)
- Design scalable, secure, and cost-efficient data architectures with FinOps awareness
- Build and maintain reliable data pipelines using orchestration tools (Airflow, ADF, Prefect, Dagster) with proper scheduling, retries, and monitoring
- Ensure data reliability through validation, monitoring, idempotent design, and failure recovery mechanisms
- Develop streaming and real-time data pipelines using Kafka, Kinesis, Pub/Sub, or Event Hubs where required
- Implement data quality, governance, and security standards (PII protection, encryption, RBAC, data lineage)
- Apply DevOps practices including Git, CI/CD, Infrastructure as Code, and production monitoring
- Integrate external APIs and SaaS data sources into data platforms
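The reliability ideas in the list above (retries, idempotent design, safe replay after failure) can be sketched in plain Python. This is only an illustrative toy, not Pretius's actual stack: `with_retries`, `load_batch`, and the in-memory `warehouse` dict are invented names standing in for an orchestrator's retry policy and a keyed upsert into a real warehouse table.

```python
import time

def with_retries(fn, attempts=3, backoff=0.01):
    """Retry a flaky pipeline step, mimicking an orchestrator's retry policy."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise  # all retries exhausted: surface the failure
            time.sleep(backoff * attempt)  # simple linear backoff

# Toy target table keyed by primary key (stands in for a warehouse table).
warehouse = {}

def load_batch(rows):
    """Idempotent load: upsert by key, so replaying a batch never duplicates rows."""
    for row in rows:
        warehouse[row["id"]] = row

batch = [{"id": 1, "value": "a"}, {"id": 2, "value": "b"}]
with_retries(lambda: load_batch(batch))
with_retries(lambda: load_batch(batch))  # replay after a simulated failure: same state
```

Because the load is an upsert keyed on `id`, a retried or replayed batch leaves the target in the same state, which is the property that makes automatic retries safe.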
Offer
- Co-financing of the Multisport card and Medicover private healthcare
- Modern office available
- Team bonding activities, internal courses, conferences, certifications
- Sharing the costs of professional training & courses
- Flexible working time
- Integration events
- Video games at work
- Parking space for employees
- Leisure zone
Flexible hours
Sports card
Healthcare
Training co-financing
Team events
Car parking
Leisure package
Pretius Software
41 active offers