Tech Stack / Keywords
Databricks, PySpark, Python, Spark, REST, Azure, Data Lake, Cloud
Requirements
- 5+ years of experience as a Data Engineer, including at least two Databricks projects
- Strong hands-on experience with PySpark, Python, and Spark SQL
- Experience with Databricks ecosystem, including Delta Tables, Structured Streaming, Auto Loader, Databricks Jobs (with tasks), Notebooks, REST APIs, and Databricks Asset Bundles
- Experience with Azure Data Lake Storage (Blob Storage) and cloud-based data processing
- Experience with knowledge sharing, documentation, and code reviews
- Experience in testing and ensuring code quality
- Strong communication skills and experience working with client stakeholders
Responsibilities
- Design, develop, and maintain data engineering solutions using Databricks, PySpark, and Azure services
- Build and optimize batch and streaming data pipelines
- Collaborate with business stakeholders to gather requirements and translate them into technical solutions
- Participate in consulting and advisory activities related to data engineering and platform capabilities
- Contribute to code reviews, testing, technical documentation, and knowledge-sharing within the team
- Support and mentor other developers, ensuring development best practices and high-quality delivery
- Work closely with client stakeholders and project teams in an international environment
linkgroup