Senior Big Data Engineer (Scala/Spark/AWS)

Up to 185 PLN/hour, B2B (net)
Senior · Full-time · B2B
#345074 · Added today
Source: emagine

Tech Stack / Keywords

Big Data · Scala · Spark · AWS · Python · DevOps · Continuous Delivery · Robot Framework

Company and Position

The role is within the Group Financial Crime Prevention unit of a banking group, focusing on implementing data requirements. The position is based in Poland with hybrid work in Warsaw, Łódź, Gdańsk, or Gdynia.


Requirements

  • 7+ years of experience in Spark & Scala.
  • 5+ years of development experience with Python.
  • 3+ years of experience with Robot Framework.
  • 3+ years of experience with CI/CD tools such as Jenkins.
  • Experience with distributed data processing engines like Spark.
  • Strong SQL skills and experience in creating data flows.
  • Expertise in Bitbucket and Git.
  • Knowledge of AWS architectural components (e.g., Lambda, Step Functions).
  • Unit testing experience (e.g., JUnit 5, Mockito).
  • Strong working experience with Linux and bash.

Nice to have:

  • Experience with containerization using Docker.
  • Knowledge of streaming and queue technologies (e.g., Kafka, IBM MQ).
  • Familiarity with DevOps practices.

Responsibilities

  • Build distributed and highly parallelized BigData processing pipelines to handle massive amounts of data in near real-time.
  • Leverage Spark to enrich and transform corporate data for advanced analytics.
  • Conduct requirement analysis and clarify ambiguous requirements.
  • Collaborate closely with DevOps, QA, and Product Management teams in a Continuous Delivery environment.
  • Implement test-driven development, including test case preparation, test data setup, and automation using Robot Framework and Python.
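The last responsibility pairs Robot Framework with Python for test automation. As a minimal sketch of what that looks like in practice: a Robot Framework keyword library is an ordinary Python module whose public functions Robot exposes as keywords (function names, file formats, and data shapes below are hypothetical illustrations, not part of the posting):

```python
# Hypothetical Robot Framework keyword library: Robot exposes each public
# function of an imported module as a keyword (underscores become spaces).
import json
import tempfile


def prepare_test_data(records):
    """Stage a list of dicts as a JSON-lines file and return its path.

    In a suite this could back a 'Prepare Test Data' keyword that sets up
    input for a data pipeline under test.
    """
    with tempfile.NamedTemporaryFile(
        "w", suffix=".jsonl", delete=False
    ) as fh:
        for rec in records:
            fh.write(json.dumps(rec) + "\n")
        return fh.name


def row_count_should_be(path, expected):
    """Assertion-style keyword: fail unless the file has `expected` rows."""
    with open(path) as fh:
        actual = sum(1 for _ in fh)
    if actual != int(expected):
        raise AssertionError(f"Expected {expected} rows, found {actual}")
```

In a `.robot` suite such a module would be loaded with `Library    my_keywords.py` and the functions called as `Prepare Test Data` and `Row Count Should Be`.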

Other Information

Hybrid work with one day per week in the office; in addition, on-site presence in Gdynia/Gdańsk for quarterly planning (2 days, once or twice per year) may be required.
