We are looking for a Data Engineer who is passionate about Python programming, Big Data, and cloud-based solutions. Join our Big Data Team and help us build a modern, highly scalable Data Platform for the entire enterprise.
What are your duties?
- contribute to Data Lake architecture and implementation,
- design optimal ETL infrastructure from a wide variety of data sources used for reporting, real-time analytics, and machine learning,
- build data pipelines,
- ensure data consistency and promote processes that keep data unique and consistent across our data, third-party platforms, and other databases,
- work with game teams to use Google Cloud Platform efficiently to analyze data and build data models.
What should you offer?
- 2+ years of experience working as a Data Engineer or in a similar role,
- experience in writing ETL/ELT processes,
- experience in designing, building, optimizing, and maintaining big data pipelines in the cloud,
- cloud computing experience (e.g. GCP, AWS, Azure),
- experience with relational and non-relational database technologies,
- experience with the Python standard library (including the json module) and the following packages: pandas, NumPy, requests, pyarrow, SQLAlchemy, google-cloud-bigquery, google-cloud-storage,
- proficiency in standard and procedural SQL,
- good English (at least B2),
- nice to have: experience with Apache Beam, Spark (PySpark or other), and asynchronous REST requests.
What are we offering?
- Opportunity for professional growth in a team of experienced professionals,
- Full flexibility in working on-site (offices in Bydgoszcz and Warsaw) or remotely, as well as in planning your daily work schedule,
- Being part of a small, independent team with as little bureaucracy and as much transparency as possible,
- Competitive, stable salary adequate to your experience,
- Other cool stuff that supports physical, emotional, and intellectual well-being, such as a sports allowance, private medical care, drinks, fresh fruit, and a personal development budget at your disposal.