Middle Data Engineer | Work at LeverX

Written by Liza | Aug 19, 2025 11:36:35 AM

At LeverX, we have had the privilege of delivering over 950 projects. With 20+ years in the market, our team of 2,200 is strong, reliable, and always evolving: learning, growing, and striving for excellence.

We are looking for a Middle Data Engineer to join us. Let’s see if we are a good fit for each other!

What we offer:

  • Projects in different domains: Healthcare, manufacturing, e-commerce, fintech, etc.
  • Projects for every taste: Startup products, enterprise solutions, research & development projects, and projects at the crossroads of SAP and the latest web technologies.
  • Global clients based in Europe and the US, including Fortune 500 companies.
  • Employment security: We hire for our team, not just a specific project. If your project ends, we will find you a new one.
  • Healthy work atmosphere: On average, our employees stay in the company for 4+ years.
  • Market-based compensation and regular performance reviews.
  • Internal expert communities and courses.
  • Perks to support your growth and well-being.

Required skills:

  • 3–5 years of experience in data engineering.
  • Strong SQL and solid Python for data processing.
  • Hands-on experience with at least one cloud platform and a modern warehouse/lakehouse: Snowflake, Redshift, Databricks, or Apache Spark/Iceberg/Delta.
  • Experience delivering data warehouse or lakehouse projects: star/snowflake modeling, ELT/ETL concepts.
  • Familiarity with orchestration (Airflow, Prefect, or similar) and containerization fundamentals (Docker).
  • Understanding of data modeling, performance tuning, cost-aware architecture, and security/RBAC.
  • English at B1 level or higher.

Nice-to-have skills:

  • Vendor certifications: Snowflake, Databricks, or AWS.
  • BI exposure: Tableau, Metabase, Looker/Looker Studio, etc.
  • dbt (models, tests, macros, exposures) for ELT and documentation.
  • Git (branching strategies, PR reviews, basic CI) for collaborative delivery.
  • Experience with streaming (Kafka/Kinesis), data contracts/metric layers, and data observability tools.

Responsibilities:

  • Design, build, and maintain batch/streaming pipelines (ELT/ETL) from diverse sources into DWH/Lakehouse.
  • Model data for analytics (star/snowflake, slowly changing dimensions, semantic/metrics layers).
  • Write production-grade SQL and Python; optimize queries, file layouts, and partitioning.
  • Implement orchestration, monitoring, testing, and CI/CD for data workflows.
  • Ensure data quality (validation, reconciliation, observability) and document lineage.
  • Collaborate with BI/analytics teams to deliver trusted, performant datasets and dashboards.