At LeverX, we have had the privilege of delivering over 1,500 projects for various clients. With 20+ years in the market, our team of 2,200+ is strong, reliable, and always evolving: learning, growing, and striving for excellence.
We are looking for a Senior Databricks Data Engineer to join us. Let’s see if we are a good fit for each other!
What we offer:
- Projects in different domains: healthcare, manufacturing, e-commerce, fintech, etc.
- Projects for every taste: startup products, enterprise solutions, research & development initiatives, and projects at the crossroads of SAP and the latest web technologies.
- Global clients based in Europe and the US, including Fortune 500 companies.
- Employment security: We hire for our team, not just a specific project. If your project ends, we will find you a new one.
- Healthy work atmosphere: On average, our employees stay with the company for 4+ years.
- Market-based compensation and regular performance reviews.
- Internal expert communities and courses.
- Perks to support your growth and well-being.
Required skills:
- 5+ years in Data Engineering.
- Strong experience with Databricks: Spark (PySpark + SQL), Delta Lake internals, Jobs, Workflows, SQL Warehouses, Unity Catalog.
- Excellent SQL skills (analytical queries, performance tuning).
- Strong Python for data engineering.
- Experience with dbt (models, tests, exposures, CI/CD).
- Experience with orchestration tools (Airflow, Dagster).
- Experience designing scalable lakehouse architectures.
- Understanding of cost optimization in Databricks (clusters, warehouses, storage layout).
- English B2+.
Nice-to-have skills:
- Databricks certifications
- Experience building production-grade dashboards in BI tools (Tableau, Looker, Superset)
Responsibilities:
- Design and build scalable data pipelines on Databricks using PySpark, Databricks SQL, and Delta Lake.
- Develop, optimize, and troubleshoot Spark workloads for performance and reliability.
- Manage Delta Lake tables, including Z-Ordering, compaction, schema evolution, retention, and file layout optimization.
- Configure and administer Databricks Jobs, clusters, SQL Warehouses, and workflows.
- Own data governance using Unity Catalog, including permissions, lineage, and catalog/schema management.
- Implement data transformations and medallion architecture (bronze–silver–gold) using dbt.
- Build CI/CD pipelines and ensure data quality, observability, SLAs, auditing, and monitoring.
- Collaborate with and mentor engineers, partner with analytics and ML teams, and drive continuous platform improvements.