We’re looking for a Data Engineer with hands-on Databricks experience who enjoys working directly with clients and shaping modern cloud-based data solutions.
Key Responsibilities
- Design, implement, and optimize data pipelines and architectures using Databricks and Azure technologies
- Collaborate directly with clients to understand business needs and translate them into technical solutions
- Work on data integration from multiple structured and semi-structured sources (Azure Data Lake, APIs, SQL systems, etc.)
- Ensure data quality, security, and governance best practices are applied throughout the data lifecycle
- Participate in architectural discussions and contribute to tool and process improvements
- Support knowledge sharing and contribute to team growth
Requirements
- At least 4–5 years of experience in data engineering or related roles
- Hands-on experience with Databricks, Apache Spark, and Delta Lake
- Solid SQL and Python skills
- Experience in Azure data services (Data Lake, Data Factory, Synapse, etc.)
- Strong analytical mindset and structured problem-solving skills
- Comfortable working with clients directly, including technical discussions and requirement gathering
- Good communication skills in English and Hungarian
Nice to Have
- Experience with Microsoft Fabric or similar unified data platforms
- Familiarity with CI/CD, Azure DevOps, or GitHub Actions
- Knowledge of medallion architectures or Data Lakehouse concepts
- Exposure to data migration or modernization projects
- Relevant certifications (e.g. Databricks, Microsoft Azure)
What We Offer
- Opportunity to work on cutting-edge cloud data solutions
- Hybrid setup: weekly one office day (Tuesday) + flexible remote work
- Client-facing, impactful role within a growing team
- Learning opportunities: certifications, training, and conferences