Lead the design and implementation of scalable data platforms, setting technical direction for pipelines, APIs, and production workflows.
Enable and mentor teammates and Data Scientists, ensuring Databricks, Snowflake, and DBT environments are production-ready and cost-efficient.
Drive engineering excellence through clean code, CI/CD, observability, and architectural decision-making that supports Eneco’s long-term vision.
At Eneco, we’re working hard to achieve our mission: sustainable energy for everyone. Learn more about how we’re putting this into action in our One Planet Plan.
As a Full Stack Data Engineer, you will play a crucial role in our diverse team, solving real-world forecasting problems through cutting-edge ML models. Our product leverages a modern data stack end-to-end: from data ingestion into Snowflake, to transformations with DBT, to running forecasts in Databricks, and finally exposing results through Python APIs and aggregation services.
This product has high visibility and impact at Eneco, driving innovation in how we forecast, optimize, and deliver energy solutions to our consumers.
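As a rough illustration of the last step in this stack, here is a minimal sketch of a Python service exposing forecast results. The FastAPI framework, the endpoint path, and the in-memory data are illustrative assumptions only, not a description of Eneco's actual implementation.

```python
# Minimal sketch of a forecast-serving API, assuming FastAPI.
# All names (the /forecasts endpoint, the region keys, the data)
# are illustrative assumptions, not the real implementation.
from datetime import datetime, timedelta

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Forecast API (illustrative sketch)")

class ForecastPoint(BaseModel):
    timestamp: datetime
    load_mw: float  # hypothetical unit: forecast load in MW

# Stand-in for results produced upstream (e.g. a Databricks job
# writing to Snowflake); hard-coded here to keep the sketch runnable.
_FAKE_FORECASTS = {
    "region-1": [
        ForecastPoint(
            timestamp=datetime(2024, 1, 1) + timedelta(hours=h),
            load_mw=100.0 + h,
        )
        for h in range(24)
    ]
}

@app.get("/forecasts/{region}", response_model=list[ForecastPoint])
def get_forecast(region: str) -> list[ForecastPoint]:
    """Return the hourly forecast for one region, or 404 if unknown."""
    if region not in _FAKE_FORECASTS:
        raise HTTPException(status_code=404, detail="unknown region")
    return _FAKE_FORECASTS[region]
```

Such a service could be run locally with, for example, `uvicorn sketch:app`; in the real product the results would come from Snowflake tables or Databricks outputs rather than an in-memory dict.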
Must Have:
Strong proficiency in SQL (experience with DBT and/or Airflow is preferred).
Solid experience writing clean, maintainable code (preferably in Python).
Hands-on experience with Databricks, specifically deployment and production use.
Strong knowledge of CI/CD pipelines and observability practices.
Experience building and maintaining REST APIs.
Nice to Have:
Experience with high-volume time series data.
Familiarity or interest in MLOps and data science techniques.
Experience deploying applications on Kubernetes. Helm is a plus.
Experience with data ingestion workflows (e.g., Snowpipe, Kafka, or similar).
Familiarity with cloud platforms (e.g., AWS or Azure). Infrastructure as Code (e.g., Terraform) is a plus.
Your Responsibilities:
Designing and maintaining robust SQL-based data pipelines (leveraging DBT and/or Airflow) for both streaming and batch workloads (see the orchestration sketch after this list).
Building and maintaining clean, production-quality Python code, including APIs and aggregation services.
Supporting Data Scientists by ensuring their Databricks environments and workflows are production-ready and scalable.
Applying CI/CD pipelines and observability practices to guarantee reliable and maintainable deployments.
Contributing to application deployments and operations, ensuring solutions run smoothly in production.
Influencing architectural decisions and mentoring teammates to raise engineering standards across the team.
Collaborating with product managers, data scientists, and engineers to deliver high-impact forecasting products.
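To make the pipeline responsibility above concrete, here is a minimal sketch of how a daily DBT run might be orchestrated from Airflow. The DAG id, schedule, and project path are assumptions for illustration, not the team's actual setup.

```python
# Minimal sketch of an Airflow 2.x DAG triggering a daily DBT run.
# The DAG id, schedule, and project path are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_forecast_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ argument name
    catchup=False,
) as dag:
    # Run the DBT transformations; assumes dbt is installed on the
    # worker and the project lives at the (hypothetical) path below.
    run_dbt_models = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --project-dir /opt/dbt/forecasts",
    )

    # Basic data-quality gate: fail the DAG if the DBT tests fail.
    test_dbt_models = BashOperator(
        task_id="test_dbt_models",
        bash_command="dbt test --project-dir /opt/dbt/forecasts",
    )

    run_dbt_models >> test_dbt_models
```

Splitting `dbt run` and `dbt test` into separate tasks is one common pattern: a failed data-quality check halts the pipeline while making it obvious which stage broke.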
You will join a cross-functional team of Data Engineers, Machine Learning Engineers, Data Scientists, and Analysts, all working together to deliver forecasting solutions with real business impact. Collaboration and knowledge-sharing are at the core of how we work: we encourage experimentation, celebrate successes, and learn quickly from setbacks.
Our engineering culture values clean, maintainable code, automation, and end-to-end ownership. You’ll have the opportunity to shape data products from ingestion to deployment, contribute to technical decisions, and help ensure our solutions are reliable, scalable, and ready for production.
Together, we drive Eneco’s mission to innovate and accelerate the energy transition.
Do you have questions? Then please reach out to our Recruiter: [email protected]