In A Nutshell
Work with project teams and clients to build data systems that drive social impact.
Responsibilities
- Working with clients to understand their current processes and pain points, and identifying which can be addressed by automating and streamlining data flows.
- Designing, building, and maintaining efficient, scalable data pipeline architecture (a toy sketch of this kind of flow follows this list).
- Designing, building, and maintaining data warehouses and data lakes.
- Synthesizing, visualizing, and communicating results: dashboards, plots, interactive visualizations, presentations, and reports that give decision-makers actionable insights.
- Making thoughtful decisions about application architecture, data flows, integrations, and user-facing behavior to support scalable, production-grade solutions.
- Working closely with clients, other engineers, product owners, and domain experts to review code, plan releases, and deliver features end-to-end.
- Writing blog posts or presenting on lessons learned.
- Supporting teammates through formal and informal coaching and collaboration, enabling continuous learning and improvement across the team.
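To give candidates a concrete flavour of the work, here is a minimal ELT-style sketch in Python (the team's primary language, per the Skillset below). It uses only the standard library; the donation records, table, and query are invented for illustration and do not reflect any real client project.

```python
# Toy end-to-end flow: extract raw records, load them into a
# warehouse-style table, then transform in SQL (the ELT pattern).
# All data and names here (donations, monthly totals) are illustrative.
import sqlite3

RAW_RECORDS = [  # stand-in for rows extracted from a client system
    ("2024-01-15", "food-bank", 120.00),
    ("2024-01-20", "shelter", 75.50),
    ("2024-02-03", "food-bank", 200.00),
]


def load(conn: sqlite3.Connection) -> None:
    """Load raw records as-is; cleaning and shaping happen later, in SQL."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS donations (day TEXT, program TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO donations VALUES (?, ?, ?)", RAW_RECORDS)


def transform(conn: sqlite3.Connection) -> list[tuple[str, float]]:
    """Transform inside the store: total donations per month."""
    return conn.execute(
        "SELECT substr(day, 1, 7) AS month, SUM(amount) "
        "FROM donations GROUP BY month ORDER BY month"
    ).fetchall()


if __name__ == "__main__":
    with sqlite3.connect(":memory:") as conn:
        load(conn)
        for month, total in transform(conn):
            # In a real project these aggregates would feed a dashboard or report.
            print(f"{month}: {total:.2f}")
```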
Skillset
- At least one year of experience in data engineering, with proficiency in Python and SQL for building production systems.
- Proficiency building ELT pipelines in a production setting.
- Proficiency with data modeling techniques.
- Proficiency working with OLTP and OLAP data stores.
- Proficiency with Flask, FastAPI, or similar backend frameworks (a minimal sketch follows this list).
- Experience working with a cloud hosting platform like AWS or GCP.
- Sound foundations in statistics and probability.
- Ability to independently scope, design, and deliver end-to-end features.
- Clear written and verbal communication skills for collaborating with technical and non-technical stakeholders.
- A willingness to pick up new skills and technologies (e.g., front-end) based on project and team needs.
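As a rough illustration of the backend framework point above, here is a minimal FastAPI sketch. The module name, endpoint path, and in-memory totals are hypothetical stand-ins for a real warehouse-backed query layer, not an actual service we run.

```python
# metrics_api.py (hypothetical): a tiny read-only API over pipeline output.
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Impact metrics API (illustrative)")

# Stand-in for aggregates a pipeline would maintain in a warehouse.
PROGRAM_TOTALS = {"food-bank": 320.00, "shelter": 75.50}


@app.get("/programs/{name}/total")
def program_total(name: str):
    """Return the running donation total for one program."""
    if name not in PROGRAM_TOTALS:
        raise HTTPException(status_code=404, detail="unknown program")
    return {"program": name, "total": PROGRAM_TOTALS[name]}
```

Run it locally with `uvicorn metrics_api:app --reload` and query `GET /programs/food-bank/total` to see the shape of response such a service would return.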