Staff Data Engineer
Karbon
About Karbon
Karbon is the global leader in practice management software for growth-minded accounting firms. We provide an award-winning, highly collaborative cloud platform that streamlines work and communication, enabling the average accounting firm using Karbon to save 18.5 hours per week, per employee.
We have customers in 34 countries and have grown into a globally distributed team, with our people based throughout the US, Australia, New Zealand, Canada, the United Kingdom, and the Philippines. We are well-funded, ranked #1 on G2, have a fantastic team culture built on our values, are growing rapidly, and making a global impact.
About this role and the work
Karbon is at the start of its Data & AI journey, meaning you will have the opportunity to revolutionize our data platform. This role supports both our AI team and our Insights team, both of which are critical to delivering features for the Karbon platform. You’ll assess our existing architectures and strategically identify opportunities to improve our data platform, centered on Databricks. The successful candidate will be a hands-on builder and a strategic thinker, capable of designing scalable, robust, and forward-looking data solutions.
We are seeking an experienced Staff Data Engineer who thrives in a fast-paced environment. You will have the unique opportunity to build the new unified data platform that powers our suite of AI tools and insight delivery.
Some of your main responsibilities will include:
- Architect a Unified Data Platform: Design, implement, and own our new unified data platform on Databricks. You will be instrumental in establishing the Medallion Architecture (Bronze, Silver, Gold layers) using dbt for data modeling and transformations.
- Develop Data Pipelines: Create and manage resilient data pipelines for both batch and real-time processing from various sources in our Azure data ecosystem. This includes building a "hot path" for streaming data and orchestrating complex dependencies using Databricks Workflows.
- Enable Data Integration and Access: Implement and manage data replication processes from Databricks to Snowflake. You will also be responsible for developing a low-latency query endpoint to serve our production Karbon application.
- Champion Data Quality and Governance: Establish best practices for data quality, integrity, and observability. You will build automated quality checks, tests, and monitoring for all data assets and pipelines to ensure trust in our data.
- Advance our MLOps Capabilities: Partner with the AI team to design and implement MLOps frameworks. This includes building pipelines for feature engineering, model training, evaluation, and deployment to accelerate our machine learning lifecycle.
- Implement Robust Security and Governance Practices: Design and enforce a comprehensive security model for the data platform. This includes managing PII and implementing a fine-grained Role-Based Access Control (RBAC) model through infrastructure as code (IaC).
- Collaborate and Mentor: Work within a cross-functional team of AI engineers, analysts, and developers to deliver impactful data products. As a senior member of the team, you will mentor junior engineers, lead by example, and help define our technical standards.
About you
If you’re the right person for this role, you have:
- 7+ years of relevant work experience as a data engineer, with a proven track record of building and scaling data platforms
- Extensive experience with Databricks
- Extensive experience architecting ETL and ELT patterns, with strong proficiency in dbt
- Experience scaling data pipelines in a multi-cloud environment
- Strong proficiency in Python
- Strong proficiency in SQL and a deep understanding of relational DBMS
- Experience with both batch and streaming data technologies
- DevOps experience, including CI/CD, and infrastructure-as-code (e.g., Terraform)
- Experience building and maintaining APIs or query endpoints for application data access
- Proven ability to lead technical projects and mentor other engineers
It would be advantageous if you have:
- Previous experience with Azure cloud services
- Practical MLOps experience, such as implementing solutions with MLflow, feature stores, and automated model deployment and evaluation pipelines.
Why work at Karbon?
- Gain global experience across the US, Australia, New Zealand, Canada, the United Kingdom, and the Philippines
- 4 weeks annual leave plus 5 extra "Karbon Days" off a year
- Flexible working environment
- Work with (and learn from) an experienced, high-performing team
- Be part of a fast-growing company that firmly believes in promoting high performers from within
- A collaborative, team-oriented culture that embraces diversity, invests in development, and provides consistent feedback
- Generous parental leave
Karbon embraces diversity and inclusion, aligning with our values as a business. Research has shown that women and underrepresented groups are less likely to apply to jobs unless they meet every single criterion. If you've made it this far in the job description but your past experience doesn't perfectly align, we encourage you to apply anyway. You could still be the right person for the role!
We recruit and reward people based on capability and performance. We don’t discriminate based on race, gender, sexual orientation, gender identity or expression, lifestyle, age, educational background, national origin, religion, physical or cognitive ability, or any other dimension of diversity.
Generally, if you are a good person, we want to talk to you. 😛
If there are any adjustments or accommodations that we can make to assist you during the recruitment process, and your journey at Karbon, contact us at people.support@karbonhq.com for a confidential discussion.
At this time, we request that agency referrals are not submitted for this position. We appreciate your understanding and encourage direct applications from interested candidates. Thank you!