Data Engineer
About The Position
About us
We are Tailor Brands. We built a platform that builds businesses. From launching to managing and growing a business, Tailor Brands is the all-in-one solution that empowers any business owner. Our platform serves over 40 million small businesses.
Our “Guidance Engine” assesses each new business introduced to our platform and devises a tailored plan that allows you to manage all your business needs from a single dashboard. Through our business-building platform, we’re turning the process of starting, managing, and growing a business into a better experience; we’re simplifying the business journey. At Tailor Brands, we believe in more than just handing you another tool; we are dedicated to teaching you the art of building a business.
About the Role:
As a Data Engineer at Tailor Brands, you’ll design and maintain the data pipelines that power decision-making across product, marketing, and finance. Working with Apache Airflow, Python, and SQL, you’ll build trusted, well-documented datasets that enable self-service analytics for the entire organization.
You’ll champion data quality through validation and monitoring, optimize infrastructure for scale, and be part of a forward-thinking team that actively leverages AI tools like Claude to push the boundaries of modern data engineering. If you thrive on turning messy data into reliable, actionable foundations — and want to do it for a platform serving over 40 million small businesses — this role is for you.
What you’ll do
- Design, implement, and maintain pipelines that deliver data with measurable quality and SLAs for our product, marketing, finance, and other departments
- Partner with business domain experts, data analysts, and engineering teams to build foundational datasets that are trusted, well understood, and aligned with business strategy, and that enable self-service analysis
- Champion data quality: detect data and analytics quality issues, implement bug fixes, and add data validations to prevent regressions
- Identify, design, and implement internal process improvements, such as automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability
- Be part of a team that actively integrates AI tools like Claude into day-to-day workflows, pushing the boundaries of how data engineering gets done
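To give a flavor of the data-validation work described above, here is a minimal, illustrative sketch in Python. The dataset shape, field names, and checks are hypothetical examples, not Tailor Brands internals:

```python
# Lightweight validation step of the kind a pipeline might run before
# publishing a dataset. Field names and rules here are illustrative only.

def validate_orders(rows, required_fields=("order_id", "amount")):
    """Return a list of human-readable issues found in `rows`."""
    issues = []
    if not rows:
        issues.append("dataset is empty")
    for i, row in enumerate(rows):
        # Flag missing required fields
        for field in required_fields:
            if row.get(field) is None:
                issues.append(f"row {i}: missing {field}")
        # Flag values that violate a simple business rule
        amount = row.get("amount")
        if isinstance(amount, (int, float)) and amount < 0:
            issues.append(f"row {i}: negative amount {amount}")
    return issues

# Example: one clean row, one broken row
sample = [
    {"order_id": 1, "amount": 9.99},
    {"order_id": None, "amount": -3},
]
print(validate_orders(sample))
```

In practice, checks like this would run as a task in an Airflow DAG, failing the run (or alerting) before bad data reaches downstream dashboards.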
Requirements
- Proven experience in creating and optimizing big data processes, pipelines, and architectures
- 2+ years of experience with ETL design, implementation, and maintenance
- 2+ years of experience with Python programming
- Strong knowledge of SQL and data modeling, preferably with PostgreSQL
- Working experience with Apache Airflow
- Working experience with Git
- Working experience with cloud environments
- Team player with a can-do attitude and creativity