Insight Global is hiring a Sr. Data Engineer in Calgary, AB. This role is hybrid: in-office with 20% flexibility to work from home.
This resource will play a key role in architecting and optimizing a modern, scalable, and productized data ecosystem that enables internal teams and external partners to derive maximum value from our data assets. They will be responsible for designing high-performance data pipelines, storage solutions, and API-driven data access mechanisms that support real-time decision-making and innovation.
The ideal candidate has 5+ years of experience in data engineering, including modern data storage and processing, and is proficient in ETL frameworks and event-driven architectures. They will also have strong API development skills and proficiency in Go, Java, and/or Python.
Duties & Responsibilities:
- Design and build a scalable, cloud-native data platform aligned with microservices.
- Develop real-time and batch data pipelines to power data-driven products.
- Implement SQL, NoSQL, and hybrid storage strategies.
Data as a Product
- Enable self-serve data access with secure, well-documented data APIs.
- Collaborate with Product & Business teams to define and optimize data products.
- Ensure data quality, lineage, and governance in all data pipelines and products.
Microservices Integration & Performance
- Build event-driven architectures using Kafka, Azure Event Hubs, or Service Bus.
- Develop scalable ETL/ELT processes for ingestion, transformation, and distribution.
- Optimize query performance, indexing, and caching for data-intensive apps.
Data Governance, Security, and Compliance
- Enforce data privacy, security, and access controls aligned with compliance standards.
- Implement observability and monitoring for data infrastructure and pipelines.
- Work with DevSecOps teams to integrate security into CI/CD workflows.
We are a company committed to creating inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity employer that believes everyone matters. Qualified candidates will receive consideration for employment opportunities without regard to race, religion, sex, age, marital status, national origin, sexual orientation, citizenship status, disability, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request via the Human Resources Request Form. The EEOC "Know Your Rights" poster is available on the EEOC website.
To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.
Required Skills & Experience:
- 5+ years of experience in data engineering, with exposure to Data Mesh and Data as a Product preferred.
- Expertise in modern data storage and processing, including SQL (e.g. PostgreSQL) and NoSQL (e.g. Cosmos DB) databases and data lakes (Azure Data Lake, Delta Lake, Apache Iceberg).
- Proficiency in ETL and data processing frameworks (e.g. Apache Kafka, Airflow, Flink, Spark, Azure Data Factory, Databricks).
- Experience with event-driven architectures using queues and pub/sub services (e.g. Azure Service Bus, Azure Event Grid, Amazon EventBridge) and containerized environments (Azure Container Apps, AWS ECS).
- Experience with modern data platforms such as Microsoft Fabric, Databricks, Snowflake, or Apache Hudi.
- Strong API development skills using GraphQL, REST, and/or gRPC for enabling data as a product.
- Proficiency in Go, Java, and/or Python.
- Deep understanding of data governance, security, lineage, and compliance using Microsoft Purview, OpenLineage, Apache Ranger, or Azure Key Vault.
- Experience with Infrastructure as Code (IaC) using Bicep, Terraform, or CloudFormation for managing cloud-based data solutions.
- Strong problem-solving and collaboration skills, working across data, engineering, and business teams.
Nice to Have Skills & Experience:
- Knowledge of ML, AI, and LLMs, including data engineering for model training and inference with Azure Machine Learning, Databricks ML, and MLflow.
- Hands-on experience with notebooks (e.g. Jupyter, Databricks, Azure Synapse) for data workflows.
- Experience in real-time data streaming architectures.
- Exposure to data monetization strategies and analytics frameworks.
- Familiarity with federated data governance and self-serve data platforms.
Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.