Spark Engineer

Post Date

Apr 23, 2026

Location

Toronto, Ontario

ZIP/Postal Code

M5J 0
Canada
Jul 01, 2026
Insight Global

Job Type

Contract

Category

Computer Engineering

Req #

TOR-2271d251-cc95-4857-9dbd-6c34e8f715c1

Pay Rate

$54 - $68 (hourly estimate)

Who Can Apply

  • Candidates must be legally authorized to work in Canada

Job Description

Insight Global is seeking a senior engineer with deep Spark internals knowledge who can harden and scale an existing framework so that teams build “in the library” rather than converting code manually. Candidates must be comfortable working on site 5 days per month.

This role is focused on framework and platform engineering, not one‑off Databricks pipelines or notebook‑only development.

We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances.

If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to HR@insightglobal.com.

To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.

Required Skills & Experience

  • Strong PySpark internals experience (not just Databricks usage), including the execution model, memory management, shuffles, partitioning, caching, and performance tradeoffs
  • Hands-on Databricks development experience
  • Experience building frameworks or libraries (reusable patterns, abstractions, standardization) rather than only pipelines
  • Strong Spark SQL experience (execution plans, optimization, transformation costs)
  • Solid data engineering fundamentals: transformations, schema management, data quality patterns, metadata/lineage concepts
  • Experience hardening work already in production under ambiguous requirements
  • Strong communication skills and the ability to explain design and performance tradeoffs to platform stakeholders

Nice to Have Skills & Experience

  • Legacy ETL background such as Informatica, SSIS, SSAS, Pentaho, or Talend
  • Delta Lake and Lakehouse architecture experience
  • Orchestration and scheduling tools such as AutoSys, Azure Data Factory, Oozie, or similar
  • DBT experience (especially Spark SQL–based transformations)
  • Snowflake or downstream analytics integration experience
  • Experience supporting shared data platforms used by multiple teams (enablement mindset)

Benefit packages for this role will start on the 1st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.