Job Description
Day‑to‑Day / Responsibilities
A client in the insurance/annuities space is looking for a highly skilled Data Engineer to join their enterprise data engineering and delivery team. This engineer will contribute to modernizing the data ecosystem and enabling high-quality, business-ready data across the organization. The role focuses on building scalable data pipelines, transforming raw data into usable assets, and ensuring data reliability for the client's insurance and annuity operations. Success in this role requires strong Python and SQL skills, deep curiosity about the "why" behind the data, and the ability to translate engineering work into meaningful business impact. You will work self-directed on complex problems, contribute to the migration from legacy systems to a modern tech stack, and help the business make better decisions through trustworthy, well-engineered data. This is an opportunity to influence how data is delivered, understood, and used to drive behavior and outcomes across the organization. The target pay rate for this role is $65–75/hr.
Core responsibilities include:
Building and maintaining scalable data pipelines using Python, SQL, and Git
Developing high‑quality transformations using Coalesce or dbt
Supporting the migration from legacy systems (SSIS, stored procedures, Azure DevOps) into the new Snowflake‑based stack
Implementing orchestration workflows using Dagster
Ensuring data quality using Datafold
Troubleshooting data issues by digging into root causes and understanding business impact
Collaborating with analysts and business teams to understand what the data means and how it drives decisions
Documenting data logic, lineage, and constraints
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to HR@insightglobal.com. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.
Required Skills & Experience
- 5+ years of experience as a Data Engineer building and supporting production data pipelines
- Strong proficiency in Python, SQL, and Git within modern data environments
- Experience with cloud data platforms and processing frameworks (Snowflake, Databricks, Spark)
- Hands-on experience with analytics engineering tools such as dbt or Coalesce
- Experience using DAG-based orchestration tools (Airflow, Dagster) and data quality frameworks
- Strong problem-solving skills, communication skills, and curiosity about how data impacts business outcomes, with exposure to finance/insurance/annuity data
Nice to Have Skills & Experience
- Understanding of infrastructure and cloud networking concepts (e.g., AWS private endpoints)
- Experience migrating from legacy systems (SSIS, stored procedures, Azure DevOps)
- Ability to influence technical decisions or improve engineering practices
- Prior mentorship experience with junior engineers
Benefit packages for this role will start on the 1st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.