Job Description
This specialty insurance company is looking for a Senior Data Engineer to join its Enterprise Data Hub (EDH) team. The role is highly hands-on, focusing on coding in PySpark, building and optimizing Databricks-based frameworks, and developing scalable data pipelines. You will work directly with the VP of Architecture & Engineering, participate in daily status calls, and collaborate with cross-functional teams to ingest, transform, and expose data across all systems within the organization.
Key responsibilities include optimizing Databricks workloads, implementing job orchestration and automation using Azure tools, and building reusable frameworks for ingestion and transformation. You will also create notebooks, contribute to governance models, and ensure best practices in coding and deployment.
An excellent candidate will have deep technical expertise in Databricks and Azure, thrive in a hands-on coding environment, and bring a strong engineering mindset with leadership capabilities. Experience with Unity Catalog, Delta Lake, and MLflow will set you apart.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to HR@insightglobal.com. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.
Required Skills & Experience
• 5-7+ years of experience designing & developing data pipelines with Databricks & Apache Spark
○ Hands-on coding with PySpark/Spark SQL
• Job orchestration & automation using Azure Data Factory, Azure Functions, and Azure DevOps
○ End-to-end workflow: scheduling, triggers, monitoring, error handling, CI/CD deployment
• Experience with Unity Catalog
• Technical leadership:
○ Code reviews, architecture guidance, mentoring junior engineers
• Strong Azure ecosystem experience: Data Lake, Data Factory, Synapse, Functions
• Excellent problem-solving and communication skills
Nice to Have Skills & Experience
• Familiarity with data governance and security best practices
• Exposure to Agile methodologies and DevOps practices
Benefit packages for this role will start on the 1st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.