Job Description
We are seeking a Senior/Staff Data Engineer with deep Python and Google Cloud Platform (GCP) experience to design, build, and support scalable, cloud-native data pipelines. The role is highly hands-on while also carrying technical leadership responsibilities across data engineering initiatives.
You will work closely with data engineering, analytics, BI, and business stakeholders to deliver high-quality, reliable data products that power analytics, reporting, and near real-time insights. You will also play a key part in shaping data engineering standards, mentoring other engineers, and enabling self-service analytics across the organization.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to HR@insightglobal.com.
To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.
Required Skills & Experience
- 5–7+ years of experience in data engineering, data architecture, or Python development.
- Strong proficiency in Python for data pipelines, automation, and workflow development.
- Strong proficiency in SQL and analytical data warehouses.
- Hands-on experience with Apache Airflow and Cloud Composer.
- Experience working in Google Cloud Platform (GCP), including BigQuery.
- Solid understanding of ETL/ELT concepts, modern data pipeline architecture, and dimensional modeling (star/snowflake schemas).
- Proven experience working with large datasets and performance optimization.
- Strong problem-solving skills and the ability to communicate technical concepts clearly.
- Bachelor's degree in Computer Science, Data Engineering, or a related field (or equivalent experience).
Nice to Have Skills & Experience
- Experience supporting BI and visualization tools such as Power BI, SSRS, or Tableau.
- Experience with near real-time or streaming pipelines (Pub/Sub, Kafka).
- Exposure to CI/CD automation, Git/GitHub, and infrastructure-as-code tools (e.g., Terraform).
- Familiarity with data governance, security, and regulated environments.
- Experience with containerized deployments (Docker).
- Exposure to legacy or hybrid ETL tools (e.g., SSIS).
- Cloud certifications (GCP, Azure, or AWS).
Benefit packages for this role start on the first day of employment and include medical, dental, and vision insurance; HSA, FSA, and DCFSA account options; and access to a 401(k) retirement account with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.