Job Description
An Insight Global client is seeking a Senior ETL/Data Engineer to design, build, and optimize scalable data pipelines in an AWS-based ecosystem. This position focuses on high-performance ETL workflows leveraging Databricks, Spark, and AWS-native data services. The engineer will support ingestion, transformation, orchestration, data quality, and production reliability for enterprise analytics initiatives. Responsibilities include building and maintaining ETL pipelines, integrating structured and unstructured data, implementing monitoring frameworks, and collaborating with U.S.-based stakeholders to ensure robust, scalable solutions.
Targeted Pay Rate: $10–$18/hr
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to HR@insightglobal.com. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.
Required Skills & Experience
7–10 years of hands-on ETL/Data Engineering experience
Strong expertise with Databricks and Apache Spark
Solid experience across AWS data services (Glue, S3, Lambda, EMR, Athena, Secrets Manager)
Strong SQL skills and experience with both relational and NoSQL stores
Experience with CI/CD, Git, and modern data engineering best practices
Strong debugging, performance tuning, and pipeline optimization skills
Nice to Have Skills & Experience
Experience with Python/Scala for data workflows
Familiarity with AWS streaming, messaging, and monitoring services (Kinesis, SNS/SQS, CloudWatch)
Background in data quality frameworks and data lineage implementation
Exposure to enterprise-scale analytics initiatives
Knowledge of data modeling and job optimization techniques
Benefit packages for this role will start on the 1st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.