Sr. Data Engineer / ETL Expert

Post Date

Jul 15, 2025

Location

Charlotte, North Carolina

ZIP/Postal Code

28203
US

Sep 15, 2025
Insight Global

Job Type

Contract

Category

Software Engineering

Req #

CLT-795413

Pay Rate

$46 - $57 (hourly estimate)

Job Description

We are seeking a Senior Data Engineer / ETL Specialist with strong experience in AWS cloud services, PySpark, and ETL pipeline development to support our growing data initiatives. The ideal candidate is a sharp, self-motivated engineer who can quickly ramp up on our existing stack, contribute to modern data solutions, and drive performance, scalability, and reliability across the platform.
You'll work on building and optimizing robust data pipelines, supporting large-scale data processing, and collaborating closely with cross-functional teams to ensure high-quality, accessible data for analytics and downstream applications.

Key Responsibilities:
Design, build, and maintain scalable ETL/ELT pipelines using PySpark and AWS Glue/EMR/S3.
Integrate data from multiple sources (structured and semi-structured) into a centralized data lake or warehouse.
Develop reusable, modular code to process and transform large datasets efficiently.
Collaborate with data analysts, data scientists, and product teams to define requirements and deliver high-quality data solutions.
Perform data quality checks, error handling, and logging within pipelines to ensure data integrity.
Optimize Spark jobs for performance and cost-efficiency in distributed processing environments (EMR/Glue).
Leverage AWS services like Lambda, Redshift, Athena, CloudWatch, and Step Functions to orchestrate and monitor data workflows.
Support infrastructure automation and version control via Git, Jenkins, and CI/CD pipelines.
Participate in code reviews and contribute to team best practices for data engineering.
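
As a purely illustrative sketch (this is a job posting, not a spec), the pipeline shape described above — extract, data-quality validation, transformation, with error handling and logging — might look like the following. It is written in plain Python with hypothetical function and field names; in practice a role like this would express the same steps with PySpark DataFrame operations on Glue or EMR.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl_sketch")

def extract(raw_lines):
    """Parse JSON-lines input; skip and log malformed records
    instead of failing the whole batch."""
    records = []
    for line in raw_lines:
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            log.warning("Skipping malformed record: %r", line)
    return records

def validate(records, required=("id", "amount")):
    """Basic data-quality check: drop records missing required fields."""
    good = [r for r in records
            if all(k in r and r[k] is not None for k in required)]
    log.info("Validated %d of %d records", len(good), len(records))
    return good

def transform(records):
    """Normalize types; a PySpark job would do this with column casts."""
    return [{"id": int(r["id"]), "amount": round(float(r["amount"]), 2)}
            for r in records]

def run_pipeline(raw_lines):
    return transform(validate(extract(raw_lines)))
```

For example, `run_pipeline(['{"id": "1", "amount": "10.5"}', 'not json', '{"id": "2", "amount": null}'])` returns only the one clean, type-normalized record, while the malformed and incomplete ones are logged and dropped.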

We are a company committed to creating inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity employer that believes everyone matters. Qualified candidates will receive consideration for employment opportunities without regard to race, religion, sex, age, marital status, national origin, sexual orientation, citizenship status, disability, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please submit a request via the Human Resources Request Form. The EEOC "Know Your Rights" Poster is available here.

To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/ .

Required Skills & Experience

10+ years of experience in ETL/Data Engineering, preferably with large-scale distributed systems.
Hands-on experience with AWS big data services: S3, Glue, EMR, Lambda, Redshift, Athena, etc.
Strong coding skills in Python and deep knowledge of PySpark for large data transformations.
Proven experience designing and supporting high-performance, production-grade data pipelines.
Solid understanding of data modeling, data partitioning, and performance tuning.
Experience handling structured and semi-structured data formats (JSON, Parquet, Avro).
Familiarity with Agile development, DevOps practices, and Git-based version control.
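
To illustrate the data-partitioning point above (a common convention, not necessarily this employer's scheme), date-based partition keys of the kind used for Hive-style S3 layouts queried by Athena or Glue can be derived from a record's timestamp; the bucket name and field names below are placeholders:

```python
from datetime import datetime

def partition_path(record, base="s3://example-bucket/events"):
    """Derive a Hive-style partition path (year=/month=/day=) from an
    event timestamp, as commonly used for S3 data lakes. Partitioning
    this way lets engines like Athena prune irrelevant data at query
    time, which is central to performance tuning."""
    ts = datetime.fromisoformat(record["event_time"])
    return f"{base}/year={ts.year}/month={ts.month:02d}/day={ts.day:02d}"
```

For example, `partition_path({"event_time": "2025-07-15T12:00:00"})` yields `s3://example-bucket/events/year=2025/month=07/day=15`.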

Nice-to-Have Skills & Experience

Experience with Snowflake, Databricks, or similar cloud data warehouses.
Knowledge of data governance, security policies, and data cataloging.
Exposure to data visualization tools like Tableau or Looker.

Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.