Jr. Data Engineer

Post Date

Feb 11, 2026

Location

Washington, District of Columbia

ZIP/Postal Code

20003
US

Job Type

Perm

Category

Programmer / Developer

Req #

DC0-08b44312-2fd6-4747-aac2-2ddcefea223e

Pay Rate

$72k - $90k (estimate)

Job Description

We are looking for a Jr. Data Engineer. This role focuses on supporting the modernization of legacy Informatica-based ETL pipelines into Databricks using PySpark and Spark SQL.

This position supports data migration and modernization efforts within a data-heavy, potentially regulated environment. Candidates will work closely with senior engineers, data architects, and QA teams during iterative migration cycles.
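As a rough illustration of the kind of migration work described above, the sketch below re-expresses a simple Informatica-style mapping (Source Qualifier, Lookup, Expression, Filter) as a PySpark job. The table and column names are hypothetical placeholders, not part of this posting.

```python
# Illustrative only: an Informatica-style mapping rewritten in PySpark.
# All table and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("informatica_mapping_rewrite").getOrCreate()

orders = spark.table("raw.orders")          # stands in for the Source Qualifier
customers = spark.table("raw.customers")    # stands in for a Lookup transformation

result = (
    orders
    .join(customers, on="customer_id", how="left")                        # Lookup / Joiner
    .withColumn("order_total", F.col("quantity") * F.col("unit_price"))   # Expression
    .filter(F.col("status") == "COMPLETE")                                # Filter
)

result.write.mode("overwrite").saveAsTable("curated.orders_enriched")     # target load
```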

We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances.

If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to HR@insightglobal.com. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.

Required Skills & Experience

2–3 years of experience in data engineering, ETL development, or data integration.
Working knowledge of Informatica PowerCenter, including:
  - Mappings, workflows, and sessions
  - Common transformations (Source Qualifier, Expression, Lookup, Joiner, Aggregator, Router, Filter)
Basic to intermediate experience with Databricks, including:
  - PySpark
  - Spark SQL
  - Notebooks and jobs
Strong SQL fundamentals, including joins, aggregations, and window functions (a short window-function sketch follows this list).
Solid understanding of ETL / ELT concepts, data warehousing principles, and batch processing.
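A minimal Spark SQL sketch of the window-function work referenced above; the "sales" table and its columns are hypothetical placeholders used only to show the pattern.

```python
# Illustrative window-function query in Spark SQL; table and columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("window_function_demo").getOrCreate()

latest_orders = spark.sql("""
    SELECT
        customer_id,
        order_date,
        amount,
        SUM(amount)  OVER (PARTITION BY customer_id) AS customer_total,
        ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) AS recency_rank
    FROM sales
""").filter("recency_rank = 1")   # keep only each customer's most recent order

latest_orders.show()
```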

Nice to Have Skills & Experience

Prior experience in regulated or data-heavy environments (finance, government, healthcare).
Exposure to Informatica-to-Databricks migrations or similar data modernization efforts.
Familiarity with Delta Lake and medallion architecture (Bronze / Silver / Gold); see the sketch after this list.
Basic understanding of AWS, including S3 and IAM concepts.
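For reference, a minimal sketch of a Bronze-to-Silver step in a Delta Lake medallion layout. The S3 paths, table, and columns are assumptions made for illustration only.

```python
# Illustrative Bronze -> Silver step on Delta Lake; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_demo").getOrCreate()

# Bronze: raw ingested records, stored as-is in Delta format.
bronze = spark.read.format("delta").load("s3://example-bucket/bronze/orders")

# Silver: deduplicated, typed, and filtered records ready for downstream use.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .filter(F.col("order_id").isNotNull())
)

silver.write.format("delta").mode("overwrite").save("s3://example-bucket/silver/orders")
```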

Benefit packages for this role will start on the 1st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.