REMOTE Data Engineer - INTL LATAM

Post Date

Sep 15, 2023

Location

Los Angeles, California

ZIP/Postal Code

90046
US

Job Type

Contract

Category

Software Engineering

Req #

LAX-651416

Pay Rate

$42 - $62 (hourly estimate)

Job Description

An entertainment and ticketing company headquartered in Los Angeles, CA is looking to bring on a Data Engineer to the Account & Fidelity team within the Marketplace Engineering Organization. This resource will work closely with the Data Scientists on an "Account Fidelity" initiative being funded to reduce the number of "bad actors" coming through Ticketmaster's application, in an effort to limit the number of spammers gaining access to "Verified Fan" status. They will optimize the technology, data pipelines, and machine learning efforts used to qualify Fans and make sure accounts aren't being abused or used fraudulently. Their main responsibility will be to move the existing infrastructure and migrate third-party data into Databricks, while simultaneously increasing the effectiveness of some of the ML models within the same scope.



Day to Day:

- 50% of the day will be building out data pipelines.

- 30% of the day will be building out the initial framework and testing it.

- 20% of the day will be taking raw data (using ETL) and building out tables.

- Providing the right tooling and implementation for the data pipelines.

- Working on ETL development and writing SQL queries from scratch.

- Monitoring and maintaining the data pipelines to ensure they run smoothly.

- Working on reporting and dashboards so the team knows when the data was last updated, whether it is correct, and whether it has been refreshed.

- Migrating third-party data into Databricks (a minimal sketch follows this list).
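
For context, here is a minimal PySpark sketch of the kind of raw-to-table ETL and Databricks migration described above. It is illustrative only; the storage path, column names, and target table name are hypothetical placeholders, not details from this posting.

```python
# Illustrative sketch only: paths, columns, and the table name are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("third_party_ingest").getOrCreate()

# Extract: read raw third-party data landed in cloud storage
raw = spark.read.json("s3://example-bucket/third-party/accounts/")

# Transform: keep the fields needed downstream and normalize types
accounts = (
    raw.select(
        F.col("account_id").cast("string"),
        F.col("email").alias("email_address"),
        F.to_timestamp("created_at").alias("created_at"),
    )
    .dropDuplicates(["account_id"])
)

# Load: write a Delta table that downstream models and dashboards can query
accounts.write.format("delta").mode("overwrite").saveAsTable("account_fidelity.accounts_raw")
```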

Required Skills & Experience

Requirements:

- 5-7+ years of experience as a Data Engineer

- Strong ETL development experience (extracting, transforming and loading data into Databricks)

- Experience migrating data into Databricks

- Advanced experience coding in Python (proficient in SQL)

- Advanced experience building out data pipelines and tables

- Able to read and assess advanced SQL queries

- Exposure to Apache Spark

- Experience communicating, collaborating and interacting with other teams

Nice to Have Skills & Experience

Plusses:

- An understanding of MLflow pipelines (a minimal sketch follows this list)

- An understanding of ML modeling work in Databricks

- Spark experience

- PySpark experience

- Experience migrating third-party data sources into Databricks
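
For context on the MLflow item above, here is a minimal tracking sketch. It is illustrative only; the experiment name, parameters, and metric value are hypothetical placeholders, not details from this posting.

```python
# Illustrative sketch only: experiment name, params, and metric are hypothetical.
import mlflow

mlflow.set_experiment("/Shared/account-fidelity-demo")

with mlflow.start_run(run_name="baseline"):
    # Log hyperparameters and an evaluation metric for a hypothetical fraud-scoring model
    mlflow.log_param("model_type", "gradient_boosting")
    mlflow.log_param("max_depth", 6)
    mlflow.log_metric("precision_at_k", 0.87)
```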

Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.