Job Description
As a Lead Data Engineer within the Trailblazer initiative, you will play a crucial role in architecting, implementing, and managing robust, scalable data infrastructure. This position demands a blend of systems engineering, data integration, and data analytics skills to enhance the company's data capabilities, supporting advanced analytics, machine learning projects, and real-time data processing needs. The ideal candidate brings deep expertise in Lakehouse design principles, including layered Medallion Architecture patterns (Bronze, Silver, Gold), to drive scalable and governed data solutions. This is also a highly visible leadership role in which you will represent the data engineering function and lead the Data Management Community of Practice.
As a Lead Data Engineer you will:
• Design and implement scalable and reliable data pipelines to ingest, process, and store diverse data at scale, using technologies such as Apache Spark, Hadoop, and Kafka.
• Work within cloud environments like AWS or Azure to leverage services including but not limited to EC2, RDS, S3, Lambda, and Azure Data Lake for efficient data handling and processing.
• Architect and operationalize data pipelines following Medallion Architecture best practices within a Lakehouse framework, ensuring data quality, lineage, and usability across Bronze, Silver, and Gold layers (see the illustrative sketch after this list).
• Develop and optimize data models and storage solutions (Databricks, Data Lakehouses) to support operational and analytical applications, ensuring data quality and accessibility.
• Utilize ETL tools and frameworks (e.g., Apache Airflow, Fivetran) to automate data workflows, ensuring efficient data integration and timely availability of data for analytics (see the Airflow sketch after this list).
• Lead the Data Management Community of Practice, serving as the primary facilitator, coordinator, and spokesperson. Drive knowledge sharing, establish best practices, and represent the data engineering discipline to both technical and business audiences.
• Collaborate closely with data scientists, providing the data infrastructure and tools needed for complex analytical models, leveraging Python or R for data processing scripts.
• Ensure compliance with data governance and security policies, implementing best practices in data encryption, masking, and access controls within a cloud environment.
• Monitor and troubleshoot data pipelines and databases for performance issues, applying tuning techniques to optimize data access and throughput.
• Stay abreast of emerging technologies and methodologies in data engineering, advocating for and implementing improvements to the data ecosystem.
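By way of illustration, the following is a minimal PySpark sketch of the kind of Kafka-to-Bronze ingestion and Bronze-to-Silver promotion described above, assuming a Databricks-style Lakehouse with Delta tables. The broker address, topic, table names, and checkpoint path are hypothetical placeholders, not a prescribed implementation.

```python
# Illustrative only: a minimal Medallion (Bronze -> Silver) flow in PySpark.
# Broker, topic, table names, and checkpoint path are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw Kafka events as-is (full fidelity, schema-on-read).
bronze = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders.raw")                 # placeholder topic
    .load()
)
(
    bronze.writeStream.format("delta")
    .option("checkpointLocation", "/chk/bronze_orders")  # placeholder path
    .toTable("bronze.orders")                            # placeholder table
)

# Silver: cleanse and conform -- parse payloads, enforce basic quality rules,
# and deduplicate so downstream (Gold) marts read trusted records. In practice
# Bronze ingestion and Silver promotion would run as separate jobs.
silver = (
    spark.read.table("bronze.orders")
    .withColumn("payload", F.col("value").cast("string"))
    .filter(F.col("payload").isNotNull())
    .dropDuplicates(["key"])
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```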
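Likewise, a minimal Apache Airflow sketch of the scheduled ETL orchestration referenced above; the DAG id, schedule, and task callables are hypothetical placeholders rather than an actual workflow.

```python
# Illustrative only: a minimal daily ETL DAG in Apache Airflow 2.x.
# The DAG id, schedule, and task functions are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # e.g., pull the day's files from a source system (placeholder)
    pass

def transform():
    # e.g., cleanse and conform the extracted data (placeholder)
    pass

def load():
    # e.g., publish curated tables for analytics (placeholder)
    pass

with DAG(
    dag_id="daily_orders_etl",      # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> transform -> load
    t_extract >> t_transform >> t_load
```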
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances.
If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to HR@insightglobal.com. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.
Required Skills & Experience
• Bachelor's Degree in Computer Science, MIS, or other business discipline and 10+ years of experience in data engineering, with a proven track record in designing and operating large-scale data pipelines and architectures Required; or
• Master's Degree in Computer Science, MIS, or other business discipline and 5+ years of experience in data engineering, with a proven track record in designing and operating large-scale data pipelines and architectures Required
• Demonstrated experience designing and implementing Medallion Architecture in a Databricks Lakehouse environment, including layer transitions, data quality enforcement, and optimization strategies. Required
• Expertise in developing ETL/ELT workflows Required
• Comprehensive knowledge of platforms and services like Databricks, Dataiku, and AWS native data offerings Required
• Solid experience with big data technologies (Apache Spark, Hadoop, Kafka) and cloud services (AWS, Azure) related to data processing and storage Required
• Strong experience in AWS cloud services, with hands-on experience in integrating cloud storage and compute services with Databricks Required
• Proficient in SQL and programming languages relevant to data engineering (Python, Java, Scala) Required
• Hands-on RDBMS experience (data modeling, analysis, programming, stored procedures) Required
• Familiarity with machine learning model deployment and management practices Preferred
• Strong executive presence and communication skills, with a proven ability to lead communities of practice, deliver presentations to senior leadership, and build alignment across technical and business stakeholders. Required
• AWS Certified Solutions Architect Preferred
• Databricks Certified Associate Developer for Apache Spark Preferred
• DAMA CDMP or other relevant certifications Preferred
Benefit packages for this role will start on the 1st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401(k) retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.