This resource will be responsible for designing, developing, evaluating, modifying, deploying, troubleshooting, and documenting all data components (data architecture, logical and physical data models, database objects, and database administration) that meet the needs of customer-facing applications, business applications, internal user applications, and business intelligence platforms.
Utilizes cloud technology to design and implement data movement, transformation, storage, and access for numerous systems and business use cases.
Performs data mapping analysis to facilitate the centralization of disparate data sources.
Designs and employs event-driven services to stream data in real-time.
Creates pipelines to orchestrate data movement activities.
Engineers and deploys APIs to facilitate data access.
Creates and maintains metadata, including data catalogs, dictionaries, lineage, and taxonomy.
Utilizes code repositories to ensure code quality and version control.
Adheres to strict standards and policies related to data governance, retention, classification, privacy, and legislation.
Automates and employs data quality and code tests.
Accurately tracks and manages tasks through project lifecycle management tools.
Partners directly with product and project managers, architects, and other engineers to design and implement optimal, efficient, and scalable data solutions.
Maintains visibility and transparency of work performed through the documentation of requirements and processes.
Performs database design review and supports database testing.
Provides production environment support for database systems and processes.
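The responsibilities above include automating data quality tests. As a minimal sketch of what a rule-based quality check might look like (the record shape, field names, and rules here are hypothetical, chosen only for illustration):

```python
from dataclasses import dataclass

# Hypothetical record shape for illustration; real pipelines would
# validate rows pulled from a warehouse table or message stream.
@dataclass
class Order:
    order_id: str
    amount: float
    currency: str

def quality_check(records):
    """Apply simple rules and return (passed, failures) lists."""
    failures = []
    seen_ids = set()
    for r in records:
        if not r.order_id or r.order_id in seen_ids:
            failures.append((r, "missing or duplicate order_id"))
            continue
        seen_ids.add(r.order_id)
        if r.amount < 0:
            failures.append((r, "negative amount"))
        elif r.currency not in {"USD", "EUR", "GBP"}:
            failures.append((r, "unknown currency"))
    failed = {id(f[0]) for f in failures}
    passed = [r for r in records if id(r) not in failed]
    return passed, failures
```

In practice checks like this would run as an automated pipeline step, failing the run or quarantining bad rows before data reaches downstream consumers.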
We are a company committed to creating inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity employer that believes everyone matters. Qualified candidates will receive consideration for employment opportunities without regard to race, religion, sex, age, marital status, national origin, sexual orientation, citizenship status, disability, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request via the Human Resources Request Form. The EEOC "Know Your Rights" poster is available online.
To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.
3+ years of experience with GCP data engineering (BigQuery, Pub/Sub, Dataflow, Dataproc, ETL, and Airflow jobs)
Experience with PySpark, Dataproc, and creating Airflow jobs
Experience building out data pipelines and utilizing BigQuery
CI/CD process: GitHub, GitLab, Jenkins
Experience with RESTful or GraphQL APIs
Experience designing and deploying pipelines, as well as experience with CI/CD pipelines, GitHub Actions, Docker, and Jenkins builds
Experience with Java or Python
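The pipeline skills listed above center on the extract-transform-load pattern. As a minimal, dependency-free Python sketch of that pattern (the CSV source, field names, and load stand-in are hypothetical; a real job would read from GCS and load into BigQuery via Dataflow, Dataproc, or an Airflow task):

```python
import csv
import io

# Hypothetical inline CSV standing in for a raw source file.
RAW = """user_id,country,signup_date
1,us,2024-01-05
2,de,2024-01-06
3,,2024-01-07
"""

def extract(text):
    # Parse the raw source into a list of row dicts.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Drop rows missing a country and normalize codes to uppercase.
    return [
        {**row, "country": row["country"].upper()}
        for row in rows
        if row["country"]
    ]

def load(rows):
    # Stand-in for a warehouse load job: report rows "loaded".
    return len(rows)

loaded = load(transform(extract(RAW)))
```

An orchestrator such as Airflow would schedule each of these stages as a separate task with explicit dependencies, rather than chaining them in-process as shown here.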
Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401(k) retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.