Job Description
· Partner with existing internal report developers to review and analyze the current Jira reporting backlog.
· Perform technical and functional analysis of approximately forty (40) outstanding reporting tickets.
· Identify root causes, dependencies, data issues, and opportunities for consolidation or retirement.
· Design, develop, and implement reporting solutions as defined by Jira ticket content, including but not limited to:
o Report enhancements or defect fixes
o New report development
o Data model adjustments
o Query optimization and performance tuning
· Collaborate with business stakeholders and internal teams to validate requirements and confirm solutions.
· Actively update Jira tickets with development progress, technical notes, and status updates to ensure transparency and traceability.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to HR@insightglobal.com.
To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.
Required Skills & Experience
Core Experience
• 5–8 years of experience in data engineering and/or business intelligence roles.
• 3+ years of hands-on experience with Microsoft Power BI and Databricks in production environments.
Data Engineering & Analytics
• Strong proficiency in SQL and Python/PySpark (or Spark SQL).
• Experience working with Delta Lake and lakehouse architectures.
• Ability to design analytics-ready data models for reporting and self-service use cases.
Power BI
• Strong experience with DAX and Power Query (M).
• Experience with Power BI Service including workspaces, gateways, deployment pipelines, and row-level security (RLS).
Oracle BI Publisher (BIP)
• Experience with BIP data templates, layout templates (RTF/XPT), bursting, and scheduling.
• Practical knowledge of the Oracle BRM data model and extracting data for analytics and reporting.
Azure & Platform Experience
• Experience with Azure Databricks, Azure Data Lake Storage (ADLS) / Blob Storage, and SSIS (as applicable).
Performance Optimization
• Proven track record of tuning Spark workloads (partitioning, caching, broadcast joins).
• Experience optimizing Power BI models and datasets.
• Experience tuning BIP queries and report execution.
Nice to Have
• CI/CD for data and BI solutions (Git integration with Databricks, Power BI deployment pipelines).
• Data governance, lineage, and cataloging tools.
• Testing frameworks for data pipelines and semantic models.
• REST APIs, event streaming, or near real-time analytics use cases.
• Power BI integration with Databricks SQL Endpoints and Lakehouse patterns.
Benefit packages for this role will start on the 1st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.