Job Description
We are looking for a Senior Software Engineer with strong Java experience and practical exposure to data-oriented systems. This role is best suited for an engineer who has built, tested, deployed, and supported production Java applications and is comfortable working with data processing, SQL, batch workflows, automated testing, cloud-hosted workloads, and operational troubleshooting.
The primary focus of this role is Java engineering. The engineer will design and maintain reliable application logic, write clean and testable code, build meaningful automated test suites, troubleshoot production issues, and contribute to improvements in reliability, performance, and maintainability.
The ideal candidate is not expected to be a specialist in every data or cloud platform. However, they should understand how data moves through production systems, how to validate correctness, how to test end-to-end behavior, and how to investigate issues using logs, metrics, test results, and source code.
This is not a pure BI, data science, infrastructure, or reporting role. It is a Java-focused engineering role with meaningful data, testing, cloud, and production support responsibilities. This role is hybrid (on-site 3 days/week) in Dayton, OH, and the pay rate ranges from $55 to $65/hour based on years of experience.
Responsibilities:
• Design, build, test, and maintain Java-based applications and data-processing components.
• Develop reliable Java logic for data validation, transformation, enrichment, processing, and workflow support.
• Work with modern Java, Maven, JUnit, and related tooling to deliver maintainable production software.
• Create and maintain automated test suites, including unit tests, integration tests, and performance-oriented tests.
• Design tests that validate data correctness, end-to-end behavior, failure handling, retry behavior, and performance characteristics.
• Work with SQL to inspect, query, validate, and troubleshoot data.
• Support batch-oriented and data-oriented workflows involving structured and semi-structured data.
• Troubleshoot production issues involving failed jobs, bad input data, data quality issues, slow processing, resource limits, deployment issues, and unexpected system behavior.
• Support applications running in Azure-hosted environments, including Kubernetes-based deployments.
• Use logs, metrics, alerts, test output, job output, and source code to diagnose issues.
• Contribute to CI/CD pipelines, build automation, test automation, and release readiness.
• Participate in code reviews, design discussions, incident reviews, and technical planning.
• Collaborate with software engineers, data engineers, analysts, product owners, and operational stakeholders.
• Improve system reliability, performance, observability, maintainability, and operational supportability.
• Help define useful metrics, dashboards, alerts, and runbook guidance for production support.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to HR@insightglobal.com. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.
Required Skills & Experience
• 7+ years of professional software engineering experience.
• Significant hands-on experience building and supporting production systems in Java.
• Strong recent Java development experience, preferably with modern Java versions such as Java 11 or Java 17.
• Experience with Maven, dependency management, build lifecycle, and automated software delivery practices.
• Experience writing automated tests using JUnit and related Java testing tools.
• Experience creating or maintaining integration test suites for production applications, batch jobs, or data-processing workflows.
• Familiarity with performance testing concepts, including throughput, latency, resource usage, bottleneck identification, and baseline comparison.
• Strong understanding of Java fundamentals, object-oriented design, collections, streams, error handling, clean code practices, and maintainable design.
• Practical experience with data-oriented systems, such as batch processing, file processing, transformation logic, validation workflows, or pipeline support.
• Working knowledge of SQL, including joins, filters, aggregations, data inspection, and troubleshooting query results.
• Familiarity with common data formats such as CSV, JSON, XML, or Parquet.
• Experience supporting software in production or production-like environments.
• Working knowledge of cloud-hosted application environments, preferably Azure.
• Familiarity with containers, Kubernetes concepts, deployment configuration, runtime logs, and resource-related troubleshooting.
• Experience using Git-based development workflows and CI/CD pipelines.
• Ability to investigate issues independently using logs, metrics, test output, job output, and source code.
• Strong communication skills, including the ability to explain technical issues clearly to engineering and non-engineering partners.
Nice to Have Skills & Experience
• Experience creating or maintaining dashboards, metrics, and alerts in Grafana or similar observability tools.
• Experience with Azure monitoring, Log Analytics, or cloud-native operational tooling.
• Experience with Microsoft Fabric.
• Experience with Power BI, especially understanding how downstream reporting depends on source data quality.
• Experience with Databricks, especially workflows involving PySpark.
• Experience with PySpark or Python-based data processing.
• Experience working with data lake, blob storage, object storage, or large file processing patterns.
• Experience tuning batch workloads for throughput, memory usage, retry behavior, and failure recovery.
• Experience with batch-oriented processing patterns.
• Experience with performance or load testing tools and frameworks.
• Experience operating, monitoring, or supporting agent-based workflows in a production or production-like environment.
• Experience troubleshooting agent execution failures, stale runs, retry loops, malformed inputs, failed tool calls, handoff issues, or degraded agent performance.
• Experience defining operational guardrails for agents, including logging, metrics, alerting, escalation paths, and failure recovery.
• Experience with GitHub Actions, Artifactory/JFrog, or similar enterprise build and artifact management tools.
• Experience with Agile delivery, JIRA, production incident triage, and operational handoffs.
Benefit packages for this role will start on the 1st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.