We're looking for a Lead Software Engineer to help shape the future of Ad Technology's Generative AI platform, with a focus on building reusable capabilities and intelligent agents that streamline and automate critical operational workflows. This is a unique opportunity to apply cutting-edge AI tools to real-world challenges: improving efficiency, reducing manual effort, and empowering teams across our ad delivery ecosystem.
You'll play a dual role:
- Engineering reusable GenAI platform services, such as prompt routing, vector search, auditing, guardrails, and secure gateway abstraction
- Building production-ready agents using frameworks like LangGraph and LangChain to automate workflows across infrastructure, support, CI/CD, and business operations
This role is ideal for engineers excited to work at the intersection of backend systems, AI orchestration, platform architecture, and enterprise automation. You'll be joining a collaborative team at the forefront of GenAI enablement, helping to shape how modern AI tools are applied to real operational challenges in one of the most advanced ad tech environments.
Responsibilities:
- Design and build reusable GenAI platform components, including: prompt orchestration layers; secure gateway abstraction using tools like LiteLLM, Portkey, or Kong; embedding and retrieval infrastructure using vector databases like Pinecone or FAISS; audit, logging, and trace analysis with tools like LangSmith; and guardrails integration for output validation, safety checks, and policy enforcement
- Develop multi-turn LLM agents and tools using LangGraph and LangChain to automate operational workflows (a brief illustrative sketch follows this list)
- Build robust APIs, SDKs, and accelerator components that let internal product teams, bots, and platforms rapidly integrate and adopt GenAI services
- Standardize approaches to context management, tool calling, fallback handling, and observability across agents
- Ensure platform components meet security, governance, and compliance standards
- Partner with infrastructure, data, security, and product teams to integrate GenAI workflows into existing systems
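For illustration only, here is a minimal sketch of the kind of operational agent described above, built with LangGraph's prebuilt ReAct helper. The restart_service tool, model choice, and prompt are assumptions for the example and are not details of our platform.

# Minimal sketch (illustrative only): a LangGraph ReAct-style agent wrapping a
# hypothetical operational tool. Names like restart_service are invented for this example.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def restart_service(service_name: str) -> str:
    """Restart an internal service and report the result (hypothetical example tool)."""
    # A real implementation would call infrastructure APIs with auth, auditing, and guardrails.
    return f"{service_name} restarted successfully"

# Instantiate the model; in production this could point at a secure gateway instead of
# calling the provider directly.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
agent = create_react_agent(llm, tools=[restart_service])

# Ask the agent to handle an operational request; it decides when to call the tool.
result = agent.invoke(
    {"messages": [("user", "The ad-delivery reporting service looks stuck; restart it.")]}
)
print(result["messages"][-1].content)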
We are a company committed to creating inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity employer that believes everyone matters. Qualified candidates will receive consideration for employment opportunities without regard to race, religion, sex, age, marital status, national origin, sexual orientation, citizenship status, disability, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request via the Human Resources Request Form. The EEOC "Know Your Rights" Poster is available here.
To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/
Required Skills and Experience:
- 8+ years of backend or platform engineering experience, with a track record of building scalable APIs
- Hands-on experience integrating with LLM APIs such as OpenAI and Anthropic (Claude), and building LLM-powered workflows, agents, or AI assistants
- Proficient in Python; familiarity with LangChain, LangGraph, or similar LLM orchestration frameworks
- Familiarity with observability practices for LLM-based systems, including logging, latency tracking, and output monitoring
- Experience with LLM evaluation and testing frameworks to validate prompt behavior and agent reliability across iterations
- Strong understanding of cloud-native design patterns, secure AI API integration, and service scalability
- Working knowledge of vector databases such as Pinecone, FAISS, or Weaviate, along with retrieval-augmented generation (RAG) techniques (see the brief retrieval sketch after this list)
- Experience building modular, reusable GenAI components that support cross-functional adoption and internal accelerators
- Comfort working with CI/CD pipelines and collaborating with DevOps teams to deploy and monitor GenAI workflows
- Experience with LangSmith, PromptLayer, or similar tracing tools for debugging and evaluating LLM workflows
- Knowledge of AI gateway patterns and usage throttling using tools like Kong, LiteLLM, or Portkey.ai
- Familiarity with guardrails, safety evals, prompt injection defense, and model governance frameworks
- Previous experience building platforms or enablement tooling used across multiple engineering teams
- Exposure to infrastructure or DevOps automation use cases is a plus
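As a point of reference for the retrieval-augmented generation experience listed above, here is a minimal sketch using LangChain's FAISS integration. The runbook snippets and query are invented for illustration and do not describe our systems.

# Minimal RAG retrieval sketch (illustrative only): index a few runbook snippets in FAISS
# and pull the most relevant context for grounding an LLM answer.
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

runbook_snippets = [
    "To clear a stalled ad-delivery queue, drain the consumer group and replay offsets.",
    "CI/CD deploys are gated by the release-approval check in the pipeline.",
    "Vector index rebuilds should run off-peak to avoid query latency spikes.",
]

# Embed and index the snippets; in production these would come from internal docs.
store = FAISS.from_texts(runbook_snippets, OpenAIEmbeddings())

# Retrieve the best-matching context for a question (the retrieval step of RAG).
matches = store.similarity_search("How do I fix a stuck ad-delivery queue?", k=1)
for doc in matches:
    print(doc.page_content)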
Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.