Job Description
Key Responsibilities
Design and develop Python-based microservices (REST/async services) with strong API contracts and clean service boundaries
Build and integrate event-driven services using the Kafka ecosystem (including Kafka Streams concepts where applicable) and schema-based messaging (Avro)
Implement batch and streaming workloads that support downstream systems (Spark / Spark Streaming), leveraging Databricks for job execution and notebooks when needed
Collaborate with product and engineering partners to evaluate architecture, define requirements, and deliver scalable features
Write and optimize SQL for data access, transformations, validation, and QA workflows
Build reliable delivery pipelines and deployments using Docker, Kubernetes, and CI/CD tooling (e.g., Jenkins, ArgoCD)
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances.
If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to HR@insightglobal.com.
To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.
Required Skills & Experience
8-10 years of engineering experience building and deploying backend services
Expert-level Python (microservices, APIs, async patterns, testing)
Strong experience with microservices architecture and distributed systems patterns
Experience with the Databricks platform, including running Spark jobs and notebooks
Experience with Spark / Spark Streaming for batch and streaming jobs
Experience with Kafka and streaming/event-based integrations; familiarity with Avro schemas
Hands-on experience with SQL and data-driven applications
Cloud experience in GCP and/or Azure
Experience with containerization and orchestration: Docker / Docker Compose, Kubernetes
CI/CD experience with Jenkins and/or ArgoCD
Strong experience with Git-based workflows
Benefit packages for this role will start on the 1st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.