Job Description
• Design and implement scalable, secure, and high-performance data engineering frameworks
• Build and maintain ETL/ELT pipelines to support data ingestion, transformation, and integration
• Leverage Snowflake features (Snowpipe, Streams & Tasks, Native Apps) and DBT for building scalable, enterprise-grade data solutions across ingestion, transformation, and orchestration
• Optimize Azure Synapse dedicated SQL pools to enhance BI performance
• Build data pipelines that seamlessly connect to reporting platforms such as Tableau and Power BI, enabling interactive and timely business insights
• Ensure data accuracy and integrity through validation checks, lineage tracking, and quality monitoring
• Tune workflows for performance and cost-efficiency at scale
Pay Rate: $10.00-$14.00 an hour depending on skills and experience
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to HR@insightglobal.com.
To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.
Required Skills & Experience
• 4–7 years of experience in data engineering and big data pipeline development
• Demonstrated experience with Snowflake and DBT
• Experience with big data processing frameworks and modern data architecture principles
• Advanced proficiency in SQL, Python, and PySpark for scalable data processing and transformation
• Hands-on experience building end-to-end ETL/ELT pipelines using DBT
• Strong grasp of data warehousing, lakehouse architecture, and modern analytics on cloud platforms
• Hands-on experience with large-scale data platforms and performance tuning for high-volume data pipelines
• Well-versed in DevOps practices, Git, and CI/CD pipelines for automating data engineering workflows
• Strong understanding of modern data architecture and governance frameworks in a cloud environment
Nice to Have Skills & Experience
• Proficiency in Tableau and other BI tools, with experience building scalable dashboards and optimizing data models
• Basic knowledge of SAP data models
• Ability to independently lead and deliver end-to-end data engineering projects with minimal supervision
Benefit packages for this role will start on the 1st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.