Remote | Engineering
Role Overview
We are seeking an experienced Senior Data Engineer on a freelance/contract basis to design, build, and optimize scalable data pipelines and analytics solutions. The ideal candidate has strong hands-on expertise with Databricks and Informatica, and can work independently while collaborating with cross-functional teams.
Key Responsibilities
Design, develop, and maintain scalable data pipelines using Databricks (Apache Spark)
Build and manage ETL/ELT workflows using Informatica (PowerCenter / IICS)
Integrate data from multiple sources (databases, APIs, files, cloud storage)
Optimize data processing performance, reliability, and cost
Implement data quality checks, validation, and monitoring
Collaborate with data analysts, data scientists, and business stakeholders
Support data modeling for analytics and reporting use cases
Troubleshoot and resolve data pipeline and performance issues
Ensure data security, governance, and compliance standards are met
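As an illustration of the data quality checks mentioned above, here is a minimal row-validation sketch in plain Python. The function and rule names are hypothetical; on Databricks this logic would typically be expressed as PySpark DataFrame filters or Delta Live Tables expectations rather than plain-Python loops.

```python
# Minimal sketch of a row-level data quality check (hypothetical rules).
def validate_rows(rows):
    """Split rows into valid and rejected, recording which rules failed."""
    rules = {
        "non_null_id": lambda r: r.get("id") is not None,
        "positive_amount": lambda r: isinstance(r.get("amount"), (int, float))
        and r["amount"] >= 0,
    }
    valid, rejected = [], []
    for row in rows:
        failed = [name for name, check in rules.items() if not check(row)]
        (rejected if failed else valid).append({**row, "failed_rules": failed})
    return valid, rejected

valid, rejected = validate_rows([
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": 5.0},
    {"id": 2, "amount": -3.0},
])
```

In a production pipeline, the rejected rows would typically be routed to a quarantine table for monitoring rather than dropped.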
Required Qualifications
8+ years of experience in Data Engineering
Strong hands-on experience with Databricks and Apache Spark
Proven experience with Informatica (PowerCenter and/or IICS)
Strong SQL skills and experience with relational and cloud data warehouses
Experience working with large-scale data processing systems
Familiarity with cloud platforms (AWS, Azure, or GCP)
Experience with data modeling and data architecture concepts
Excellent problem-solving and communication skills
Ability to work independently in a remote, contract-based environment
Preferred Qualifications
Experience with Delta Lake, PySpark, and Spark SQL
Exposure to CI/CD pipelines for data platforms
Knowledge of data governance, lineage, and metadata management tools
Prior experience in freelance or consulting roles
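The Delta Lake experience listed above centers on patterns like MERGE (upsert). A plain-Python sketch of those semantics, assuming a simple keyed table (names hypothetical; real Databricks code would use `DeltaTable.merge` or `MERGE INTO` in Spark SQL):

```python
# Plain-Python sketch of MERGE semantics: update matched keys, insert new ones.
def upsert(target, updates, key="id"):
    merged = {row[key]: row for row in target}
    for row in updates:
        # Matched key: overlay updated fields; unmatched key: insert as new row.
        merged[row[key]] = {**merged.get(row[key], {}), **row}
    return list(merged.values())

table = [{"id": 1, "status": "old"}, {"id": 2, "status": "old"}]
table = upsert(table, [{"id": 2, "status": "new"}, {"id": 3, "status": "new"}])
```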