Remote | Engineering
Job Title: Python Django Data Engineer
Job Summary
We are seeking a skilled Python Django Data Engineer to design, build, and maintain scalable data pipelines and backend services. The ideal candidate has strong Python expertise, hands-on experience with Django, and a solid grounding in data engineering principles, including ETL processes, database design, and query and pipeline optimization.
Key Responsibilities
Design, develop, and maintain data-driven backend services using Python and Django
Build and optimize ETL/ELT pipelines for structured and unstructured data
Integrate data from multiple sources (APIs, databases, third-party services)
Develop and maintain RESTful APIs to expose data services
Ensure data quality, reliability, and performance across pipelines
Optimize database queries and data models for scalability
Collaborate with data scientists, analysts, and product teams
Implement logging, monitoring, and error-handling mechanisms
Write clean, maintainable, and well-documented code
Participate in code reviews and contribute to best practices
Required Skills & Qualifications
Strong proficiency in Python
Hands-on experience with Django / Django REST Framework
Solid understanding of data engineering concepts (ETL, data modeling, pipelines)
Experience with SQL and relational databases (PostgreSQL, MySQL, etc.)
Familiarity with NoSQL databases (MongoDB, Redis, etc.)
Experience working with APIs and microservices
Knowledge of data processing tools (Pandas, NumPy, Airflow, or similar)
Understanding of cloud platforms (AWS, GCP, or Azure)
Experience with Git and CI/CD pipelines
Preferred Qualifications
Experience with big data technologies (Spark, Hadoop, Kafka)
Knowledge of containerization (Docker, Kubernetes)
Exposure to data warehousing (Redshift, BigQuery, Snowflake)
Familiarity with Linux/Unix environments
Experience in Agile/Scrum development environments