Senior Python Systems Developer (Data & Automation)
Architect and deploy robust, high-performance Python backends, specializing in large-scale data ingestion, sophisticated anti-bot evasion, and scalable automation systems.
About Contezy
Contezy is a digital innovation firm focused on engineering scalable web infrastructure, proprietary automation tools, and performance-driven AI systems. We build with an emphasis on speed, production-grade reliability, and rapid horizontal scaling.
Role Summary
As a Senior Python Systems Developer, you will be a key engineer driving our automated data infrastructure. You will design, implement, and maintain high-volume web crawling frameworks, deploy state-of-the-art anti-bot evasion strategies (e.g., bypassing Cloudflare protections such as Turnstile), and optimize persistent data storage pipelines across both relational (SQL) and non-relational (NoSQL) databases. Success in this role requires close collaboration with the Data and DevSecOps teams to keep our systems reliable and performant at extreme scale.
Key Responsibilities
- Design, develop, and maintain industrial-scale data scraping and web crawling frameworks using expert-level Python and modern libraries (e.g., Scrapy, Playwright).
- Implement and manage advanced anti-detection and bot evasion strategies, including sophisticated proxy rotation, CAPTCHA-solving integration, session lifecycle management, and bypassing leading protection systems (Cloudflare, including Turnstile, and PerimeterX).
- Build, test, and manage scalable RESTful APIs and microservices in Python to serve internal and external data consumption needs.
- Develop and optimize critical data pipelines that write to PostgreSQL (SQL) and MongoDB (NoSQL), persisting both highly structured and unstructured data.
- Collaborate directly with the Engineering and DevSecOps teams on robust deployment, monitoring, and performance tuning, leveraging containerization (Docker) and CI/CD pipelines.
Required Qualifications
- 3-5 years of professional experience in Python development, specifically in high-throughput backend and data systems.
- Deep expertise in web scraping, browser automation (Playwright/Selenium), and advanced anti-bot evasion and detection bypass techniques.
- Strong proficiency with SQL databases, particularly PostgreSQL, including complex schema design, replication, and advanced query optimization.
- Proven experience with MongoDB or equivalent NoSQL databases for managing large, high-velocity unstructured datasets.
- Expertise in proxy management, detailed network traffic analysis, and handling complex JavaScript-rendered content.
- Proficiency in building high-performance, asynchronous Python services and APIs (e.g., FastAPI, Flask).
Preferred Skills
- Experience with distributed systems, advanced message queues, or task queues (Celery, RabbitMQ, Kafka).
- Familiarity with container orchestration (Kubernetes) and major cloud platforms (AWS/GCP/Azure) for production deployment.
- Contributions to relevant open-source Python projects or a public portfolio demonstrating complex scraping capabilities.
- Advanced knowledge of data modeling, ETL (Extract, Transform, Load) processes, and data quality assurance.