ABOUT THE PRODUCT:
ElementalTV, founded in 2020 in Pasadena, CA, empowers publishers to maximize advertising inventory value with advanced audience data and AI curation for direct programmatic transactions. ElementalTV’s 1Audience solution provides a comprehensive view of audiences using deterministic and probabilistic data.
1Audience Alliance (1AA) unites CTV publishers to address common challenges, enhance buyer engagement, and optimize revenue. The product offers a full video advertising technology stack, including an ad server, SSP, DSP, DMP, and more, giving access to direct, programmatic, and third-party inventory buyers and sellers. The tech stack uses Java 21, Jetty/Netty, Druid, Kafka, Aerospike, and PostgreSQL, and runs on Proxmox provisioned with Ansible, with Jenkins handling deployment. The client side is an Angular 8-based SPA.
ElementalTV’s team spans the USA, Germany, Portugal, Ukraine, Poland, Armenia, Belarus, Pakistan, and India. Day-to-day process management runs on Jira, Confluence, Slack, and audio/video calls.
FOR THOSE WHO APPRECIATE:
- Physical cluster: 80 Proxmox servers with 1 Gbit/s public and 10 Gbit/s private networks;
- Logical clusters: PostgreSQL, MongoDB, Ceph, Zookeeper, Kafka, Druid, Aerospike;
- VMs: 1,000+ KVM-based guests (see the sketch after this list);
- Load balancing: Nginx (OpenResty);
- Big data: 30+ TB of stored data, with ~100 GB of new data arriving daily;
- High load: up to 100K incoming RPS; ~4.5 Gbps of external and ~20 Gbps of internal traffic;
- Apps: multiple clustered Java applications.
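For a concrete sense of that scale, here is the sketch referenced above: a minimal, purely illustrative bash one-liner that counts the KVM guests currently running across a Proxmox cluster. It assumes you are on a cluster node with jq installed and is not a description of ElementalTV's actual tooling.

    #!/usr/bin/env bash
    # Illustrative sketch: count the KVM guests currently running across the Proxmox cluster.
    # pvesh exposes the Proxmox VE API from any cluster node; jq filters the JSON it returns.
    pvesh get /cluster/resources --type vm --output-format json \
      | jq '[.[] | select(.status == "running")] | length'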
WHAT YOU’LL BE UP TO:
- Build and maintain geo-distributed, highly available on-prem clusters;
- Implement multi-data-center operation and fault tolerance across clusters;
- Optimize physical and logical cluster performance continuously;
- Support and develop monitoring systems (Prometheus/VictoriaMetrics, Grafana; see the sketch after this list);
- Fine-tune PostgreSQL, Kafka, Druid, Aerospike;
- Evolve the CI/CD pipeline for the engineering team;
- Work on other tasks assigned by management.
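The sketch referenced in the monitoring item above is, likewise, a minimal illustration rather than production tooling: assuming a Prometheus/VictoriaMetrics instance reachable at a hypothetical PROM_URL and a hypothetical job="node" scrape label, it lists the targets that are currently down via the Prometheus-compatible HTTP API.

    #!/usr/bin/env bash
    # Illustrative sketch: list unreachable node_exporter targets through the
    # Prometheus-compatible HTTP API (served by both Prometheus and VictoriaMetrics).
    set -euo pipefail
    PROM_URL="${PROM_URL:-http://victoria-metrics.internal:8428}"   # hypothetical endpoint
    curl -sf "${PROM_URL}/api/v1/query" --data-urlencode 'query=up{job="node"} == 0' \
      | jq -r '.data.result[].metric.instance'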
WHAT YOU’LL NEED TO HAVE:
- 5+ years of experience in a DevOps role;
- Proactive problem-solving approach;
- Proficiency in Linux (Debian, Ubuntu), Linux networking (firewalls, VPN, routing, load balancing), and shell scripting (bash);
- Practical experience in building and maintaining distributed bare-metal Kubernetes clusters with Cluster API and tools like Tinkerbell or MAAS;
- Experience with Ansible;
- Monitoring expertise with Grafana, Prometheus/VictoriaMetrics.
WOULD BE A PLUS:
- Experience with Proxmox clusters or other Corosync-based solutions;
- Knowledge of Ceph storage;
- Experience working with OpenResty;
- Hands-on experience with a real-time analytics stack, such as Kafka + Druid;
- Experience with any of PostgreSQL, Kafka Connect, Aerospike, Zookeeper, MongoDB;
- Understanding of Java-based projects;
- Familiarity with Jenkins;
- Understanding of high-load big data environments.
INTERVIEW STEPS:
- HR interview;
- Technical interview;
- Interview with a Product Owner and the CTO.
APPLY NOW