Position: Sr. Engineer/Assistant Manager - Linux/Kafka
Qualifications:
Any bachelor's degree or equivalent education; a technical certification in the Linux domain will be an added advantage.
Minimum of 6 years of operations experience in a 24x7, high-availability Linux production environment.
Good Understanding of Operating Systems:
Administer and tune Linux systems (Red Hat, SUSE, Ubuntu) in a 24x7 production environment.
Perform kernel tuning, system hardening, and firewall configuration.
Manage file systems (EXT3, XFS, NFS), LVM, and patching
Demonstrated ability to implement and maintain software load balancers such as HAProxy and Keepalived for traffic distribution and high availability
Troubleshoot system issues using tools such as iostat, vmstat, netstat, sar, and strace
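As an illustration of the scripting side of this troubleshooting work, the sketch below samples CPU I/O wait from vmstat in Python; the threshold and column handling are illustrative assumptions, not part of the requirement.

    # Minimal health-check sketch: sample vmstat and flag high I/O wait.
    # Threshold and parsing details are illustrative assumptions.
    import subprocess

    def run(cmd):
        """Run a command and return its stdout as text."""
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    def cpu_iowait_pct():
        """Read the 'wa' column from the last sample line of 'vmstat 1 2'."""
        lines = run(["vmstat", "1", "2"]).strip().splitlines()
        header = lines[1].split()      # column-name row
        values = lines[-1].split()     # most recent sample
        return int(values[header.index("wa")])

    if __name__ == "__main__":
        wa = cpu_iowait_pct()
        print(f"CPU iowait: {wa}%")
        if wa > 20:                    # illustrative threshold
            print("High I/O wait - inspect disks, e.g. with 'iostat -x 1 3'")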
Expertise in Kafka & Streaming Systems:
Design, deploy, and manage Kafka clusters for real-time data ingestion and analytics.
Integrate Kafka with enterprise systems and data pipelines.
Monitor and optimize Kafka performance for low latency and high throughput.
Implement Kafka Connect, Kafka Streams, and Schema Registry.
Implement Kafka for data streaming and real-time analytics.
Conduct performance testing and optimization to ensure high throughput and low latency.
Develop monitoring and alerting solutions to ensure the health of the Kafka cluster.
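For illustration only, a minimal Kafka producer sketch in Python, assuming the confluent-kafka client and a broker at localhost:9092 (both assumptions); a production setup would also cover TLS/SASL, schema management, and delivery monitoring.

    # Minimal Kafka producer sketch using the confluent-kafka client (assumed available).
    # Broker address, topic name, and settings are illustrative only.
    from confluent_kafka import Producer

    producer = Producer({
        "bootstrap.servers": "localhost:9092",  # assumed broker endpoint
        "acks": "all",                          # favour durability over latency
        "linger.ms": 5,                         # small batching window for throughput
    })

    def on_delivery(err, msg):
        """Report the delivery result for each produced record."""
        if err is not None:
            print(f"Delivery failed: {err}")
        else:
            print(f"Delivered to {msg.topic()}[{msg.partition()}]@{msg.offset()}")

    for i in range(10):
        producer.produce("events", key=str(i), value=f"event-{i}", callback=on_delivery)

    producer.flush()  # block until all outstanding records are delivered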
Expertise in Elasticsearch Administration:
Deploy, configure, and maintain Elasticsearch clusters in production and development environments.
Monitor cluster health, performance, and availability using tools like Kibana, Grafana, and Prometheus.
Optimize indexing, querying, and storage strategies for performance and scalability.
Manage Elasticsearch upgrades, backups, and disaster recovery plans.
Implement and manage index lifecycle policies and shard allocation strategies
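For illustration only, a minimal Python sketch of the kind of cluster health and shard allocation check this role involves, assuming the elasticsearch-py 8.x client and a cluster at localhost:9200 (both assumptions).

    # Minimal cluster-health sketch using elasticsearch-py (8.x assumed).
    # Endpoint and printed fields are illustrative assumptions.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # assumed endpoint

    health = es.cluster.health()
    print(f"status={health['status']} "
          f"unassigned_shards={health['unassigned_shards']} "
          f"active_shards_percent={health['active_shards_percent_as_number']}")

    # Per-node shard counts and disk usage, useful when reviewing shard allocation.
    for row in es.cat.allocation(format="json"):
        print(row["node"], row["shards"], row.get("disk.percent"))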
Good understanding of CI/CD & Automation:
Build and maintain CI/CD pipelines using Jenkins, GitLab CI, or similar tools.
Automate infrastructure provisioning and configuration using Ansible, Terraform, Puppet, or similar tools
Implement GitOps workflows and Infrastructure as Code (IaC) practices
Integrate Elasticsearch into CI/CD pipelines for log aggregation and monitoring
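For illustration only, a small Python step of the kind a pipeline stage might run to push build metadata into Elasticsearch for log aggregation; the index name, fields, and GitLab CI-style variables are assumptions, not a prescribed integration.

    # Illustrative pipeline step: index a build record into Elasticsearch.
    # Index name, fields, and endpoint are assumptions; elasticsearch-py 8.x assumed.
    import datetime
    import os
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # assumed endpoint

    build_record = {
        "pipeline": os.environ.get("CI_PIPELINE_ID", "local"),  # GitLab CI-style variable
        "job": os.environ.get("CI_JOB_NAME", "build"),
        "status": "success",
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

    # Daily indices keep retention simple and pair well with lifecycle policies.
    index_name = f"ci-logs-{datetime.date.today():%Y.%m.%d}"
    es.index(index=index_name, document=build_record)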
Hands-on experience implementing an observability stack:
Set up monitoring and alerting using Prometheus, Grafana, ELK Stack, or Zabbix.
Develop custom scripts for log parsing, system health checks, and alert automation
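For illustration only, a minimal custom-exporter sketch using the prometheus_client Python library (assumed available); the metric name, port, and 15-second refresh are illustrative choices.

    # Minimal custom exporter: expose root-filesystem free space to Prometheus.
    # Metric name, port, and interval are illustrative assumptions.
    import shutil
    import time
    from prometheus_client import Gauge, start_http_server

    root_disk_free = Gauge("node_root_disk_free_bytes",
                           "Free bytes on the root filesystem")

    if __name__ == "__main__":
        start_http_server(9101)   # Prometheus scrapes this port (assumed)
        while True:
            root_disk_free.set(shutil.disk_usage("/").free)
            time.sleep(15)        # roughly align with the scrape interval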
Familiarity with cloud platforms (AWS, Azure, GCP) and services like EC2, S3, IAM, and VPC.
Familiarity with containerization (Docker) and orchestration (Kubernetes, Helm).
Awareness of Security & Compliance:
Role-based access control (RBAC), TLS encryption, and audit logging
Security best practices including SELinux, firewalls, and auditing
Integration of Elasticsearch with authentication systems (LDAP, SSO, etc.)
Compliance with data protection and operational standards
Must possess significant work experience in production, mission-critical environments.
Red Hat and Certified Kubernetes Administrator (CKA) certifications will be an added advantage.
Upman Placements is a firm of research and recruitment specialists. The company started life in 2002 in India as a recruitment specialist firm and has built a stellar reputation for high ethical standards, a specialized focus, and unparalleled service. It has emerged as one of the few truly global specialized staffing firms focusing exclusively on professional positions, serving clients spread across more than 20 countries with a sizable presence across industry levels and a footprint in the competitive markets of India, Europe, the United Kingdom, the Middle East, Singapore, Malaysia, the Philippines, Indonesia, Brunei, and East & Central Africa.