Job Description: SaaS - Hadoop Support
The Hadoop SRE will be responsible for L2 and L3 production support of the Securonix cloud platform, as well as for designing optimizations, cloud-native enhancements, and cost controls.
Knowledge of Microsoft Azure or Google Cloud is required; experience with AWS is desired, along with the Cloudera and EMR distributions of Hadoop.
Essential Functions of the Job:
3+ years of overall Hadoop administration experience, with a minimum of 1 year of experience on Azure infrastructure, AWS, or GCP.
Experience with Hadoop and its ecosystem tools, such as HDFS, YARN, HBase, Solr, and Kafka.
Experience with AWS services such as EC2, VPC, S3, RDS, ElastiCache, and Athena.
Extensive experience provisioning and configuring resources, storage accounts, resource groups, and security ports.
Hands-on experience with Linux administration and troubleshooting (CentOS 7.x, Red Hat 7.x).
Experience with HIDS, file integrity monitoring, and antivirus configuration on Linux.
Adding/removing servers in an availability set or load balancer.
Implement storage encryption, application gateways, local and virtual network gateways, and vendor best practices.
Experience with automation tools such as Ansible, Chef, or Puppet, as well as with third-party containerization tools such as Docker and Kubernetes.
Ability to develop deep knowledge of our complex applications.
Assist in the roll-out and deployment of new product features and installations on new cloud infrastructure to support our rapid iteration and constant growth.
Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment.
Deploy, operate, maintain, secure, and administer solutions that contribute to the operational efficiency, availability, performance, and visibility of our customers' infrastructure and Hadoop platform services, across multiple vendors (e.g., Cloudera, Hortonworks, EMR).
Gather information and provide performance and root-cause analytics and remediation planning for faults, errors, configuration warnings, and bottlenecks within our customers' infrastructure, applications, and Hadoop ecosystems.
Deliver well-constructed, explanatory technical documentation for the architectures we develop, and plan service integration, deployment automation, and configuration management to meet business requirements within the infrastructure and Hadoop ecosystem.
Understand distributed Java container applications and their tuning, monitoring, and management, such as logging configuration, garbage collection and heap size tuning, JMX metric collection, and general parameter-based Java tuning.
Observe and provide feedback on the current state of the client's infrastructure, and identify opportunities to improve resiliency, reduce the occurrence of incidents, and automate repetitive administrative and operational tasks.
Contribute to the development of deployment automation artifacts, such as images, recipes, playbooks, templates, configuration scripts, and other open-source tooling.
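To illustrate the JMX metric collection mentioned among the functions above, a minimal sketch (the class name and output format are illustrative, not part of any Securonix tooling) that reads heap usage through the platform MemoryMXBean:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    public static void main(String[] args) {
        // Obtain the platform MBean that exposes JVM memory metrics.
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();

        // Report current heap consumption; the same values are exposed
        // remotely over JMX for monitoring agents to scrape.
        System.out.printf("heap used: %d MB of %d MB max%n",
                heap.getUsed() / (1024 * 1024),
                heap.getMax() / (1024 * 1024));
    }
}
```

The same MemoryMXBean values can be polled remotely by a monitoring agent once the JVM is started with JMX remote access enabled.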
Knowledge and Skills Requirements:
Strong understanding across cloud and infrastructure components (server, storage, network, data, and applications) to deliver end-to-end cloud infrastructure architectures and designs.
Knowledge of related Cloud technologies (Azure, AWS, GCP)
Passionate, persuasive, articulate cloud professional capable of quickly establishing interest and credibility in how to design, deploy, and operate cloud-based architectures.
Ability to work with team members from around the globe; experience working with offshore models.
Proactive approach to problem solving and identifying improvements.
Must possess strong written and verbal communication skills and be capable of understanding, documenting, communicating, and presenting technical issues.