
Job Description

Description

Securonix provides the next-generation Security Information and Event Management (SIEM) solution. As a recognized leader in the SIEM industry, Securonix helps some of the largest organizations globally detect sophisticated cyberattacks and respond to them within minutes. With the Securonix SNYPR platform, organizations can collect billions of events each day and analyze them in near real time to detect advanced persistent threats (APTs), insider threats, privileged account misuse, and online fraud.

Securonix pioneered the User and Entity Behavior Analytics (UEBA) market and holds patents in the use of behavioral algorithms to detect malicious activities. The Securonix SNYPR platform is built on big data Hadoop technologies and is infinitely scalable. Our platform is used by some of the largest organizations in the financial, healthcare, pharmaceutical, manufacturing, and federal sectors. 

The Principal Hadoop Architect will be responsible for sizing, HA and DR setup, configuration, tuning, and scale-out of Hadoop tools and platforms; troubleshooting Sev-1 issues and other highly visible (up to CEO level) incidents; and designing optimizations, cloud-native enhancements, and cost controls. Expertise in at least one of AWS, Microsoft Azure, or Google Cloud is required, and experience with Kafka, Cloudera HBase and YARN, EMR HBase, EMR YARN, and Solr is desirable. Must know at least two of the following tools: Kafka, Solr, HBase, EMR YARN, Cloudera YARN.

15+ years of overall experience, including 5+ years of Big Data experience and a minimum of 3 years on cloud platforms such as AWS, Azure, or GCP. The candidate must have architectural acumen but should also have a passion for SRE and operations work, because much of Hadoop architecture emerges from the problems encountered while managing the environment operationally.


Experience with Hadoop and its ecosystem tools such as HDFS, YARN, HBase, Solr, and Kafka.

Experience with AWS services such as EC2, VPC, S3, RDS, ElastiCache, and Athena.

Extensive experience in provisioning and configuring resources, storage accounts, resource groups, and security ports.

Hands-on experience with Linux administration and troubleshooting (CentOS 7.x, Red Hat 7.x).


Adding/removing servers in an availability set or load balancer.

Implement storage encryption, application gateways, local and virtual network gateways, and vendor best practices.


Ability to develop deep knowledge of our complex applications.

Assist in the roll-out and deployment of new product features and installations to new cloud infrastructure in support of our rapid iteration and constant growth.

Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment.

Deploy, operate, maintain, secure, and administer solutions that contribute to the operational efficiency, availability, performance, and visibility of our customers' infrastructure and Hadoop platform services across multiple vendors (e.g., Cloudera, Hortonworks, EMR, Databricks, HDInsight).

Gather information and provide performance and root-cause analysis and remediation planning for faults, errors, configuration warnings, and bottlenecks within our Hadoop ecosystems.

Deliver well-constructed, explanatory technical documentation for the architectures we develop, and plan service integration, deployment automation, and configuration management against business requirements within the infrastructure and Hadoop ecosystem.

Knowledge and Skills Requirements:

Strong understanding of Cloud and infrastructure components (server, storage, network, data, and applications) to deliver end-to-end Cloud infrastructure architectures and designs.

Knowledge of related Cloud technologies (Azure, AWS, GCP).

Passionate, persuasive, articulate Cloud professional capable of quickly establishing interest and credibility in how to design, deploy, and operate cloud-based architectures.

Ability to work with team members from around the globe; experience working with offshore delivery models.

Strong knowledge of auto-scaling and auto-healing for Big Data and Hadoop components.

Proactive approach to problem solving and identifying improvements.

Must possess strong written and verbal communication skills and be capable of understanding, documenting, communicating, and presenting technical issues.
