Securonix provides the Next Generation Security Information and Event Management (SIEM) solution. As a recognized leader in the SIEM industry, Securonix helps some of the largest organizations globally detect sophisticated cyberattacks and respond to them within minutes. With the Securonix SNYPR platform, organizations can collect billions of events each day and analyze them in near real time to detect advanced persistent threats (APTs), insider threats, privileged account misuse, and online fraud.
Securonix pioneered the User and Entity Behavior Analytics (UEBA) market and holds patents in the use of behavioral algorithms to detect malicious activity. The Securonix SNYPR platform is built on big data Hadoop technologies and is highly scalable. Our platform is used by some of the largest organizations in the financial, healthcare, pharmaceutical, manufacturing, and federal sectors.
We are seeking a Big Data Architect to join our team. We are passionate about domain-driven design and distributed systems development. The candidate will help our team evolve our large-scale systems into more scalable and maintainable distributed systems.
- Architect and design the next generation of scalability solutions for Securonix’s NextGen SIEM 2.0 Technology
- Partner with multiple internal and external teams to design, develop, and deliver scalable solutions
- Plan, triage, and prioritize work across multiple projects and clients
- Research and implement improvements in code design, and drive the adoption of new technologies and skills.
- Design, develop, and maintain product performance and scaling in Java for petabyte-scale indexed data with fast search.
- Work with big data technologies including Kafka, Spark, Solr, HDFS, HBase, Redis, and MySQL.
- Advance Securonix’s core analytics engine in conjunction with the data science teams to provide real-time analytics on petabytes of streaming data.
- Share your knowledge with teammates and help mentor and guide clients
- 8+ years of experience with the distributed design and implementation of highly scalable streaming data processing platforms.
- 5+ years of experience building web-based software systems
- Experience designing and developing distributed systems and composite UI applications
- Experience with Kubernetes, Docker, Kubeless, queueing, autoscaling, and stream processing.
- Strong experience with Kafka, Spark, Kubernetes, the Hadoop ecosystem, and Solr or Elasticsearch.
- Strong communication skills
- Experience in performance analysis and optimization.
- Experience setting up, troubleshooting, and tuning Hadoop and Kubernetes components for performance.
- Hands-on experience with scaling and performance optimization of large-scale streaming data systems.