Work Location: Hyderabad
Mode: Full Time
About the company:
At Techsophy, we believe that technology has the power to elevate lives. We’re not just building solutions; we’re building a future where everyone has the tools to thrive in four crucial dimensions of well-being:
Physical Health: Offering accessible, high-quality healthcare that reaches everyone, everywhere, ensuring no one is left behind.
Financial Health: Providing financial security and insurance support, creating opportunities for everyone to build a stable, prosperous future.
Mental Health: Delivering compassionate mental and emotional support to individuals and organizations, fostering resilience and well-being at every level.
Cyber Health: Protecting the digital world, making sure every person and organization remains safe, secure, and free from cyber threats.
We build engineers with entrepreneurial and leadership mindsets who can create highly impactful products and solutions using technology to deliver immense value. For more details, please visit – https://www.techsophy.com/
Project:
We are engineering partners for the Client’s development teams in India and the US. Our engineers work with the Client’s engineering team to build highly scalable, high-performing applications on top of the Client platform, in addition to building platform components.
The Client’s team is developing tools to analyse, visualise, process, manage, and curate data at large scale. We are looking for a Hadoop Engineer to join the team building massive, scalable, distributed systems that process geospatial data at scale to power our products. You will be supporting a big data environment with 10K+ nodes and data running into several petabytes.
Requirements:
- Experience range: 6 to 9 Years
- Well versed in AWS EMR and other AWS services and dashboards
- Preferred: AWS certification for EMR cluster management
- Should be strong in Spark troubleshooting
- Responsible for maintaining large-scale (1000+ node) production Hadoop clusters supporting MR and Spark.
- Strong knowledge of Kerberos for Hadoop operations and for debugging Kerberos-related issues in Hadoop
- Point of contact for Hadoop-related issues coming from application teams and internal clusters
- Improve scalability, service reliability, capacity, and performance of the cluster and of the applications running in the cluster.
- Triage production issues with other operational teams as they occur.
- Conduct ongoing maintenance across our large-scale deployments around the world
- Write automation code for managing large Big Data clusters
- Hands-on experience troubleshooting incidents: formulating theories, testing hypotheses, and narrowing down possibilities to find the root cause.
- Deep understanding of the Hadoop ecosystem, including Hive, MR, Spark, and Zeppelin
- Understanding of and hands-on experience with AWS Big Data is a plus
Scope of Work:
- Sr. Hadoop Engineers will collaborate and communicate cross-functionally with members of other teams within the Client (e.g. developers, leads, UX designers, QA, program managers, architects) to complete development and integration work.