In this role, you'll work in one of IBM's Consulting Client Innovation Centers (Delivery Centers), where they deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Their delivery centers offer clients locally based skills and technical expertise to drive innovation and adoption of new technology. A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe.
You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by their strategic partner ecosystem and their robust technology platforms across the IBM portfolio, including Software and Red Hat.
Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Their culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
Name of the Organization: IBM
Requisition ID: 737679BR
Position: Data Engineer: Data Platforms-AWS
Location: Chennai
Salary: As per company norms
Educational Qualifications:
- Bachelor's Degree
Preferred Education:
- Master's Degree
Required Technical and Professional Expertise
- Developed PySpark code for AWS Glue jobs and for EMR. Worked on scalable distributed data systems using the Hadoop ecosystem on AWS EMR and the MapR distribution.
- Developed Python and PySpark programs for data analysis. Good working experience using Python to develop a custom framework for generating rules (like a rules engine).
- Developed Hadoop streaming jobs using Python to integrate applications with Python API support.
- Developed Python code to gather data from HBase and designed the solution for implementation in PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and HiveContext objects to perform read/write operations.
- Rewrote some Hive queries in Spark SQL to reduce overall batch time.
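The "custom framework for generating rules" mentioned above can be sketched in plain Python. This is a minimal, hypothetical illustration of the rules-engine pattern; all class, field, and rule names here are assumptions for the example, not IBM's implementation:

```python
# Minimal sketch of a rule-generation framework ("rules engine" style).
# All names are hypothetical illustrations, not a real IBM framework.

from dataclasses import dataclass
from typing import Any, Callable, Dict, List


@dataclass
class Rule:
    """A named predicate plus the action applied when the predicate matches."""
    name: str
    predicate: Callable[[Dict[str, Any]], bool]
    action: Callable[[Dict[str, Any]], Dict[str, Any]]


class RulesEngine:
    """Applies each registered rule, in order, to a data record."""

    def __init__(self) -> None:
        self.rules: List[Rule] = []

    def register(self, rule: Rule) -> None:
        self.rules.append(rule)

    def apply(self, record: Dict[str, Any]) -> Dict[str, Any]:
        for rule in self.rules:
            if rule.predicate(record):
                record = rule.action(record)
        return record


# Example: flag records whose amount exceeds a threshold.
engine = RulesEngine()
engine.register(Rule(
    name="high_value",
    predicate=lambda r: r.get("amount", 0) > 1000,
    action=lambda r: {**r, "flagged": True},
))

result = engine.apply({"id": 7, "amount": 2500})
print(result)  # {'id': 7, 'amount': 2500, 'flagged': True}
```

In a PySpark setting, the same rule definitions could drive DataFrame transformations (e.g. each rule mapping to a `filter`/`withColumn` step) rather than mutating plain dictionaries.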
Preferred Technical and Professional Experience
- Understanding of DevOps.
- Experience in building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Job Description:
Your Role and Responsibilities:
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for Source to Target and implementing solutions that tackle the client's needs.
Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.
Apply Link – Click Here