Associate Data Engineer Job Opening in Bangalore 2025!
Morgan Stanley has announced a job vacancy for the role of Associate Data Engineer. The place of posting is Bangalore. Candidates who have completed a Graduate / Engineering / Post Graduate program, whether freshers or experienced, are eligible to apply. More information regarding qualifications, the job description, and roles & responsibilities is provided below.
Company Overview
| Name of the Company | Morgan Stanley |
| Required Qualifications | Graduate / Post Graduate |
| Skills | Apache Hadoop / Apache Spark / Unix Shell scripting / Python / SQL |
| Category | Data and Analytics |
| Work Type | Onsite |
This position is for a Big Data Engineer within the Morgan Stanley Wealth Management – Framework CoE team at the Mumbai or Bengaluru offices. The CoE team is responsible for defining and governing enterprise data platforms, ensuring scalable and secure solutions supported by cloud data engineering services that enhance platform reliability. They are seeking professionals with a strong sense of ownership and the ability to drive innovative solutions. The role involves automating existing processes, introducing new ideas, coding, participating in architecture discussions, and performing code reviews and testing. The ideal candidate will have a passion for building robust systems, contributing to big data pipeline optimization, and leveraging distributed data processing frameworks to support mission-critical analytics. Candidates should be self-motivated team players capable of working independently with minimal supervision.
Job Details
Θ Position: Associate Data Engineer
Θ Job Location: Bangalore
Θ Salary: As per company standards
Θ Job Type: Full Time
Θ Requisition ID: PT-JR024702
Roles and Responsibilities:
- Design and develop new automation frameworks for ETL processing (see the illustrative sketch after this list).
- Support the existing framework and act as the technical point of contact for related teams.
- Enhance the current ETL automation framework based on user requirements.
- Perform performance tuning of Spark and Snowflake ETL jobs.
- Conduct POCs and suitability analysis for cloud migration.
- Drive process optimization through automation and utility development.
- Collaborate on new features and troubleshoot issues.
- Support batch-related issues.
- Assist application teams by addressing technical queries.
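For candidates, especially freshers, wondering what day-to-day work on such a team can look like, below is a minimal PySpark ETL sketch. It is illustrative only: the job name, paths, column names, and tuning values are hypothetical assumptions, not details from the job posting.

```python
# Minimal PySpark ETL sketch: extract from HDFS, transform, load partitioned
# Parquet. All paths, columns, and config values are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("daily_trades_etl")                    # hypothetical job name
    .config("spark.sql.shuffle.partitions", "200")  # a common tuning knob
    .getOrCreate()
)

# Extract: read raw records from HDFS (illustrative path)
raw = spark.read.parquet("hdfs:///data/raw/trades/2025-01-01")

# Transform: keep settled trades and aggregate per account and symbol
daily = (
    raw.filter(F.col("status") == "SETTLED")
       .groupBy("account_id", "symbol")
       .agg(
           F.sum("quantity").alias("total_qty"),
           F.avg("price").alias("avg_price"),
       )
)

# Load: write partitioned Parquet for downstream Hive/Impala queries
(daily.write
      .mode("overwrite")
      .partitionBy("symbol")
      .parquet("hdfs:///data/curated/daily_trades"))

spark.stop()
```

Adjusting the shuffle-partition count and partitioning the output are small examples of the kind of levers that the "performance tuning of Spark ETL jobs" responsibility typically involves.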
Required Skills
- Strong knowledge of UNIX Shell and Python scripting.
- Expertise in Spark.
- Strong SQL knowledge.
- Hands-on understanding of HDFS, Hive, Impala, and Spark operations.
- Strong logical reasoning and problem-solving capability.
- Working knowledge of GitHub, DevOps practices, CI/CD, and enterprise code-management tools.
- Excellent collaboration, communication, and teamwork skills.
- Ability to create a positive environment of shared success.
- Strong ability to prioritize, execute tasks, and resolve issues independently.
- Good to have: Experience with Snowflake and any data integration tool such as Informatica Cloud.
Primary Skills:
- Apache Hadoop
- Apache Spark
- Unix Shell scripting
- Python
- SQL
Good to Have Skills:
- Snowflake / Azure / AWS (any cloud)
- IDMC / Any ETL Tool
How to Apply
Apply Link – Click Here
For Regular Updates Join our WhatsApp – Click Here
For Regular Updates Join our Telegram – Click Here
Disclaimer:
The information provided on this page is intended solely for informational purposes for Students, Freshers, and Experienced candidates. All recruitment details are sourced directly from official company websites and pages. Latest MNC Jobs does not guarantee job placement, and the recruitment process will follow the company’s official rules and Human Resource guidelines. Latest MNC Jobs does not charge any fees for sharing job information and strongly advises candidates not to make any payments for job opportunities.
