Data Engineer – AWS Job Openings in India 2026
NTT Data has announced a job vacancy for the post of Data Engineer - AWS. The position is Remote (Work from Home), with the hiring office in Bangalore. Graduates, Engineering graduates, and Post Graduates, whether freshers or experienced, are eligible to apply. More details about qualifications, the job description, and roles & responsibilities follow.
Company Overview

| Name of the Company | NTT Data |
| Required Qualifications | Graduate |
| Skills | Amazon S3, Redshift, IAM, CloudWatch |
| Category | Technology |
| Work Type | Remote |
NTT Data is looking for a skilled Data Engineer to design, build, and maintain scalable, reliable data pipelines and platforms that support analytics, reporting, and operational decision-making. The role's primary focus is enabling an end-to-end data ingestion and processing pipeline: extracting data (preferably from Salesforce), landing it in Amazon S3, and transforming/loading it into Amazon Redshift for analytics-ready consumption. The engineer will also work on SQL modernization (including Oracle SQL development and conversion/optimization for Redshift), data quality, governance, monitoring, and performance tuning.
Job Details
Θ Position: Data Engineer - AWS
Θ Job Location: Remote (Work from Home) – Hiring Office in Bangalore
Θ Salary: As per company standards
Θ Job Type: Full Time
Θ Requisition ID: 366941

Roles and Responsibilities:
Salesforce to AWS Data Pipelines (Core)
- Build and maintain pipelines that extract data from Salesforce (API-based or connector-based), land data in Amazon S3, and load into Amazon Redshift (see the sketch after this list)
- Implement incremental loads / CDC patterns where applicable; manage full loads and historical backfills as needed
- Establish scheduling and orchestration for daily/near-real-time jobs with reliability and retry mechanisms
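To make the core flow concrete, here is a minimal sketch of an incremental Salesforce → S3 → Redshift load. It assumes the simple_salesforce, boto3, and redshift_connector libraries, and every name in it (bucket, IAM role, cluster endpoint, staging table) is an illustrative placeholder rather than anything from the posting; in practice a job like this would run under Glue, Lambda, or Step Functions with proper secrets management.

```python
"""Minimal sketch: incremental Salesforce -> S3 -> Redshift load.

All names below (bucket, role, cluster, table) are illustrative placeholders.
"""
import csv
import io
from datetime import datetime, timezone

import boto3
import redshift_connector                  # AWS's Python driver for Redshift
from simple_salesforce import Salesforce   # third-party Salesforce API client

BUCKET = "example-data-lake"                                    # placeholder
IAM_ROLE = "arn:aws:iam::123456789012:role/redshift-copy-role"  # placeholder
FIELDS = ["Id", "Name", "Industry", "LastModifiedDate"]

def extract_accounts(sf: Salesforce, watermark: str) -> list:
    """Incremental pull: only rows modified since the last successful run."""
    soql = ("SELECT Id, Name, Industry, LastModifiedDate FROM Account "
            f"WHERE LastModifiedDate > {watermark}")  # SOQL datetimes are unquoted
    return sf.query_all(soql)["records"]

def land_in_s3(rows: list, key: str) -> None:
    """Land the extracted batch in S3 as a CSV object."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for row in rows:
        writer.writerow({f: row.get(f) for f in FIELDS})
    boto3.client("s3").put_object(Bucket=BUCKET, Key=key, Body=buf.getvalue())

def copy_into_redshift(key: str) -> None:
    """Load the landed file into a Redshift staging table via COPY."""
    conn = redshift_connector.connect(
        host="example-cluster.abc.us-east-1.redshift.amazonaws.com",  # placeholder
        database="analytics", user="etl_user", password="...")
    cur = conn.cursor()
    cur.execute(f"COPY analytics.stg_account FROM 's3://{BUCKET}/{key}' "
                f"IAM_ROLE '{IAM_ROLE}' FORMAT AS CSV IGNOREHEADER 1")
    conn.commit()

if __name__ == "__main__":
    sf = Salesforce(username="...", password="...", security_token="...")
    rows = extract_accounts(sf, watermark="2026-01-01T00:00:00Z")
    key = f"salesforce/account/{datetime.now(timezone.utc):%Y/%m/%d}/accounts.csv"
    land_in_s3(rows, key)
    copy_into_redshift(key)
```

The watermark on LastModifiedDate is what makes the load incremental (a simple CDC pattern): each run pulls only rows changed since the previous run, while a full load or historical backfill would simply widen the window.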
SQL Engineering (Oracle + Redshift)
- Design, develop, and optimize complex SQL in Oracle
- Analyze and convert Oracle SQL to Redshift-compatible SQL, optimizing for Redshift performance and cost
- Tune Redshift queries using best practices such as sort keys, distribution styles, and query patterns (see the sketch after this list)
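As a small illustration of the conversion work, the pair of queries below (over an invented accounts table) shows typical Oracle-to-Redshift rewrites: SYSDATE becomes GETDATE()/DATEADD, and ROWNUM becomes LIMIT. This is a sketch of the pattern, not NTT Data's actual conversion approach.

```python
"""Illustrative Oracle -> Redshift SQL rewrite (invented schema)."""

# Original Oracle SQL: DATE arithmetic on SYSDATE, ROWNUM to cap rows.
ORACLE_SQL = """
SELECT account_id, name, created_date
FROM accounts
WHERE created_date > SYSDATE - 30
  AND ROWNUM <= 100
"""

# Redshift-compatible rewrite: DATEADD on GETDATE(), LIMIT instead of ROWNUM
# (adding ORDER BY makes the capped result deterministic).
REDSHIFT_SQL = """
SELECT account_id, name, created_date
FROM accounts
WHERE created_date > DATEADD(day, -30, GETDATE())
ORDER BY created_date DESC
LIMIT 100
"""
```

Sort keys and distribution styles are table-level design choices rather than query rewrites; the modeling sketch further down shows them applied.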
ETL/ELT, Data Modeling, and Warehousing
- Design and maintain ETL/ELT jobs, transformations, and reusable frameworks
- Build and optimize data models for warehousing/lakehouse patterns (facts/dimensions, curated layers), as sketched below
- Support both batch and (where applicable) near-real-time processing patterns
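One hedged example of such a model, using an invented dimension/fact pair: DISTKEY on the shared join column keeps matching rows on the same node (avoiding network shuffles during joins), and SORTKEY on the date column speeds up the range filters analytics queries usually apply.

```python
"""Sketch: star-schema DDL with Redshift distribution and sort keys.

The schema is invented for illustration; PRIMARY KEY and REFERENCES
constraints in Redshift are informational only and are not enforced.
"""

STAR_SCHEMA_DDL = """
CREATE TABLE dim_account (
    account_key BIGINT PRIMARY KEY,
    account_id  VARCHAR(18),            -- Salesforce 18-character Id
    name        VARCHAR(255),
    industry    VARCHAR(100)
)
DISTSTYLE KEY
DISTKEY (account_key);

CREATE TABLE fact_opportunity (
    opportunity_key BIGINT,
    account_key     BIGINT REFERENCES dim_account (account_key),
    close_date      DATE,
    amount          DECIMAL(18, 2)
)
DISTKEY (account_key)                   -- joins to dim_account stay node-local
SORTKEY (close_date);                   -- typical range-filter column
"""
```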
Data Quality, Governance, and Compliance
- Implement data quality checks (completeness, accuracy, consistency), reconciliation, and validation rules (see the sketch after this list)
- Ensure data integrity, metadata documentation, lineage, and governance practices
- Apply security and compliance standards (GDPR/regulatory needs where applicable)
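A minimal sketch of what such checks might look like after a load, reusing the invented staging table from the pipeline sketch above and a DB-API connection (such as redshift_connector's); real implementations often use a framework, but the underlying rules are simple queries like these.

```python
"""Sketch: post-load reconciliation and validation (invented table names)."""

def run_quality_checks(conn, extracted_count: int) -> list:
    """Return human-readable failures; an empty list means all checks passed."""
    failures = []
    cur = conn.cursor()

    # Completeness: every row extracted from Salesforce should have loaded.
    cur.execute("SELECT COUNT(*) FROM analytics.stg_account")
    loaded_count = cur.fetchone()[0]
    if loaded_count != extracted_count:
        failures.append(
            f"row count mismatch: extracted {extracted_count}, loaded {loaded_count}")

    # Consistency: the business key must never be null.
    cur.execute("SELECT COUNT(*) FROM analytics.stg_account WHERE id IS NULL")
    null_ids = cur.fetchone()[0]
    if null_ids:
        failures.append(f"{null_ids} rows with a NULL Salesforce Id")

    # Accuracy: no records dated in the future.
    cur.execute("SELECT COUNT(*) FROM analytics.stg_account "
                "WHERE lastmodifieddate > GETDATE()")
    future_rows = cur.fetchone()[0]
    if future_rows:
        failures.append(f"{future_rows} rows with a future LastModifiedDate")

    return failures
```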
Operations, Monitoring, and Reliability
- Monitor pipelines and infrastructure using AWS monitoring tools; troubleshoot performance and reliability issues
- Improve pipeline resilience through alerting, logging, retries, and error handling (see the sketch after this list)
- Contribute to modernization and cloud migration initiatives and automation (DataOps/CI-CD where relevant)
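As an illustration of the retry-and-alerting pattern, the sketch below wraps any pipeline step in exponential backoff and, on final failure, publishes a custom CloudWatch metric (the DataPipeline namespace and StepFailure metric name are invented) that a CloudWatch alarm could watch.

```python
"""Sketch: exponential-backoff retries plus a custom CloudWatch failure metric."""
import time

import boto3

cloudwatch = boto3.client("cloudwatch")

def run_with_retries(step, step_name: str, max_attempts: int = 3) -> None:
    """Run a zero-argument pipeline step, retrying before alerting and re-raising."""
    for attempt in range(1, max_attempts + 1):
        try:
            step()
            return
        except Exception:
            if attempt == max_attempts:
                # Emit a failure metric; an alarm on this metric can page the team.
                cloudwatch.put_metric_data(
                    Namespace="DataPipeline",          # invented namespace
                    MetricData=[{
                        "MetricName": "StepFailure",   # invented metric name
                        "Dimensions": [{"Name": "Step", "Value": step_name}],
                        "Value": 1.0,
                        "Unit": "Count",
                    }],
                )
                raise
            time.sleep(2 ** attempt)   # back off 2s, 4s, ... between attempts
```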
Cross-Functional Collaboration
- Partner with analytics/reporting and business stakeholders to gather requirements and deliver reliable datasets
- Work effectively with cross-functional teams and provide clear documentation of pipelines and datasets
Required Skills & Qualifications:
- Technology Stack (Expected Exposure)
- Primary (Must-Have)
- AWS: Amazon S3, Redshift, IAM, CloudWatch
- Salesforce Integration: Salesforce APIs / connectors (extraction & ingestion patterns)
- Programming & Querying: Python, SQL
- Oracle: Complex SQL, stored procedures (as needed), performance tuning
- Orchestration/Scheduling: AWS Glue, Lambda, Step Functions, cron-based scheduling (or equivalent)
- Data Engineering & Platform (Good-to-Have / Nice-to-Have)
- ETL tools: Informatica, Talend, Azure Data Factory
- Warehousing: Snowflake, Azure Synapse (plus Redshift as primary)
- Big data: Spark, Hadoop
- Streaming & APIs: Kafka, Event Hub, REST APIs
- DevOps/DataOps: CI/CD for data pipelines, infrastructure-as-code exposure
- Minimum Skills Required:
- Strong hands-on experience building ETL/ELT pipelines in cloud environments
- Proven experience integrating Salesforce data into a data platform (extraction, S3 landing, transformation)
How to Apply
Apply Link – Click Here
For Regular Updates Join our WhatsApp – Click Here
For Regular Updates Join our Telegram – Click Here
Disclaimer:
The information provided on this page is intended solely for informational purposes for students, freshers, and experienced candidates. All recruitment details are sourced directly from the official website and pages of the respective company. Latest MNC Jobs does not guarantee job placement, and the recruitment process will follow the company's official rules and Human Resource guidelines. Latest MNC Jobs does not charge any fees for sharing job information, and strongly advises candidates not to make any payments for any job opportunities.