#Hello_connections! #Hiring_alert!
Hiring for the below-mentioned positions:
- Data Engineer
- Senior Data Engineer
- Lead Data Engineer
Experience: 5+ years
Location: Bangalore
Interested candidates can share their resume at [email protected]
#Datapipelines #Datamodeling #Snowflake #SQLQueries #DataLakes #DataWarehouse #Azure #Python #APIDevelopment #FiveTran #Coding #Documenting #Testing
Sudipta Malick’s Post
-
Dear Connections,
We are hiring for AD - Project Data Engineer.
Skills: GCP, Data Science, Data Modeling, ETL pipelines, Data Visualization
Location: Bangalore
Experience: 6-10 years
Mode of Work: Hybrid
You can share your resume at [email protected]
#DataEngineer #DataScience #DataModeler #GCP #BigQuery #ETLPipelines #DataVisualization
-
Job Title: Big Data Developer
Experience: 3+ years
Contract: 3 months + extension
Location: Onsite, Bengaluru (Embassy Tech Square, Bangalore, India)
Expected CTC: 100,000 per month
Regards,
Lokesh K.
[email protected]
#hiring #BigDataDeveloperHiring #hiringalert #BigData #DataScience #DataAnalytics #BigDataDeveloper #DataEngineering #Hadoop #Spark #Python #Java #Scala #MachineLearning #ArtificialIntelligence #DataArchitecture #DataMining
-
Lead Data Engineer - Job Description

Job Summary: Looking for a Data Engineer with 5-7 years of Python experience.
Years of experience needed: 5-7 years

Technical Skills:
A Data Engineer with Python experience is responsible for designing, building, and maintaining the infrastructure and systems required for processing and analyzing large datasets. They focus on data pipelines, data integration, and data storage while leveraging Python for automation, data transformation, and machine learning workflows. They collaborate with data scientists, analysts, and other engineers to ensure data availability and accuracy for various use cases.

Key Responsibilities:

1. Data Pipeline Development and Maintenance
- Build Data Pipelines: Design and develop scalable, reliable, and efficient data pipelines to extract, transform, and load (ETL/ELT) data from various sources into data warehouses or data lakes.
- Maintain Data Workflows: Use Python and other tools to automate data ingestion and transformation processes, ensuring they run reliably and on schedule.
- Optimize Pipelines: Continuously monitor and optimize pipelines for performance, scalability, and cost-efficiency, ensuring low latency and high throughput in data processing.

2. Data Integration and Transformation
- Data Integration: Integrate data from multiple structured and unstructured sources (e.g., databases, APIs, logs, third-party platforms) into unified datasets.
- Data Transformation: Use Python to clean, normalize, and transform raw data into usable formats for downstream analytics, reporting, or machine learning models.
- Data Quality Management: Implement data validation rules, error handling, and monitoring to ensure data accuracy, consistency, and reliability across systems.

3. Data Storage and Architecture
- Data Warehousing: Design and maintain robust data storage solutions, including relational databases (e.g., PostgreSQL, MySQL) and cloud-based data warehouses (e.g., AWS Redshift, Google BigQuery, Snowflake).
- Data Lake Management: Set up and manage data lakes (e.g., AWS S3, Azure Data Lake) for handling large volumes of unstructured and semi-structured data.
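The pipeline responsibilities above (extract, transform with validation rules, load) can be sketched in plain Python. This is a minimal illustration, not part of any listed role; the feed contents, field names, and validation rule are assumptions for the example:

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; in practice this would come from a database, API, or log.
RAW_CSV = """user_id,signup_date,country
1,2024-01-05,IN
2,2024-01-06,
3,not-a-date,US
"""

def extract(text):
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: normalize fields and apply a simple validation rule,
    separating clean rows from rejects for downstream monitoring."""
    clean, rejects = [], []
    for row in rows:
        country = (row["country"] or "UNKNOWN").strip().upper()
        date = row["signup_date"].strip()
        # Validation rule: date must look like YYYY-MM-DD.
        if len(date) == 10 and date[4] == "-" and date[7] == "-" and date[:4].isdigit():
            clean.append((int(row["user_id"]), date, country))
        else:
            rejects.append(row)
    return clean, rejects

def load(rows, conn):
    """Load: write validated rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (user_id INTEGER, signup_date TEXT, country TEXT)"
    )
    conn.executemany("INSERT INTO users VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
clean, rejects = transform(extract(RAW_CSV))
load(clean, conn)
print(len(clean), len(rejects))  # 2 valid rows loaded, 1 rejected
```

In a production pipeline the same three stages would run on a scheduler (e.g. Airflow) against real sources, with the reject counts feeding the data-quality monitoring the description calls for.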
-
!!Now Hiring!!
One of our clients is hiring for a Data Engineer. Interested candidates can share their resume via email or WhatsApp.
WhatsApp: 9445911058
Email: [email protected]
#job #career #hiring #jobsearch #jobopening #nowhiring #kolkatajobs #kolkatajobseekers #data #engineer #snowflake #azure #aws #adls #adf #synapse #redshift #emr #glue #sql #python #etlprocesses
-
Chennai - Azure Data Engineer (4+ Yrs)

Data Engineer with Databricks skills:
- Design, build, and maintain efficient, scalable, and reliable data pipelines within the Databricks platform.
- Execute Extract, Load, Transform (ELT) operations using Databricks and dbt Labs tools to streamline data integration processes.
- Collaborate closely with data architects, analysts, and business stakeholders to understand and implement database requirements.
- Develop best practices for data handling, quality, and security within Databricks environments.
- Troubleshoot and optimize complex SQL and Databricks scripts for performance.

Requirements:
- Databricks: strong expertise in data warehousing, data integration, and database management on Databricks.
- dbt Labs (good to have): hands-on experience with dbt (data build tool) for ELT, ensuring data integrity and transforming raw data into actionable insights.
- 4+ years of experience in data engineering, with a primary focus on Databricks.
- Proficiency in SQL, data transformation, and data pipeline optimization.

Apply: WhatsApp +91-6232667387
Follow Jobs - Data Analyst | Data Engineer | Business Analyst | etc.
#DataAnalytics #DataAnalysis #DataScience #BusinessIntelligence #DataVisualization #SQL #Python #Statistics #DataEngineering #BigData #ETL #DataPipeline #BusinessAnalysis #DataDriven #interview #career #hiring #jobs #jobsearch #jobseekers
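The ELT pattern this post names can be sketched without Databricks itself: land raw data in a staging table first, then transform it inside the engine with SQL, which is the step dbt models automate. A minimal illustration using SQLite; every table and column name here is a made-up example, not from the posting:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Extract + Load: land raw data as-is in a staging table first (the "EL" of ELT).
conn.execute("CREATE TABLE raw_orders (order_id TEXT, amount TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [("101", "250.00", "shipped"), ("102", "99.50", "SHIPPED"), ("103", "", "cancelled")],
)

# Transform: a dbt-style model is essentially a SELECT that materializes a
# clean table from staging -- casting types, normalizing case, dropping bad rows.
conn.execute("""
    CREATE TABLE orders AS
    SELECT CAST(order_id AS INTEGER) AS order_id,
           CAST(amount AS REAL)      AS amount,
           LOWER(status)             AS status
    FROM raw_orders
    WHERE amount <> ''
""")

total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # (2, 349.5)
```

On Databricks the staging table would be a Delta table and the transform a dbt model or Spark SQL job, but the load-raw-then-transform-in-SQL shape is the same.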
-
- Database Optimization: Optimize database queries, indexing, and partitioning to ensure efficient data retrieval and processing.
Mphasis is Hiring | Lead Data Engineer | 7-10 Yrs | Bangalore
mphasis.ripplehire.com
-
DATA ENGINEER #dataengineering #datascience #bigdata #machinelearning #artificialintelligence #dataengineer #dataanalytics #bigdataanalytics #data #python #coding #deeplearning #programming #analytics #ai #pythonprogramming #hadoop #dataanalysis #jobringer
Data Engineer | Creative Technostructure | Bengaluru / Bangalore / India
jobringer.com