Senior Azure Data Engineer - Development
Location: Chennai, Tamil Nadu, India (600001) | Full time | Budget: 30 LPA
Years of Experience: 10-12 years | Domain: Retail

Job Description:
- Create Azure Synapse pipelines to ingest data from flat files (Blob Storage) and other data sources
- Data ingestion covers structured, unstructured, and semi-structured data
- Create file shares/blob containers in ADLS to store the ingested data
- Data modelling for the staging layer
- Data cleansing/formatting in the staging layer in ADLS using Azure Synapse (a minimal sketch follows after this post)
- Transform raw data into standardized formats using data transformation processes in Azure Synapse
- Merge, harmonize, and transform the data, then load it into the transformation layer using Azure Synapse pipelines
- Copy the transformed data from ADLS to the Synapse DB schema
- Create the final high-level aggregations/summary tables needed for consumption
- Create Synapse pipelines to expose the summarized data for API calls using SOAP/REST APIs
- Implement Azure Purview for data cataloguing and governance
- Implement Azure Monitor for logging and monitoring activities
- Implement Microsoft Entra ID for secure authentication and access control

Send your profile to [email protected]
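For context on the ingestion and cleansing bullets above, here is a minimal PySpark sketch of the staging-layer step, assuming a Synapse Spark notebook; the storage account, container layout, and the order_date column are hypothetical placeholders, not details from the posting.

```python
# Hypothetical sketch: cleanse raw flat files in ADLS and persist a
# standardized staging layer, as a Synapse Spark notebook cell.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Synapse

raw_path = "abfss://raw@<storageaccount>.dfs.core.windows.net/sales/*.csv"
staging_path = "abfss://staging@<storageaccount>.dfs.core.windows.net/sales/"

# Ingest flat files from the raw container
df = spark.read.option("header", True).csv(raw_path)

# Basic cleansing/formatting: trim every column, standardize a
# (hypothetical) date column, and drop rows that are entirely null
cleaned = (
    df.select([F.trim(F.col(c)).alias(c) for c in df.columns])
      .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
      .na.drop(how="all")
)

# Persist the standardized staging layer back to ADLS as Parquet
cleaned.write.mode("overwrite").parquet(staging_path)
```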
-
Requirement 1: Sr. Azure Data Engineer
Location: Bharteeya City, Bangalore
Mode: Hybrid
Experience: 8-9+ years overall, with 5+ years of relevant Azure data engineering experience (the client will not consider 4.9 years of relevant experience)
Budget: up to 32 LPA

Summary:
A purposeful Azure Data Engineer with 5-7 years of experience in designing, developing, implementing, and optimizing data solutions on Microsoft Azure. Proven expertise in leveraging Azure services for ETL processes, data warehousing, and analytics, ensuring optimal performance and scalability.

Skills:
- Azure services: Azure SQL Database, Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Data Lake, etc.
- Programming languages: T-SQL, PySpark, Spark SQL; fair knowledge of KQL
- Data modeling: ERD, dimensional modeling
- ETL tools: Azure Data Factory; fair knowledge of SSIS
- Big data technologies: fair knowledge of Hadoop and Spark
- Performance tuning and troubleshooting
- Version control: Git, Azure DevOps

Responsibilities:
- Develop complex T-SQL queries and stored procedures; tune performance and ensure database integrity and security.
- Design and implement Azure-based data solutions, reducing data processing time and optimizing costs through efficient resource utilization.
- Conduct performance tuning and troubleshooting of Azure SQL databases, enhancing system stability.
- Develop and manage end-to-end ETL pipelines using Azure Data Factory, improving data integration efficiency.
- Implement Azure Synapse Analytics to streamline analytics workflows.
- Implement data integration solutions leveraging Azure Data Factory and Azure Synapse Analytics for seamless data flow.
- Manage the migration of legacy data systems to Azure Synapse Analytics, ensuring minimal downtime and improved performance.
- Orchestrate the migration of data infrastructure to Azure Databricks, ensuring seamless integration and improved efficiency.
- Develop and deploy scalable data pipelines on Azure Databricks.
- Implement robust security measures for data storage and processing in compliance with industry standards and regulations.
- Collaborate with cross-functional teams to optimize data architecture and improve data quality.
- Develop and maintain Azure Data Lake storage solutions, enabling seamless data ingestion and processing.

SSIS experience is mandatory.
Email: [email protected], contact no: 7995904048
#Azuredataengineer #microsoftazure #Azureservices #SQL #Git #Azuredevops #Troubleshooting #TSQL #Bangalore
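To illustrate the T-SQL-plus-Databricks combination this role centres on, here is a minimal sketch of a Databricks pipeline step reading an Azure SQL Database table over JDBC; the server, database, table, secret scope, and lake path are hypothetical placeholders (dbutils is a Databricks notebook built-in).

```python
# Hypothetical sketch: pull a table from Azure SQL Database over JDBC
# in a Databricks notebook and land the filtered result in the lake.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

jdbc_url = (
    "jdbc:sqlserver://<server>.database.windows.net:1433;"
    "database=<database>;encrypt=true;loginTimeout=30"
)

orders = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.Orders")  # hypothetical table
    .option("user", dbutils.secrets.get("etl-scope", "sql-user"))
    .option("password", dbutils.secrets.get("etl-scope", "sql-password"))
    .load()
)

# Example downstream step: keep shipped orders and append to a Delta path
(
    orders.filter("OrderStatus = 'SHIPPED'")
    .write.mode("append")
    .format("delta")
    .save("/mnt/datalake/curated/orders")
)
```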
-
Location: #WFH, Pan India
Share your CV: [email protected]

"🚀 Join Our Team as an Azure Data Engineer: Unleash Your Expertise in Cloud Computing, Big Data, and Advanced Analytics! 🌟"

Job Description:
Role: Azure Data Engineer
Experience: 5-10 years (minimum 5 years relevant)
Location: Pan India

Must-have:
- Build solutions for optimal extraction, transformation, and loading of data from a wide variety of data sources using Azure data ingestion and transformation components.
- Advanced working SQL knowledge and experience with relational databases, including query authoring and working familiarity with a variety of databases.
- Experience with ADF and Data Flows
- Experience with big data tools like Delta Lake and Azure Databricks (a minimal Delta Lake sketch follows after this post)
- Experience with Synapse
- Skills in designing an Azure data solution
- Ability to assemble large, complex data sets that meet functional and non-functional business requirements.

Roles & Responsibilities:
Customer centric:
- Work closely with client teams to understand project requirements and translate them into technical designs
- Experience working in scrum or with scrum teams
Internal collaboration:
- Work with project teams and guide the end-to-end project lifecycle; resolve technical queries
- Work with stakeholders, including the executive, product, data, and design teams, to assist with data-related technical issues and support their data needs
Soft skills:
- Good communication skills
- Ability to interact with various internal groups and CoEs

#DataEngineering #CloudComputing #BigData #ETL #DataAnalytics #DataIntegration #DataWarehousing #DataScience #DataManagement #AzureCloud #TechnicalDesign #AgileDevelopment #DevOps #ContinuousIntegration #DataQuality #DataSecurity #Compliance #Documentation #TrainingAndDevelopment #ProblemSolving #ProjectManagement #Connections
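As a flavour of the Delta Lake experience asked for above, here is a minimal sketch of an initial load plus an incremental merge (upsert) into a Delta table; the paths and the event_id key are hypothetical placeholders.

```python
# Hypothetical sketch: land raw JSON as a Delta table, then upsert
# incremental records with a Delta Lake MERGE.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # Delta is preconfigured on Databricks

events_path = "/mnt/datalake/bronze/events"

# Initial load: write the source DataFrame as a Delta table
spark.read.json("/mnt/landing/events/*.json") \
    .write.format("delta").mode("overwrite").save(events_path)

# Incremental load: merge (upsert) new records by key
updates = spark.read.json("/mnt/landing/events_incremental/*.json")
(
    DeltaTable.forPath(spark, events_path)
    .alias("t")
    .merge(updates.alias("s"), "t.event_id = s.event_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```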
-
Dear Connections!
We are hiring for the skills below. Interested candidates can share their #CV: [email protected]

#Senior Azure Data Engineer - Development
Job Information:
#Years of Experience: 8-12 years
#Domain: Retail
#Work Mode: Remote
#(C2C) Position

Job Description:
- Create Azure Synapse pipelines to ingest data from flat files (Blob Storage) and other data sources
- Data ingestion covers structured, unstructured, and semi-structured data
- Create file shares/blob containers in ADLS to store the ingested data
- Data modelling for the staging layer
- Data cleansing/formatting in the staging layer in ADLS using Azure Synapse
- Transform raw data into standardized formats using data transformation processes in Azure Synapse
- Merge, harmonize, and transform the data, then load it into the transformation layer using Azure Synapse pipelines
- Copy the transformed data from ADLS to the Synapse DB schema
- Create the final high-level aggregations/summary tables needed for consumption
- Create Synapse pipelines to expose the summarized data for API calls using SOAP/REST APIs
- Implement Azure Purview for data cataloguing and governance
- Implement Azure Monitor for logging and monitoring activities
- Implement Microsoft Entra ID for secure authentication and access control (a minimal sketch follows after this post)
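On the Entra ID bullet above, a minimal sketch of keyless storage access with azure-identity might look like the following; the account URL, container, and blob prefix are hypothetical placeholders.

```python
# Hypothetical sketch: authenticate to Blob/ADLS with Microsoft Entra ID
# via azure-identity instead of embedding account keys in pipeline code.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# DefaultAzureCredential resolves a managed identity, a service
# principal, or a developer login, so no secrets live in the code
credential = DefaultAzureCredential()

service = BlobServiceClient(
    account_url="https://<storageaccount>.blob.core.windows.net",
    credential=credential,
)

# Enumerate ingested flat files in the (hypothetical) raw container
container = service.get_container_client("raw")
for blob in container.list_blobs(name_starts_with="sales/"):
    print(blob.name)
```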
-
Job Title: Azure Data Engineer
Location: MG Road, Bangalore (Hybrid: 2-3 days in the office per week)
Experience: Minimum 8+ years

Role Overview:
We are seeking an experienced Senior Data Engineer to design, develop, and maintain scalable data solutions for data generation, collection, and processing. This client-facing role requires a candidate who can independently lead initiatives, ensure data quality, and work closely with the team to meet project goals. Strong communication skills, both verbal and written, are essential to effectively engage with clients and stakeholders.

Mandatory Skills (hands-on experience required):
- Python
- Spark (PySpark)
- Azure Synapse notebooks
- Azure Data Factory (ADF)

Key Responsibilities:
- Data pipeline development: design and develop data pipelines to efficiently extract, transform, and load (ETL) data across various systems, ensuring smooth data migration and deployment.
- Data quality assurance: implement robust data validation and cleansing processes to maintain high data quality and integrity.
- Performance optimization: optimize and tune data infrastructure to improve processing performance and scalability.
- Data engineering leadership: act as the subject matter expert (SME) on data engineering best practices and techniques, providing guidance and insight in team discussions and project planning.
- Advanced data processing: process large volumes of structured and unstructured data from multiple sources, ensuring efficient integration.
- Data modeling and database design: develop data models and designs, with a strong understanding of SQL and distributed computing.
- Production deployment: participate in code releases and production deployments, and ensure post-release stability.

Additional Requirements:
- Communication skills: as this is a client-facing role, candidates should demonstrate excellent verbal and written communication skills.
- Hybrid work model: candidates must be willing to work from the office 2-3 days per week, with the flexibility to collaborate remotely as needed.

Basic Screening Questions (candidates should be prepared to discuss these in detail):
- Project scope: describe the scope of your previous work with Azure Data Factory (ADF) and PySpark. How did these tools fit into the overall project?
- Data ingestion: explain your process for bringing data into the landing zone.
- Troubleshooting: describe your approach to debugging a Spark job failure.
- Data writing: if tasked with writing Spark data in Parquet format as a single file instead of multiple files, how would you accomplish this? (One common approach is sketched after this post.)

This job requires proficiency in the mandatory skills listed. Candidates lacking experience in any of the following will not be considered: Python, Spark (PySpark), Azure Synapse notebooks, and ADF.
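On the data-writing screening question, one common approach is to collapse the DataFrame to a single partition before writing; this is a sketch under the assumption of a small output, and the paths are hypothetical placeholders.

```python
# Hypothetical sketch: write a DataFrame as a single Parquet part file
# by coalescing it to one partition before the write.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.read.parquet("/mnt/datalake/curated/orders")

# coalesce(1) avoids the full shuffle that repartition(1) incurs, but it
# funnels the whole write through one task, so reserve it for small outputs
df.coalesce(1).write.mode("overwrite").parquet("/mnt/datalake/export/orders_single")
```

Note that Spark still writes a directory containing one part-*.parquet file plus commit metadata; producing a bare standalone file would take a post-write rename or a single-node write (e.g., via pandas).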
-
Hi Connections,
This is Arul from 3K Technologies. We have openings for:
Role: Cloud Data Engineer
Location: Remote
Experience: 1-3 years
We need candidates with experience in Azure Databricks, Azure Data Factory, and Kafka.
Regards,
[email protected]
#CloudNative #DataSecurity #DataManagement #AIinCloud #CloudServices #ServerlessArchitecture #CloudDataPlatform #DataEngineeringLife #DataTransformation #CloudDataManagement #CloudDataIntegration #DistributedSystems #CloudDataLake #DataOrchestration #DataQuality #DataGovernance #CloudAutomation #MachineLearningOps #DataModeling #DataEngineeringCommunity #InfrastructureAsCode #CI_CD #TechCareers #DataDriven #DataInnovation #CloudDevOps #DataAnalyticsSolutions #CloudMigration #RealTimeData
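To show what the Databricks-plus-Kafka combination typically looks like in practice, here is a minimal Structured Streaming sketch; the broker address, topic, and lake paths are hypothetical placeholders.

```python
# Hypothetical sketch: stream a Kafka topic into the lake as Delta with
# Spark Structured Streaming (e.g., on Azure Databricks).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "<broker>:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary, so cast before any processing
parsed = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "timestamp",
)

# Append to a bronze Delta path; the checkpoint makes the stream restartable
query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/datalake/_checkpoints/orders")
    .outputMode("append")
    .start("/mnt/datalake/bronze/orders")
)
```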
-
We are hiring!
Skill: Big Data Architect
Experience: 10 to 12 years
Place: Chennai
Interview mode: Face-to-face, walk-in direct drive on August 23rd
If you are interested, kindly share your resume to [email protected]

Job Description:
- Design and implement scalable big data architectures on Microsoft Azure and Azure Fabric to handle large volumes of data efficiently and securely.
- Develop and manage data pipelines for ingesting, processing, and analyzing large datasets, ensuring data quality and integrity.
- Collaborate with data scientists, data engineers, and business stakeholders to understand data requirements and design solutions that meet business needs.
- Integrate data from multiple sources and platforms into a unified data architecture, ensuring seamless data flow and accessibility across the organization.
- Develop and maintain data storage solutions, including data lakes and data warehouses, using Azure services such as Azure Data Lake, Azure Synapse Analytics, and Azure Cosmos DB.
- Implement data security and compliance measures to protect sensitive information and ensure adherence to industry standards and regulations.
- Optimize data processing and storage for performance, scalability, and cost-effectiveness, leveraging Azure services like Azure Databricks and HDInsight.
- Lead the design and implementation of real-time data streaming solutions using Azure Stream Analytics and Event Hubs (a minimal Event Hubs sketch follows after this post).
- Develop and implement data governance frameworks to manage data quality, data lineage, and data cataloging.
- Design and implement machine learning and AI solutions on big data platforms to support advanced analytics and predictive modeling.
- Ensure high availability and disaster recovery for big data systems, implementing strategies such as data replication and failover mechanisms.
- Provide technical leadership and guidance to data engineering teams, ensuring best practices in data architecture and engineering.
- Collaborate with IT and security teams to integrate big data solutions with enterprise systems and ensure compliance with security policies.
- Stay updated on the latest advancements in big data technologies and Microsoft Azure services, and incorporate new features and tools into the organization's data architecture.
- Lead and participate in data architecture reviews and technical discussions, providing insights and recommendations for continuous improvement.
- Develop and maintain documentation for data architectures, data pipelines, and best practices, ensuring clarity and accessibility for team members.
- Conduct training and workshops for internal teams to promote knowledge sharing and best practices in big data and cloud computing.

Thanks and regards,
Madhan
[email protected]
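For the streaming bullet above, a minimal producer-side sketch with the azure-eventhub SDK might look like this; the connection string, hub name, and payload shape are hypothetical placeholders.

```python
# Hypothetical sketch: publish telemetry events to Azure Event Hubs,
# the ingestion side of a Stream Analytics pipeline.
import json

from azure.eventhub import EventData, EventHubProducerClient

producer = EventHubProducerClient.from_connection_string(
    conn_str="<event-hubs-connection-string>",
    eventhub_name="telemetry",
)

# Batch events so each send stays under the hub's size limit
batch = producer.create_batch()
for reading in [{"device": "d1", "temp": 21.4}, {"device": "d2", "temp": 19.8}]:
    batch.add(EventData(json.dumps(reading)))

producer.send_batch(batch)
producer.close()
```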
-
Hi #LinkedIn, references are highly appreciated.
🔍 We're #Hiring! Data Integration Solution Architect (Azure Solution Designer) 🚀
Role: #AzureSolutionDesigner OR #SolutionDesigner
Location: #Remote
Notice period: immediate to 30 days

Key Responsibilities:
- Design, architect, and implement end-to-end data integration solutions using Azure Data Factory.
- Collaborate with stakeholders to understand business requirements, data sources, and transformation logic.
- Develop data pipelines, activities, and workflows to orchestrate complex data integration processes.
- Design and implement data transformation logic using ADF Data Flows, SQL scripts, and other ETL tools.
- Identify impacted systems and interfaces for initiatives, aligned with the target architecture.
- Implement data quality checks, error handling, and monitoring mechanisms (a minimal monitoring sketch follows after this post).
- Integrate Azure Data Factory with services like Azure Synapse Analytics, Azure SQL Database, and Azure Blob Storage.
- Provide guidance and support to development teams during implementation, testing, and deployment.
- Troubleshoot and resolve data integration issues, performance bottlenecks, and scalability challenges.
- Continuously inspect and adapt ways of working for improvement.
- Act as a key contact during development and testing cycles to ensure smooth initiative landings.

Apply Now: [email protected]
#DataIntegration #AzureDataFactory #ETL #DataPipelines #DataTransformation #ADFDataFlows #SQLScripts #TargetArchitecture #DataQuality #ErrorHandling #DataMonitoring #AzureSynapse #AzureSQL #AzureBlobStorage #TechSupport #DataIntegrationIssues #PerformanceTuning #Scalability #ContinuousImprovement #DevelopmentSupport #TestingSupport #TechInitiatives #DataEngineering #CloudSolutions #MicrosoftAzure #TechLeadership #Remote #CFBR
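On the error-handling and monitoring bullet, here is a minimal sketch of triggering an ADF pipeline run and polling its status with the azure-mgmt-datafactory SDK; the subscription, resource group, factory, pipeline name, and parameter are hypothetical placeholders.

```python
# Hypothetical sketch: run an ADF pipeline and poll its status so
# failures can be surfaced to alerting/error-handling logic.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
)

# Kick off the pipeline with a runtime parameter
run = client.pipelines.create_run(
    resource_group_name="<resource-group>",
    factory_name="<data-factory>",
    pipeline_name="CopySalesToSynapse",
    parameters={"loadDate": "2024-01-01"},
)

# Poll until the run leaves the in-progress states
while True:
    status = client.pipeline_runs.get("<resource-group>", "<data-factory>", run.run_id)
    if status.status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(status.status, status.message)  # "Succeeded", or failure details
```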
-
Exclusive job on the Refer Me Group network, shared directly with us.
Sr. Azure Data Engineer - Remote

Job Description:
* Creating Azure Synapse pipelines to ingest data from flat files (Blob Storage) and other data sources
* Data ingestion covers structured, unstructured, and semi-structured data
* Creating file shares/blob containers in ADLS to store the ingested data
* Data modelling for the staging layer
* Data cleansing/formatting in the staging layer in ADLS using Azure Synapse
* Transforming raw data into standardized formats using data transformation processes in Azure Synapse
* Merging, harmonizing, and transforming the data, then loading it into the transformation layer using Azure Synapse pipelines
* Copying the transformed data from ADLS to the Synapse DB schema
* Creating the final high-level aggregations/summary tables needed for consumption
* Creating Synapse pipelines to expose the summarized data for API calls using SOAP/REST APIs
* Implementing Azure Purview for data cataloguing and governance
* Implementing Azure Monitor for logging and monitoring activities (a minimal log-query sketch follows after this post)
* Implementing Microsoft Entra ID for secure authentication and access control

Interested candidates can share their CV at [email protected]
-> Please write "Refer Me Group" as the reference; otherwise it will not be accepted.
If any company asks for money for a job, don't pay; it might be fake.
You can follow Refer Me Group for more such exclusive jobs shared with us to post on our network.
#refermegroup #helpinghand #latestjob #jobseeker
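On the Azure Monitor bullet, a minimal sketch of querying pipeline diagnostics from a Log Analytics workspace with the azure-monitor-query SDK might look like this; the workspace ID and the diagnostics table/column names are assumptions, not details from the posting.

```python
# Hypothetical sketch: pull failed-pipeline counts from Log Analytics
# with a KQL query via the azure-monitor-query SDK.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# KQL over (assumed) Synapse pipeline diagnostics routed to Log Analytics
query = """
SynapseIntegrationPipelineRuns
| where Status == 'Failed'
| summarize failures = count() by PipelineName
| order by failures desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)  # pipeline name and failure count
```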