Hiring Notification! We're looking for an Azure Data Architect at Clairvoyant, An EXL Company.
Experience Needed: 9 to 14 Years
Location: Noida, Gurgaon, Pune, Bengaluru, Hyderabad - Hybrid
Email your CV to mamata.rawool@exlservice.com with the subject line "Azure Data Architect".

Must-have skills:
- 8+ years of proven experience as a Software Technical Architect in Big Data Engineering.
- Strong understanding of Data Warehousing, Data Modelling, Cloud, and ETL concepts.
- Experience with Azure Cloud technologies, including Azure Data Factory, Azure Data Lake Storage, Databricks, Event Hub, Azure Monitor, and Azure Synapse Analytics.
- Proficiency in Python, PySpark, Hadoop, and SQL.
- Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
- Strong experience with common data warehouse modelling principles, including Kimball and Inmon.

Responsibilities:
- Analyze current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
- Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications.
- Collaborate with project managers on project/sprint planning by estimating technical tasks and deliverables.
- Data Modelling: Develop and maintain data models that represent complex data structures, ensuring data accuracy, consistency, and efficiency.
- Technical Leadership: Provide technical leadership and guidance to development teams, promoting best practices in software architecture and design.
- Solution Design: Collaborate with stakeholders to define technical requirements and create solution designs that align with business goals and objectives.
- Programming: Develop and maintain software components using Python, PySpark, and Hadoop to process and analyze large datasets efficiently.
- Big Data Ecosystem: Work with components of the Hadoop ecosystem, such as HDFS, Hive, and Spark, to build data pipelines and perform data transformations.
- SQL Expertise: Use SQL for data querying, analysis, and optimization of database performance.
- Performance Optimization: Identify and address performance bottlenecks, ensuring the system meets required throughput and latency targets.
- Scalability: Architect scalable and highly available data solutions, considering both batch and real-time processing.
- Documentation: Create and maintain comprehensive technical documentation to support the development and maintenance of data solutions.
- Security and Compliance: Ensure that data solutions adhere to security and compliance standards, implementing the necessary controls and encryption mechanisms.
- Improve the scalability, efficiency, and cost-effectiveness of data pipelines.
- Take responsibility for estimating, planning, and managing all tasks, and report on progress.

#dataarchitect #azure #python #hiring #exl
More Relevant Posts
-
#Hiring #Hiring #Hiring
Position: #GCP Data Architect
Experience: 12+ years
Location: All EXL locations (Hybrid)
Skills: #GCP #Python #PySpark #SQL
If you are interested, please share your CV at monika.jain@scienstechnologies.com

Requirements:
- 12+ years of proven experience as an Architect in Big Data Engineering.
- Mandatory technical proficiency: hands-on experience with #Python, #PySpark, the #GoogleCloud tech stack, #SQL, and #DevOps.
- Experience with Google Cloud services such as streaming and batch processing, Cloud Storage, Cloud Dataflow, Dataproc, BigQuery, and Bigtable.
- Proven real-time exposure to and use of the contemporary data mining, cloud computing, and data management ecosystem, including Google Cloud, #Hadoop, #HDFS, and #Spark.
- Proficient in data modelling that can represent complex data structures while ensuring accuracy, consistency, and efficiency; data warehousing; and ETL processes.
- Ability to perform system analysis and assessment of existing systems and operating methodologies, leveraging in-depth knowledge of big data technologies and ecosystems.
- Excellent problem-solving skills and the ability to address complex technical challenges.
- Strong communication and leadership skills.

Highly desired:
- Experience with Anaplan, Looker, and Power BI.
- Experience with Apigee and Apollo GraphQL.
- Experience with serverless data warehousing concepts.
- Additional programming/scripting languages: JavaScript, Java.
- Knowledge of Snowflake, Generative AI (LLMs), and marketing activation.

Role & Responsibilities:
- Solution Design: Participate in requirements gathering and architectural discussions. Define technical requirements and create solution designs that align with business goals and objectives.
- Technical Leadership: Lead the creation of technical designs/specifications and provide technical leadership and guidance to development teams, promoting best practices. Provide expertise in Master Data Management, Reference Data Management, Data Quality, Metadata Management, and Data Governance in general.
- Hands-on Programming: Develop and maintain software components using Python, PySpark, and GCP services to process and analyze large datasets efficiently. Build data pipelines and perform data transformations. Evaluate the newest technologies for optimization opportunities and future enhancement needs, such as self-serve and ad hoc reporting. Implement the infrastructure necessary for optimal and efficient ETL from disparate data sources.
- Performance Optimization: Identify and address performance bottlenecks, ensuring the system meets required throughput and latency targets.
- Security and Compliance: Ensure that data solutions adhere to security and compliance standards, implementing the necessary controls and encryption mechanisms.
- Scalability: Architect scalable and highly available data solutions, considering both batch and real-time processing.
- Documentation: Create and maintain comprehensive technical documentation to support the development and maintenance of data solutions.
-
#Role: Azure Data Architect (Data Engineer)
#Experience: 8+ Years
#Location: Chennai/Hyderabad
#Notice Period: Max 30 Days

- 8+ years of work experience in Data Engineering
- 5+ years of work experience with PySpark

#Essential Job Functions / Principal Accountabilities:
- Designs, develops, optimizes, and maintains data architecture and pipelines that adhere to ELT principles and business goals.
- Solves complex data problems to deliver insights that help the business achieve its goals.
- Creates data products for engineers, analysts, and data scientist team members to accelerate their productivity.
- Engineers effective features for modelling in close collaboration with data scientists and business teams.
- Leads the evaluation, implementation, and deployment of emerging tools and processes for analytics data engineering to improve productivity and quality.
- Partners with machine learning engineers, BI, and solutions architects to develop technical architectures for strategic enterprise projects and initiatives.
- Fosters a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.
- Advises, consults, mentors, and coaches other data and analytics professionals on data standards and practices.
- Develops and delivers communication and education plans on analytics data engineering capabilities, standards, and processes.
- Learns about machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics as necessary to carry out the role effectively.

#Minimum Skills and Qualification Requirements:
- Bachelor's degree in computer science, statistics, engineering, or a related field.
- 10-15 years of experience required.
- Experience in PySpark is required for this position; strong knowledge of Microsoft Fabric would be a plus.
- Experience designing and maintaining data warehouses and/or data lakes with big data technologies such as Spark/Databricks or distributed databases like Redshift and Snowflake, plus experience housing, accessing, and transforming data in a variety of relational databases.
- Experience building data pipelines and deploying/maintaining them following modern DE best practices (e.g., dbt, Airflow, Spark, the Python OSS data ecosystem).
- Knowledge of software engineering fundamentals and software development tooling (e.g., Git, CI/CD, JIRA), and familiarity with the Linux operating system and the Bash/Z shell.
- Experience with cloud database technologies (e.g., Azure) and developing solutions on cloud computing services and infrastructure in the data and analytics space.
- Basic familiarity with BI tools (e.g., Alteryx, Tableau, Power BI, Looker).
- Expertise in ELT and data analysis, primarily with SQL.
- Conceptual knowledge of data and analytics, such as dimensional modelling, reporting tools, data governance, and structured and unstructured data.

Interested candidates can share their resume at harshitha.b@inentinc.com. References are highly appreciated.
-
We are hiring #staffdataengineer #dataengineer #python #nosql
Experience: 8 to 11 Years
Location: Bangalore

Job Role:
1. Data Pipeline Development: Design, develop, and maintain robust data pipelines to ingest, process, and analyze large volumes of data.
2. Data Modeling and Architecture: Collaborate with cross-functional teams to design scalable and efficient data models and architecture that meet the requirements of our evolving product ecosystem and align with our business goals.
3. Data Integration: Integrate data from diverse sources, including APIs and third-party services, to create comprehensive data sets for analysis and visualization.
4. Data Quality Assurance: Implement processes and tools to ensure the quality, accuracy, and reliability of data, including data validation, cleansing, and monitoring, while driving the adoption of best practices in data modeling, data quality, data governance, and data security across all data platforms.
5. Performance Optimization: Optimize data processing and querying performance to ensure timely and efficient access to insights for internal stakeholders and customers.
6. Monitoring & Troubleshooting: Oversee data pipeline execution, troubleshoot issues, and serve as the lead point of contact for data engineering incidents. This role demands vigilance and problem-solving skills to ensure the reliability and efficiency of data systems.
7. Data Analysis and Insights: Work closely with other technical teams, such as software engineering and product management, to translate business requirements into data-driven solutions, providing actionable insights and recommendations.
8. Technical Leadership: Serve as a technical lead and mentor to junior team members, fostering a culture of collaboration, innovation, and continuous learning.
9. Evaluation of Emerging Technologies: Evaluate and recommend new and emerging data technologies and tools to enhance the capabilities of the data platform, ensuring it stays current with industry advancements and best practices.
10. Documentation and Communication: Document technical designs, implementation details, and best practices, and effectively communicate complex concepts and solutions to both technical and non-technical stakeholders.

Mandates:
1. Proficiency in programming languages such as Python, Java, or Scala.
2. Strong SQL skills and experience with relational and NoSQL databases (e.g., PostgreSQL, MongoDB).
3. Hands-on experience with big data technologies such as Hadoop, Spark, Kafka, and distributed computing frameworks.
4. Familiarity with cloud platforms and services for data storage, processing, and analytics (e.g., AWS S3, Redshift, EMR, Glue).
5. Demonstrated experience with advanced data engineering platforms such as Databricks and Snowflake, alongside proficiency in ELT and data modeling tools like Fivetran, dbt, and Debezium.

Please share resumes/CVs at shruti.karvade@zispl.com or contact 9665058844 to grab this opportunity. #bangalore
-
Good opportunity
Hello, LinkedIn Connections!
Job Title: Senior Big Data Engineer
Location: Chennai
Experience Required: 7+ years

Key Responsibilities:
- Design, develop, deploy, and maintain scalable software solutions
- Build and manage large data lakes and data warehouses (e.g., Snowflake, Redshift, Synapse)
- Develop and optimize data pipelines for processing high volumes of data
- Work with AWS resources, including Kinesis, EMR, Glue, SQS, Lambda, etc.
- Collaborate with cross-functional teams within an Agile framework (Scrum)
- Ensure the performance, scalability, and reliability of data solutions
- Lead medium to large-scale projects that align with strategic business objectives

Required Skills & Qualifications:
- 7+ years of experience in Big Data Engineering
- Expertise in Python, Spark, Hadoop, and AWS
- Strong experience with SQL and NoSQL databases
- Proficient in data warehousing and data lake architecture
- At least 2+ years of experience deploying and maintaining software in public cloud environments like AWS or Azure
- Solid understanding of computer science fundamentals, including data structures, algorithms, and object-oriented design
- Excellent analytical skills with the ability to work under pressure and within time constraints
- Demonstrated leadership abilities and a passion for technology

Preferred Qualifications:
- Experience with Snowflake or similar data warehousing technologies
- Familiarity with other public clouds like Azure

#BigDataEngineer #DataEngineering #SeniorDataEngineer #DataArchitecture #PythonJobs #SparkJobs #AWSJobs #Hadoop #CloudComputing #DataWarehousing #SQLJobs #NoSQL #DataPipelines #Snowflake #TechJobs #HiringNow #JobOpening #CareerOpportunity #Agile #TechCareers #DataJobs
GAC DIGITAL GAC Staffing Solutions Sudha Nujilla Prabhu Yadav
-
Looking for contract-to-hire, 4 positions:
Role: Data Engineer (5 yrs exp)
Location: Bangalore (Hybrid), Immediate joiner
Please tag yourself or a friend's profile, or leave a contact number to discuss further.

Responsibilities:
● Create and maintain optimal data pipeline architecture
● Assemble large, complex data sets that meet functional/non-functional business requirements
● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability
● Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies
● Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
● Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs
● Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
● Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader

Qualifications:
● Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases
● Experience building and optimizing 'big data' data pipelines, architectures, and data sets
● Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
● Strong analytic skills related to working with unstructured datasets
● Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management
● A successful history of manipulating, processing, and extracting value from large disconnected datasets
● Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores
● Strong project management and organizational skills
● Candidates with 3-5 years of experience in a Data Engineer role and a graduate degree in Computer Science, Statistics, Informatics, or Information Systems. They should also have experience using the following software/tools:
○ Experience with #bigdata tools: #Hadoop, #Spark, Kafka, etc.
○ Experience with relational #SQL and #NoSQL databases
○ Experience with #data pipeline and workflow management tools
○ Experience with #AWS cloud services: #EC2, EMR, RDS, Redshift
○ Experience with stream-processing systems: Storm, Spark Streaming, etc.
○ Experience with object-oriented/object function scripting languages: #Python, Java, C++, #Scala, etc.

#pyspark #pythonspark
Please share your CV and message me on WhatsApp: +65 85113734
Thanks and regards,
Sibashis
-
Job Title: Data Engineer
Department: Tech
Experience Required: 3-5 yrs
Location: Bangalore (onsite)
Contract: 1 year

About the role:
You have a passion for the craft and a strong desire to grow as a professional. You strike the right balance between being a technology purist and getting things done. You know that making things right is hard and requires a high level of discipline and dedication. You have a strong feel for good software design and architecture, yet prefer to express your knowledge through code. You understand that the software lifecycle doesn't end with committing the code to the repository, and that knowing how it runs in production is as important as the code itself, so you keep your infrastructure knowledge and skills up to date. You have a preferred programming language, but in practice it makes no difference to you which one you use.

Responsibilities:
• Build scalable, global, complex systems to solve problems with high-quality software.
• Develop and implement quality controls and standards to ensure trust in the data.
• Develop reusable API suites for consumption across business units.
• Process data with the most performant and efficient practices to transform data into insights.
• Experiment with new technologies to improve the platform.
• Maintain just enough documentation to ensure continuity.
• Work effectively with others in the team to achieve shared goals.
• Communication skills: can explain the data from a business perspective and communicate clearly and effectively with cross-functional business partners of varying technical levels. Good data storytelling skills.
• Willingness to learn.

Requirements:
• 2+ years of experience and a computer science/IT background.
• Ability to write high-performance, quality code in Python or other equivalent languages like Node, Java, Go, or Scala, with the ability to adapt.
• Knowledge of computer science fundamentals such as object-oriented design, algorithm design, data structures, problem solving, and complexity analysis.
• Proficient in SQL.
• Good understanding of different types of storage/databases and the purpose of each.
• Self-motivated, fast learner, detail-oriented, team player, and a sense of humour.
• Able to effectively articulate technical challenges and solutions.
• Adept at handling ambiguous or undefined problems, with the ability to think abstractly.

Bonus points for:
• Exposure to distributed, multi-tiered systems, algorithms, and relational/non-relational databases.
• Exposure to data workflow management tools such as Airflow and Google Dataflow.
• Exposure to SDLC tooling such as Docker, Kubernetes, Jenkins, and Vault.
• Exposure to cloud-based solutions such as Google Cloud Platform, Qubole, Segment, Tableau, and Workato.
• Experience with cloud environments such as AWS and container orchestration platforms such as Kubernetes.

If you are interested, please share your CV at info@maiginfotech.com
-
Hello Connections, hope you are doing well. This is #Venkatesh. We have an immediate opportunity with one of our clients. Please find the job description below, and if you are interested, please forward your updated resume to venkatesh@interaslabs.com

Job Title: Data Architect
Location: Bangalore/Hyderabad/Pune

Responsibilities:
- Lead a team of data engineers in designing and implementing scalable data solutions.
- Collaborate with data scientists to understand data requirements and ensure data integrity.
- Develop and maintain data pipelines for efficient data processing.

Skills:
- Data modeling & governance
- Big data technologies (Hadoop, Spark, Databricks)
- Data warehouse/data lake (Snowflake/Redshift/Synapse/BigQuery)
- Cloud platforms (AWS, Azure)
- SQL and NoSQL databases
- Analytics (QuickSight/Power BI/Tableau)
- Workflow and orchestration (Oozie/Airflow/Mage)
- Python or Java programming

Shashi Reddy Soma Sreeja Thummala Sravani Mohd Akbar Naveed
#TechnicalArchitect #EnterpriseSolutions #Java #SpringFramework #Microservices #JavaScript #FrontEndDevelopment #AWS #GCP #Azure #CloudComputing #Security #CodeQuality #SoftwareArchitecture #BusinessAlignment #EmergingTechnologies #MobileDevelopment #RDBMS #NoSQL #RealTimeCommunication #Leadership #CrossFunctionalCollaboration #IndustryTrends #JobOpening #SoftwareEngineering #TechLeadership #ArchitecturalDesign #ScalableArchitecture #SecurityBestPractices #DevelopmentTeam #CodeReview #TechnicalLeadership #EmergingTech #BackendDevelopment #DatabaseManagement #TechnicalExpertise #SoftwareDevelopment #ITJobs #CareerOpportunity #JavaDeveloper #SpringBoot #CodeQualityAssurance #ContinuousImprovement #InnovationLeadership #AgileDevelopment #TechCommunity #JobAlert
-
Hello #Connections
We are #Hiring for the below position.
Job Title: Senior Data Engineer - Snowflake
Skills: Snowflake, Azure Data Factory, Python
Experience: 6 to 12 years
Job Role: Permanent Position
Job Location: Pune, Bangalore, Hyderabad, Noida
Salary: As per Market Standards

Job Description:
✔Ability to implement Snowflake as a Tech Lead; experience in building a data warehouse on Snowflake.
✔Experience with Azure Data Lake and Azure Data Factory (ADF).
✔Good knowledge of ETL/ELT processes and the ability to provide direction to the team.
✔Must have knowledge of Snowflake architecture and implementation patterns.
✔Must have an understanding of Snowflake roles and the deployment of virtual warehouses.
✔Good knowledge of data modelling, integration, and design techniques.
✔Must be hands-on with programming (Python/PySpark, etc.), Snowflake, SQL queries, and standard DWH + ETL concepts.
✔Must be able to write complex SQL queries, stored procedures, and UDFs.
✔Creates and updates technical requirement documentation for all systems and solutions.
✔Expertise in advanced Snowflake concepts such as setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy clone, and time travel, and an understanding of how to use these features.
✔Extensive experience in data profiling, data modelling, data quality, data standardization, and data stewardship.
✔Experience with or knowledge of marketing modelling is mandatory.
✔The candidate should be able to play a major role in our analytics DataMart project and help us develop a modern end-to-end modelling solution in the marketing domain.
✔Combination of Python and Snowflake SnowSQL; writing SQL queries against Snowflake.
✔Experience working with different source data: RDBMS, flat files, XML, JSON.
✔Expertise in SQL, especially within cloud-based data warehouses like Snowflake and Azure.
✔Expertise with SnowSQL, advanced concepts (query performance tuning, time travel, etc.), and features/tools (data sharing, events, Snowpipe, etc.).
✔Experience with end-to-end implementation of a Snowflake cloud data warehouse, or end-to-end on-premise data warehouse implementations.
✔Proven analytical and problem-solving skills.
✔Strong understanding of ETL concepts and work experience with an ETL tool such as Informatica, DataStage, or Talend.
✔Ability to work independently and on multiple tasks/initiatives with multiple deadlines.
✔Effective oral, presentation, and written communication skills.
✔Data modelling and data integration.

Interested? Kindly share your updated resume to arun@zenfinet.com
#snowflake #Azure #python #Dataengineer #Azuredatafactory #Bangalore #Hyderabad #Pune #Noida #Immediate #Hiring #Hybrid #Jobopenings
-
We are #hiring a Sr. Data Engineer
Experience: 7 to 9 years
Location: Bangalore

Skills Required:
- 4 to 8 years of total IT experience, with 2+ years in big data engineering and Microsoft Azure.
- Experience implementing a Data Lake with technologies like Azure Data Factory (ADF), PySpark, Databricks, ADLS, and Azure SQL Database.
- A comprehensive foundation with working knowledge of the Azure full stack, Event Hub, and Stream Analytics.
- A passion for writing high-quality code that is modular, scalable, and free of bugs (debugging skills in SQL, Python, or Scala/Java).
- Enthusiasm for collaborating with various stakeholders across the organization and taking complete ownership of deliverables.
- Experience using big data technologies like Hadoop, Spark, Databricks, Airflow, and Kafka.
- Good understanding of different file formats like Delta Lake, Avro, Parquet, JSON, and CSV.
- Good knowledge of building and designing REST APIs, with real-time experience working on Data Lake/Lakehouse projects.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Certifications like Data Engineering on Microsoft Azure (DP-203) or Databricks Certified Developer (DE) are a valuable addition.

Responsibilities:
- As a Big Data Engineer (Azure), you will build and learn about a variety of analytics solutions and platforms, data lakes, modern data platforms, data fabric solutions, etc., using different open source, big data, and cloud technologies on Microsoft Azure.
- Design and build scalable, metadata-driven data ingestion pipelines (for batch and streaming datasets).
- Conceptualize and execute high-performance data processing for structured and unstructured data, and data harmonization.
- Schedule, orchestrate, and validate pipelines. Design exception handling and log monitoring for debugging.
- Ideate with your peers to make tech stack and tooling decisions.
- Interact and collaborate with multiple teams (vendors/consultants/data science & dev teams) and various stakeholders to meet deadlines and bring analytical solutions to life.
- Should be able to understand complex architectures and be comfortable working with multiple teams.
- Work with delivery teams to build, automate, and deploy cloud solutions on Microsoft Azure.
- Monitor production, staging, test, and development environments for a myriad of applications in an agile and dynamic organization.
- Should be well-versed in security.
- Should have a customer-focused approach and an understanding of the problems. Proactiveness is a must; should be self-driven.

Interested candidates can share their resumes at careers@cliqhr.co.in
#dataengineer #bigdata #azure #bi #hadoop #scala #python #careeradvancement #wfo #wfojobs
-
🌟 #ImmediateJoiner Wanted: #SeniorDataEngineer - #Bangalore
We're on the lookout for a talented and driven Senior Data Engineer to join our team in Bangalore immediately. If you're passionate about data engineering and thrive in a dynamic, startup environment, we want to hear from you!

Responsibilities:
✨Lead the design and implementation of robust data pipelines for efficient ETL of large datasets.
✨Collaborate with stakeholders to understand business requirements and translate them into scalable data solutions.
✨Enhance and maintain our data architecture for scalability, reliability, and performance.
✨Investigate and resolve data anomalies within ETL pipelines to ensure data integrity.
✨Develop and maintain documentation for data processes to ensure clarity and sustainability.
✨Utilize AWS platform capabilities to architect and operate scalable production solutions.
✨Implement data security practices and compliance measures to protect sensitive information.
✨Build and manage data warehouses and data lakes to store and organize large datasets.
✨Implement data quality checks and procedures to ensure accuracy and consistency.
✨Automate data workflows and processes using scripting languages and data pipeline orchestration tools.
✨Collaborate with business analysts and stakeholders to understand data needs and translate them into technical solutions.
✨Monitor and troubleshoot data pipelines to optimize performance and ensure scalability.
✨Troubleshoot production data issues and propose sustainable solutions.
✨Document data pipelines and processes for maintainability and knowledge sharing.
✨Stay updated with emerging technologies and best practices in data engineering.

Skills and Qualifications:
🌟Bachelor's degree in Computer Science, Information Technology, or related field.
🌟Minimum 5 years of experience as a Data Engineer or similar role.
🌟Strong proficiency in SQL and experience with relational databases (MySQL, PostgreSQL).
🌟Experience with cloud-based data platforms (AWS, Azure, GCP).
🌟Experience with data warehousing solutions (Snowflake, Redshift, BigQuery).
🌟Experience with data ingestion tools (Fivetran, Stitch, Kafka).
🌟Programming experience in Python, Java, or similar languages.
🌟Experience with scripting languages (Bash, Shell).
🌟Experience with data workflow management tools (Airflow, Luigi).
🌟Excellent problem-solving and analytical skills.
🌟Strong communication and collaboration skills.
🌟Ability to work independently and as part of a team.

Ready to seize this thrilling opportunity and drive innovation in our projects? We'd love to hear from you! Kindly share your resume with us at tarun.shah@sumerusolutions.com. Let's collaborate in shaping the future of data engineering together!
#DataEngineer #BangaloreJobs #ImmediateJoiner #TechJobs