Data Architect
Mohit Kiran’s Post
-
We are hiring!!!
Role: Big Data Architect
Experience: 10 to 12 years
Place: Chennai
Interview mode: Face-to-face interview, walk-in direct drive, August 23rd
If you are interested, kindly share your resume to [email protected]

Job Description
• Design and implement scalable big data architectures on Microsoft Azure and Azure Fabric to handle large volumes of data efficiently and securely.
• Develop and manage data pipelines for ingesting, processing, and analyzing large datasets, ensuring data quality and integrity.
• Collaborate with data scientists, data engineers, and business stakeholders to understand data requirements and design solutions that meet business needs.
• Integrate data from multiple sources and platforms into a unified data architecture, ensuring seamless data flow and accessibility across the organization.
• Develop and maintain data storage solutions, including data lakes and data warehouses, using Azure services such as Azure Data Lake, Azure Synapse Analytics, and Azure Cosmos DB.
• Implement data security and compliance measures to protect sensitive information and ensure adherence to industry standards and regulations.
• Optimize data processing and storage for performance, scalability, and cost-effectiveness, leveraging Azure services like Azure Databricks and HDInsight.
• Lead the design and implementation of real-time data streaming solutions using Azure Stream Analytics and Event Hubs.
• Develop and implement data governance frameworks to manage data quality, data lineage, and data cataloging.
• Design and implement machine learning and AI solutions on big data platforms to support advanced analytics and predictive modeling.
• Ensure high availability and disaster recovery for big data systems, implementing strategies such as data replication and failover mechanisms.
• Provide technical leadership and guidance to data engineering teams, ensuring best practices in data architecture and engineering.
• Collaborate with IT and security teams to integrate big data solutions with enterprise systems and ensure compliance with security policies.
• Stay updated on the latest advancements in big data technologies and Microsoft Azure services, and incorporate new features and tools into the organization's data architecture.
• Lead and participate in data architecture reviews and technical discussions, providing insights and recommendations for continuous improvement.
• Develop and maintain documentation for data architectures, data pipelines, and best practices, ensuring clarity and accessibility for team members.
• Conduct training and workshops for internal teams to promote knowledge sharing and best practices in big data and cloud computing.

Thanks and regards,
Madhan
[email protected]
-
Hi, we are #hiringimmediately a Data Governance Consultant. If interested, please share your resume to [email protected].

Role: Data Governance Consultant
Location: Chennai/Coimbatore (work from office)
Experience: 5+ years
Notice period: Immediate to 15 days max

Data Skills:
- Data Modeling: Understanding of data models (star schema, snowflake schema) and the ability to identify data entities, attributes, and relationships relevant to the financial industry.
- Data Quality: Knowledge of data quality principles, metrics, and techniques for data cleansing, validation, and transformation.
- Data Security: Understanding of data security best practices in finance, including access control, encryption, and data loss prevention.
- Data Catalog Management: Experience with managing data catalogs, including data classification, tagging, and metadata management, if Acryl utilizes a data catalog component.
- Data Lineage Tracking: Understanding of data lineage tracking principles and how Acryl implements it to track data origin, transformations, and flow throughout the system.
- Data Access Control: Knowledge of data access control best practices and how Acryl enforces access restrictions based on user roles and permissions.

Technical skills and duties:
- Has data engineering knowledge
- Has familiarity with GraphQL, Python, or Java
- Has familiarity with key data platforms: Snowflake, Looker, Aurora Postgres, Fivetran, Airflow, SQL Server
- Can work cross-functionally to identify dataset owners, gather documentation, and coordinate setting up the business glossary, domains, and other similar tasks; this information usually lives across multiple technical teams
- Tag all tables and fields as PII/non-PII data
- Specify the relationships between datasets
- Has project management experience to track tasks against milestones

Nice to have:
- Financial Data Management: Familiarity with financial data regulations and standards (e.g., Know Your Customer (KYC), Anti-Money Laundering (AML)); basic understanding of financial products, services, and regulations relevant to small public financial companies.
- AI/ML: Knowledge of data science, AI/ML, MLOps, data protection, and privacy
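The tagging, lineage, and glossary duties listed above can be made concrete with a small sketch. This is a minimal, self-contained Python illustration of the idea (classify fields as PII, record coarse upstream lineage); it does not use Acryl/DataHub's actual API, and all dataset and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Column:
    name: str
    pii: bool = False                    # classification flag for access control
    tags: set = field(default_factory=set)

@dataclass
class Dataset:
    name: str
    columns: list
    upstream: list = field(default_factory=list)  # coarse lineage: source dataset names

def pii_fields(dataset):
    """Return the names of columns classified as PII."""
    return [c.name for c in dataset.columns if c.pii]

# Hypothetical catalog entries, as a consultant might record them.
customers = Dataset(
    name="raw.customers",
    columns=[Column("customer_id"),
             Column("email", pii=True, tags={"contact"}),
             Column("ssn", pii=True, tags={"restricted"})],
)
customer_dim = Dataset(
    name="dw.dim_customer",
    columns=[Column("customer_key"), Column("email_hash")],
    upstream=["raw.customers"],          # dataset relationship / lineage
)

print(pii_fields(customers))   # columns needing masking or restricted access
print(customer_dim.upstream)   # where this table's data originates
```

A real catalog tool would persist these classifications and enforce access rules from them; the sketch only shows the metadata shape the role is asked to populate.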
-
We are #Hiring for the #DataModellingDeveloper role. Candidates can share their updated CVs to [email protected]

Position: Data Modelling Developer
Experience: 6+ yrs relevant
Notice Period: Immediate - 15 days
Location: ITPL, Whitefield, Bengaluru (3-4 days a week WFO)
Shift: 10 AM - 7 PM

Job Description
Good understanding of #Datamodel concepts; hands-on experience with any #DataModelling tool, preferably #SAPPowerDesigner
> Maintain logical and physical #datamodels along with accurate metadata.
> Solution architecture, #design, and #development of models for small projects.
> Create conceptual data models to identify key business entities and visualize their relationships.
> Provide documentation of solution design, as well as end-user training and change management communications at times.
> Support teams through all project phases, including the knowledge transition phase.
> Present and communicate modeling results and recommendations to internal stakeholders.
> Ensure and enforce a governance process to oversee implementation activities and ensure alignment to the defined architecture.
> Collaborate internationally with various stakeholders.
> Engage with clients to clarify inbound research requests.
> Extract key insights for an executive audience, conveying them in clear, concise written and visual summaries, employing data-driven charts, maps, and graphics where appropriate.
> Perform data profiling/analysis activities that help establish, modify, and maintain the data model.
> Communicate physical database designs to DBAs and explain features that may affect the physical data model.
> Regularly engage and consult stakeholders on solving business problems through data, guiding the conversation rather than following orders.
> Analyze data-related system integration challenges and propose appropriate solutions with a strategic approach.
-
Hi all, I hope you are doing well. 😊 I would like to connect with you regarding a job opportunity at one of the Big 4.

Job Role: Data Engineer (Data Modeler)
Experience: 6+ years
Location: Mumbai

Job Description:
Designing and implementing data models to manage and analyse data effectively. Work closely with data architects, data analysts, and database administrators to create and manage data systems that support efficient data management and analysis. Understand business requirements, translate them into technical designs, deal with the complexities of integrating disparate systems, investigate the quality of the data in those systems, and put it all together so it can be accessed, manipulated, and turned into business intelligence. Design conceptual/logical/physical entity-relationship and/or relational models and multidimensional models for an EDW that implements flexible reporting capability in the context of one or more reporting toolsets. Provide multidimensional modelling expertise and leadership by designing data structures to support the growth and improved reporting functionality of the Data Lake/EDW. Guide the development and maintenance of data standards, policies, and practices. Guide the design of the data models, in alignment with the solution architecture. Work closely with database administration, ETL, and BI development staff to optimize overall design at an enterprise level.

Responsibilities:
- Analyse Business Needs: Understand and translate business requirements into data models.
- Develop Data Models: Create conceptual, logical, and physical data models to support business processes.
- Database Design: Design and implement effective database solutions to store and retrieve company data.
- Data Migration: Oversee the migration of data from legacy systems to new solutions.
- Performance Monitoring: Monitor system performance by performing regular tests, troubleshooting, and integrating new features.
- Compliance: Ensure that database implementation procedures comply with internal and external regulations.
- Data Security: Understanding of authentication and authorization techniques; develop the model to support data-level security.

Please connect with Renu Indoriya, or you can mail your resume to [email protected]

Sanjeev Anand Techsist Solution Mohd Qaaid Riya Agarwal Showket Rahman Priti Yadav Nidhi Suryavanshi Shalima Singh Mamtesh Tiwari

#DataEngineer #DataModeler #DataModelling #DataMigration #ETL #BusinessNeeds #DatabaseDesign
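The dimensional modelling this role centers on (star schemas for an EDW) can be sketched in a few lines. This is a toy, self-contained Python illustration of the pattern, with made-up dimension and fact data; a real EDW would implement these as database tables, not dictionaries.

```python
# Star schema in miniature: a fact table holds measures at a fixed grain,
# keyed to dimension tables that carry descriptive attributes.
dim_product = {1: {"name": "Widget", "category": "Hardware"},
               2: {"name": "Gadget", "category": "Electronics"}}
dim_date = {20240101: {"year": 2024, "quarter": "Q1"}}

fact_sales = [  # grain: one row per product per day
    {"product_key": 1, "date_key": 20240101, "units": 3, "revenue": 30.0},
    {"product_key": 2, "date_key": 20240101, "units": 1, "revenue": 25.0},
]

def revenue_by_category(facts, products):
    """Roll the fact table up to a dimension attribute (product category)."""
    out = {}
    for row in facts:
        cat = products[row["product_key"]]["category"]
        out[cat] = out.get(cat, 0.0) + row["revenue"]
    return out

print(revenue_by_category(fact_sales, dim_product))
```

The point of the design is that every report is a join from the fact table out to one or more dimensions followed by an aggregation, which is what makes the schema flexible across reporting toolsets.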
-
Senior Azure Data Engineer - Development
Location: Chennai, India
Full time | Budget: 30 LPA

Job Information
Years of Experience: 10 - 12 years
Domain: Retail
State/Province: Tamil Nadu
Zip/Postal Code: 600001

Job Description
- Create Azure Synapse pipelines to ingest data from flat files (Blob Storage) and other data sources
- Data ingestion covers structured, unstructured, and semi-structured data
- Create file shares/blob containers in ADLS to store the ingested data
- Data modelling for the staging layer
- Data cleansing/formatting in the staging layer in ADLS using Azure Synapse
- Transform raw data into standardized formats using data transformation processes in Azure Synapse
- Build data transformation processes to merge, harmonize, and transform the data and load it into the transformation layer using Azure Synapse pipelines
- Copy the transformed data from ADLS to the Synapse DB schema
- Create the final high-level aggregations/summary tables needed for consumption
- Create Synapse pipelines to expose the summarized data for API calls using SOAP/REST APIs
- Implement Azure Purview for data cataloguing and governance
- Implement Azure Monitor for logging and monitoring activities
- Implement Microsoft Entra ID for secure authentication and access control…

Send your profile to [email protected]
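The layered flow this posting describes (ingest raw files, cleanse in a staging layer, then build summary tables for consumption) can be walked through with a toy example. This is plain illustrative Python, not Synapse pipeline code; the records and store names are invented.

```python
# Toy walk-through of the layered flow: raw ingest -> staging cleanse ->
# summary aggregation. In Azure Synapse each stage would be a pipeline
# activity over ADLS/Synapse tables rather than Python lists.
raw = [  # as ingested from flat files: inconsistent casing, blanks
    {"store": " chennai ", "amount": "100.5"},
    {"store": "CHENNAI",   "amount": "49.5"},
    {"store": "madurai",   "amount": ""},
]

def cleanse(rows):
    """Staging layer: standardize text, parse numbers, drop unusable rows."""
    out = []
    for r in rows:
        if not r["amount"]:          # reject rows failing a quality check
            continue
        out.append({"store": r["store"].strip().lower(),
                    "amount": float(r["amount"])})
    return out

def summarize(rows):
    """Aggregation layer: one summary row per store, ready for consumption."""
    totals = {}
    for r in rows:
        totals[r["store"]] = totals.get(r["store"], 0.0) + r["amount"]
    return totals

staged = cleanse(raw)
print(summarize(staged))
```

Keeping cleansing and aggregation as separate stages mirrors the staging/transformation/summary layering in the job description: each layer can be validated and reprocessed independently.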
-
Dear Connections, we are hiring for an AD - Project Data Engineer.

Skills: GCP, Data Science, Data Modeling, ETL pipelines, Data Visualization
Location: Bangalore
Exp: 6 - 10 yrs
Mode of Work: Hybrid

You can share your resume at [email protected]

#DataEngineer #DataScience #DataModeler #GCP #BigQuery #ETLPipelines #DataVisualization
-
#Hiring for #SeniorDataEngineer!!
🚨 Urgent opportunity for Senior Data Engineers (10+ yrs)! We are actively seeking experienced Senior Data Engineers for immediate placement, offering a competitive salary of up to INR 25 LPA.

Location: WFH from India
Mode: Permanent or yearly renewable
Annual Salary: INR 20-25 LPA for 10+ yrs total exp
Notice: Immediate - 2 weeks
Interview rounds: 3 or 4

Below is the detailed JD; however, the mandatory skills remain the same: Scala, SQL, PySpark.

I. KEY RESPONSIBILITIES:
• Understand the factories, manufacturing processes, data availability, and avenues for improvement
• Brainstorm, together with engineering, manufacturing, and quality, problems that can be solved using the acquired data in the data lake platform
• Define what data is required to create a solution, and work with connectivity engineers and users to collect the data
• Create and maintain optimal data pipeline architecture
• Assemble large, complex data sets that meet functional/non-functional business requirements
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery for greater scalability
• Work on data preparation and data deep dives; help engineering, process, and quality teams understand the process/machine behavior more closely using available data
• Deploy and monitor the solution
• Work with data and analytics experts to strive for greater functionality in our data systems
• Work together with data architects and data modeling teams

II. SKILLS/COMPETENCIES (top 3-7 most important/critical competencies needed for the job, both soft and hard skills):
• Good knowledge of the business vertical, with prior experience solving different use cases in manufacturing or a similar industry
• Ability to bring cross-industry learning to benefit the use cases aimed at improving the manufacturing process
• Problem Scoping/Definition Skills: Experience in problem scoping, solving, and quantification. Strong analytic skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency, and workload management. Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
• Data Wrangling Skills: Strong skill in data mining and data wrangling techniques for creating the required analytical dataset. Experience building and optimizing 'big data' data pipelines, architectures, and data sets. Adaptive mindset to improvise on data challenges and employ techniques to drive desired outcomes.
• Programming Skills: Experience with big data tools: Spark, Delta, CDC, NiFi, Kafka, etc. Experience with relational SQL and NoSQL databases and query languages, including Oracle, Hive, Spark SQL. Experience with object-oriented languages: Scala, Java, C++, etc.

Share your CV at [email protected]