#hiring Data Governance - Senior Consultant, New York City, United States, full-time #opentowork #jobs #jobseekers #careers #NewYorkCityjobs #NewYorkjobs #HealthcareMedical
Apply: https://lnkd.in/gHhBtNaG
Job Family: Digital Consulting
Travel Required: Up to 25%
Clearance Required: Ability to Obtain Public Trust
What You Will Do:
• Conduct interviews with client stakeholders to assess current metadata management and data governance capabilities and refine use cases associated with metadata management, taxonomy, and ontology development.
• Translate these use cases into requirements for the technical implementation team to build in Collibra.
• Collaborate closely with business stakeholders to ingest and analyze existing data dictionaries, business glossaries, and taxonomies to identify metadata gaps, check alignment with controlled vocabularies, and improve the usability, reliability, and search/discoverability of data assets associated with the Collibra adopters' use cases.
• Provide guidance in setting up the relevant workflows, roles, views, audit reports, and other execution and monitoring concepts within Collibra to support governance of the data inventory development.
• Provide guidance in supporting lineage creation in Collibra.
• Provide guidance in developing roles and permissions to support cross-community collaboration for managing and governing assets.
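To make the glossary-analysis duties above concrete, here is a minimal Python sketch of the kind of gap check a consultant might run before loading terms into a catalog such as Collibra. Everything in it (file name, column names, controlled vocabulary) is an illustrative assumption, not part of the posting, and no Collibra API is used.

```python
# Minimal sketch of a glossary gap analysis, assuming a CSV export with
# columns term/definition/steward; the file, columns, and vocabulary are
# illustrative placeholders, not part of the posting or of Collibra itself.
import pandas as pd

CONTROLLED_VOCAB = {"patient", "provider", "claim", "encounter"}  # assumed terms

glossary = pd.read_csv("business_glossary.csv")

# Metadata gaps: terms with no definition or no assigned steward.
gaps = glossary[glossary["definition"].isna() | glossary["steward"].isna()]

# Terms that fall outside the controlled vocabulary.
off_vocab = glossary[~glossary["term"].str.lower().isin(CONTROLLED_VOCAB)]

print(f"{len(gaps)} terms missing definitions or stewards")
print(f"{len(off_vocab)} terms outside the controlled vocabulary")
```

The output of a pass like this is exactly what feeds the "metadata gaps" and "alignment with controlled vocabularies" conversations the role describes.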
-
Hiring Alert ‼️ Northern Trust is hiring Data Engineers with 3+ years of experience.
✅ Direct apply link: https://lnkd.in/gNn-VeYk
Stay tuned to Mahesh Gore for more Data Engineering / Data Analyst content!
⚡️ Join our Telegram community for daily updates, link in the comments ⚡️
#dataengineering #bigdata #jobs #vacancies #datascience #dataanalytics #dataanalyst #dataengineer
Data Engineer
ntrs.wd1.myworkdayjobs.com
-
#Hiring for #Data #Engineer #C2C
Position: Data Engineer
Location: Sunnyvale, CA (local candidates only)
Duration: 12+ months (possible extension)
Experience: 12+ years
Job Description:
What you'll do:
• You will use cutting-edge data engineering techniques to create critical datasets and dig into our mammoth scale of data to help unleash the power of data science by imagining, developing, and maintaining data pipelines that our Data Scientists and Analysts can rely on.
• You will be responsible for contributing to an orchestration layer of complex data transformations, refining raw data from source into targeted, valuable data assets for consumption in a governed way.
• You will partner with Data Scientists, Analysts, other engineers, and business stakeholders to solve complex and exciting challenges so that we can build out capabilities that evolve the marketplace business model while making a positive impact on our customers' and sellers' lives.
• You will participate with limited help in small to large projects: review project requirements, gather requested information, write and develop code, conduct unit testing, communicate status and issues to team members and stakeholders, collaborate with project and cross-functional teams, troubleshoot open issues and bug fixes, and ensure on-time delivery and hand-offs.
• You will design, develop, and maintain highly scalable and fault-tolerant real-time, near-real-time, and batch data systems/pipelines that process, store, and serve large volumes of data with optimal performance.
• You will ensure ingested and processed data is accurate and of high quality by implementing data quality checks, data validation, and data cleaning processes.
• You will identify options to address business problems within your discipline through analytics, big data analytics, and automation.
• You will build business domain knowledge to support the data needs of product teams, analytics, data scientists, and other data consumers.
What you'll bring:
• At least 4+ years of experience developing big data technologies/data pipelines.
• Experience managing and manipulating huge datasets on the order of terabytes (TB) is essential.
• Experience with big data technologies like Hadoop, Apache Spark (Scala preferred), Apache Hive, or similar frameworks on the cloud (GCP preferred; AWS, Azure, etc.) to build batch data pipelines with a strong focus on optimization, SLA adherence, and fault tolerance.
• Experience building idempotent workflows using orchestrators like Automic, Airflow, Luigi, etc. (see the sketch after this post).
• Experience writing SQL to analyze, optimize, and profile data, preferably in BigQuery or Spark SQL.
Email: [email protected]
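As a concrete illustration of the idempotent-workflow requirement above, here is a minimal Airflow sketch, not from the posting, in which each run rebuilds exactly one date partition so that retries and backfills replace data rather than duplicate it. The DAG id, table, and SQL are assumptions, and the `schedule` argument assumes Airflow 2.4+.

```python
# Minimal sketch of an idempotent daily batch job in Apache Airflow 2.4+.
# The dag_id, partition layout, and SQL in the callable are illustrative.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def rebuild_partition(ds: str, **_) -> None:
    """Overwrite exactly one date partition so reruns replace, never duplicate."""
    # In a real pipeline this would submit something like:
    #   INSERT OVERWRITE TABLE sales PARTITION (dt = '{ds}') SELECT ...
    print(f"Rebuilding partition dt={ds}")


with DAG(
    dag_id="daily_sales_rollup",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
):
    PythonOperator(task_id="rebuild_partition", python_callable=rebuild_partition)
```

Because the task is keyed on the logical date `ds` and overwrites its own partition, the output is a pure function of the run date, which is what makes retries and backfills safe.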
-
#Data Analyst #dataanalysts
Title: Data Analyst
Location: Sunnyvale, CA (hybrid, 3 days in office)
W2 only
This role will require working with internet-scale data across numerous product and customer touchpoints, undertaking in-depth quantitative analysis, and distilling data into actionable and intuitive visualizations to drive informed decision-making across all levels of the company. A strong candidate can communicate and collaborate across many different teams, has an agile approach to solving challenging problems quickly, and stresses the details. Our team's culture is centered around rapid iteration with open feedback and debate along the way. We encourage independent decision-making and taking calculated risks. We produce insights to drive decisions that enhance the customer experience, accelerate growth, and uncover new business opportunities while respecting user privacy.
Key Qualifications:
• 12+ years of experience in a Data Visualization, Data Scientist, or Data Analyst role, preferably for a digital subscription business.
• Strong proficiency with SQL-based languages is required.
• Experience with large-scale data technologies such as Hadoop and PySpark (see the sketch after this post).
• Proficiency with data visualization tools such as Tableau and/or MicroStrategy for analysis, insight synthesis, data product delivery, and executive presentation.
• A curious business mindset with an ability to condense complex concepts and analysis into clear, concise takeaways that drive action.
• Excellent communication, social, and presentation skills with meticulous attention to detail.
• Strong time management skills with the ability to handle multiple projects with tight deadlines and executive visibility.
• Known for successfully bridging analytics and business teams, with an ability to speak the language of both.
Job Description:
• Build dashboards, self-service tools, and reports to analyze and present data associated with customer experience, product performance, business operations, and strategic decision-making.
• Create datasets and develop global dashboards, data pipelines, sophisticated security controls, and scalable ad-hoc reporting.
• Partner closely with our Data Science team to define metrics, datasets, and automation strategy.
• Engage with Product, Business, Engineering, and Marketing teams to capture requirements, influence how our services are measured, and craft world-class tools to support those partners.
• Establish a comprehensive roadmap to communicate and manage our commitments and stakeholder expectations while enabling org-wide transparency on progress.
• Focus on scale and efficiency: create and implement innovative solutions and establish best practices across our full scope of delivery.
Education: Minimum of a bachelor's degree in Computer Science, Statistics, Mathematics, Engineering, Economics, or a related field.
Thanks & Regards,
Suresh Kumar
Accounts Manager, Galaxy i Technologies Inc.
Email: [email protected]
Ph No: 480-696-5394
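To ground the SQL/PySpark qualification above, here is a minimal PySpark sketch of the kind of aggregation that feeds a subscription dashboard. The input path, column names, and output location are illustrative assumptions, not details from the posting.

```python
# Minimal PySpark sketch of a dashboard-feeding aggregation; the input
# path, column names, and output location are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("subscription_kpis").getOrCreate()

events = spark.read.parquet("/data/subscription_events/")  # hypothetical source

daily = (
    events
    .groupBy(F.to_date("event_ts").alias("day"), "plan")
    .agg(
        F.countDistinct("user_id").alias("active_users"),
        F.sum("revenue").alias("revenue"),
    )
    .orderBy("day")
)

# A BI tool such as Tableau or MicroStrategy would read this mart directly.
daily.write.mode("overwrite").parquet("/data/marts/daily_subscription_kpis/")
```

Pre-aggregating to a small daily mart like this is what keeps the downstream Tableau or MicroStrategy dashboards fast at internet scale.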
-
🚀 Reflecting on My Journey as a Data Analyst! 🚀
Throughout my 6+ years in data analytics, I've had the opportunity to tackle complex data challenges across various industries, from healthcare to finance. My role has always been focused on transforming raw data into powerful insights that drive strategic decisions and operational efficiency.
💼 Key Contributions in My Recent Roles:
🔹 Data Pipeline Optimization and Automation
At Alaska Regional Hospital, I developed and optimized scalable data pipelines using Azure Data Factory and Azure Synapse Analytics. I streamlined data extraction and transformation by designing data models and automated ETL workflows in Python.
🔹 Enabling Real-Time Decision-Making through Dashboards
Incorporating visualization tools like Power BI, Tableau, and Google Analytics, I've created interactive dashboards that provide real-time, actionable insights for business leaders. By automating these reports, I empowered teams with faster, data-driven decisions, leading to improved business outcomes.
🔹 Data Quality and Compliance
Data accuracy and compliance are crucial, especially in finance. In my role at Charles Schwab, I developed data validation scripts using SQL and Python to enhance data integrity across pipelines. Additionally, I performed gatekeeping and governance on models within the SAS Fraud Management application, ensuring that all strategies were production-ready and compliant with industry standards.
🔹 Scalability and Performance Enhancement with Cloud
Working extensively with AWS and Azure, I've implemented serverless workflows using AWS Lambda and Step Functions for efficient data processing. I also utilized Azure Blob Storage and Azure Data Lake to centralize data storage and enhance query performance, boosting data retrieval speed by 25% in critical projects.
🔹 Robust ETL Development
Throughout my roles, I've specialized in developing and maintaining robust ETL processes using tools like Apache Sqoop, Hadoop, and AWS S3, ensuring seamless data integration from multiple sources. This effort has led to more efficient, scalable data solutions that support business intelligence initiatives and advanced analytics.
🔹 Model Execution and Governance
At Charles Schwab, I enhanced execution frameworks within SAS Fraud Management and ensured rigorous governance on new models, supporting critical decision-making with accurate, timely data.
🔹 Empowering Collaboration Across Teams
I streamlined data exploration by integrating Jupyter Notebooks and creating standardized templates, making data analysis more accessible and collaborative.
I am excited to bring my expertise in AWS, Azure, SAS, and big data tools like Hadoop and Spark to new opportunities. If your team values data-driven insights, I'd love to connect!
#DataAnalytics #CloudTechnologies #SAS #BigData #DataVisualization #ETLDevelopment #OpenToWork #AWS #Azure #NewOpportunities
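The data-validation work described in the post above is the kind of thing a short Python script makes concrete. Below is a minimal, self-contained sketch, not the author's actual code: it tags rows that fail simple integrity rules so they can be quarantined before loading. The column names and rules are illustrative assumptions.

```python
# Minimal sketch of a data-validation pass like the one described above
# (not the author's actual code); column names and rules are assumptions.
import pandas as pd


def failed_rows(df: pd.DataFrame) -> pd.DataFrame:
    """Return rows violating basic integrity rules, tagged by failed check."""
    checks = {
        "missing_account_id": df["account_id"].isna(),
        "negative_amount": df["amount"] < 0,
        "stale_record": pd.to_datetime(df["updated_at"]) < "2020-01-01",
    }
    return pd.concat(
        df[mask].assign(failed_check=name) for name, mask in checks.items()
    )


sample = pd.DataFrame({
    "account_id": [101, None, 103],
    "amount": [250.0, 80.0, -15.0],
    "updated_at": ["2024-01-02", "2024-02-03", "2019-05-01"],
})
print(failed_rows(sample))
```

Keeping each rule as a named boolean mask makes the report self-documenting: every quarantined row carries the name of the check it failed.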
-
We're hiring for the "Azure Data Engineer" role in #Sydney. Candidates #must have #workauthorisation to work in #Australia.
Role: #AzureDataEngineer
Location: #SydneyNewSouthWalesAustralia
Job Type: Permanent
Job Description:
Qualification: Bachelor of Engineering (Information Science), Bachelor of Computer Application, Bachelor of Science (IT), Bachelor of Computing, or a higher degree in information/computer science.
Experience: 8-14 years
Expected Skills:
• Strong understanding of the #Azureenvironment (PaaS, IaaS) and experience working with a hybrid model
• At least one project experience in the #AzureDataStack involving components like #AzureDataLake, #AzureSynapseAnalytics, #AzureDataFactory, #AzureDataBricks, #AzureAnalysisService, #AzureSQLDWH, #AzureMachineLearning, #Eventhub
• Strong hands-on experience building generic Synapse ingestion frameworks
• Strong hands-on experience building Synapse data pipelines, both batch and real-time (see the sketch after this post)
• Strong hands-on SQL/T-SQL/Spark SQL and database concepts
• Strong experience with Azure Blob and ADLS Gen2
• Strong knowledge of Azure Key Vault, Managed Identity, and RBAC
• Strong experience with and understanding of DAX tabular models
• Experience in performance tuning, security, sizing, and deployment automation of SQL/Spark
• Good to have: knowledge of advanced analytics tools like Azure Machine Learning, Event Hubs, and Azure Stream Analytics
• Good knowledge of #DataVisualization tools such as Power BI
• Able to do code reviews per the organization's best practices
• Exposure to/knowledge of NoSQL databases
• Good hands-on experience with #AzureDevops tools
• Experience in a multi-site project model and client communication skills
• Strong working experience ingesting data from various data sources and data types
• Good knowledge of Azure DevOps and understanding of build and release pipelines
• Good knowledge of push/pull requests in Azure Repo/Git repositories
• Good knowledge of code review and coding standards
• Good knowledge of unit and functional testing
• Expert knowledge of advanced calculations using MS Power BI Desktop (Aggregate, Date, Logical, String, Table)
• Good at creating different visualizations using slicers, lines, pies, histograms, maps, scatter plots, bullets, heat maps, tree maps, etc.
• Exceptional interpersonal and communication (verbal and written) skills
• Ability to manage mid-sized teams and customer interaction
If anyone is interested, please share your resume with [email protected]
Note: sponsorship is not provided.
#Australiajobs #Sydneyjobs #Azurejobs #AzureDataEngineerjobs #DataEngineerjobs #AzureSynapsejobs #Pysparkjobs #Pythonjobs #SQLjobs #AzureDataBricksJobs
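For the batch side of the Synapse pipeline requirement above, here is a minimal PySpark sketch of an ADLS Gen2 ingestion step as it might run in Azure Synapse or Databricks. The storage account, container, paths, and columns are illustrative placeholders, and authentication setup is omitted.

```python
# Minimal PySpark sketch of a Synapse/Databricks batch ingestion step from
# ADLS Gen2; storage account, container, paths, and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adls_ingest").getOrCreate()

# Read the raw zone over the abfss:// protocol used for ADLS Gen2.
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://[email protected]/orders/2024/")
)

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("ingest_date", F.current_date())
)

# Write partitioned Parquet to the curated zone; overwrite keeps reruns idempotent.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("ingest_date")
    .parquet("abfss://[email protected]/orders/")
)
```

In practice the credentials for the storage account would come from Azure Key Vault or a Managed Identity, which is exactly why the posting lists those alongside the pipeline skills.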
-
Top 5 Benefits of Hiring a Data Engineer Consultant
#dataengineerconsultant #dataengineerconsultantancy
https://lnkd.in/e2PVa57t
In this article, we explore how a data engineer consultant can help optimize and normalize business data and improve data efficiency within an organization.