WHERE SHOULD AI SIT INSIDE AN ORGANIZATION? A STRATEGIC GUIDE TO NAVIGATING GEN AI
INTRODUCTION
I am extremely curious about Artificial Intelligence (AI) / Gen AI, have been consuming all I can, am testing new tools every week, and exploring, exploring, exploring.
What I know so far is that there are no absolutes, given how young this industry is. What I do know is that in a world increasingly influenced by technology, understanding Artificial Intelligence and its newest incarnation, generative AI (Gen AI), is pivotal. While traditional AI mimics human abilities like learning and problem-solving, Gen AI takes it a step further: it generates new content, such as text, images, and code, by learning patterns from existing data, and it can be applied to open-ended, real-world problems.
Once a company decides to adopt AI, especially Gen AI, a myriad of questions arise. Where should AI reside within the organization? Who is responsible for its deployment and management? How can we nurture its potential without courting undue risk?
Based on what I have learned so far, and focusing on companies as users of AI (versus those developing AI tools), this is my thinking...
Note: When I use the acronym AI, this is synonymous with Gen AI within this post.
WHERE SHOULD AI SIT WITHIN AN ORGANIZATION?
The Centralized AI Hub Model
Having a dedicated AI hub provides centralized governance and ensures that all AI activities align with the overarching company objectives. This is where AI tools and machine learning models can be optimized, refined, and deployed in a streamlined manner. Teams can collaborate more effectively under one roof, thereby enhancing the performance of AI systems.
Imagine an ed tech company that develops a variety of learning solutions, student services, and learning platforms. A centralized AI hub could harmonize data analytics, content recommendations, automated assessments, and at-risk-student identification across all these platforms. The hub would be a single point of expertise that manages machine learning models to predict student success, personalize learning paths, and even help educators tailor their teaching methods.
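To make that concrete, here is a minimal sketch of what one of the hub's shared models might look like, in Python with scikit-learn. Everything in it is hypothetical: the feature names and the toy data are placeholders for illustration, not a real platform's schema.

# Minimal sketch of a centralized at-risk-student model (hypothetical data).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# One row per student, aggregated across every platform the hub oversees.
students = pd.DataFrame({
    "logins_per_week":      [5, 1, 7, 0, 3, 6, 2, 4],
    "avg_assessment_score": [82, 54, 91, 40, 65, 88, 50, 73],
    "assignments_late":     [0, 4, 0, 6, 2, 1, 5, 1],
    "at_risk":              [0, 1, 0, 1, 0, 0, 1, 0],  # label: 1 = flagged at risk
})

model = GradientBoostingClassifier(random_state=0)
model.fit(students.drop(columns="at_risk"), students["at_risk"])

# Any product team can request a risk score from the same shared model.
new_student = pd.DataFrame({
    "logins_per_week": [1], "avg_assessment_score": [48], "assignments_late": [5],
})
print(model.predict_proba(new_student)[:, 1])  # probability the student is at risk

The value of centralization here is that the feature definitions and the model live in one place, so "at risk" means the same thing on every platform.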
Drawbacks
While the centralized model offers streamlined governance and expertise, it may struggle to address the unique needs of teams working on individual learning environments or disciplines. An AI algorithm that effectively assesses mathematics proficiency might not be equally competent in evaluating language arts skills. Therefore, the centralized model risks becoming disconnected from the nuanced requirements of specific educational contexts.
The Distributed Model
In a business context, employing a distributed AI model allows for specialized AI functions tailored to the distinct needs of individual departments. For instance, the marketing team might have its AI algorithms designed for consumer segmentation and personalized campaign strategies.
Advantages
Specialized Expertise: Having dedicated AI resources within the marketing department means that algorithms can be custom-designed for marketing-centric goals. For example, machine learning models could be trained on past campaigns' data to predict which strategies will produce the best ROI in future campaigns (see the sketch after this list).
Agility and Responsiveness: The marketing team can quickly tweak algorithms in response to real-time data or current events, like social media trends or market volatility, without requiring approval from a centralized AI hub. If a new program area suddenly becomes a trending topic online, the AI system could automatically shift advertising resources to capitalize on this visibility.
Data Sensitivity: A distributed model can be more attuned to department-specific data privacy issues, and its AI resources can be more easily trained to adhere to those privacy considerations.
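As an illustration of the Specialized Expertise point above, a marketing-owned model might start as nothing more than a regression over historical campaign data. The sketch below (Python with scikit-learn) uses invented features and numbers purely for illustration:

# Hypothetical history of past campaigns owned by the marketing team.
import pandas as pd
from sklearn.linear_model import LinearRegression

campaigns = pd.DataFrame({
    "spend_usd":   [5000, 12000, 8000, 20000, 3000, 15000],
    "email_sends": [10000, 40000, 25000, 60000, 8000, 50000],
    "social_posts": [12, 30, 20, 45, 8, 35],
    "roi":          [1.8, 2.4, 2.1, 2.9, 1.5, 2.6],  # revenue / spend
})

model = LinearRegression()
model.fit(campaigns.drop(columns="roi"), campaigns["roi"])

# Score a proposed campaign before committing budget.
proposal = pd.DataFrame({"spend_usd": [10000], "email_sends": [30000], "social_posts": [25]})
print(model.predict(proposal))  # predicted ROI for the proposed campaign

The point is not the specific algorithm; it is that the marketing team owns the features, the training data, and the retraining cadence, so the model can change as fast as their campaigns do.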
Drawbacks
Inconsistency and Data Silos: One of the major drawbacks of a distributed model is the risk of inconsistent methodologies and metrics across departments. If the marketing team uses different tools, algorithms, or data sets than the admissions or operations teams, this can lead to incongruent strategies and inefficiencies.
Cost and Resource Duplication: Each department maintaining its own set of AI tools and personnel can be expensive and could lead to a duplication of efforts. In addition, there is the possibility that predictive models and insights could be limited as the data sets may not be connected.
Governance and Ethical Concerns: When AI is distributed, oversight can become complicated. Without centralized governance, ensuring that each department's AI models meet compliance standards and ethical guidelines becomes a more complex task.
In summary, a distributed model offers specificity and agility, particularly beneficial for dynamic fields like marketing. However, it also poses challenges in terms of consistency, cost-effectiveness, and governance, which organizations need to carefully consider.
The Hybrid Model
The hybrid model merges the best features of both centralized and distributed models. Typically, a central AI hub could be responsible for broad governance, ethical considerations, and data management. Meanwhile, department-specific AI teams would focus on customized solutions, applying specialized expertise to tackle unique challenges.
In a healthcare organization, a central AI hub could oversee compliance with medical data regulations, while specialized AI teams in radiology, cardiology, and other departments could develop algorithms tailored to their specific needs.
WHO SHOULD BE RESPONSIBLE FOR AI?
Based on what I have learned so far, three key executives should be leading the AI charge:
C-Level Ownership: CEO, CTO, and Chief Strategy Officer (CSO)
Involvement from the CEO ensures that AI is fully integrated into the company's overarching goals. The CTO focuses on the technical aspects, making sure the AI technology is robust, scalable, and secure. The Chief Strategy Officer (CSO) plays a vital role in aligning AI initiatives with long-term business strategies, ensuring a sustainable competitive advantage.
The Dedicated AI Team
This team would operate on a more tactical level, consisting of data scientists, machine learning engineers, and domain-specific experts. For example, an e-commerce company could have a dedicated AI team working on personalized recommendations, utilizing machine learning algorithms to analyze shopping behavior, thereby increasing sales and customer satisfaction.
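For a flavor of what such a team builds, here is a minimal item-to-item collaborative filtering sketch (Python with NumPy). The purchase matrix is a made-up toy example, and real recommendation systems are considerably more sophisticated:

import numpy as np

# Rows = customers, columns = products; 1 = purchased. Toy data for illustration.
purchases = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 1, 0],
])

# Item-item cosine similarity: products bought by the same customers score high.
norms = np.linalg.norm(purchases, axis=0)
similarity = (purchases.T @ purchases) / np.outer(norms, norms)
np.fill_diagonal(similarity, 0)  # a product should not recommend itself

# Recommend for customer 0: score unseen products by similarity to past purchases.
customer = purchases[0]
scores = similarity @ customer
scores[customer == 1] = -np.inf  # exclude items already purchased
print(np.argsort(scores)[::-1][:2])  # indices of the top-2 recommended products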
External Partnerships
My sense is that external partnerships will be critical for organizations to truly leverage AI. An example of this would be a financial institution partnering with an AI analytics firm to create a state-of-the-art fraud detection system. This collaborative approach brings in external expertise, offering fresh perspectives that may not be available internally.
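At the heart of most fraud detection systems is some form of anomaly detection over transaction features. As a toy-scale illustration only (Python with scikit-learn, invented transactions), the core idea can be sketched in a few lines:

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transactions: [amount_usd, hour_of_day, merchant_distance_km]
transactions = np.array([
    [25.0, 12, 2.0],
    [40.0, 14, 5.0],
    [18.0, 9, 1.0],
    [33.0, 18, 3.0],
    [22.0, 11, 4.0],
    [9500.0, 3, 800.0],  # outlier: large amount, odd hour, far from home
])

# Isolation Forest flags points that are easy to isolate, i.e., anomalous.
detector = IsolationForest(contamination=0.2, random_state=0)
labels = detector.fit_predict(transactions)  # -1 = anomaly, 1 = normal
print(labels)

A partner firm brings the hard parts that a toy example hides: feature engineering at scale, labeled fraud cases, and adversaries who adapt.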
Also, given the talent war that is likely to ensue around corporate AI, finding external partners will be critical. What I am curious about here is where these services will come from: the typical Tier I and Tier II consulting companies, or an entirely new breed of partners that has yet to emerge. This will be important to watch.
BALANCING INNOVATION AND RISK IN A NASCENT FIELD
Ethical Boundaries and Guidelines
An ethical framework becomes increasingly necessary as AI takes on more decision-making roles. Organizations will need to consider setting up an ethics committee or making it part of their existing compliance groups. This committee could be tasked with regularly reviewing the ethical implications of the company's AI initiatives, ensuring that they align with both internal values and societal norms.
Regulatory Landscape
To date, AI has been somewhat of the Wild West when it comes to regulation (at least in the US). This week, though, major technology executives including Tesla CEO Elon Musk, Meta CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai, former Microsoft CEO Bill Gates, and Amazon senior vice president David Limp met with U.S. senators in a private forum in Washington, D.C. to discuss potential regulation of AI systems. According to the Associated Press, the tech leaders broadly endorsed some form of AI regulation during the discussion led by Majority Leader Chuck Schumer and Senator Mike Rounds. A few key points from the discussion:
Specific ideas discussed included an independent agency to oversee AI, improving transparency, and keeping the U.S. competitive versus China.
Issues debated included algorithmic bias, existential risk from AI, licensing for high-risk systems, and open source vs proprietary models.
Some senators argued the meeting should have been public, while tech firms seek to shape any regulations to enable innovation.
Schumer aims to introduce AI legislation soon, but there is little consensus on what regulation should look like.
The EU recently passed comprehensive AI rules focused on risk levels that some technologists worry will stifle innovation.
Overall, big tech companies seem open to U.S. regulation but want to ensure it allows rapid advancement of AI technology.
Regardless of how companies choose to embrace AI, or where they park it inside the organization, they must pay close attention to, and engage with, regulatory and governance issues, which will continue to evolve.
CONCLUSION
The strategic adoption and placement of Gen AI within an organization require more than just understanding its capabilities. Where AI and Gen AI are placed, who oversees them, and how we approach their risks and rewards can make a significant impact on the organization's future. It will be critical not to set up a structure that stifles innovation, while at the same time developing an oversight model that will ensure compliance, collaboration, and flexibility.