AI is a hot topic! ChatGPT, the popular AI chatbot, broke records for user adoption, reaching one million users in just five days. AI adoption has doubled in the last five years, giving rise to more calls for AI ethics and governance.
There’s no denying that AI tools serve exceptionally well for emulating human behavior on tasks that can be represented systematically: repetitive, predictable tasks that follow a fixed workflow. The risk rises sharply when machine intelligence, which lacks human cognitive abilities such as emotional nuance and a risk-averse inclination, is used to perform complex tasks.
Let’s take a look at what happens when business and AI meet. It’s not all bad news, but it does take a lot of foresight and consideration to adopt AI in an ethical and sustainable way.
Consider the learning approach used by modern machine learning (including deep learning) algorithms. A machine learning model is effectively a black-box system that learns trends and patterns in data: it characterizes the relationship between the given input data and the corresponding system behavior. Once the model is trained, it can approximate the system’s behavior for new data inputs.
As a simple example, if you train a computer vision model on images of cats with correct labels, it will learn to classify new cat images with high accuracy. But how can you explain this behavior? Black-box models are inherently unexplainable.
While AI models can classify data patterns correctly, the process may not be interpretable or understandable. After all, an AI model is, in simple terms, a set of mathematical equations that approximately represents the relationship or behavior of a system.
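To make the black-box point concrete, here is a minimal sketch (a toy logistic regression rather than a deep network, with made-up synthetic data) of a model learning a hidden rule purely from labeled examples. The trained model classifies accurately, yet its "knowledge" is nothing more than a few fitted numbers:

```python
import math
import random

# Synthetic data: 2-D points whose labels follow a hidden rule
# (x + y > 1.0) that the model must infer from examples alone.
random.seed(0)
data = [(random.random(), random.random()) for _ in range(500)]
labels = [1.0 if x + y > 1.0 else 0.0 for x, y in data]

# Train a logistic-regression "black box" with full-batch gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):
    gw0 = gw1 = gb = 0.0
    for (x1, x2), t in zip(data, labels):
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        gw0 += (p - t) * x1          # gradient of the log-loss
        gw1 += (p - t) * x2
        gb += p - t
    n = len(data)
    w[0] -= lr * gw0 / n
    w[1] -= lr * gw1 / n
    b -= lr * gb / n

# The trained "model" is just three fitted numbers. They encode the
# input/output relationship but offer no human-readable rationale.
correct = sum(
    1 for (x1, x2), t in zip(data, labels)
    if (1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b))) > 0.5) == (t == 1.0)
)
accuracy = correct / len(data)
print(f"accuracy={accuracy:.2f}  weights={w[0]:.2f},{w[1]:.2f}  bias={b:.2f}")
```

Even in this tiny case, the model's output is driven entirely by the learned weights; asking "why did it classify this point as a cat?" has no answer beyond "the equations said so", which is exactly the explainability gap described above.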
Using artificial intelligence in a business setting is a lot more complicated than accurately classifying cat images. If you’re relying on black-box outputs and outcomes for your business operations, you can’t explain, defend or justify those operations.
Another key element of AI ethics and governance goes beyond the technology itself: it is focused on business leaders and the workforce.
For any organization aiming to replace or augment the human workforce in solving complex business problems, AI ethics and governance must be operationalized.
(Imagine what generative AI, like ChatGPT, means for cybersecurity: it's risk and reward.)
This is an important question facing business leaders who are inspired by the recent progress of AI technologies but also skeptical about the risk implications of an AI going rogue — or not being sufficiently human-like.
Most businesses start with overarching PR statements that range from “we will never sell your data” to “user safety is our priority” and “our tools are designed to serve all customers equally, free of discrimination”. But to a black-box AI system making the decisions, the concepts of safety and ethics may not hold the same value unless it is specifically trained on them.
To address these limitations, you can develop an operationalized and sustainable AI ethics and governance program built on these principles.
Start by measuring your AI progress. Put a quantifiable number on the scale of impact when transitioning to an AI-first strategy. Perform a qualitative analysis of how that transition affects your regulatory compliance and ethical responsibility toward society.
Model and forecast AI progress as you scale your business, grow your user base and adopt AI tools for operational tasks previously conducted by a human workforce.
Is it safe to simply replace your workforce with an AI tool? Consider the applicable compliance regulations—will you still meet existing industry compliance requirements if a human is not involved?
Consider augmenting your human workforce with AI tools, gathering real-world data on safety metrics and gradually expanding the scope of your AI adoption.
Explore the ethical aspects of AI adoption.
Define what AI ethics entails for your organization. Establish a process that specifically vets for these limitations and unexplainable output of the AI system.
(Related reading: Shadow AI)
The healthcare industry is a prime example of driving automation across ethically sensitive aspects of its operations: it has long focused on governing the use of data and automation from an end-user privacy perspective. Create an ethical framework that articulates these standards and measures the ongoing effectiveness of your quality assurance and risk mitigation programs.
Every organization faces different challenges when it comes to AI ethics. Identify the KPIs and metrics most relevant to your own industry, organizational culture and user base. A robust framework clearly outlines how your data pipeline – from data acquisition to integration with third-party AI tools and the output produced by your AI algorithms – should account for deviations and anomalies that constitute an ethical risk.
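As an illustration of one such pipeline check, the sketch below flags a group-level deviation in model outcomes. The 0.8 threshold (borrowed from the "four-fifths" rule of thumb used in US employment guidance) and the sample decision data are illustrative assumptions, not a prescribed standard:

```python
def disparity_check(outcomes, threshold=0.8):
    """Flag groups whose favorable-outcome rate deviates from the norm.

    outcomes maps a group name to a list of 0/1 model decisions
    (1 = favorable). A group is flagged when its rate falls below
    `threshold` times the best-served group's rate.
    """
    rates = {group: sum(d) / len(d) for group, d in outcomes.items()}
    top = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * top}
    return rates, flagged


# Hypothetical decisions from a model, grouped by user segment.
rates, flagged = disparity_check({
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1],   # 75% favorable
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # 37.5% favorable
})
print(flagged)   # group_b falls below 80% of group_a's rate
```

In practice a check like this would run continuously against pipeline output and feed the KPIs your framework defines, so that an ethical-risk anomaly surfaces as an alert rather than a surprise.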
To better understand the importance of AI ethics in business, we spoke with Will Scott, an instructor at Pragmatic Institute. Will is a seasoned executive with over 25 years of international experience leading product marketing and management teams across hardware, software, services, and SaaS B2B companies. At Pragmatic Institute, Will teaches a range of courses across various product functions, including intermediate and advanced AI workshops. Will has taught over 300 product professionals the importance of integrating AI prompt engineering into their work.
In this section, we've included Will's responses to our prompts.
A significant challenge for business professionals lies in the misconception that integrating artificial intelligence into their workflow is a straightforward solution that can automate complex decision-making processes. Organizations must recognize that AI tools, while powerful, require a sophisticated level of oversight.
It is crucial for professionals to grasp both the capabilities and limitations of AI. While AI shines in areas like data analysis and content generation, it may falter in tasks that require emotional intelligence or a nuanced understanding of context. Additionally, a unique challenge posed by AI is the phenomenon of "hallucinations," where AI might confidently generate incorrect or misleading information. As we teach in our Generative AI classes: AI will always answer your questions, but the user of AI has a responsibility to always question those answers.
Ethical considerations in the realm of AI are yet to be fully resolved. One primary concern is the ownership and rights related to the data training the large language models, the ownership of the data that is provided via prompts to the system as well as the content generated in response to prompts.
Before using these technologies, professionals and organizations must carefully consider the ethical and legal challenges and considerations related to data ownership, rights concerning the input and output of large language models, and the varying policies of AI service providers. These considerations are essential in navigating the complexities of AI integration responsibly.
This digital era has significantly transformed how people interact with information, often seeking out content that aligns with their pre-existing beliefs. My worry is that as generative AI becomes more sophisticated, its ability to create hyper-realistic but fake content could exacerbate these biases, leading to an increased polarization.
It is not just about the technology's capability to generate realistic images, videos, or texts; it is about the impact these creations can have on society's collective understanding of truth and reality. In the realm of business, the content generated may often feed into your brand representation, thought leadership and innovations. Therefore, organizations have a responsibility to make critical evaluation and verification a part of their decision-making processes.
Rule #1: AI will not replace you, but a person who knows how to use AI ethically, in the context of the job, will. It is no longer sufficient for a business professional to have a passing familiarity with this technology; an informed and educated position is a must.
Rule #2: The AI you are using today is the least capable AI you will ever use. I am old enough to have been around for the launch of the internet. This is like that, but at 10x the speed. It is just remarkable. I personally have never seen a technology evolve at such a breakneck pace.
Rule #3: The most important usage rule we emphasize at Pragmatic Institute is this: AI will always give you answers to your questions. Your job is to always question the answers. That is where your value will come from.
Rule #4: This is not going away. A recent report found that generative AI tools are the number one offender on the list of shadow IT applications in use. Whether or not your company policy allows the use of these technologies, your employees will still find a way to use them.
Which leads me to my last tip...
Rule #5: Foster a culture of continuous learning. AI is a rapidly evolving field, and to stay competitive, your team must be committed to ongoing education and skill development. Encourage your employees to attend workshops, pursue relevant certifications, and stay updated on the latest advancements in AI technologies. This investment in learning will ensure that your organization remains agile and well-equipped to leverage AI effectively.
Creating the right vision requires a systematic approach to dealing with the problem of AI ethics and governance, ongoing training and education, and executive support to govern the scope of AI adoption.
See an error or have a suggestion? Please let us know by emailing [email protected].
This posting does not necessarily represent Splunk's position, strategies or opinion.
The Splunk platform removes the barriers between data and action, empowering observability, IT and security teams to ensure their organizations are secure, resilient and innovative.
Founded in 2003, Splunk is a global company with over 7,500 employees; Splunkers have received over 1,020 patents to date, and the platform is available in 21 regions around the world. Splunk offers an open, extensible data platform that supports shared data across any environment so that all teams in an organization can get end-to-end visibility, with context, for every interaction and business process. Build a strong data foundation with Splunk.