𝐘𝐨𝐮𝐫 𝐃𝐚𝐭𝐚, 𝐓𝐡𝐞𝐢𝐫 𝐀𝐈: 𝐇𝐨𝐰 𝐘𝐨𝐮 𝐁𝐞𝐜𝐚𝐦𝐞 𝐚 𝐆𝐮𝐢𝐧𝐞𝐚 𝐏𝐢𝐠—𝐚𝐧𝐝 𝐇𝐨𝐰 𝐭𝐨 𝐎𝐩𝐭 𝐎𝐮𝐭?

The way we use #socialmedia today is rapidly evolving, and it's going to look very different in the near future. The day when AI seamlessly interacts with humans is closer than we think. How will this shift occur? Through you: your data, your posts, your conversations, and even your photos.

Did you know that what you share on social media is increasingly being used by companies to train their #ArtificialIntelligence #AI systems? It's not just your words; your images, slang, and online behavior are all being collected, and you might not be aware of it. For instance, #LinkedIn uses user resumes to refine its AI, while #Meta gathers posts containing informal language to improve its AI tools.

Here's the reality: if you're posting content publicly, there's no guarantee that third parties won't use it for their own purposes, often without your explicit consent. At the very least, it's important to be aware of this.

Don't believe it? Here's a look at how some major social media platforms are using your data to train AI models, and whether you can opt out:

LinkedIn: This week, LinkedIn introduced an option to opt out of having your data used to train its generative AI models. To do so, go to "Settings & Privacy," select the "Data Privacy" tab, then "Data for Generative AI Improvement," and toggle the button off. LinkedIn may still use data for AI purposes with affiliates such as Microsoft and the Microsoft-backed OpenAI, but it aims to redact personal information from training datasets.

X: #ElonMusk's platform X also requires users to opt out if they don't want their posts used to train the AI chatbot #Grok, which has faced criticism for spreading misinformation and generating graphic content. To opt out, navigate to "Settings," then "Privacy and Safety," and under "Data Sharing and Personalization," uncheck the box for "Grok."

Snap Inc.: #Snapchat's "My Selfie" feature allows users to create AI-generated images from their selfies, and users must opt in to use it. To prevent your selfies from being used in ads, go to "Settings," then "My Account," select "My Selfie," and toggle off "See My Selfie in Ads."

Meta: Meta acknowledges that it uses public posts from #Facebook and #Instagram to train its AI #chatbot. To prevent this, set your account to private. Private messages are not used for AI training.

𝐃𝐢𝐝 𝐭𝐡𝐢𝐬 𝐩𝐨𝐬𝐭 𝐦𝐚𝐤𝐞 𝐲𝐨𝐮 𝐫𝐞𝐜𝐨𝐧𝐬𝐢𝐝𝐞𝐫 𝐲𝐨𝐮𝐫 𝐝𝐚𝐭𝐚 𝐩𝐫𝐢𝐯𝐚𝐜𝐲 𝐬𝐞𝐭𝐭𝐢𝐧𝐠𝐬? 𝐇𝐚𝐯𝐞 𝐲𝐨𝐮 𝐭𝐚𝐤𝐞𝐧 𝐬𝐭𝐞𝐩𝐬 𝐭𝐨 𝐨𝐩𝐭 𝐨𝐮𝐭 𝐚𝐟𝐭𝐞𝐫 𝐫𝐞𝐚𝐝𝐢𝐧𝐠 𝐭𝐡𝐢𝐬? 𝐋𝐞𝐭 𝐮𝐬 𝐤𝐧𝐨𝐰 𝐢𝐧 𝐭𝐡𝐞 𝐜𝐨𝐦𝐦𝐞𝐧𝐭𝐬!

𝐒𝐭𝐚𝐲 𝐢𝐧𝐟𝐨𝐫𝐦𝐞𝐝 𝐚𝐧𝐝 𝐭𝐚𝐤𝐞 𝐜𝐨𝐧𝐭𝐫𝐨𝐥 𝐨𝐟 𝐲𝐨𝐮𝐫 𝐝𝐢𝐠𝐢𝐭𝐚𝐥 𝐟𝐨𝐨𝐭𝐩𝐫𝐢𝐧𝐭. 𝐓𝐡𝐞 𝐟𝐮𝐭𝐮𝐫𝐞 𝐨𝐟 𝐀𝐈 𝐢𝐬 𝐡𝐞𝐫𝐞, 𝐚𝐧𝐝 𝐲𝐨𝐮𝐫 𝐜𝐡𝐨𝐢𝐜𝐞𝐬 𝐦𝐚𝐭𝐭𝐞𝐫.
Mahboob Ali’s Post
More Relevant Posts
-
LinkedIn faced backlash for using user data to train AI before it updated its terms of service. The real issue wasn't the data usage but the lack of transparency and proper communication, as highlighted by Robert Rose. This underscores the critical need for companies to prioritize clear communication and coordination when implementing AI technologies, something I believe is essential for maintaining user trust and engagement. #AI #DataPrivacy #Communication #UserTrust #DigitalTransformation
-
With AI becoming more and more prevalent and accepted in the mainstream, its presence on the LinkedIn platform is no surprise; however, the extent of that presence is...

This recent study found that 54% of posts on LinkedIn showed signs of being AI-assisted, meaning it's more likely than not that a post has at the very least been 'polished' using an AI tool, if not written in its entirety by an AI chatbot.

In a world where technology is increasingly adopted to assist with all aspects of our day-to-day lives, I personally think that for platforms like LinkedIn, which exist to promote the ability of individuals and businesses to connect and share news and professional opinions on a more personal level, the use of AI is contradictory. I would rather not post than post using AI... happy to hear a contradictory opinion though!

#AI #socialmedia #technology #oilandgas
How LinkedIn opened the door to AI slop
fastcompany.com
-
It looks like LinkedIn has hit pause on using UK user data for AI training after facing complaints from the ICO. It seems all the posts and complaints on the platform about this unsolicited decision really made a difference. Walking back the choice to opt everyone into AI training without clear consent shows that transparency and trust have to be at the core of these advancements. Users deserve a say in how their data is used and monetised. Good to see LinkedIn reassessing their approach, but it’s a lesson for all platforms: user data isn't just a commodity. It’s about trust, and that needs to be earned. #AI #DataPrivacy #TrustInTech #LinkedIn https://lnkd.in/exke6jju
LinkedIn suspends use of UK data for AI after watchdog questions
bbc.com
-
I'm seeing people share how to stop LinkedIn from using your personal data and content to train its AI - that's really good knowledge to spread, as LinkedIn haven't communicated this very well. You opt out via a setting (Settings & Privacy > Data Privacy > Data for Generative AI Improvement).

BUT what is confusing for many people here in Europe is that this setting just isn't there. Why? ➡️ Because the option is always off for us:

"We are initially making this setting available to members whose profile location is outside of the EU, EEA, UK, or Switzerland. If you live in these regions, we and our affiliates will not use your personal data or content on LinkedIn to train or fine-tune generative AI models for content creation without further notice."

For everyone else in the world, the default is to be opted in, probably without you realising it. https://lnkd.in/dMbrBQD4
LinkedIn and generative AI (GAI) FAQs
linkedin.com
-
The case of #LinkedIn now stating that it will not use UK LinkedIn users' data for training its AI is a good start for protecting user data.

If AI is to be used for solving important challenge statements, the data sets used to train AI and create models need to be proprietary. LinkedIn, like other big tech platforms such as #YouTube, is built to continuously monetize through selling adverts and selling user data. This means that natural search and recommendations are now in the control of big tech, and the engineers they employ are solving problems that do not exist.

Proprietary data sets owned by enterprises and governments retain value that can unlock benefits for all stakeholders, from customers and citizens to the supply chain and many other parts of their operations. #AI tech is designed for solving complicated challenge statements, where data sets are large, unstructured, multi-modal, and require parallel processing. Some challenge statements require real-time outcomes (e.g. every 30 seconds), others perhaps hourly, daily, or weekly ones.

At the moment, the #bigtech players seem to be prioritizing platforms that add very little value, while driving the narrative of how AI should be used. Enterprises need to start using AI to experiment on their proprietary data sets and become less reliant on big tech platforms. Shadow AI is creeping into enterprises and will scale unless enterprises create proprietary AI tools trained on their own data.

https://lnkd.in/dd5xTZHN
LinkedIn suspends use of UK data for AI after watchdog questions
bbc.com
-
Many organizations are using your data to train their AI models. LinkedIn apparently is one of them (by default). This Mashable article explains how to opt out. https://lnkd.in/dQG-KXkw

It seems somewhat counterintuitive to post this on LinkedIn, but this is the topsy-turvy AI world we live in. I wonder what the 'algorithm' is going to do about promoting this post 🤷♂️ #LinkedInAI
LinkedIn is using your data to train AI. Here's how to turn it off.
mashable.com
-
Meta have launched a new feature on Facebook and Instagram that applies a 'Made with AI' tag any time their systems spot markers in an image that they believe are tied to AI.

Now, in principle, you might think this sounds like a good thing. It might stop people believing all these ridiculous deepfake-style images being posted, or pictures of cute animals or landscapes that don't actually exist. I agree, flagging those images is important. Here's the problem: the system is taking it way too far, and in doing so it's going to mean the whole thing gets ignored in the end.

Let me give you an example. Let's say I'm editing a lifestyle image of a group of people enjoying some food in a restaurant. For decades now, an image like this will always have been cleaned up, from basic stuff like contrast and colour through to fine details like spot removal; it's pretty normal. Over the last few years, processes like spot removal have developed to use AI to better predict what the area would look like without the spot; it's common sense to do it that way. The same goes for removing something like a fire extinguisher somewhere in the background of an image. Sure, I could take half an hour trying to awkwardly paint over and recreate it manually, or I could circle the area and ask the tool to fill it in for me using AI.

In this example, is the image 'Made with AI'? I would argue that it isn't. It's being edited in a non-deceptive way that has been done for decades, just with tools that now happen to include elements of AI in their processing to help. Let's say you were making a meal at home and you quickly popped onto ChatGPT to ask how to get your rice a little fluffier. Would that meal then be 'Cooked by AI'? No, of course not; you've just used something that's quicker to access than finding a website or a recipe book reference, as a tool available these days.

Just the other day, my sister posted a picture of my niece that I took. It was flagged as 'Made with AI'. She asked me if I had edited my niece with AI, as that's something she's really not keen on at all, and understandably so. I was confused; I knew I hadn't used any sort of AI filters or anything, so why was it coming up like this? Then I realised: I had used Adobe's Generative Fill feature to remove two areas of rust on a wooden door behind her. That was enough for the image to be flagged.

My hope is that Meta will quickly learn from a pretty badly thought-out rollout of this feature, and find a way to distinguish whether the AI edits being made have a meaningful effect on the image (in a deceptive way), or whether they're just being used for basic and harmless cleanup. If they don't sort it out, almost everything posted will end up tagged, at which point there's again zero differentiation between genuine photography and AI-generated art posing as photography.
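For the technically curious: the 'markers' in question are typically bits of metadata that editing tools embed when generative features are used, such as the IPTC 'trainedAlgorithmicMedia' digital source type or C2PA content credentials. Below is a purely illustrative Python sketch of what checking a file for such markers might look like; Meta's actual detection logic isn't public, and the marker list (including the 'GenerativeFill' string) is an assumption for demonstration only.

    import sys

    # Purely illustrative: scan an image file's raw bytes for metadata strings that
    # editing tools sometimes embed after generative-AI edits. This is NOT Meta's
    # actual detection logic (which isn't public); the marker list is an assumption.
    SUSPECT_MARKERS = [
        b"trainedAlgorithmicMedia",               # IPTC digital source type for AI-generated media
        b"compositeWithTrainedAlgorithmicMedia",  # IPTC type for composites containing AI elements
        b"c2pa",                                  # C2PA content-credentials manifest data
        b"GenerativeFill",                        # hypothetical marker name, for illustration only
    ]

    def find_ai_markers(path: str) -> list[str]:
        """Return any suspect marker strings found in the file's raw bytes."""
        with open(path, "rb") as f:
            data = f.read()
        return [m.decode() for m in SUSPECT_MARKERS if m in data]

    if __name__ == "__main__":
        hits = find_ai_markers(sys.argv[1])
        print("Possible AI-edit markers:", ", ".join(hits) if hits else "none found")

In principle, running something like this over a JPEG exported after a Generative Fill edit could surface the embedded credentials, and stripping that metadata may be one reason some AI-edited images never get flagged at all.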
-
In our fast-paced industry, it's crucial to know how to communicate effectively with #AI. Asking the right questions can transform AI into a powerful ally, enabling us to better understand data, predict trends, and engage customers. 🤖 We've put together a must-read article, "How Performance Marketers Should Talk to AI," which breaks down how specific queries can lead to precise insights, saving time and boosting campaign success. 🔗 https://bit.ly/3vTvfpp #performancemarketing #DataDriven #MarketingAI
How Performance Marketers Should Talk to AI
https://fluentco.com
-
Google's latest AI competitor - here's what you need to know:

OpenAI, the company behind ChatGPT, has revealed a prototype for a search engine system called SearchGPT. The system is planned to be integrated within ChatGPT itself, so all your AI needs are under one roof. And we're sure you've already seen Google's AI assistant, Gemini, which makes searching super easy by summarising the most relevant answers to your questions at the top of the Google page.

If you're thinking what we're thinking, you'll want to know if this is bad news for PPC and SEO...

If the search engine is already sifting through pages of information and placing the key points in a neat little package at the top of the screen, it will most likely save you from clicking on a tonne of websites. But obviously, this means that users are receiving information from a site without ever actually visiting it, hence reduced site traffic. PPC and SEO may start to become slightly less valuable as AI increasingly does the work for us.

However, an alternative view is that this actually adds to the competition and means that having your page at the top of the list is more important than it has ever been; if users are spending less time searching for the right page and are increasingly relying on AI to do it for them, making it as easy as possible for someone to click on your page is a must.

How do you see the future of PPC and SEO being affected by AI? We'd love to hear your thoughts in the comments.

And while we actually wrote this post ourselves (sorry ChatGPT), we've also got some pointers on how to get the best out of AI, with the do's and don'ts for writing prompts - head to the article below.

#AI #MarketingInsights #PPC #SEO
The Do's and Dont's of Writing Prompts for AI - Yours Sincerely
http://yourssincerely.online
-
A very good article about the importance and evolution of prompting as a skill to effectively use AI platforms. This is good advice, and not just for Marketers!
AI prompting for marketers—lessons on how to best use generative technology
adage.com