ChatGPT can search the web and cite sources! But people don't read linked articles anymore. And LLMs hallucinate. And the web is SEO'd to hell and back. Soooo... I really wonder what effect this will have on us as a society.

We are currently seeing the emergence of agents that are supposed to act on our behalf. But because of the factors I listed above, we're **not at all** at the point where we can just ask our computers a question and get a reliable answer back. You're getting information supplied by the highest bidder or the highest spender. Let's not fool ourselves for a second: OpenAI will make sure to get paid for the search results it uses to present information, like every other search engine out there.

If the systems we have built for ourselves cannot be trusted to give us high-value information, does that mean we are almost back where we were before the web got commercialized, when we all had a love-hate relationship with cable TV and its ads? Have we come full circle, back to reading people's personal web pages so that you know you are getting information from a source you have decided to trust?

I don't usually post about this type of thing. I'm more about #Kubernetes and #PlatformEngineering and #CompassionateLeadership. But please follow me anyway -- in this strange new world, I at least promise to always write my own posts without AI, and a lot of people seem to find them valuable. Thanks! // Lars
And with robots taking over a lot of manual labour, what will the future look like? For example, no more taxi drivers, or even just drivers. Fewer real chefs... who will know how to cook food and prepare a meal from scratch? Automated check-in at your hotel, no people. Where's the friendly face of a real person? Automated telephone answering systems already do away with people; you can hardly get hold of a real person anymore. Chatbots on web pages mean you might not be able to reach a real person either. And it goes on. Eventually, will there be any people in work at all? Will the machines take over so that people are spoon-fed everything? Or will we have to get creative and embrace a future with servant robots and AI where we either get lazier or more productive? Just some thoughts.
Apparently, human skills are going to become expensive at the pace AI is taking over. After all, AI-produced content and agents will become too generic or commercial. The same way we appreciate handmade goods and handicrafts now that the industrial revolution has replaced them with machine-made ones, I'm just wondering if at some point "human-made" content, programs, or analysis will be appreciated the same way. Or would that be obsolete, or considered a novelty perhaps? I think errors and mistakes are what make us human; it's tempting to remove those, but at what cost? And then there's our privacy: long before AI entered the race, the algorithms were constantly trying to keep us hooked and turn us into adrenaline junkies, and this AI revolution is apparently not doing any good; in some ways it is aiding those algorithms. Dark times ahead, but let's see where this "development" leads us.
First you need to define "high value" and whether it can be objectively obtained or not. Somebody needs to pay for search, either through ads or a fee. Even with a fee, is that knowledge any more valuable than knowledge funded by ads? For better or worse, Google beats every other search engine and has done so for a long time. Unless knowledge is obtained through some means other than indexing web pages, this is the way we are going.
A darker truth, however, may lie beneath the surface. It is possible that unseen hands are manipulating the very fabric of the digital realm, orchestrating a grand illusion to control the masses. To reclaim the promise of the digital age, we must not only demand transparency and accountability from those who wield its power, but also develop a critical eye and a discerning mind.
In answer to your question, imo: yes, and yes. For society as a whole? The prognosis is not good. It appears increasingly likely that the self-generated label of "Homo sapiens" is largely aspirational.
Hi, I think there will be great AIs that are trained on a specific subset of data and that you can get reliable answers from, like an AI that only uses government docs as a source for questions. If you do a general search, the pay-to-win dynamic will make that feature almost useless, and/or you will have to prompt it like: "search Wikipedia for information about the 2012 presidential election".
I like the new feature. I asked how to learn X in depth and got high-quality YouTube videos teaching it in depth. Saved a few hours of searching for “the best” source.
Very relevant observation about the paid positioning. These services operate under the same conditions as everything else.
Then I think you'll probably want to try LIORA.chat, where you control the library you discuss with and explore. Try it and hit us up if you like it :) Take care and have a fantastic weekend.
Genuine question here, because I've thought about attribution in AI governance programmes recently: Do AI users care about attribution, or do they typically just want reliable answers?