PART 3 (a question I asked ChatGPT)

Context and Variation: Sentences are often context-dependent and can be modified by tone, register, and intention. A sentence that makes sense in one context may not in another, leading to even more possibilities.

2. A Finite Number of Chemical Elements?
In contrast, chemical elements are a finite set of fundamental substances that combine in different ways to form compounds. The periodic table lists 118 elements, and these elements combine through chemical reactions to create molecules. While the possible combinations of these elements are vast (think of the trillions of different compounds and molecules that can exist), the total number of distinct elements themselves is finite. However, as with language, there is still an almost infinite variety of substances, because compounds can be formed in complex and varied ways. This variety is vast, but it is still constrained by the number of elements.

3. Key Differences Between Language and Chemistry:
Creativity and Novelty: Language has creative potential that far exceeds chemistry's, because humans can generate an unbounded number of sentences, many of which express new or unique ideas. In contrast, even though chemical compounds can be varied and numerous, the building blocks (elements) are finite, and compounds are constrained by chemical laws.
Combinatorial Possibilities: While both language and chemistry rely on combinations (of words and of elements), language has an element of recursion and creativity that chemical reactions do not. For example, new words or phrases can be invented and understood by speakers of a language, even if they have never been encountered before, as long as the underlying rules of the language are followed.
The Role of Context: Sentences are inherently context-dependent, which adds another layer of complexity. A sentence may be constructed with a particular intention, tone, or nuance that changes its meaning depending on the situation. Chemical compounds, on the other hand, tend to behave in more predictable ways because of the underlying physical and chemical principles.

Conclusion: While chemical elements are finite and combine in myriad ways to form compounds, language operates differently. It has a potentially infinite number of sentences, driven by the creativity of speakers, the recursive nature of grammar, and the open-endedness of communication. So, no, there is not a finite number of sentences in a given language in the way that there is a finite number of chemical elements.
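The recursion argument above can be made concrete with a toy example: a single grammar rule that refers back to its own output produces a new grammatical sentence at every level of embedding, so no finite list can contain them all. This is only an illustrative sketch, not a real grammar; the function name is mine.

```python
# A minimal sketch of the recursion argument: one embedding rule
# ("I know that <sentence>") reapplies to its own output, so sentence
# length, and hence the number of distinct sentences, is unbounded.

def sentence(depth):
    """Return a grammatical English sentence with `depth` levels of embedding."""
    s = "it is raining"
    for _ in range(depth):
        s = "I know that " + s  # the recursive rule
    return s

print(sentence(0))  # it is raining
print(sentence(2))  # I know that I know that it is raining
```

Every value of `depth` yields a distinct sentence, which is exactly why grammatical recursion has no chemical analogue among the 118 elements.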
Robert Abitbol’s Post
More Relevant Posts
-
We place a LOT of emphasis on the developers of large language model AIs to program these things ethically, but have you considered that we (the users) have a responsibility to USE them ethically as well? I don't mean avoiding deliberately malicious actions (that's a no-brainer); I mean understanding what an LLM CAN'T do, so that I don't unintentionally spread misinformation. Let me ask this: if you want to know George Washington's birthday, can you ask GPT for the answer? Now let me ask you this: if you want to know the top 5 needs that indicate purchasing intent for your company's products among the VPs of Marketing at every agency in the Fortune 1000, can you ask GPT for the answer? Can you confidently answer EITHER of those questions? What I'm trying to illuminate here is that we have a problem of misunderstanding that can have a very real impact on business and politics, and the only way to prevent misinformation is to follow a best-practice guideline. Here's a good piece that talks in more depth about what tools like ChatGPT can ETHICALLY do: https://lnkd.in/g58sqWYw Their basic guideline is as follows:
1. Language learning aid
2. Educational support and explanations
3. Generating ideas for brainstorming
4. Providing feedback on written work
5. Assisting in proofreading and editing
If your need falls outside of this, consider a different tool for the job.
-
Lately, it seems trendy to resist or criticize Large Language Models (LLMs), and honestly, I just don’t get it. I've found them super useful in nearly every work category. The criticism I've seen falls into two camps:
1. They always make factual mistakes.
2. They provide really generic or incomplete responses.
My take on both points is pretty much the same: "Well, you’re probably using them wrong." If you expect an LLM to spit out perfect facts every time, you're setting yourself up for disappointment. But let’s be real, would you expect flawless fact-spewing from an actual human? I'd argue that conversations with LLMs are some of the most factually correct and unbiased I’ve had. Their real strength lies in their ability to cite sources and explore the logic behind their responses. When you engage with them this way, the results can be nearly perfect, whether it's coding or simple information retrieval. If your LLM is churning out generic or incomplete answers, here are a couple of tips. First, really flesh out your custom instructions in ChatGPT, or load your prompts with details. Want a specific format? Provide examples. Want it to maintain your voice? Don’t ask it to write for you: write in your voice and use it to enhance clarity and grammar. If you're still not satisfied with the results, I challenge you to send the same prompt to a person and see if they do any better. If they don’t, well, that’s on you, man. Have I been compromised by the robots? Am I crazy and totally wrong on this? Let me know what you think. #LanguageModels #ChatGPT #AI #Technology #ObviouslyIHadChatGPTHelpMeWithThisToGetMyPointAcrossClearly
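The prompting tips above (detailed instructions, a format example, a sample of your own voice) can be sketched as a small prompt builder. Everything here is illustrative: the function and field names are mine, not any library's API.

```python
def build_prompt(task, voice_sample, format_example, details):
    """Assemble a detailed prompt: the task, specifics, an example of the
    desired output format, and a sample of the author's own voice.
    All names here are hypothetical, for illustration only."""
    parts = [
        f"Task: {task}",
        "Details: " + "; ".join(details),
        f"Desired output format, by example:\n{format_example}",
        f"Match the tone of this sample of my writing:\n{voice_sample}",
    ]
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Tighten the grammar of my draft without changing its voice",
    voice_sample="Honestly, I just don't get the criticism.",
    format_example="- bullet 1\n- bullet 2",
    details=["keep contractions", "no new claims"],
)
print(prompt)
```

The point of the sketch is the structure: a prompt that carries the format and the voice you want leaves far less room for generic output.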
-
The advent of ChatGPT and many other open-source LLMs has revolutionized the world and the way work happens in all sorts of industries. But very few of us know that transformer-based models are the foundation of all these magic tools, including BERT, GPT, and all the buzzwords around large language models. I struggled to understand the underlying concept and working principle of the Transformer, and I believe many struggle in the same way. I am going to publish a series of articles on the Transformer to simplify the concept and break it down into small pieces. I believe this will help many people understand these complex concepts easily and help them break into this domain. To understand the transformer architecture, we first have to understand the attention mechanism (the main component of the architecture), then the concept of self-attention, and finally the full Transformer architecture. Following that conceptual order, these are the three articles I am going to publish: the first article in the series is “The Birth of Attention Mechanism”, the second is “Self-Attention”, and the third is “Transformer”. Link to the first article: https://lnkd.in/gKBpjfeB Open to feedback, comments, and criticism.
Birth of Attention Mechanism
medium.com
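The core idea the series builds toward, scaled dot-product self-attention, fits in a few lines of NumPy. This is a minimal single-head sketch, assuming random projection matrices; it omits masking, multiple heads, and everything else a real Transformer layer adds.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention: every position
    computes a weighted average over all positions' values."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarity of positions
    # softmax over each row turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))  # 4 tokens, embedding dimension 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape)             # (4, 8): one output vector per token
print(weights.sum(axis=-1))  # each row of attention weights sums to 1
```

Each output row mixes information from all four input tokens, which is exactly the "attending" the articles set out to explain.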
-
After several weeks of speculation about a new model, Mistral AI has officially released Large, its latest and most performant model, along with Le Chat, a beta of their chat UI. Large is the world’s second-best language model available through an API, after GPT-4. Mistral Large boasts a performance of 81.2% on MMLU (measuring massive multitask language understanding), beating Claude 2, Gemini Pro and Llama-2-70B. Large is particularly good at common sense and reasoning, with 94.2% accuracy on the ARC Challenge (5-shot). Mistral Small was also updated on the API to a model faster and more performant than Mixtral 8x7B. Some of Mistral Large’s features include:
1) Training on English, French, Spanish and Italian datasets for native multilingual capabilities.
2) A 32k-token context window - way below Gemini’s 1M-token or Claude’s 200k-token ones.
3) Precise instruction-following, which was used to moderate the Chat interface.
4) Native function calling for agentic capabilities similar to ChatGPT Plus.
5) A JSON format mode that forces the language model output to be valid JSON. This enables the extraction of information in a structured format that developers can easily use.
Given the above features, it wouldn’t be surprising if this model is also a Mixture of Experts, like its open-source counterpart Mixtral 8x7B.
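The value of a JSON format mode is on the consuming side: a guaranteed-valid reply can be parsed in one step instead of scraped out of prose. The snippet below uses a hard-coded string as a stand-in for an API reply; no real API call is made, and the field names are invented for illustration.

```python
import json

# Stand-in for a reply produced with JSON mode enabled (no API call here;
# the string and its fields are hypothetical, for illustration only).
raw_reply = '{"product": "Le Chat", "maker": "Mistral AI", "context_window_tokens": 32000}'

# Because the mode guarantees syntactically valid JSON, parsing is a
# single deterministic step rather than regex scraping over free text.
data = json.loads(raw_reply)
print(data["maker"])                  # Mistral AI
print(data["context_window_tokens"])  # 32000
```

Without such a mode, a model may wrap the JSON in commentary or emit malformed output, and the `json.loads` call above would raise instead of returning structured data.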
-
How Do Algorithms Change the Way We Write? It’s not a secret that I love using ChatGPT to check and improve my texts. I am not a writer, and English is not my first language, so even though I am pretty good at it, there’s always a part of me that feels insecure about the way I put words together. So here’s a thing I noticed: I’ve enriched my vocabulary by using AI as a grammar checker. Most of the time, ChatGPT replaces my simple phrasing with fancy words that I don’t know. What I like to do is translate them, and, I kid you not, I feel like sometimes I add them to my own texts. Not sure if that means I am turning into a machine or improving my English, but anyway, that’s how learning happens these days 😁 P.S. I asked AI to check my grammar on this text, but only grammar, so no fancy rephrasing 😄
-
Small Language Models (not LLMs) are underrated: Ever since ChatGPT, we have been hooked on the idea of large language models as the ultimate source of intelligence. However, after working with LLMs for a bit over a year now, I've realized that while LLMs are great, small language models are the hidden gem. Here is why I think they are underrated:
- Fast: Since they are smaller, they run faster than large language models, so you get responses quicker.
- Offline: Their smaller size means they can run on a device without any internet connection, so you can use them even when you are offline!
- Domain specific: Smaller models are also easier to train for specific tasks, like on-device translation.
One of the most recent small-language-model debuts is Apple Intelligence, where a small 3B-parameter model (GPT-3 has 175B) is the first model the user interacts with, with requests gradually routed to Apple's server models and then to ChatGPT. Come and try one out for yourself at HuggingChat. They have a small model, Phi-3-mini-4k-instruct, that you can try now: https://buff.ly/4bwnwN1. We might see David vs Goliath stories again in this space!
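The on-device point above comes down to simple arithmetic on the parameter counts the post mentions. A back-of-the-envelope sketch, assuming 16-bit weights (2 bytes per parameter) and counting only the weights themselves, not activations or KV cache:

```python
def weight_memory_gb(params, bytes_per_param=2):
    """Rough memory needed just to hold the weights (fp16 = 2 bytes each)."""
    return params * bytes_per_param / 1e9

# Parameter counts from the post: a 3B on-device model vs GPT-3's 175B.
small = weight_memory_gb(3e9)    # ~6 GB: plausible on a phone or laptop
large = weight_memory_gb(175e9)  # ~350 GB: firmly server territory
print(small, large)
```

Quantizing to 4-bit weights would shrink both figures by 4x, but the ratio, and the reason only the small model fits on a device, stays the same.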
-
OpenAI has developed a tool that can spot content written using ChatGPT. It cannot detect AI content generated by other models, though. They are still hesitant to release the tool. If it is eventually released, it could affect all AI content creators, particularly those who specialize in AI content creation for social media platforms like LinkedIn, such as Luke Matthews, Ruben Hassid, Andrew Davis and more. It would be interesting to understand the potential effects and the strategies AI content creators might apply to adapt to this new reality. https://lnkd.in/dJ4-dpuA
OpenAI says it’s taking a ‘deliberate approach’ to releasing tools that can detect writing from ChatGPT | TechCrunch
https://techcrunch.com
-
Are Large Language Models like ChatGPT or Grok just advanced n-gram models, or do they possess the ability to truly think and understand the world like humans do? I’ll explain… So, I've been reading up on syllogistic logic and thought I'd share some cool stuff I found with you. It's actually pretty interesting and has some similarities to how Large Language Models (LLMs) work! Syllogistic logic is a way of making logical arguments using two premises and a conclusion. It's like a template for reasoning, and it's been around for a loooong time – we're talking Aristotle-old. Here's a basic example: Premise 1: All humans are mortal. Premise 2: Socrates is a human. Conclusion: Therefore, Socrates is mortal. Pretty simple, right? The idea is that if both premises are true, then the conclusion must be true as well. This kind of logic is used in all sorts of situations, from everyday conversations to complex legal arguments. Now, here's where things get interesting. LLMs like ChatGPT use a similar kind of logic when generating text. They’re trained on a massive amount of text data, learning patterns and relationships between words and phrases. When we give them a prompt, they use that knowledge to generate a response that follows a logical structure, just like a syllogism. For example, let's say you give the prompt "What do cats like to eat?" It might generate a final response like this: Cats are carnivores. Carnivores eat meat. Therefore, cats like to eat meat. See how that works? It used the information (cats are carnivores) to make a logical argument about what they like to eat. Of course, LLMs are much more complex than simple syllogisms, but the basic idea is the same. So, there you have it – a little bit of info on syllogistic logic and how it's similar to the way LLMs work. Pretty cool, huh? See what Dr Stephen Wolfram (Wolfram Research) thinks about it…
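To make the "just an advanced n-gram model" question concrete, here is the simplest possible n-gram model, a bigram model, in a few lines. It only counts which word follows which in its training text; this sketch (my own toy, trained on one sentence) shows how shallow that is compared to what LLMs learn.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which: this table IS the bigram model."""
    model = defaultdict(Counter)
    words = text.split()
    for w1, w2 in zip(words, words[1:]):
        model[w1][w2] += 1
    return model

def most_likely_next(model, word):
    """Predict the continuation seen most often in training."""
    return model[word].most_common(1)[0][0] if model[word] else None

corpus = "cats eat meat and dogs eat meat and cats eat fish"
model = train_bigrams(corpus)
print(most_likely_next(model, "eat"))  # meat (seen twice, vs fish once)
```

A bigram model can only parrot adjacencies from its training data, whereas an LLM's learned representations let it complete patterns, like the syllogism above, that never appeared verbatim. That gap is exactly what the post is asking about.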
-
🚀 𝐇𝐮𝐦𝐚𝐧 𝐏𝐫𝐞𝐟𝐞𝐫𝐞𝐧𝐜𝐞𝐬 𝐟𝐨𝐫 𝐂𝐡𝐚𝐭𝐆𝐏𝐓 𝐨1-𝐩𝐫𝐞𝐯𝐢𝐞𝐰: 𝐀 𝐒𝐭𝐞𝐩 𝐅𝐨𝐫𝐰𝐚𝐫𝐝 𝐢𝐧 𝐑𝐞𝐚𝐬𝐨𝐧𝐢𝐧𝐠! 🚀
In addition to excelling on exams and academic benchmarks, ChatGPT o1-preview has been evaluated through human preference tests. In this evaluation, human trainers compared anonymized responses from o1-preview and GPT-4o across a broad spectrum of domains, and the results are impressive!
🔍 𝐊𝐞𝐲 𝐈𝐧𝐬𝐢𝐠𝐡𝐭𝐬 𝐟𝐫𝐨𝐦 𝐇𝐮𝐦𝐚𝐧 𝐏𝐫𝐞𝐟𝐞𝐫𝐞𝐧𝐜𝐞𝐬:
💻 𝐂𝐨𝐦𝐩𝐮𝐭𝐞𝐫 𝐏𝐫𝐨𝐠𝐫𝐚𝐦𝐦𝐢𝐧𝐠: o1-preview significantly outperformed GPT-4o, showing its strength in technical reasoning and problem-solving.
📊 𝐃𝐚𝐭𝐚 𝐀𝐧𝐚𝐥𝐲𝐬𝐢𝐬 & 𝐌𝐚𝐭𝐡𝐞𝐦𝐚𝐭𝐢𝐜𝐚𝐥 𝐂𝐚𝐥𝐜𝐮𝐥𝐚𝐭𝐢𝐨𝐧: These reasoning-heavy tasks saw a clear preference for o1-preview, with human evaluators favoring its responses by a large margin.
✍️ 𝐏𝐞𝐫𝐬𝐨𝐧𝐚𝐥 𝐖𝐫𝐢𝐭𝐢𝐧𝐠 & 𝐄𝐝𝐢𝐭𝐢𝐧𝐠 𝐓𝐞𝐱𝐭: While o1-preview excels in logical tasks, it was not as highly preferred in some natural language tasks, highlighting the model’s varying suitability across different domains.
These results show that ChatGPT o1-preview is a powerhouse for domains that require better reasoning, such as coding, data analysis, and mathematics, but there’s still room for improvement in certain language tasks.
🌐 𝐖𝐡𝐲 𝐝𝐨𝐞𝐬 𝐭𝐡𝐢𝐬 𝐦𝐚𝐭𝐭𝐞𝐫? AI is evolving, and the versatility of models like o1-preview is pushing the boundaries in specific areas like technical problem-solving. As AI becomes more tailored to various use cases, we are witnessing breakthroughs that help professionals work more efficiently and effectively in high-reasoning tasks.
#AI #ChatGPT #Innovation #MachineLearning #HumanCenteredAI