✨ Can regular expressions determine if a number is prime? 😲 For all the math enthusiasts and regex aficionados, here’s an eye-opener: a unique, pattern-based approach to prime checking! 🧩💡 I stumbled across this fascinating read that reimagines prime checking through regular expressions—an angle you don’t see every day. Check out this article to explore how unconventional tools can redefine familiar problems. 🔗 Take a moment to dive in and see what creative solutions can emerge from unexpected places. Shared with you by Skill Alley! https://lnkd.in/eiFWehxe #SoftwareEngineering #CodingTips #RegularExpressions #PrimeNumbers #MathInTech #SkillAlley #TechInsights #Developers #InnovationInCode #skillalley
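For a quick taste of the trick before you click through, here is a minimal Python sketch of the classic unary-regex primality test (the linked article may use a different formulation; the function name below is just illustrative):

```python
import re

def is_prime_regex(n: int) -> bool:
    """Prime test via the well-known unary-regex trick.

    Write n in unary ("1" * n). The pattern ^(11+?)\1+$ matches exactly when
    that string splits into two or more equal blocks of length >= 2,
    i.e. when n = a * b with a, b >= 2 (composite). No match means prime.
    """
    if n < 2:
        return False
    return re.match(r"^(11+?)\1+$", "1" * n) is None

print([n for n in range(2, 30) if is_prime_regex(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

It is wildly inefficient compared to a normal primality check, of course, but that is exactly what makes it such a fun illustration of what regex backtracking can express.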
Amit Chawla’s Post
More Relevant Posts
-
The Sieve of Eratosthenes (c. 276 to 194 BCE) is one of the best-known recipes for generating #prime numbers. In the hunt for minimalist algorithms that generate #complexity in information-containing systems, like John Conway's Game of Life and Stephen #Wolfram's pursuit of fundamental physics generators, I was interested in how short a computer program could be (without calling subroutines, packages, etc.), written in any fairly widely used practical programming language, for generating prime numbers. That is in the spirit of Gregory Chaitin https://lnkd.in/epkhwRJ5, though he favoured the programming language #LISP.

So far, the shortest working code I have found in a familiar language that does anything useful and generates #primes is in #PERL, employing a pattern-matching construct called the regular expression (regex). The whole program is 42 characters, including the essential blank for useful printout. The idea of using a regex for this, and even of doing basically the same thing, is not novel by any means: something like (1x$_)!~/^(11+?)\1+$/&&print"$_ "while++$_ is not unknown. Indeed, I would be grateful to be able to acknowledge the original coder; I simply condensed it, and the links I found referring to it were dead. But it is fun to see it in operational code and to see how much I could squeeze it while keeping it legal and functional. I would like to see anything shorter, without digging up an obscure programming language or creating your own.

PS. You do have to interrupt the program given here manually, and it does include 1 as a prime, but I allowed that because 1 was considered prime before the early 20th century, and it is hardly difficult to ignore and mentally delete. To be fair, you can probably write any program in regex, much as you can in machine code, but that is not my idea of fun. What is impressive, and really the bottom line, is how short the regex code is for prime generation. I am (in the night job) interested in exploring that for generating complex systems, because a regex is a concise formal system and in practice very efficient, so it is also a useful pursuit for some industry code segments.
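For comparison, here is a deliberately compact (and unoptimised) sketch of the Sieve of Eratosthenes itself in Python; even stripped down, it is far longer than the 42-character Perl regex one-liner:

```python
def sieve(limit: int) -> list[int]:
    """Classic Sieve of Eratosthenes: cross out multiples of each prime."""
    is_prime = [True] * (limit + 1)
    is_prime[:2] = [False, False]                      # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p::p] = [False] * len(is_prime[p * p::p])
    return [n for n, flag in enumerate(is_prime) if flag]

print(sieve(50))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```

Unlike the regex one-liner, the sieve terminates on its own and scales sensibly, which is the trade-off the post is playing with: brevity of the formal description versus practicality of the algorithm.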
-
I spent the entire weekend with Open WebUI 🚀 and two models: Deepseek-Coder-v2:16b and Llama 3.1:8b, working on various Python coding tasks. I utilised these models for refactoring, replacements, suggestions, and coding assistance. In every scenario, Llama 3.1 demonstrated better performance and speed. Although DeepSeek Coder is trained on a broader range of GitHub data, the results from Llama 3.1 were consistently better. The performance differences are likely down to the model sizes: Deepseek-Coder has 16 billion parameters versus Llama 3.1's 8 billion. Well, back to hacking... #AI #Developers
DeepSeek Coder: Let the Code Write Itself
deepseekcoder.github.io
-
DeepSeek-Coder-V2 just got released! An advanced, open-source code language model that understands and generates code across 338 programming languages! 🤯

Key Highlights:
- Supports 338 programming languages, with 236 billion parameters and 160 experts.
- Trained on a diverse dataset of 60% source code, 10% math, and 30% natural language from GitHub and CommonCrawl.
- Offers a Lite version with 16 billion parameters for on-device use.
- Trained on 6 trillion tokens, achieving state-of-the-art results in several benchmarks.
- Outperforms closed-source models like GPT-4 Turbo, Claude 3 Opus, and Gemini 1.5 Pro.
- Available on Hugging Face with a custom license for commercial use.

🔗 You can chat with DeepSeek-Coder-V2 on DeepSeek's official website: https://lnkd.in/dPvNyEYg
🔗 Link to research paper: https://lnkd.in/dviY3wxR
🔗 List of supported programming languages: https://lnkd.in/dPTCHWmw
🔗 Read more here: https://lnkd.in/dcfqnS6a

#AI #ArtificialIntelligence #DeepSeekCoderV2 #CodingAssistant #OpenSource #CodeIntelligence #InnovationInTech #AIforDevelopers
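If you want to try the Lite variant locally, a rough sketch via Hugging Face transformers might look like the following. The model ID, the trust_remote_code flag, and the chat-template usage are assumptions based on the public release, not details from this post, so check the model card before running (the 16B Lite model still needs a sizeable GPU):

```python
# Illustrative sketch of prompting DeepSeek-Coder-V2-Lite-Instruct locally.
# Model ID and settings are assumptions; consult the Hugging Face model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"   # assumed HF model ID
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user",
             "content": "Write a Python function that checks whether a number is prime."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```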
-
MetaVoice-1B is a 1.2B parameter base model trained on 100K hours of #speech for TTS (text-to-speech). It has been built with the following priorities:
- Emotional speech rhythm and tone in English.
- Zero-shot cloning for American & British voices, with 30s of reference audio.
- Support for (cross-lingual) voice cloning with finetuning; we have had success with as little as 1 minute of training data for Indian speakers.
- Synthesis of arbitrary-length text.

#generatieveai #audio #texttospeech #python #ai #softwaredevelopment #softwareengineering https://lnkd.in/d8qAvv7i
GitHub - metavoiceio/metavoice-src: Foundational model for human-like, expressive TTS
github.com
-
🧠 𝐄𝐯𝐞𝐫 𝐰𝐨𝐧𝐝𝐞𝐫𝐞𝐝 𝐢𝐟 𝐋𝐋𝐌𝐬 𝐜𝐨𝐮𝐥𝐝 𝐚𝐜𝐭𝐮𝐚𝐥𝐥𝐲 𝐬𝐨𝐥𝐯𝐞 𝐫𝐞𝐚𝐥-𝐰𝐨𝐫𝐥𝐝 𝐜𝐨𝐝𝐢𝐧𝐠 𝐢𝐬𝐬𝐮𝐞𝐬?

🔎 A recent study introduces 𝑺𝑾𝑬-𝒃𝒆𝒏𝒄𝒉, a way to test language models' ability to resolve software engineering problems on GitHub. The framework includes 2,294 issues from 12 popular Python repositories.
🟢 Language models are asked to fix issues by editing the code.
🟢 These issues often require understanding changes across multiple functions, classes, and files.
🟢 Even the best models struggle with some complex tasks.

💡 This research shows the need for more advanced language models that can truly understand and interact with complex code. I believe this study highlights the importance of pushing the boundaries of what language models can achieve.

👉 https://www.swebench.com/

#AI #SoftwareEngineering #Innovation #llms #github
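For anyone who wants to poke at the benchmark directly, here is a small sketch that loads the test split and prints one task. The dataset ID and field names are assumptions based on the public Hugging Face release, not details from this post, so check the dataset card if they differ:

```python
# Illustrative sketch of inspecting SWE-bench task instances.
# Assumes the dataset is published as "princeton-nlp/SWE-bench" on Hugging Face
# with fields such as "repo", "instance_id", and "problem_statement".
from datasets import load_dataset

swe_bench = load_dataset("princeton-nlp/SWE-bench", split="test")
print(f"{len(swe_bench)} task instances")            # ~2,294 issues per the paper

task = swe_bench[0]
print(task["repo"], task["instance_id"])
print(task["problem_statement"][:500])               # the GitHub issue text the model must resolve
```

Each task pairs a real issue with the repository state at the time it was filed, which is exactly why the benchmark is so much harder than single-function coding tests.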
-
Introducing Stable Code Instruct 3B. This #llm is an instruction-tuned code LM based on Stable Code 3B. With natural language prompting, it can handle a variety of tasks such as code generation, math, and other software-development-related queries.
Introducing Stable Code Instruct 3B — Stability AI
stability.ai
-
⚡️ Magic methods! They start and end with double underscores (__), are called internally by Python, and provide advanced functionality for customizing classes. They are also known as special methods or dunder methods.

Example: the '+' operator calls the __add__ method. We can use the dir() function to list all attributes of an object, including its magic methods:

['__abs__', '__add__', '__and__', '__bool__', '__ceil__', '__class__', '__delattr__', '__dir__', '__divmod__', '__doc__', '__eq__', '__float__', '__floor__', '__floordiv__', '__format__', '__ge__', ... ]

A few of them may not look intuitive, but if we leverage them properly, they give us very powerful features.

✍ A few more useful methods:
➡ __call__ lets us call an instance of a class as if it were a function.
➡ __new__ creates a new instance and is called before __init__. Example: in the singleton pattern, __new__ is where we check whether the class already has an instance each time a new one is requested.
➡ For indexing and slicing, __getitem__, __setitem__ and __delitem__ let us customize how items are read, set, or deleted.
➡ __str__ gives end users a neat, readable representation of an object.
➡ __repr__ defines a representation aimed at developers, which helps in debugging.

✔️ With the help of magic methods, classes behave like built-in types, which saves time and makes them more intuitive. A few of these are shown together in the sketch below.

⤵ Add your thoughts in the comments.

#technology #future #programing #python #Training #Learning #EdTech #job #careers #Jobinterviews #ai
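Here is a minimal sketch, purely illustrative (the class and its behaviour are invented for the example, not taken from any particular codebase), showing several dunder methods working together:

```python
# Illustrative example of dunder methods: +, indexing, calling, str(), repr().
class ShoppingCart:
    def __init__(self, items=None):
        self.items = dict(items or {})          # item name -> price

    def __add__(self, other):                   # '+' merges two carts
        return ShoppingCart({**self.items, **other.items})

    def __getitem__(self, name):                # cart["apple"] reads a price
        return self.items[name]

    def __setitem__(self, name, price):         # cart["apple"] = 1.5 sets a price
        self.items[name] = price

    def __call__(self):                         # calling the instance returns the total
        return sum(self.items.values())

    def __str__(self):                          # neat output for end users (print)
        return f"Cart with {len(self.items)} items, total {self():.2f}"

    def __repr__(self):                         # unambiguous output for developers
        return f"ShoppingCart({self.items!r})"


cart = ShoppingCart({"apple": 1.50}) + ShoppingCart({"bread": 2.25})
cart["milk"] = 0.99
print(cart)           # Cart with 3 items, total 4.74
print(repr(cart))     # ShoppingCart({'apple': 1.5, 'bread': 2.25, 'milk': 0.99})
print(cart["milk"])   # 0.99
```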
-
Create and Train Your Own Expert LLM: Generating Synthetic, Fact-Based Datasets with LMStudio/Ollama, then Fine-Tuning with MLX and Unsloth

Hey everyone! I know there are tons of videos and tutorials out there already, but I've noticed a lot of questions popping up about using synthetic datasets for creative projects and how to transform personal content into more factual material.

In my own work doing enterprise-level SFT and crafting my open-source models, I've enhanced a Python framework originally shared by the creator of the Tess models. This improved stack uses local language models and also integrates the Wikipedia dataset to ensure that the content generated is as accurate and reliable as possible.

I've been thinking of putting together a comprehensive, step-by-step course/guide on creating your own Expert Language Model: from dataset preparation and training to deployment on Hugging Face, and even using something like AnythingLLM for user interaction. I'll walk you through each phase, clarifying complex concepts and troubleshooting common pitfalls. Let me know if this interests you!

You can see the finished results of this stack here: huggingface.co/Severian. Most of the datasets and models I've made were built with these scripts and this approach.
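To make the "synthetic, fact-based dataset" idea concrete, here is a rough sketch of the kind of loop such a stack might run, grounding generations in Wikipedia passages via a local model served by Ollama. The model tag, dataset ID, prompt, and output format are illustrative assumptions, not the author's actual framework:

```python
# Illustrative sketch only: generate fact-grounded Q&A pairs from Wikipedia passages
# using a local model served by Ollama (default endpoint http://localhost:11434).
# Model tag, dataset ID, and JSONL layout are assumptions, not the author's stack.
import json
import requests
from datasets import load_dataset

wiki = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)

def generate(prompt: str, model: str = "llama3.1") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

with open("synthetic_qa.jsonl", "w") as out:
    for i, article in enumerate(wiki):
        if i >= 100:                       # small demo batch
            break
        passage = article["text"][:2000]   # keep the prompt short
        prompt = (
            "Using ONLY the passage below, write one factual question and its answer "
            "as JSON with keys 'question' and 'answer'.\n\nPassage:\n" + passage
        )
        out.write(json.dumps({"source": article["title"], "qa": generate(prompt)}) + "\n")
```

The point of anchoring every generation to a source passage is exactly what the post describes: keeping synthetic data factual instead of letting the model free-associate.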