Claude Artifacts - Publish & Remix! 🚀 Exciting news from Anthropic! 🎉 Claude Artifacts can now be published publicly, allowing anyone to view and remix your creations! Here's what you need to know:
🔹 Publish: Make your Artifact publicly accessible so others can view and remix it.
🔹 Remix: Start new conversations that modify existing Artifacts and build upon the original content.
🔹 Visibility: Published Artifacts are available on a dedicated, public-facing website.
🔹 Control: Only the Artifact itself is published, not the surrounding conversation or context. You can unpublish specific versions or delete conversations containing published Artifacts at any time.
Read more: For more detailed information on publishing and remixing Claude Artifacts, visit Anthropic's official website https://lnkd.in/gFuXwpxd Join me on this collaborative journey of innovation and creativity and get in touch! #ClaudeArtifacts #Anthropic #Innovation #Collaboration #GenAI #ArtificialIntelligence
Tania Leuschner’s Post
More Relevant Posts
-
I'm a big fan of the Artifacts feature in Anthropic's Claude 3.5 Sonnet release. There are a number of great advances in Claude 3.5 Sonnet, so why am I so impressed by Artifacts? In its current state it's mostly a UX feature, but I believe it could be headed in a massive strategic direction: 1️⃣ Own the workflow to own the feedback - a thumbs up/down and comment on AI output is both rare and often lacks enough specificity to be useful. Higher-quality feedback is difficult to collect, though, since people take model output and then leave the generation environment to work with it. Creating a workspace of versions, and eventually edits, can dramatically improve feedback quality by containing the entire workflow in the generation environment. 2️⃣ Ongoing research advances in RLHF - if you've been following along, you know I believe current PPO methods in RLHF are incredibly basic, and I'm excited about methods like DPO and PERL. If Anthropic has the interface to capture higher-quality feedback from users and is simultaneously researching RLHF methods that train on higher-quality feedback, they can achieve big advances in model quality. For a bit more detail, Humanloop does a great job capturing different types of AI feedback ➡️ https://lnkd.in/dDVv7Q9e #ai #generativeai #anthropic #claude #rlhf #news #innovation
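The feedback-quality argument above can be sketched as a data structure (purely illustrative; the names and fields are invented, not Humanloop's or Anthropic's API). The point is the ordering: an in-environment workflow can capture signals far richer than a thumbs up/down.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: kinds of feedback an in-environment workflow can
# capture, ordered roughly from least to most informative for training.

@dataclass
class GenerationFeedback:
    output: str                              # the model's original response
    vote: Optional[bool] = None              # thumbs up/down (coarse signal)
    comment: Optional[str] = None            # free-text note (often vague)
    accepted_version: Optional[int] = None   # which regenerated version was kept
    user_edit: Optional[str] = None          # the user's edited final text (richest)

    def richest_signal(self) -> str:
        """Return the most specific feedback available."""
        if self.user_edit is not None:
            return "edit"            # a direct correction: ideal for preference pairs
        if self.accepted_version is not None:
            return "version-choice"
        if self.comment:
            return "comment"
        if self.vote is not None:
            return "vote"
        return "none"

fb = GenerationFeedback(output="draft...", vote=True, user_edit="fixed draft...")
print(fb.richest_signal())  # -> edit
```

A workspace of versions and edits, as described above, would populate the bottom fields of this record instead of only the top ones.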
What are Artifacts and how do I use them? | Anthropic Help Center
support.anthropic.com
-
#Abstraction #objectorientedparadigm #Databases #Cognition #Visualization Picture credit: Gianfranco Cavallazzi I took an image posted by the artist and identified parts I wanted to use in creating another visual. At the same time, I was revisiting the process of abstraction which, in a nutshell, identifies core characteristics that are common across all objects of a class in the object-oriented programming paradigm. As a process, abstraction is not the same as distillation or concentration. Abstraction, applied to objects and entities, is an identification process, and an abstract object cannot exist - it cannot have representation, cannot materialize. As art, abstract art uses the visual language of shape, form, color and line to create a composition which may exist with a degree of independence from visual references in the world (credit: Wikipedia). Visualization, then, to me, is a perfectly valid and valuable pathway, a tool that connects a formless, shapeless entity to something tangible, something that is bounded and can serve as a core reference point for instant recognition, discussion, and description. Further, the image bypasses language translations and interpretations. Call it what you will, in your preferred language - the visual communicates more clearly and meaningfully than words and languages with their deep, historical limitations spanning centuries, millennia, ages, epochs and evolutions. As it turns out, a picture may be worth a thousand words. But it doesn't stop there. A process ... is worth a million pictures. And Abstraction ... is a meta process.
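The OOP sense of abstraction described above can be shown in a short sketch (class names invented for illustration): an abstract class captures only the characteristics shared by every member of the class, and it cannot itself materialize as an instance.

```python
from abc import ABC, abstractmethod

# Abstraction identifies what is common to all objects of a class.
# The abstract class itself has no concrete representation.

class Shape(ABC):
    @abstractmethod
    def area(self) -> float:
        """Every shape has an area; how it is computed is left to subclasses."""

class Square(Shape):
    def __init__(self, side: float):
        self.side = side

    def area(self) -> float:
        return self.side ** 2

# Shape() would raise TypeError: the abstraction cannot materialize.
print(Square(3.0).area())  # -> 9.0
```

Only the concrete subclass can exist, which mirrors the point above that an abstract object cannot have representation.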
-
I do see the benefits of A.I. for business, marketing, data applications, etc. However, as an artist, designer, and creator, it holds little to no value to me. The act of creation is the art. The choices, the iterations, the turning of what we initially think of as a mistake into something that pivots the piece in a direction not originally intended… it's all art. And the skills learned through the process are invaluable. I know I'm a better problem-solver at work because of what I've learned in creating art. That is all 100% human. “The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.” https://lnkd.in/eXAb9MH9
Why A.I. Isn’t Going to Make Art
newyorker.com
-
The era of Claude 3.5 started with the Claude 3.5 Sonnet model release. - Better than Claude 3 Opus - Beating GPT-4o on most benchmarks - Cheaper than Claude 3 Opus and GPT-4o https://lnkd.in/dwVwHyvU #claude #opus #anthropic #sonnet
Introducing Claude 3.5 Sonnet
anthropic.com
-
⚡ Claude 3 is now available for integrations to improve #automation and user experience. It offers two new text-generation models, Claude 3 Opus and Claude 3 Sonnet, from #OpenAI's main competitor, Anthropic. 💎 Opus tackles the most complex and creative tasks: generating human-like responses, reasoning, solving problems, coding, and even composing poetry. It is among the few models to surpass GPT-4 on published benchmarks. Comparison 🔼 ⭐️ Sonnet is the basic model, suitable for creating texts, code, translations, and everyday tasks. It offers an optimal balance of quality and cost. Both models recognize images and possess knowledge about the world up to August 2023 (#GPT4 - up to April 2023). #ai #aiforbusiness #aiautomation #businessautomation #aidevelopment
-
How to 𝐛𝐮𝐢𝐥𝐝 𝐚 𝐂𝐫𝐞𝐰 of Agents in 𝐥𝐞𝐬𝐬 𝐭𝐡𝐚𝐧 𝟏𝟎 𝐦𝐢𝐧𝐮𝐭𝐞𝐬 ⏰ The Spanish Poet Strikes Again 😅 ----- A few days ago, I released the final video of the "Agentic Patterns" series on my YouTube Channel, "The Neural Maze". In this video, I covered the fourth pattern, as defined by DeepLearning.AI: the MultiAgent Pattern. 💠 𝐌𝐮𝐥𝐭𝐢𝐚𝐠𝐞𝐧𝐭 𝐏𝐚𝐭𝐭𝐞𝐫𝐧 (𝐓𝐡𝐞𝐨𝐫𝐲 📖 ) This pattern aims to break down tasks into smaller subtasks that are handled by different roles. For example, one agent might be a software engineer, while another could act as a project manager, and so on. You might be familiar with frameworks like crewAI or AutoGen - both of which implement different variations of this pattern. ----- 💠 𝐌𝐮𝐥𝐭𝐢𝐀𝐠𝐞𝐧𝐭 𝐏𝐚𝐭𝐭𝐞𝐫𝐧 (𝐈𝐦𝐩𝐥𝐞𝐦𝐞𝐧𝐭𝐚𝐭𝐢𝐨𝐧 ✍ ) For this final lesson, I wanted to build something more elaborate. That's why I've been working on a 𝐦𝐢𝐧𝐢𝐦𝐚𝐥𝐢𝐬𝐭 𝐯𝐞𝐫𝐬𝐢𝐨𝐧 𝐨𝐟 𝐂𝐫𝐞𝐰𝐀𝐈, drawing inspiration from two of its key concepts: 𝐂𝐫𝐞𝐰 and 𝐀𝐠𝐞𝐧𝐭. Additionally, I've also borrowed ideas from 𝐀𝐢𝐫𝐟𝐥𝐨𝐰'𝐬 𝐝𝐞𝐬𝐢𝐠𝐧 𝐩𝐡𝐢𝐥𝐨𝐬𝐨𝐩𝐡𝐲, using >> and << to define dependencies between my agents. In this micro-CrewAI, 𝐚𝐠𝐞𝐧𝐭𝐬 are equivalent to 𝐀𝐢𝐫𝐟𝐥𝐨𝐰 𝐓𝐚𝐬𝐤𝐬 and the 𝐂𝐫𝐞𝐰 is equivalent to an 𝐀𝐢𝐫𝐟𝐥𝐨𝐰 𝐃𝐀𝐆. ----- If you want to see how the Crew is implemented and a practical example, check out the video below! I've also linked the full video in the comments. So, if you like this type of content, please subscribe to the channel and hit the like button - 𝐢𝐭 𝐡𝐞𝐥𝐩𝐬 𝐦𝐞 𝐚 𝐥𝐨𝐭!! 🙏 ----- 💡 𝐅𝐨𝐥𝐥𝐨𝐰 𝐦𝐞 𝐟𝐨𝐫 𝐫𝐞𝐥𝐞𝐯𝐚𝐧𝐭 𝐜𝐨𝐧𝐭𝐞𝐧𝐭 𝐨𝐧 𝐩𝐫𝐨𝐝𝐮𝐜𝐭𝐢𝐨𝐧 𝐌𝐋, 𝐌𝐋𝐎𝐩𝐬 𝐚𝐧𝐝 𝐆𝐞𝐧𝐞𝐫𝐚𝐭𝐢𝐯𝐞 𝐀𝐈 #mlops #machinelearning #datascience #llm
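To give a rough idea of what such a micro-CrewAI could look like (an invented sketch, not the real crewAI or Airflow APIs): agents can overload >> and << to declare dependencies, and the Crew runs them in dependency order, just like tasks in a DAG.

```python
# Minimal sketch of the Crew/Agent idea: Agents declare dependencies
# Airflow-style with >> and <<, and the Crew runs them in topological order.

class Agent:
    def __init__(self, name, task):
        self.name = name
        self.task = task          # a plain function standing in for an LLM call
        self.dependencies = []    # agents that must run before this one

    def __rshift__(self, other):  # a >> b : b depends on a
        other.dependencies.append(self)
        return other

    def __lshift__(self, other):  # a << b : a depends on b
        self.dependencies.append(other)
        return other

class Crew:
    def __init__(self, agents):
        self.agents = agents

    def run(self):
        done, order = set(), []
        def visit(agent):         # depth-first walk: run dependencies first
            if agent.name in done:
                return
            for dep in agent.dependencies:
                visit(dep)
            done.add(agent.name)
            order.append(agent.task())
        for agent in self.agents:
            visit(agent)
        return order

pm = Agent("project_manager", lambda: "plan ready")
dev = Agent("software_engineer", lambda: "code written")
pm >> dev                      # the engineer waits for the plan
print(Crew([dev, pm]).run())   # -> ['plan ready', 'code written']
```

Even though the engineer is listed first, the Crew resolves the dependency and runs the project manager before it, which is the DAG behavior described above.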
-
I'm pleased to announce the launch of Libretto's new LLM-as-Judge feature. This tool is designed to significantly enhance the efficiency of evaluating generative language model outputs, something that our customers tell us is a constant stumbling block when building AI applications. Most folks we talk to are still doing manual "vibes checks" when they make an LLM prompt — effective but not scalable. Our new LLM-as-Judge feature aims to address this by automating the evaluation process, thus enabling more scalable and efficient prompt testing.
Key Features of Libretto's LLM-as-Judge:
Automated Evaluation Process: An automated vibe checker saves time by evaluating LLM outputs for you, letting you iterate faster on your prompts.
Human-Integrated Calibration: If the LLM-as-Judge isn't correctly grading LLM outputs, you can grade some samples to get it back on track. This combines automated assessments with human expertise to ensure the evaluations are both accurate and practical.
Continuous Improvement: As you iterate on your LLM prompt, you often find new criteria that are useful for judging. Libretto's LLM-as-Judge is designed to make it easy to update the criteria for best results.
Starting with LLM-as-Judge is straightforward. Libretto automatically suggests evaluation criteria tailored to each new prompt, integrating them into your existing workflows seamlessly. Our interface facilitates easy adjustments and offers real-time feedback, allowing continuous refinement and improvement. For those interested in exploring this new feature and seeing how it can enhance your work with language models, join our beta program! For more details, please visit our blog: https://lnkd.in/e4Ff73XE We're excited about the possibilities of LLM-as-Judge and look forward to your feedback!
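Conceptually, an LLM-as-judge loop with human calibration can be sketched like this (all names invented, not Libretto's actual API; the judge here is a toy heuristic standing in for a real judge-model call):

```python
# Conceptual sketch of an LLM-as-judge workflow: a "judge" grades outputs
# against criteria, and human-graded samples override the judge when it
# drifts off course.

def judge(output, criteria):
    """Stand-in for a judge-LLM call: toy heuristic scoring each criterion 0/1."""
    return {c: int(c.lower() in output.lower()) for c in criteria}

def evaluate(outputs, criteria, human_grades=None):
    results = []
    for out in outputs:
        scores = judge(out, criteria)
        # Human-integrated calibration: an explicit human grade wins.
        if human_grades and out in human_grades:
            scores.update(human_grades[out])
        results.append({"output": out, "scores": scores})
    return results

outs = ["A polite greeting", "error: timeout"]
report = evaluate(outs, ["polite"], human_grades={"error: timeout": {"polite": 0}})
print(report[0]["scores"])  # -> {'polite': 1}
```

In a real system the human grades would also be fed back to recalibrate the judge itself, as described above, rather than only overriding individual scores.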
Vibes at Scale: Libretto Makes LLM-as-Judge Easy, Understandable, and Dependable | Empirical Prompt Engineering
getlibretto.com
-
Incredible tool I've discovered: Claude AI! Claude has the remarkable ability to convert images directly into code. This is a game-changer for developers, designers, and anyone working with digital content. Imagine the time saved and the precision achieved by this process. Claude is a tool you don't want to miss. Check it out and experience the future of coding! link : https://claude.ai/new #TechInnovation #Coding #WebDevelopment #Design #AI #Claude #Productivity
-
Artifacts allow Claude to share substantial, standalone content with you in a dedicated window separate from the main conversation. Artifacts make it easy to work with significant pieces of content that you may want to modify, build upon, or reference later.
When does Claude use Artifacts? Claude creates an Artifact when the content it is sharing has the following characteristics:
🔹 It is significant and self-contained, typically over 15 lines of content
🔹 It is something you are likely to want to edit, iterate on, or reuse outside the conversation
🔹 It represents a complex piece of content that stands on its own without requiring extra conversation context
🔹 It is content you are likely to want to refer back to or use later on
https://lnkd.in/dNyMz5rK #claude #ai #genai #hybridui #uiux #shahzaibnoor
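Those characteristics read like a simple decision rule. As a toy illustration only (an invented heuristic, not how Claude actually decides):

```python
# Toy illustration of the Artifact criteria above: content becomes an
# Artifact when it is substantial, self-contained, and likely to be reused.

def should_be_artifact(content, self_contained, reusable):
    substantial = len(content.splitlines()) > 15   # "typically over 15 lines"
    return substantial and self_contained and reusable

snippet = "\n".join(f"line {i}" for i in range(20))  # a 20-line document
print(should_be_artifact(snippet, self_contained=True, reusable=True))  # -> True
print(should_be_artifact("one-liner", self_contained=True, reusable=True))  # -> False
```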
What are Artifacts and how do I use them? | Anthropic Help Center
support.anthropic.com
-
#Sora: Where words become worlds, frame by frame! Imagine feeding a text prompt to an #AI, and out pops a captivating video! That's the magic of Sora, powered by a revolutionary technology called #DiT (Diffusion Transformer). Think of it as a translator, bridging the gap between your words and a vibrant video world. But how does this wizardry work? Buckle up for a dive into Sora's secret sauce:
1. From Text Chaos to Crystal Clear Videos: 🪄 Instead of starting with a blurry mess, Sora uses latent diffusion. Imagine a compressed video blueprint, where the magic happens. Step by step, the model refines this blueprint, transforming it into a stunning video, like a sculptor chiseling away at a masterpiece.
2. Attention to Detail: Every Pixel Matters: Think of a conductor guiding an orchestra. That's what the attention mechanism does! It focuses on crucial parts of the video while generating each frame, ensuring smooth transitions and meaningful interactions between objects. Plus, the text prompt acts like the sheet music, directing the entire composition.
3. Big Dreams, Bigger Videos: Sora's potential is vast! With more data and computing power, it can create higher-resolution, faster-frame-rate videos, making them even more lifelike. And the best part? It can adapt to different styles, from animation to live-action, making it a creative chameleon!
4. The Future is Now, but Responsibly: ⚖️ While Sora is awe-inspiring, challenges remain. Generating long, complex narratives is still a hurdle. More importantly, ethical considerations are paramount. We must ensure responsible use of this powerful tool, fostering open discussions and collaboration to shape its bright future.
Ready to see the future of video creation? Keep an eye on #Sora - it's just getting started! Link: https://lnkd.in/d4-a-7AS The video created by Sora:
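The step-by-step refinement idea behind latent diffusion can be sketched in a few lines (a toy illustration, nothing like Sora's real DiT: the "model" here simply nudges a noisy latent toward a known target, whereas a real diffusion model learns to predict and remove noise):

```python
import numpy as np

# Toy sketch of iterative denoising: start from pure noise in a compressed
# "latent blueprint" space and refine it step by step, like a sculptor
# chiseling away at a masterpiece.

rng = np.random.default_rng(0)
target = np.ones(8)           # stand-in for the "clean" latent blueprint
latent = rng.normal(size=8)   # start from pure noise

def denoise_step(x, t, total):
    # A real model predicts the noise to remove; here we just interpolate
    # toward the target, with later steps making larger corrections.
    alpha = 1.0 / (total - t)
    return x + alpha * (target - x)

steps = 50
for t in range(steps):
    latent = denoise_step(latent, t, steps)

print(np.allclose(latent, target))  # -> True
```

Each pass makes the blueprint a little less noisy, which is the intuition behind the "step-by-step" refinement described above.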