🤖 Do your chatbots follow copyright laws? ⚖️

Right now, fair use for AI is confusing, and the rules are still being worked out. Some AI outputs look just like human content, so who should really get credit for what a machine creates?

The good news is that big AI companies like OpenAI and Anthropic allow users to fully own anything made with their models. It makes sense: just like how Microsoft doesn't own everything written in a Word document.

But what if you're using someone else's data to train your model? Most models are trained on such massive amounts of data that it's nearly impossible to know who contributed what. That's why using public data for training is generally considered fair use. However, if an AI is specifically trained to imitate one artist's style, that's where things get tricky. At a certain point, it's just ripping off someone else's work. So, how close is too close to the original? Should all data be fair game for training AI?

This post really just scratches the surface of some big, unanswered questions. But as AI developers, it's important to consider what kind of rules should define how AI models can operate.

Some useful sources:
https://lnkd.in/ex9FaBwY
https://lnkd.in/eVurijf8
Fix AI
Technology, Information and Internet
We will improve the efficiency of your business with AI agents
About us
Your provider of smart, AI-driven automations. Our chatbots go beyond basic functionality: they're equipped with a range of tools, fully customizable to meet your specific needs. Whether it's capturing leads from every website visitor, guiding customers to the perfect product, or creating a personalized sales funnel for each prospect, we've got you covered.

Just as the internet revolutionized business, AI is quickly becoming the most powerful tool available to enhance operations and expand your reach. We're here to help you seize this opportunity, guiding you through every step of the process. In this rapidly evolving industry, you need a partner who stays ahead of trends and delivers lasting results. We are that partner.

If you need an AI solution ASAP or want to talk project ideas, book a free call.
- Website
- https://calendly.com/fixai/solution-discovery-call
- Industry
- Technology, Information and Internet
- Company size
- 2-10 employees
- Type
- Privately Held
- Founded
- 2023
Employees at Fix AI
Updates
-
Prompt engineering is quickly becoming a must-have skill. Anyone working with AI systems must be able to write prompts that handle complex instructions and adjust them as demands shift.

Start by structuring your prompts clearly. It helps to create a rating system based on your specific needs. For example, Agentive offers a 0-10 rating scale for prompts. Rating a prompt based on the text alone can give you a good sense of whether you're on the right track, but remember, these scores are just guidelines: small changes to a prompt can have a disproportionate impact on the final score.

Agentive: https://agentivehub.com/

The real test of a prompt's effectiveness is how it performs with real questions. Langsmith is a useful tool that lets you compare prompts side by side and track how different versions of your chatbot handle the same question. You can create a list of FAQs and test how your prompt responds to each one.

Langsmith: https://lnkd.in/g5fAab_Z

Another good practice is to save your previous prompts. Keeping a record of each iteration helps you track changes, identify what works best, and quickly backtrack if you're getting unexpected results. This can be as simple as storing them in a Google Doc, or an Airtable spreadsheet if you need more detailed notes.

All things considered, don't just eyeball it! Developing a rating system for your prompts, running side-by-side comparisons, and tracking your progress are essential if you want your chatbots to improve over time.
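The FAQ testing idea above is easy to sketch in code. This is a minimal, hypothetical harness (not Langsmith itself): `ask` stands in for whatever call actually runs your chatbot, and the toy `fake_ask` model just echoes its inputs so the structure is visible.

```python
# Minimal sketch of side-by-side prompt comparison over a shared FAQ list.
# `ask` is a hypothetical stand-in for your real chatbot call.

def compare_prompts(prompts, questions, ask):
    """Run every prompt version against the same questions.

    Returns {version_name: {question: answer}} so two prompt
    versions can be diffed side by side.
    """
    results = {}
    for name, prompt in prompts.items():
        results[name] = {q: ask(prompt, q) for q in questions}
    return results

# Toy stand-in model: echoes the prompt style it was given.
def fake_ask(prompt, question):
    return f"[{prompt}] {question}"

faqs = ["What are your hours?", "Do you offer refunds?"]
report = compare_prompts({"v1": "terse", "v2": "friendly"}, faqs, fake_ask)
```

With the real chatbot call plugged in for `fake_ask`, `report` gives you the side-by-side table to eyeball or score with your rating system.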
-
Using Instagram to get new leads is a great idea, especially when you use automation to talk to people for you. But if you've been automating your Instagram for a while, you might have run into problems with Meta's strict rules. To help you avoid common mistakes, here are the two biggest issues we've faced, so you don't have to!

⏰ The 24-Hour Messaging Rule
Instagram only allows you to send an automated message to a user if they've sent you a DM within the last 24 hours, meaning any follow-up messages must be sent within that window. The good news is that if the person replies, the 24-hour window resets, so the entire conversation doesn't have to happen within the initial 24 hours. Keep this in mind to ensure your automations run smoothly and stay within the rules.

💬 Limited Comment Follow-Ups
When someone comments on your post, you can only follow up with them once: either by replying to their comment or by sending a DM. Make that message count, because if they don't respond, you won't be able to send any more automated messages.

These are the latest Instagram rules for automated messaging. Since Meta's policies change often, it's important to stay updated and adjust as needed. What challenges have you faced with Instagram automations? Let us know in the comments!
-
Using AI assistant functions can be a game-changer, if you know how! Here's how to truly harness their power and avoid common mistakes.

Misconceptions:

1. "If you tell the model what your function does, it will just perform that function." Wrong. LLMs can only output tokens, and functions are no exception: assistants only generate the ARGUMENTS to a function, and your application still has to execute it.

2. "Using functions cuts down on token costs." Wrong again. Your assistant needs to know which functions are available, so every function schema adds to the input tokens.

3. Functions vs. response format. While these two features are similar, they serve distinct purposes. Specify a response format when you need your assistant's outputs to have a consistent structure. Use functions when you want your assistant to generate inputs for, and interpret outputs from, your app.

Function schemas also come with parameters that can be confusing. Here's a quick breakdown:

- "additionalProperties": set this to true if you're okay with the model including extra parameters you didn't specify.
- "strict": ensures the model sticks exactly to your schema. Unless you're experimenting, this should be set to true.
- "parallel_tool_calls": enable this to allow multiple function calls in a single run.
- "tool_choice": options include auto (the assistant decides when to call a function), required (a function must be called every time), or none (no functions are used).

Feeling overwhelmed? Check out one of the functions we've been using in the image below! It allows your assistant to pause the conversation for a specified reason.
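To make the parameter breakdown concrete, here is a sketch of what a schema like our "pause the conversation" function could look like, written as a Python dict. The function name and fields are illustrative, not our exact production schema; note that when "strict" is true, "additionalProperties" is set to false so the model can't invent extra arguments.

```python
# Illustrative function schema using the parameters described above.
pause_tool = {
    "type": "function",
    "function": {
        "name": "pause_conversation",
        "description": "Pause the chat when the user asks for a break.",
        "strict": True,  # model must match this schema exactly
        "parameters": {
            "type": "object",
            "properties": {
                "reason": {
                    "type": "string",
                    "description": "Why the conversation is being paused.",
                },
            },
            "required": ["reason"],
            "additionalProperties": False,  # no extra, unspecified params
        },
    },
}
```

Remember misconception 1: when the model "calls" this, all you receive is a JSON blob like `{"reason": "user requested a break"}`; actually pausing the conversation is your app's job.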
-
Chatbots are great and all, but what if you need a bot that does more than just talk to customers all day? Learning to use the function call feature in OpenAI assistants is one of the best ways we've found to give our chatbots the ability to perform real-world tasks. With function calling, your assistant can output JSON matching an exact schema, allowing you to define any structure you need! JSON outputs give you much more flexibility to extract data or add functionality.

Here are some of the ways we've been using functions to enhance our bots:

1. Generate Inputs for API Calls 🌐
Imagine your chatbot collecting a user's information and sending it to your CRM, or triggering another automation entirely.

2. Manage Engagement Levels ⏸️
Sometimes conversations need to be paused or even ended. You can equip your bot to call a function when user engagement drops. This ensures a smooth and respectful interaction, especially when the user needs a break.

3. Access Real-World Information 📈
Need current stock prices in your conversation? Set up a function for the bot to call. Your bot tells your application which stock price it needs, your application finds it, then sends it right back to your bot.

4. Handle Complex Logic 🧠
LLMs can do a lot, but they aren't consistent with math. Don't let your bot struggle with complicated calculations. Instead, let it call a function to perform extensive math or execute a block of code.

Learning to use function calls takes time, but the payoff is absolutely worth it. Don't settle for a basic back-and-forth conversation with your customers. Enable your bots to get things done!
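Here's a sketch of the app-side plumbing for point 3. The tool-call shape (a function name plus JSON-encoded arguments) mirrors what assistants emit; `get_stock_price` and the price table are hypothetical stand-ins for a real data source.

```python
import json

# Fake data for the sketch; in practice this would be a market-data API.
PRICES = {"AAPL": 227.50, "MSFT": 414.20}

def get_stock_price(symbol: str) -> float:
    return PRICES[symbol]

# Registry mapping tool names (as declared in your schemas) to real code.
TOOLS = {"get_stock_price": get_stock_price}

def handle_tool_call(name: str, arguments: str) -> str:
    """Run the requested function and return JSON for the bot to read."""
    args = json.loads(arguments)  # the model outputs arguments as JSON
    result = TOOLS[name](**args)
    return json.dumps({"result": result})

# The assistant would emit something like this tool call:
output = handle_tool_call("get_stock_price", '{"symbol": "AAPL"}')
```

The bot never fetches the price itself: it names the function and the symbol, your app runs the lookup, and the JSON result goes back into the thread.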
-
Here's how we upgraded our sales funnel with personalized pitches for every customer. Most businesses have a wide range of offers, so how can a single chatbot understand your customers, recommend the best solutions, AND be an expert objection handler?

This two-agent framework has been working wonders for us:

1. The Qualifier
The first agent in the funnel has a single mission: qualify customers and build rapport. It uncovers the customer's pain points, preferences, and needs, which helps determine which of your offers will deliver maximum value.

2. The Closer
Once the best offer is identified, a second, specialized agent takes over. This closer is programmed to manage objections and deliver precise, targeted pitches for the one offer it knows best.

By dividing the roles, you don't have to overload a single agent with your entire sales handbook. This approach not only keeps token costs low but also gives you much more flexibility to fine-tune your strategies.
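The handoff between the two agents can be sketched in a few lines. Everything here is illustrative: the offer names, the toy qualification rule, and the closer prompts are made up for the example, and in a real deployment the qualifier would be an LLM agent, not an `if` statement.

```python
# Each closer gets a focused system prompt for the one offer it sells.
CLOSER_PROMPTS = {
    "chatbot": "You are an expert on our chatbot package. Pitch it and handle objections.",
    "funnel": "You are an expert on our sales-funnel package. Pitch it and handle objections.",
}

def qualify(pain_points: list[str]) -> str:
    """Toy qualifier: map the customer's pain points to one offer."""
    if "missed leads" in pain_points:
        return "chatbot"
    return "funnel"

def handoff(pain_points: list[str]) -> str:
    """Return the system prompt the closer agent should run with."""
    offer = qualify(pain_points)
    return CLOSER_PROMPTS[offer]
```

The key design point survives the simplification: only the selected closer prompt is ever loaded, so no single agent carries the whole sales handbook in its context.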
-
We've spent all summer building AI agents. Mix these 5 concepts together to put together a POWERFUL prompt that actually works.

1. Use Direct Action Words (LIMITS HALLUCINATIONS)
Using direct action words is much better than giving the bot room to make its own decisions. If you allow uncertainty because you're uncertain, the bot will be even more uncertain than you.
Examples:
- Bad: "If the user responds negatively, you could try and ____"
- Good: "If the user responds negatively, tell them ___"
- Great: "If the user responds negatively, your response must meet the following criteria:"

2. Examples + Criteria > Many Examples (ACCURATE EXECUTION)
Since user input to AI agents is unpredictable, we achieved better results by providing one or two example responses with key criteria. This works a lot better than trying to cover every scenario and bombarding the instructions with examples. It also helps reduce token costs.

3. Scenarios (CONTROLLING PATHWAYS)
User responses often dictate the path your AI agent should follow to reach the desired outcome. Similar to criteria, create an outline of the bot's path based on user input.
Example criteria:
- If the user responds with a question, comment, or concern, or isn't satisfied, engage in brief objection handling, then proceed to make your pitch.
- If their response is positive and they found it helpful, proceed to the pitch.
- If the user solely expresses gratitude or says nothing, ask them ___

4. Chronological Order (LLM THOUGHT PROCESS)
Writing chronologically aligns with how LLMs process information. For example, changing the order of criteria or scenarios can mislead the agent into sending the user down one path before it considers the other (correct) path. Writing prompts chronologically has resolved many of our major problems.

5. Token Reduction (INCREASE PROFIT MARGINS)
All of these methods combined let you shorten your prompts, reducing the token cost per message sent.

When deploying these AI agent solutions for businesses, token cost is a huge factor to consider while writing agent instructions. Instead of writing long paragraphs and hoping the LLM interprets them correctly, condense the instructions into criteria, put them in the right order, and don't leave room for the bot to make uncertain decisions.
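A small sketch of how concepts 1-4 can combine in practice: scenario criteria kept as data, rendered in chronological order into a numbered instruction block with direct action words. The trigger and action wording below is illustrative, not our production prompt.

```python
# (trigger, action) pairs in the order the conversation actually happens.
SCENARIOS = [
    ("a question, comment, or concern",
     "engage in brief objection handling, then proceed to the pitch"),
    ("positive and they found it helpful",
     "proceed directly to the pitch"),
    ("only gratitude or nothing of substance",
     "ask a follow-up question to re-engage them"),
]

def build_scenario_block(scenarios):
    """Render scenario criteria as numbered, chronological instructions."""
    lines = ["Respond based on the user's last message:"]
    for i, (trigger, action) in enumerate(scenarios, start=1):
        lines.append(f"{i}. If their response is {trigger}, {action}.")
    return "\n".join(lines)

prompt_block = build_scenario_block(SCENARIOS)
```

Keeping the scenarios as data makes reordering, shortening, and A/B testing the criteria trivial, which is exactly what points 4 and 5 call for.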
-
FOR ANYONE BUILDING ON MANYCHAT...

I'm sure you know how frustrating it can be to deal with customers who send multiple messages in a row. If ManyChat has a feature to handle this, I definitely can't find it. So I made this loop.

Step 1: Get your bot to send the message where you're expecting multiple replies. Make sure this block saves the user's response to a specific field.

Step 2: Use a POST request to store the user's response somewhere for later. I post messages to an OpenAI thread, but an array should work too. Clear the "user's response" field afterwards.

Step 3: Add another Instagram message block that sends the user a space bar character (this just acts as a "wait for user's response" block, since Instagram doesn't let you send an empty message). When the user responds, their message gets saved, submitted, and cleared. Don't forget the time limit: you can think of the "If contact has not responded" field as a "user can send messages for this long" field instead.

Step 4: Once time is up, the automation moves on. You can now run the thread, set a condition on the number of messages, or just look at all of them later.

I mainly use this for adding all of a user's messages to a thread, but hopefully you can apply this loop to your own automations!
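For intuition, the steps above boil down to this loop, sketched in Python. In ManyChat it's built from message blocks and a POST request, not code; `get_next` is a hypothetical stand-in that returns the next user message, or None once the user goes quiet.

```python
import time

def collect_messages(get_next, window_seconds, clock=time.monotonic):
    """Buffer user messages until the response window expires."""
    deadline = clock() + window_seconds  # "user can send messages for this long"
    buffered = []
    while clock() < deadline:
        msg = get_next()
        if msg is None:          # user stopped replying
            break
        buffered.append(msg)     # Step 2: store the response for later
    return buffered              # Step 4: time's up, move on

# Toy usage: three queued messages, then silence, inside a 5-second window.
queue = iter(["hi", "one more thing", "that's all", None])
batch = collect_messages(lambda: next(queue), window_seconds=5)
```

Once the window closes, `batch` plays the role of the OpenAI thread (or array): every message the user fired off in a row, collected and ready to be processed as one run.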
-
What's up everyone! Cheers to the start of Fix AI's LinkedIn page! We're excited to dive deeper into the space. We're an agency that specializes in automation, sales funnels, and AI chatbots. We look forward to sharing insights we come across, letting you get to know our team better, and connecting with other agency owners. Stay tuned for updates, or hit us up if you're looking for help with a project 🤝