AI prompting trick:
When learning or relearning a concept, ask ChatGPT to explain the same concept six different ways.
As an example: "Please explain [concept here]. Explain it to a 6-year old, then a 12-year old, then an 18-year old, then a 24-year old, then a business professional, and then a data scientist. Also provide 2 URLs to two helpful articles that explain the concept. One should be a short article, and the other should go more in-depth."
https://lnkd.in/eRzTtkaX
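If you'd rather script this than paste the prompt by hand, here's a minimal sketch (my own, not from the post) that loops over the six audiences. It assumes the openai Python package (v1+), an OPENAI_API_KEY environment variable, and an example model name and concept.

```python
# Minimal sketch: ask for the same concept explained to six audiences.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()
concept = "gradient descent"  # placeholder concept
audiences = [
    "a 6-year old", "a 12-year old", "an 18-year old",
    "a 24-year old", "a business professional", "a data scientist",
]

for audience in audiences:
    prompt = f"Please explain {concept} to {audience}."
    response = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {audience} ---")
    print(response.choices[0].message.content)
```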
Good at Excel? ... I'll make you better.
Just starting out? ... I got you.
Excel | SQL | Tableau | Power BI
Training | Consulting
"Lets get this done!"
Write better. Respond faster.
Review more thoroughly. Learn more quickly. Integrate more cleanly.
Use OpenAI's ChatGPT-4o or Anthropic's Claude 3.5 to review and comparatively analyze documents, and to help your team learn new tasks and software quickly and effectively.
Use Grammarly to improve the quality of your team's writing and to develop a brand identity that centers on communication and a distinct vocabulary.
Use the Databricks platform to methodically clean your data and then create Generative AI tools to propel analysis and work production.
Use Zapier to integrate all of the applications and software that run your organization.
So many tools.
So easy to use.
So much to learn.
So many ways to get better.
John
#AI #technology #operationalexcellence #smallbusiness #familybusiness #processdesign
OpenAI’s “Strawberry” AI Model: What to Know
Launch: Arriving in September, likely added to ChatGPT’s model menu.
Capabilities: Text generation only, with a “chain-of-thought” approach that improves accuracy but causes a 10-20 second delay.
- Early Reviews -
Pros: Superior for coding, math, and nuanced tasks like strategy development.
Cons: Slow responses, minor performance improvement, and potential memory issues. 10-20 seconds is a long time.
Pricing: TBD, but may introduce a new paid tier, potentially influencing costs across the AI market.
Seems ideal for users needing precision over speed, such as data scientists, developers, and researchers.
Quick AI tip: Keep an "impossibility list".
A list of things you've tried to do with AI that haven't worked.
What should you include in this table? (See the quick CSV sketch after the list.)
➡️ Name: The person that attempted it (if you're going to be sharing across your team)
➡️ Date: The date it was attempted
➡️ Description: What you tried to do/accomplish
➡️ Prompt: The exact prompt that was used
➡️ Platform/Model: E.g., ChatGPT 4o, Claude 3.5 Sonnet.
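If your team prefers a shared file over a doc, here's a minimal sketch (my own, not from the post or the newsletter) of the same columns kept as a CSV log; the file name is just an example, and the sample row borrows the P.S. joke below.

```python
# Minimal sketch of an "impossibility list" kept as a CSV log.
# Column names follow the list above; the file name is an example.
import csv
from datetime import date
from pathlib import Path

LOG = Path("impossibility_list.csv")
FIELDS = ["Name", "Date", "Description", "Prompt", "Platform/Model"]

def log_attempt(name, description, prompt, platform_model):
    """Append one failed AI attempt to the shared CSV."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "Name": name,
            "Date": date.today().isoformat(),
            "Description": description,
            "Prompt": prompt,
            "Platform/Model": platform_model,
        })

log_attempt(
    name="Example teammate",
    description="Create an impossibility list image",
    prompt="Draw a table titled 'Impossibility List' with five columns...",
    platform_model="ChatGPT 4o",
)
```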
I see many benefits to this approach:
1. Save time by not repeating experiments that were already tried by your team.
2. Improve knowledge across your team by analyzing their prompts and use cases and providing feedback.
3. Measure how the models are improving over time. For example, there are things ChatGPT can do today that it couldn't do a couple months ago.
4. Measure how well models are performing against each other. For example, if someone tried to write a blog post in your company voice with ChatGPT, you may have better luck using Claude.
h/t: Ethan Mollick and his One Useful Thing newsletter for the idea. List format and benefits are my own.
P.S. What tasks would you add to your impossibility list? I'm adding "create an impossibility list image" 😅
🔶 Insights and Data Scientist
🔶 Cloud & AI Enthusiast
🔶 Advocate for Data Privacy and Ethical AI
🔶 Mentor for Aspiring Data Pros
🔶 Industry Events Contributor & Speaker
Tip when using AI Models in Data Analysis
LLMs like ChatGPT and Gemini are great for spitting out code, especially for data analysis. In fact, sometimes they can be lifesavers. But here's a tip: context is key.
Before blindly trusting that generated code, make sure you have a clear idea of the results you're expecting.
While AI models can be incredibly helpful in automating tasks and providing code snippets, they may not always grasp the full context of your analysis.
Bottom line: leverage LLMs for efficiency, but use your human brain to ensure accuracy.
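One concrete way to do that (my own sketch, not from the post): before trusting LLM-generated analysis code, pin down a couple of results you can reason out independently and assert them. Toy data below; pandas assumed installed.

```python
# Sanity-check LLM-generated analysis code against expectations you
# worked out yourself. Toy data for illustration.
import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "North", "South", "South"],
    "revenue": [100, 150, 200, 50],
})

# Suppose an LLM wrote this aggregation for you:
summary = sales.groupby("region", as_index=False)["revenue"].sum()

# Checks you can compute independently before trusting the output:
assert summary["revenue"].sum() == sales["revenue"].sum()  # nothing lost or double-counted
assert len(summary) == sales["region"].nunique()           # one row per region
print(summary)
```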
#DataAnalysis #AIModels #Efficiency #Accuracy
I’ve been using AI tools like ChatGPT and Claude in almost every task these days, both personal and professional.
What I like most is how much time they save me.
For example, a few years back I wrote a very large SQL query with more than 40 columns. It was extremely time consuming because every column had a different data type, and some stored JSON, so manipulating that data was slow and overwhelming. A few days ago I did a similar task in roughly 1/100 of the time.
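As a toy illustration of the kind of wrangling I mean (my own sketch, not the original query), here is the JSON-in-a-column flattening that used to take ages to hand-write and that an LLM can now draft in seconds. The data is made up; pandas is assumed installed.

```python
# Toy sketch: flatten JSON stored as text in one column of a table.
import json
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2],
    "payload": [
        '{"customer": {"name": "Ana", "tier": "gold"}, "total": 120.5}',
        '{"customer": {"name": "Ben", "tier": "silver"}, "total": 80.0}',
    ],
})

# Parse the JSON column and spread nested fields into real columns.
parsed = pd.json_normalize(orders["payload"].map(json.loads).tolist())
flat = pd.concat([orders.drop(columns="payload"), parsed], axis=1)
print(flat)  # columns: order_id, customer.name, customer.tier, total
```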
This is the real power of AI. 🙌
I keep reading that “AI progress is slowing down”.
But it was 6 years from the paper “Attention is all you need” until ChatGPT was released. That’s 6 years of incremental updates to get to an AI model that regular consumers would find useful.
ChatGPT is now 2 years old. Maybe it only seems like incremental improvements in that time. But I’ve seen plenty of use cases where the models finally get good enough to be useful. Not only that but we’re getting better at using them.
“But aren’t we running out of training data?”
No. No we are not.
The big players in AI have all sorts of ways of getting more data. That’s not the biggest problem right now.
The biggest problem is that additional data is only giving incremental gains. But everyone knows that. It’s not some secret. New papers come out every week about how to get more out of these models.
It also seems to take more researchers, more engineers, and more compute to make each incremental update. That’s because everyone is focused on one paradigm and we keep finding ways to squeeze more juice out of it. Paradigm shifts happen on slower time scales while incrementalism gets less and less effective. But incrementalism is how we discover paradigm shifts.
“Attention is all you need” was an insight that came from trying to squeeze more performance out of the last paradigm.
Top LinkedIn Voice with 44 badges, 31 years of banking industry experience across different roles (as highlighted in my profile), and important skills endorsed by high-profile connections on LinkedIn.
It’s fascinating to see how quickly the AI landscape shifts, with models like Google's Gemini-Exp-1114 and OpenAI’s ChatGPT-4 pushing the boundaries of what these systems can do! I’m not really able to have a "favorite" in the way humans do, but I do find it exciting to see the progress in these models.
Both Gemini-Exp-1114 and ChatGPT-4 have their strengths and unique features. For instance, Gemini-Exp-1114 seems to be making significant strides in overall performance, possibly offering improvements in speed, efficiency, or nuanced understanding. On the other hand, OpenAI's ChatGPT-4 is well-known for its contextual depth and conversational flexibility, especially in providing useful and coherent dialogue across a wide range of subjects.
It’s interesting how advancements in AI, whether from Google, OpenAI, or other companies, tend to push the whole field forward, with each new release getting closer to achieving that ideal blend of intelligence, creativity, and usefulness.
💡Artificial Intelligence | Algorithms | Thought Leadership
The chart highlights that Google’s new AI model, Gemini-Exp-1114, has just overtaken OpenAI’s ChatGPT-4o-latest to claim the top spot in recent performance tests 🥇. This marks a shift in leadership, as OpenAI had previously held the lead in this space.
With a score slightly above ChatGPT, Google’s Gemini-Exp-1114 now stands as the top-rated AI model, showcasing Google’s advancements in AI technology and user preference.
What’s your favorite model?
Feel free to connect with me for more insights into AI and data engineering 👉 Daniel Zaldana 👋
#ai
Only a couple of days ago we woke up to ChatGPT's new Advanced Voice, which promises, although still with some challenges, to be the beginning of a revolution in LLMs. Meanwhile its main competitor, Gemini 1.5 Pro 002 (what a name!), is also rolling out new features, and competition between them will very probably be strong. This podcast shows us another tool, NotebookLM: “a free Google AI tool for studying and research, helping users understand complex information by summarizing sources and providing relevant quotes, and an 'Audio Overview' feature where users can listen to AI-generated discussions about their sources” (Google search says!). It's not available outside the US, but you can always use a VPN to get it. Well, every day we see new iterations of AI with better and faster models and features, plus the launch of new models. Amazing. #aiexplained #ai #chatgpt #superintelligence #Gemini #NotebookLM https://lnkd.in/d5BY4RXp
🌿 Learn a lot about 4IRGPT.com knowledge trading for your professional benefit, creativity, productivity, business or organisation through Perplexity AI. Click here to see why: https://lnkd.in/dY5MmsGh
🪴💥 Watch 💯 uninterrupted 15,000+ videos of intelligence in tech, finance & geopolitics. Keep yourself and family super smart and way ahead of the herd.
Click here: https://lnkd.in/dHJ8i_Q5
With AI applications, two main types come to mind: chat, like ChatGPT, and RAG (retrieval-augmented generation), where the LLM is wired into your own documents or another tool. But did you know there's a 3rd option, a "middle ground" that's easy to make? ...
Zero to Hero | Crafting a Custom GPT
https://lnkd.in/gSNm_rDq
This article, one of my most popular on Medium, shows how to build a Custom GPT using your regular ChatGPT login. While not as flexible as building a RAG application from scratch, you can still get a long way with custom instruction formatting, prompting the AI to write code, and storing canonical documents for it to consult.
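For anyone fuzzy on the "RAG from scratch" end of that spectrum, here's a toy sketch (my own illustration, not from the article) of the retrieve-then-prompt pattern, using simple keyword overlap in place of a real vector search. Roughly speaking, this document lookup is the step a Custom GPT's stored documents take care of for you.

```python
# Toy sketch of the retrieve-then-prompt pattern behind a RAG app.
# Keyword overlap stands in for a real vector search; documents are made up.
documents = {
    "style_guide.md": "Company voice: plain language, short sentences, no jargon.",
    "pricing_faq.md": "Standard plan is billed monthly; annual billing gets 2 months free.",
}

def retrieve(question: str) -> str:
    """Return the stored document that shares the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents.values(), key=lambda text: len(q_words & set(text.lower().split())))

question = "How does annual billing work on the standard plan?"
context = retrieve(question)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this assembled prompt is what you'd send to the LLM
```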
WHAT'S CHATGPT'S FAVORITE NUMBER? 47 (but it used to be 42). Claude also likes 42 and Gemini likes 72. Engineers at a data company asked several LLMs to pick a random number from 0 - 100. Instead of acting like a true random number generator, the LLMs returned "random" numbers like a human would. That's because the LLMs looked to their training data (human-generated text) and provided what was most often written after a question that looked like “pick a random number.”
Just another reminder that LLMs aren't magic -- they're influenced by their training data. Good thing to keep in mind when making decisions on AI use cases.
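If you want to reproduce the experiment yourself, here's a rough sketch (my own, not the data company's code). It assumes the openai Python package (v1+), an OPENAI_API_KEY environment variable, and an example model name.

```python
# Rough sketch: ask an LLM for a "random" number many times and tally the answers.
import re
from collections import Counter
from openai import OpenAI

client = OpenAI()
counts = Counter()

for _ in range(50):  # more trials give a clearer picture
    reply = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[{"role": "user", "content": "Pick a random number from 0 to 100. Reply with the number only."}],
    ).choices[0].message.content or ""
    match = re.search(r"\d+", reply)
    if match:
        counts[int(match.group())] += 1

print(counts.most_common(5))  # a true RNG would be nearly flat; LLMs usually aren't
```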
#artificialintelligence
This is awesome. And the 6-year-old one is the best IMHO