Ian Armstrong’s Post

LLMs are trained on complex tasks, and they tokenize language strings for maximum efficiency rather than reading individual characters. This is important to understand when you write prompts. The example with Gemini below shows what I mean. Keep it in mind if you are struggling to detect hallucinations in your work.

[Image: screenshot of the Gemini exchange; no alternative text provided]

ChatGPT 4o cheekily implies that Gemini isn't as good at understanding questions, or it would have been able to do this correctly. Gemini may be able to process a million tokens now, but you have to prompt it much more specifically to get a good result.
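To make the tokenization point concrete, here is a minimal sketch using the tiktoken library (my choice for illustration; Gemini's own tokenizer is different, and the letter-counting question is a hypothetical stand-in for whatever the screenshot showed). It demonstrates that a model receives a prompt as multi-character token chunks, not individual letters, which is why character-level questions need very explicit prompting.

```python
# Minimal sketch: how a tokenizer chunks text into multi-character tokens.
# Assumes the tiktoken package is installed (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

prompt = "How many r's are in strawberry?"
token_ids = enc.encode(prompt)

# Show each token as the model "sees" it: chunks, not characters.
for tid in token_ids:
    print(tid, repr(enc.decode([tid])))

# "strawberry" arrives as a few multi-letter chunks, so the model never
# directly observes its individual letters; a character-level answer
# therefore needs a much more specific prompt.
```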
