Colin Lachance's Post

I'm exploring the idea of visually modelling the benefits of incorporating process improvements in legal service delivery. Using Claude, I built a little game this morning to serve as a baseline. Watch, try and remix at your pleasure. (Desktop only - not designed for mobile)

The baseline is a lawyer completing a randomized set of tasks to deliver on a client objective. Speed and completeness earn the lawyer a reputation bonus; delivery gaps and billing substantially above quote result in a reputation penalty. Bonuses and penalties are expressed in dollars, with the impact on the business tracked in the aggregate rather than affecting the dollars and hours billed to the original client.

I think if I treat a few of the tasks as amenable to improvement (whether through reassignment to a different person in the firm, document automation, AI or otherwise), I could represent them in the model through a time reduction. The impact of the time reduction would be an increased probability of earning the reputation bonus for early completion, and a decreased probability or extent of reputation damage from incomplete or above-quote work. A rough sketch of those mechanics follows below. https://lnkd.in/eJ3G_xXw
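Here is a minimal Monte Carlo sketch of those mechanics in Python. All the numbers (task durations, quote, bonus and penalty amounts) are illustrative assumptions, not values from the actual game, and `simulate_matter` is a hypothetical helper:

```python
# Minimal Monte Carlo sketch of the model described above.
# All numbers (task times, quote, bonus/penalty amounts) are illustrative
# assumptions, not values from the actual game.
import random

def simulate_matter(num_tasks=8, improved_tasks=0, time_reduction=0.4,
                    hourly_rate=400, quote=12_000, deadline_hours=30):
    """Simulate one client matter; returns (hours, fees, reputation_delta)."""
    hours = 0.0
    for task in range(num_tasks):
        base = random.uniform(2, 6)        # randomized task duration
        if task < improved_tasks:          # tasks routed to automation/AI/delegation
            base *= (1 - time_reduction)   # improvement modeled purely as a time cut
        hours += base
    fees = hours * hourly_rate
    reputation = 0
    if hours <= deadline_hours:
        reputation += 2_000                # bonus for fast, complete delivery
    if fees > quote * 1.25:                # billing substantially above quote
        reputation -= 3_000                # reputation penalty, expressed in dollars
    return hours, fees, reputation

def average_reputation(improved_tasks, trials=10_000):
    return sum(simulate_matter(improved_tasks=improved_tasks)[2]
               for _ in range(trials)) / trials

for k in (0, 2, 4):
    print(f"{k} improved tasks -> avg reputation delta ${average_reputation(k):,.0f}")
```

Running it shows the effect the post describes: each improved task shifts the distribution toward the early-completion bonus and away from the above-quote penalty, without changing the per-matter billing logic.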
More Relevant Posts
-
As a super quick overview, this guide of tips and tricks is really useful for getting underway with AI. I highly recommend a quick read-through.
-
Doubling the excitement! Legal analytics users are riding the generative AI wave with twice the enthusiasm. Dive into the data-driven buzz with Lex Machina's survey insights and request your own copy of the report today! https://bit.ly/433W4mX #LegalAnalytics #LexMachina #AI #LegalAI
Get Lex Machina's 2024 Legal Analytics Survey Report!
pages.lexmachina.com
-
Evaluate your fine-tuned LLMs. 🚀 A powerful approach to improving the applicability of LLMs to specific tasks is fine-tuning. Once you've fine-tuned LLMs on specific data, it's essential to evaluate their performance and effectiveness in real-world scenarios. You can now easily fine-tune LLMs using the Clarifai Platform. After fine-tuning, you can also assess their performance using the newly introduced LLM Evaluation module, which lets you evaluate LLMs against standardized benchmarks and custom criteria. This guide covers a step-by-step process to fine-tune and evaluate your LLMs (a generic sketch of the evaluation loop follows below). Read more: https://lnkd.in/gDienp7w
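For illustration only, and not the Clarifai module's actual API: a generic sketch of benchmark-style evaluation of a fine-tuned model, with a hypothetical `generate` standing in for whatever inference call your platform exposes.

```python
# Generic sketch of post-fine-tuning evaluation against a benchmark set.
# `generate` is a hypothetical stand-in for the platform's inference call;
# the scoring logic is the part that carries over.
def generate(model, prompt: str) -> str:
    """Placeholder for the fine-tuned model's inference call."""
    raise NotImplementedError

def exact_match(prediction: str, reference: str) -> bool:
    return prediction.strip().lower() == reference.strip().lower()

def evaluate(model, benchmark: list[dict]) -> float:
    """Score a model on items shaped like {'prompt': ..., 'reference': ...}."""
    hits = sum(exact_match(generate(model, item["prompt"]), item["reference"])
               for item in benchmark)
    return hits / len(benchmark)
```

Exact match is the simplest standardized criterion; custom criteria would slot in as alternative scoring functions alongside `exact_match`.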
-
Special treat this time (🐶) Dean Pleban from DagsHub will join us next week (29th August) to talk about how to evaluate LLMs. Agenda:
- Main challenges in customizing and evaluating LLMs for specific domains and applications.
- Workflows and tools that help solve those challenges.
https://lnkd.in/dwnS_4Dw #genai #LLM #ai #LLMEvaluation
Customizing and Evaluating LLMs, an Ops Perspective, Thu, Aug 29, 2024, 11:00 AM | Meetup
meetup.com
-
🚀 New Blog Alert! 🧠 I just published a blog on how to streamline LLM testing with a test-driven approach using Promptfoo. If you're looking to improve your prompt engineering, evaluate multiple LLMs, or ensure reliable outputs from local models, this is for you! I explore how to test LLMs efficiently using both custom-hosted and locally-run models, giving you more flexibility and control over your AI workflows (a simplified sketch of the pattern follows after the link). Check it out and let me know what you think! 💡 #AI #LLM #MachineLearning #PromptEngineering #AItesting #TechBlog
Promptfoo: A Test-Driven Approach to LLM Success
link.medium.com
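Promptfoo itself is driven by a YAML config, so the following is just a Python analogue of the test-driven pattern the post describes; `PromptTest`, `call_model`, and the assertions are hypothetical names, not Promptfoo's API.

```python
# Test-driven prompt checking: each test pairs a prompt with a pass/fail
# assertion on the model's output, like a unit test for prompts.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PromptTest:
    prompt: str
    assertion: Callable[[str], bool]   # pass/fail check on the output
    name: str

def call_model(prompt: str) -> str:
    raise NotImplementedError  # swap in your custom-hosted or local model client

def run_suite(tests: list[PromptTest]) -> None:
    for t in tests:
        output = call_model(t.prompt)
        status = "PASS" if t.assertion(output) else "FAIL"
        print(f"[{status}] {t.name}")

suite = [
    PromptTest("Summarize: ...", lambda o: len(o) < 500, "summary is concise"),
    PromptTest("Return JSON for ...", lambda o: o.lstrip().startswith("{"), "output is JSON"),
]
# run_suite(suite)  # wire call_model to a real client first
```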
-
How to convert vectors containing different categorical values into one-hot encodings, so the data can be used to train classification models (a minimal example follows after the link). #AI #ML #Datascience #innovation
Data-Handling/3-7-exercise-one-hot-vectors.ipynb at dcd311c3a6832327dc8964c95389901b3c38e554 · hamzabeig/Data-Handling
github.com
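A minimal, self-contained example of the idea using pandas (the linked notebook may do it differently):

```python
# One-hot encoding: expand each distinct category into its own 0/1 column
# so a classifier sees numeric features instead of strings.
import pandas as pd

df = pd.DataFrame({"weather": ["sunny", "rain", "snow", "sunny"],
                   "temp": [30, 12, -2, 28]})

# get_dummies encodes the named categorical columns and leaves
# numeric columns untouched.
encoded = pd.get_dummies(df, columns=["weather"])
print(encoded)
# columns: temp, weather_rain, weather_snow, weather_sunny
```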
-
One of my most complete pieces, Document Intelligence for LLMs, explained with the Claude 3.5 stack (a minimal API sketch follows after the link): https://lnkd.in/d_w_vsy5
Claude 3.5 — The King of Document Intelligence
levelup.gitconnected.com
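A minimal sketch of the kind of document-intelligence call the article discusses, using the Anthropic Python SDK; the model name, file name, and prompt framing are assumptions here, and the article's actual pipeline may differ.

```python
# Document Q&A / extraction with Claude: send the document text in the
# prompt and ask for structured output.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

document_text = open("contract.txt").read()  # hypothetical input document

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"<document>\n{document_text}\n</document>\n\n"
                   "Extract the parties, effective date, and termination "
                   "clause as JSON.",
    }],
)
print(response.content[0].text)
```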
-
Can GENAUDIT Enhance Fact-Checking for LLMs? Highlights:
- GENAUDIT addresses LLM inaccuracies effectively.
- The tool provides document-based evidence validation.
- It maintains a balance between error detection and precision.
A toy illustration of document-grounded checking follows after the link. #GENAUDIT #LLM Learn More: https://lnkd.in/dqy52XVh
Can GENAUDIT Enhance Fact-Checking for LLMs?
https://newslinker.co
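For intuition only, and not GenAudit's actual method (which uses trained models): a toy sketch of document-grounded claim checking, where a hypothetical `threshold` parameter makes the error-detection/precision trade-off explicit.

```python
# Toy heuristic: flag summary sentences with little lexical support
# in the source document as candidate unsupported claims.
import re

def sentences(text: str) -> list[str]:
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def support_score(claim: str, document: str) -> float:
    """Fraction of the claim's content words that appear in the document."""
    doc_words = set(re.findall(r"[a-z']+", document.lower()))
    claim_words = [w for w in re.findall(r"[a-z']+", claim.lower()) if len(w) > 3]
    if not claim_words:
        return 1.0
    return sum(w in doc_words for w in claim_words) / len(claim_words)

def audit(summary: str, document: str, threshold: float = 0.6):
    # Lowering `threshold` flags fewer sentences (higher precision, lower
    # recall) -- the same balance the post mentions.
    return [(s, round(support_score(s, document), 2))
            for s in sentences(summary) if support_score(s, document) < threshold]
```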
-
LLMs (chatbots) can't be trusted for financial advice, says econ prof Gary Smith. The LLM responses demonstrated that they do not have the common sense needed to recognize when their answers are obviously wrong. https://lnkd.in/grfbkMDq It takes an experienced financial planner to distinguish between good and bad advice, so clients may as well skip the LLMs and go straight to a knowledgeable human.
LLMs Can’t Be Trusted for Financial Advice
https://mindmatters.ai
-
Alongside the well-known RAG systems, agents [1] are another popular family of LLM applications. What makes agents stand out is their ability to reason, plan, and act via accessible tools. When it comes to implementation, AdalFlow has simplified it down to a generator that can use tools, taking multiple steps (sequential or parallel) to complete a user query (a generic sketch of the loop follows after the link). https://lnkd.in/ez9fR7iY
LLM Agents Demystified
towardsdatascience.com
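Not AdalFlow's actual API, just a generic sketch of the reason/plan/act loop the post describes: the model either picks a tool or returns a final answer, and tool results are fed back in until the query is resolved. `llm`, `TOOLS`, and `run_agent` are hypothetical names.

```python
# Minimal tool-using agent loop. The LLM is expected to reply with JSON:
# either {"tool": ..., "input": ...} or {"answer": ...}.
import json

TOOLS = {
    "search": lambda q: f"(stub) search results for {q!r}",
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only
}

def llm(messages: list[dict]) -> str:
    """Placeholder for the model call returning a JSON decision string."""
    raise NotImplementedError

def run_agent(query: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": query}]
    for _ in range(max_steps):
        decision = json.loads(llm(messages))
        if "answer" in decision:          # model decided it is done
            return decision["answer"]
        result = TOOLS[decision["tool"]](decision["input"])
        messages.append({"role": "tool", "content": result})  # feed result back
    return "step budget exhausted"
```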
solution finder | OBA Innovator in Residence | legal AI consultant and guide at PGYA.ca
Updated version with efficiency/innovation elements: https://claude.site/artifacts/1f13505e-5972-4d05-876b-f98571ace209