Recast’s Post

If you're going to do an MMM, you should think a lot about how you are going to validate the model. How are you going to build trust in it? How are you going to know that it's right? The reality is that, with a flexible modeling methodology, there are often millions of ways that the model can go wrong and only one way that it can go right. There are a few different ways we think about validating MMMs at Recast:

More Relevant Posts
-
Excellent video and summary on how to validate your MMM - don't rely on statistical fit metrics alone to assess how good your model is if you want to optimize campaign spend. There are only two ways to do this: 1) Compare the MMM's lift estimate to the lift measured in an experiment (which we trust more). 2) Make a SINGLE change based on the MMM's recommendation and see whether the expected incremental sales change actually materializes. This is different from simply predicting tomorrow's number correctly (out-of-sample prediction). That's great for inventory planning but often very misleading for causal inference.
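As a rough sketch of check 1), comparing the MMM's lift estimate to an experiment's measurement, here is a minimal Python example; the numbers and the agreement criterion are hypothetical, not Recast's actual methodology:

# Minimal sketch: sanity-check an MMM's lift estimate against an experiment.
# All figures are made up for illustration; a real comparison would match
# time windows, geographies, and the definition of "lift" carefully.

def mmm_agrees_with_experiment(mmm_lift, exp_lift_low, exp_lift_high):
    """Return True if the MMM's point estimate falls inside the
    experiment's confidence interval for incremental sales lift."""
    return exp_lift_low <= mmm_lift <= exp_lift_high

# Hypothetical numbers: the MMM says paid social drove +1200 incremental
# sales; a geo holdout experiment measured +800 to +1500 (95% CI).
mmm_estimate = 1200
experiment_ci = (800, 1500)

if mmm_agrees_with_experiment(mmm_estimate, *experiment_ci):
    print("MMM lift is consistent with the experiment.")
else:
    print("MMM lift falls outside the experimental CI - investigate.")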
-
The Modo tools you love just got faster — find out how! Watch our latest video to see significant enhancements to the direct and procedural modeling tools, delivered through ongoing incremental tool updates that boost speeds by up to 40x for specific tools. https://lnkd.in/dH9QkDn2
Modo 17.0 | Powerful Acceleration to Direct & Procedural Modeling Tools
https://www.youtube.com/
-
Someone asked me for an update on the portfolio template. I am working on speeding up the processing time; that's really the only thing holding it back. By processing time, I mean that when you update the variables, some of the graphs and data won't refresh until later. The problem comes from using volatile functions such as RAND(), TODAY(), and INDIRECT(), which recalculate on every change, so I have to find a way around some of those functions to make it quick. What's funny is that if I remove the Monte Carlo simulation, it's totally fine, but because you're essentially running 10,000 iterations of NORM.INV(RAND(), AVERAGE(...), STDEV(...)), it's going to slow down, so I'm trying to find a way around it.
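For illustration, here is a minimal Python sketch of the same idea (using numpy, with made-up portfolio numbers): drawing all 10,000 normally distributed returns once, in a single vectorized batch, rather than recomputing a volatile NORM.INV(RAND(), ...) cell by cell on every recalculation:

import numpy as np

# Hypothetical inputs: historical mean and standard deviation of returns.
mean_return = 0.07
std_return = 0.15
n_iterations = 10_000

# Equivalent of 10,000 NORM.INV(RAND(), mean, stdev) cells, but drawn once
# as a single batch instead of recalculating on every worksheet change.
rng = np.random.default_rng(seed=42)  # fixed seed: stable until re-run
simulated_returns = rng.normal(mean_return, std_return, size=n_iterations)

print(f"Mean simulated return: {simulated_returns.mean():.4f}")
print(f"5th percentile (rough downside): {np.percentile(simulated_returns, 5):.4f}")

In Excel itself, the analogous trick is to paste the random draws as static values, or switch calculation to manual, so the simulation isn't re-run on every edit.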
-
Successfully completed the assessment after participating in the webinar on the topic of Threat Modelling.
-
I'm documenting the Boston Object-Role Modeling metamodel and describing the atoms of a Knowledge Index and Knowledge Graph at the same time. This article describes how Value Types are stored within the Boston metamodel, a Fact-Based Modelling metamodel.
-
📘🎓🌟 Teach-in Tuesdays by Mosaic: 🌟🎓📘 This Week's Topic: Anatomy of the Deal Model 👉 Read Now: https://lnkd.in/eFeKWCgR Hello Mosaic Community! This week we're going back to first principles and laying out a series of articles comprising the foundations of deal modeling. We're presenting a concept that has been taught by countless training programs and bootcamps across the world, but with a different approach - the "modular" approach that is central to how Mosaic was built (and why it can be expanded upon so quickly and easily). Instead of walking through a monolithic, 50-tab Excel workbook, this series of linked articles covers the six core calculation schedules underlying all deal models - and will serve as a foundation and shared vocabulary for our future teach-ins on Model Extensions (e.g., Dividend Recaps, M&A, Tax Shields), all of which tie back to and impact each of these six core schedules:
The Six Core Schedules of Deal Modeling:
➡️ Sources & Uses
➡️ Operating Model(s)
➡️ Free Cash Flow
➡️ Debt
➡️ Tax
➡️ Exit & Returns
🧠 Building a Strong Foundation: Our series is more than just a walkthrough. It's a deep technical resource for those who want to master the art of deal modeling. Whether you're a seasoned professional or a curious newcomer, understanding these fundamental components is crucial. 💡 Future Insights: Stay tuned as we explore how each schedule interconnects and the impact of Mosaic's vast Model Extensions library. We'll also delve into customizing models for specific deal nuances, ensuring you're equipped for any scenario. Join us next Tuesday for more! #TeachInTuesdays #DealModeling #FinanceEducation #MosaicPe #LeveragedBuyout
Anatomy of the Deal Model
mosaic.pe
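As a toy illustration of how those schedules feed one another, here is a minimal Python sketch of a single-tranche, five-year cash sweep linking the operating model, free cash flow, debt, and tax schedules. It is not Mosaic's implementation, and every input is hypothetical:

# Toy one-year deal-model step, grossly simplified: real models handle
# multiple tranches, D&A, working capital, full tax schedules, etc.

def year_step(revenue, ebitda_margin, capex, tax_rate, debt, interest_rate):
    ebitda = revenue * ebitda_margin                 # Operating Model
    interest = debt * interest_rate                  # Debt schedule
    taxes = max(ebitda - interest, 0) * tax_rate     # Tax (ignoring D&A)
    fcf = ebitda - interest - taxes - capex          # Free Cash Flow
    debt_paydown = min(max(fcf, 0), debt)            # sweep FCF into debt
    return debt - debt_paydown, fcf

debt = 500.0  # hypothetical opening debt from Sources & Uses
for year in range(1, 6):
    debt, fcf = year_step(revenue=1_000.0, ebitda_margin=0.20,
                          capex=50.0, tax_rate=0.25,
                          debt=debt, interest_rate=0.08)
    print(f"Year {year}: FCF={fcf:.1f}, ending debt={debt:.1f}")

The Exit & Returns schedule would then value the business at the end of year 5 and subtract the remaining debt to get equity proceeds.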
-
Estimating 𝗩𝗥𝗔𝗠 for training, inference, and fine-tuning can be a complex task 😦. It requires collecting information like internal model representations, batch size, precision, context length, and activations to get an estimate of the VRAM size. I found this task challenging, so I looked for a tool to help. This tool 🕵https://vram.asmirnov.xyz/ allows you to easily 😊 estimate the VRAM needed for training and inference.
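For intuition about what drives such an estimate, here is a back-of-the-envelope Python sketch using common rule-of-thumb multipliers (not necessarily the formula the linked tool uses); activation memory, which scales with batch size and context length, is deliberately left out, so treat the training figure as a floor:

# Rough VRAM estimates in GB, using rule-of-thumb multipliers only.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "bf16": 2, "int8": 1}

def inference_vram_gb(n_params, precision="fp16", overhead=1.2):
    # Weights plus ~20% overhead for KV cache and buffers (assumption).
    return n_params * BYTES_PER_PARAM[precision] * overhead / 1e9

def training_vram_gb(n_params):
    # Mixed-precision Adam: fp16 weights (2) + fp16 grads (2) + fp32
    # master weights, momentum, and variance (4 + 4 + 4) = 16 bytes/param.
    return n_params * 16 / 1e9

n = 7e9  # a hypothetical 7B-parameter model
print(f"Inference (fp16): ~{inference_vram_gb(n):.0f} GB")
print(f"Training (Adam, mixed precision): ~{training_vram_gb(n):.0f} GB")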
-
Ever wondered how businesses tackle complex decisions and optimize processes? Check out our latest blog post: "Simulation Modeling: A Beginner's Guide"! Discover how organizations can experiment, analyze, and make informed decisions in a virtual environment before taking action in the real world. 💡 Don't miss out on this insightful read! 📖 https://hubs.ly/Q02l5-m30 #SimulationModeling #BeginnersGuide #360RailServices