Shira G.
United States
1K followers
500+ connections
About
Using first-party data and machine learning, RSLT helps brands, agencies, and networks…
Publications
- Saturation Seems to Be the Hardest Word (The Performance-Driven Marketing Institute's quarterly publication)
- What’s the Safest Data Collection Method to Measure Your Ads’ Success? (The Performance-Driven Marketing Institute's quarterly publication)
Other similar profiles
- Andrew W. (Glendale, CA): Data & Analytics Architect | Transforming Complex Data into Business Intelligence | Full-Stack Development | Cloud Solutions
- Saurabh Gupta (Gurugram)
- Amy Hebdon (Clarksville, TN): Google Ads Conversion Expert | We help established businesses grow with paid search | Improved ROI on $100MM+ spend
- Daniel Razumov (New York, NY)
- Josh Mangum (Charlotte, NC)
- Lauren Snyder (New York, NY)
- J Scavo (Los Angeles Metropolitan Area)
- Shrikant Latkar (Cupertino, CA)
- Blake Park (San Francisco Bay Area)
- Jason Hobbs (Los Angeles Metropolitan Area)
- Jennifer Houston (Los Angeles, CA)
- Daryl Eames (Manchester, NH)
- Max Ade (Atlanta, GA)
- Hamid Ghanadan (Boulder, CO)
- David Murphy (Phoenix, AZ)
- Josh Pierry (Redwood City, CA)
- Karl Isaac (Los Angeles Metropolitan Area)
- Dawn Sandomeno (Belmar, NJ)
Explore more posts
Seth Hirsch
The right testing strategy can uncover what's actually working. We helped a client tackle this exact problem with a targeted geographic test. We divided their trade areas into 3 groups:
1. Group A: Doubled Meta spend
2. Group B: Kept Meta spend flat
3. Group C: Removed Meta spend entirely
The results were eye-opening! Doubling Meta spend led to an incremental net positive ROI. With concrete data, the client secured approval to double their monthly Meta spend going forward. This is just one way we help clients identify their most impactful growth levers - and execute on them with confidence. No more flying blind.
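The readout math behind a geo holdout test like this is simple enough to sketch. The figures below are hypothetical, since the post doesn't share the client's actual numbers:

```python
# Hypothetical geo-test readout; none of these figures come from the post.
def incremental_roi(test_revenue, control_revenue, extra_spend):
    """Incremental ROI = (revenue lift vs. the flat-spend control) / extra spend."""
    lift = test_revenue - control_revenue
    return lift / extra_spend

# Group A doubled Meta spend (+$50k); Group B held spend flat (the control).
roi = incremental_roi(test_revenue=180_000, control_revenue=120_000, extra_spend=50_000)
print(round(roi, 2))  # 1.2 -> each extra $1 of Meta spend returned $1.20 incrementally
```

The point of the flat-spend control is that only revenue *above* it counts toward the ROI, which is what makes the result "incremental" rather than attributed.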
Trevor Sumner
📺 📺 📺 75% of in-store retail media will be deployed in the next 18 months!!!! This Grocery Doppio research paper is a wake-up call to grocery, big box and large #retail chains. The time is now to get moving or be left behind on one of the greatest profit drivers for brick-and-mortar retail. Some takeaways:
--- Retail media is good for shoppers, brands and retailers ---
🛍 There is a 38% increase in shopper engagement with in-store media, with 61% of shoppers saying the media content was useful!
💚 27% increase in impulse buys at grocers who enable in-store media
🛒 97% of grocers intend to adopt white-label or third-party solutions rather than build
📊 93% of #CPG brands want to tie digital and in-store engagement.
📈 The number of #retailmedia networks will double in the next 18 months
--- Challenges to scale ---
👩💼 79% of retailers say they don't have the talent to scale their #retailmedia business
💰 Only 3% of CPGs have significant additional budget for grocery media in 2024, and 89% want greater data accuracy for that media spend.
Download the report here --> https://lnkd.in/ejJQmeSu
Thanks for sending it my way, Lisa Goller, MBA
Pravin Shivarkar
Understanding Key Concepts in Google Analytics 4: Sampling, Threshold Limits, Cardinality, and Engagement Metrics
Starting on July 1, 2024, you will lose access to Universal Analytics data in the interface and the API, and via any product integrations like Google Ads or Search Ads 360. Google Analytics 4 (GA4) introduces several new concepts and metrics that are essential for understanding and optimizing your website’s performance. Among these, sampling, threshold limits, cardinality, and the distinction between bounce rate and engagement rate are critical.
Sampling vs Threshold Limits
Sampling occurs when GA4 processes only a subset of your data to generate reports. This usually happens with large datasets to ensure faster processing times. While sampling can provide quick insights, it may lead to less accurate data compared to unsampled reports. Threshold limits, on the other hand, are applied to ensure user privacy. When viewing data, especially in segments or with low user counts, GA4 might apply thresholds to prevent the identification of individual users. This ensures compliance with privacy regulations but can limit the granularity of your insights.
Cardinality
Cardinality refers to the uniqueness of values in a dataset. In GA4, high cardinality can occur when there are many unique parameter values, such as user IDs or custom dimensions. High cardinality can complicate reporting and analysis by creating an overwhelming number of unique entries. Managing cardinality is crucial for maintaining clear and actionable reports.
Bounce Rate vs Engagement Rate
In GA4, bounce rate has been replaced by engagement rate. Bounce rate traditionally measured the percentage of sessions where users left the site without interacting. Engagement rate, however, provides a more nuanced view by tracking the percentage of sessions that last longer than 10 seconds, have at least one conversion event, or involve multiple page views. This shift allows for a better understanding of user interaction and engagement with your content. Mastering these GA4 concepts—sampling, threshold limits, cardinality, and engagement metrics—can significantly enhance your ability to analyze and improve your website’s performance.
#GoogleAnalytics4 #GA4 #WebAnalytics #DigitalMarketing #UserEngagement #DataSampling #ThresholdLimits #Cardinality #BounceRate #EngagementRate
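The engaged-session definition above (longer than 10 seconds, or at least one conversion, or multiple page views) can be expressed directly in code. This is an illustrative sketch of the rule, not GA4's actual implementation:

```python
def is_engaged(duration_s, conversions, page_views):
    """GA4's rule: a session is 'engaged' if it lasts >10s,
    has at least one conversion event, or has 2+ page views."""
    return duration_s > 10 or conversions >= 1 or page_views >= 2

# Toy sessions as (duration_s, conversions, page_views).
sessions = [
    (4, 0, 1),   # quick bounce: fails all three criteria
    (25, 0, 1),  # engaged: stayed longer than 10 seconds
    (6, 1, 1),   # engaged: fired a conversion event
    (3, 0, 3),   # engaged: viewed multiple pages
]
engaged = sum(is_engaged(*s) for s in sessions)
print(engaged / len(sessions))  # 0.75 -> a 75% engagement rate
```

Note the criteria are OR-ed: a 3-second session with three page views still counts as engaged, which is exactly why engagement rate reads so differently from the old bounce rate.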
Himanshu Sharma
💡 The correct methods to analyze and report average metrics in #GA4
Nobody wants to be average, and yet we all love averages. That is why our analytics reports are all jam-packed with averages. We have 'average engagement time', 'engaged sessions per user', 'events per session', 'engagement rate'... and the list of average metrics goes on and on. To analyze and report above average, we first need to stop being obsessed with all the average metrics and take the insight they provide with a huge grain of salt.
Any set of measurements has two important properties: 1) the central value and 2) the spread about that value. We calculate the central value to determine a typical value in a data set. We measure the spread to determine how similar or varied the observed values are in a data set.
If the set of observed values is similar, then the average (or mean) can be a good representative of all the values in the data set. If the set of observed values varies by a large degree, then the average (or mean) does not represent all the values in the data set. We calculate the central value through the mean, median and mode. We measure the spread of data values through the range, interquartile range (IQR), variance and standard deviation.
Outliers are extreme values that are significantly different from the majority of data points in a set, and they affect the mean far more than the mode. The mean is sensitive to outliers because it involves the sum of all values: an extremely high or low outlier can skew the mean, making it a less reliable measure of central tendency. The mode is the value that appears most frequently and is not affected by the magnitude of the numbers, so outliers do not impact the mode unless the outlier itself becomes the most frequent value or otherwise changes the frequency distribution of the data set.
Calculating the median of every data set all day long can be very time-consuming and impractical, particularly for large datasets. So what is the solution? First measure the spread of data values in a data set, and then decide whether to trust the average value reported by your analytics tool, like GA4. There are two ways of measuring the spread:
1. Look at the distribution of values in a data set, then find and eliminate outliers (extreme values).
2. Calculate the spread through the IQR, variance or standard deviation.
Visualize the data using Looker Studio and show the data distribution instead of relying on the average metric.
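Python's standard library covers every central-value and spread measure the post lists. A minimal sketch with made-up engagement times, showing one outlier dragging the mean while the median barely moves and the 1.5×IQR rule flags it:

```python
import statistics

def iqr_outliers(values):
    """Flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < lo or v > hi]

# Hypothetical per-session engagement times in seconds; one extreme session.
times = [42, 45, 47, 44, 46, 43, 48, 300]
print(statistics.mean(times))    # 76.875 -- dragged up by the single outlier
print(statistics.median(times))  # 45.5   -- barely moved
print(iqr_outliers(times))       # [300]
```

With the outlier flagged, you know the 76.9s "average engagement time" a dashboard would report is not a typical value for this data set.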
Jayesh Easwaramony
Insightful post on attribution models by Ruler Analytics! I find the U-shaped and Time Decay models especially reflective of consumer behavior from my own purchasing experiences. Curious to see if anyone has explored attribution models indexed to expected funnel journey times. Thoughts? #attribution #marketing #martech #data
David Mihm
Excellent study from Damian Rollison and the SOCi, Inc. team which seems to confirm my longtime hypothesis around citations (https://lnkd.in/gxZQwdx3): the only citations that are valuable are the ones that rank in Google for the keywords you want to be known for. https://lnkd.in/g49DMSeN
Scott Zakrajsek
Ecom Brands: 10 Common Data Issues We Come Across
1/ Attribution Missteps. Relying on outdated MTA tools or last/first-touch attribution instead of contribution and incrementality models. This can lead to wasted ad spend. Start with simple geo-lift (MMT) and hold-out tests to measure incrementality. Understand that not all channels can be tracked with click-based tools.
2/ Pixel Problems. Incorrect, duplicative, or missing ad pixels lead to inaccurate data collection. Audit your pixels monthly for accuracy: GA, Meta, Google, TikTok, affiliate platforms, etc.
3/ No Server-side Tracking. Skipping server-side tracking results in signal loss and hinders ad optimization. You could be losing 20-30% of your conversions without a proper server-side setup.
4/ Excel Hell & Reporting Inefficiencies. Manual and infrequent updating of reports, often trapped in spreadsheets. Automate common reports. Even basic KPIs updated daily for the team will be a big win.
5/ Stale Product Catalogs. Bloated or incorrect product feeds with outdated information and mismatched pixel data. Audit your catalogs frequently. Segment your feeds. Set up alerts. Use a centralized feed source or vendor to streamline updates.
6/ UTM Inconsistencies. Poor campaign and UTM naming standards make performance breakdowns difficult. Create a standards document and ensure that all your marketers and agencies use it religiously.
7/ Customer Data Fragmentation. Lack of a single source of customer truth, with customer and first-party data spread across various systems. Start to ingest your first-party data into a data warehouse. You should own and know how to govern your customer data.
8/ KPI Blindness. Limited understanding of critical metrics like New Customers, LTV, MER, Contribution Margin and CAC, and of which to use when. Understand driver trees and ensure your team knows the key metrics for their channel.
9/ Forecast Failures. Poorly constructed forecasts or a complete lack of forward-looking projections. Even a basic forecast, shared across the team, will help.
10/ Data Silos. Isolated data and poor transparency across different teams and departments. Centralize datasets, KPIs and reports where possible, with better communication and data access.
The good news is that fixing these issues is 25% technical cleanup and 75% education/process improvement. What other data challenges do you face in your e-commerce operations? #ecommerceanalytics #dtcmarketing #measure #ecommerce
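Point 6 (UTM inconsistencies) is the kind of standard you can enforce mechanically once it's written down. A sketch assuming a hypothetical `source_campaign_YYYYMM` convention (the pattern itself is invented for illustration):

```python
import re

# Assumed convention for illustration only: lowercase source, campaign slug, YYYYMM.
UTM_PATTERN = re.compile(r"^(meta|google|tiktok|email)_[a-z0-9-]+_\d{6}$")

def check_utms(campaign_names):
    """Return the campaign names that break the naming standard."""
    return [n for n in campaign_names if not UTM_PATTERN.match(n)]

bad = check_utms(["meta_spring-sale_202405", "FB Spring Sale", "google_brand_202405"])
print(bad)  # ['FB Spring Sale']
```

Running a check like this weekly over exported campaign names catches agency drift before it fragments your performance breakdowns.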
Claudia Natasia
🌉 Reflections from #SFTechWeek 💡
1. Having too much data and not knowing what to build is a significant problem. The current tools and processes do not deeply analyze disparate sources of data or help product, research, and data leaders prioritize insights in a way that meaningfully reflects their customers’ changing worlds. Whenever I discussed this problem at events this week with a small group, more and more people joined our conversation, showcasing just how painful this problem is and how large this opportunity can be. We hope to continue solving this with Riley AI.
2. Building for a community you care about is a superpower. I started Riley because I faced similar pain points as a data scientist, researcher, and later, product leader. I began sharing my experiences and how I addressed these challenges within the organizations I’ve worked for in the past—at conferences and with people in the community. Since Riley started 6 months ago, these people have been helping us shape the product and are our strongest advocates. I feel so humbled and lucky that I get to spend my days learning and growing with our community. Building something for a community you are part of and care deeply about as a founder is a differentiator; you have an authentic voice that everyone trusts!
3. Increased funding velocity. #Founders, compared to last year, I’m seeing funding interest picking up at a stronger velocity. It was exciting to hear and witness the inspiring pitches and success stories from many founders I met this week.
4. The importance of empathy with your co-founder. One of the questions I was asked most often this week was, “How did you choose your co-founder?” Kevin Ma and I have a unique relationship where we can be truly open and honest with each other. When things are going well, we’re the first to celebrate our successes. When things get tough, we step away for a few minutes, go for a walk, and say, “This sucks! How are you doing?”, before collectively deciding what to do next. Find someone with whom you can be the full versions of yourselves and continue to nurture your empathy for each other. This support helps you navigate the difficult early days.
Thank you NextView Ventures, Garuda Ventures, Recall Capital, and Zendesk for inviting me to speak at the AI Founders panel. Special thanks to Lenny's Newsletter, Andrew Yeung (#LumosHouse), Silicon Valley Bank, The GenAI Collective, and TECH WEEK by a16z for hosting such wonderful events! Stay tuned for an exciting announcement in November with Riley AI!
Navneet Gill
Why does Google Search look so good in your marketing mix model (#MMM)? There are a few reasons:
1. Most MMMs are performance-focused. Performance-based metrics, like sales or dollars, are usually modeled as the dependent variable. The closer a medium sits to the lower funnel, the higher the probability of measured impact. Search usually sits close to these tactics and will continue to show strong ROI in most MMMs.
2. Search is self-selection. Anyone who is searching for your product, or your category, is already motivated to buy. You just happened to show your ad. This is called "self-selection bias": the searcher has already self-selected into the purchase, the way we each have our eye on a specific product in the #BlackFriday deals. Most MMMs do NOT correct for this bias, which overestimates Search's true incremental impact.
3. Search correlates well with TV and other media tactics. MMMs account for interaction effects, as in media A (say, TV) interacting with another medium (Search) and amplifying its impact. Search benefits from these interactions, on top of self-selection.
Does this mean your Search ROIs are inaccurate? Maybe. Maybe not. But it is highly probable they are inflated. #mmm #MTA Naavics
Eric Seufert
AdAttributionKit: Unpacking Apple’s new ad attribution framework AAK unifies Apple’s various advertising attribution frameworks under a single umbrella product name and extends SKAdNetwork in meaningful ways, including with support for re-engagement and alternative app marketplaces. These are substantive improvements. But impediments to adoption remain: IP-based attribution is still widely viewed as a suitable alternative to Apple’s native iOS attribution framework, meaning that platform tools like Conversion APIs will take adoption priority.
✅ Yury Vilk
🎥 Is Thumbstop Ratio (TSR) the secret sauce for video ad success? 🤔 After analyzing $1.4M+ in ad spend across 400+ ads, here's what I discovered about this buzzworthy metric:
📊 Key findings:
- TSR doesn't strongly correlate with ROAS, CPA, or revenue
- There's a weak positive relationship between TSR and CTR
- 38% TSR appears to be a good benchmark for engagement
💡 Takeaway: While TSR can be a useful secondary KPI for creative engagement, it shouldn't be your north star for performance.
🔍 Want to dive deeper? I've broken down the data, methodology, and insights in a full article: https://lnkd.in/gGrntvv3
👇 What's your take on TSR? Have you found it valuable in your campaigns? Share your experiences below! #VideoAdvertising #MarketingMetrics #DataAnalysis #DigitalMarketing #FacebookAds #MetaAds
Ray Jang 🐰
Want to create winning ads with AI? Learn more: https://tryatria.com
Creative is the new targeting. The new data is clear. The FB Data Science team revealed:
→ Creative is the SINGLE most important factor in delivery optimization
→ 56% of ALL action outcomes are driven solely by creative
→ Bid price & audience targeting? Secondary to creative
Creative refreshing is a MUST:
→ Fatigue sets in after just 1 (yes, ONE) view
→ By view 4? 📉 40% drop in CTR, 60% in conversions
Winning in 2024 means:
1. Diverse ad formats
2. Constant creative refreshing
3. Matching each platform's unique vibe
Invest more time in your creative. It's your top growth lever today.
👋 Follow Ray (지범) Jang 🐰 for more ad content. Helpful? ♻️ Repost to share!
Ethan Decker
SHOULD YOU TRUST IN-PLATFORM ATTRIBUTION MODELS? TL;DR: probably not.
Attribution is tricky. Last-click attribution is also, uh, sketchy. Why?
1) It often confuses correlation with causation.
2) It rarely accounts for the 10 or 12 other factors that are likely involved.
3) It’s got conflict-of-interest built in: of COURSE a platform wants to claim brand lift or sales impact or whatnot.
Econometric models are better (the good ones are). They factor in LOADS of things to tease apart effects. Is that still using correlation to assume causation? Mostly yes. But it’s miles better than naïve attribution models.
magic numbers compared 5 years of Google attribution model data to their own econometric model. According to Google, ~33% of sales could be attributed to someone clicking a Google ad. Woop! According to magic numbers, only ~9% was really incremental. Oopsie. The rest was people who were about to buy anyway, but The Goog was taking credit for them.
SOME LESSONS:
🔸 Be wary of what a platform claims it delivers: Mandy Rice-Davies applies.
🔸 Use a good econometrics model to get a better picture of incrementality & attribution.
🔸 Look up Mandy Rice-Davies.
#marketing #marketresearch #dataliteracy
Peter Caputa
Being the first analytics tool (that I know of) to offer up-to-date benchmark data (from 70+ different software tools) built right into the platform, we had to come up with a way to show benchmarks on our dashboards that made sense to our users. Feedback very appreciated... Below is the visual. Here’s the math and logic that goes into creating that visual.
1️⃣ First, we calculate benchmarks for the cohort that the user selects (company size, industry, etc.) based on the sample of companies in that cohort. The benchmark shown on a line chart like the one below, which shows "This Month's" data, is calculated from the previous month's data, since it is impossible to calculate a benchmark for a "this month" time range, given that the month isn't over.
2️⃣ To visualize a monthly benchmark on a daily line chart, we divide the previous month’s benchmark by the number of days in the current month. This gives us a “daily average benchmark” line.
3️⃣ To help someone compare their daily performance to the "daily average benchmark" line, we added a "median" line option for the current month's performance. Why? Most metrics visualized over a 30-day period fluctuate quite a bit each day. As you can see in this chart, the value for sessions varies a lot on a daily basis. So, to get a good comparison, it's best to compare the median of the company’s daily sessions to the 'daily average benchmark'.
4️⃣ What's the takeaway? This company is outperforming the benchmark. As you can see from the daily values of the current month's performance, it's pretty much always higher than the benchmark line for all but a few days in the beginning of the month. So, you’d expect the median value to be above the daily average benchmark too, as it is. In this case, it might make sense for the company to compare themselves to a cohort of bigger, more successful companies, if they want to challenge themselves. Or just take the win and keep trucking.
What do you think of our logic and how we're displaying benchmarks on line charts?
PS. Credit to the team for coming up with this: Katja Pozeb, Mateja Verlic Bruncic, Gasper Vidovic, among others, I'm sure.
PPS. If you'd like to try this for yourself, here's more about how it works: https://lnkd.in/eRN-xnEB
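Steps 2 and 3 above reduce to a few lines of arithmetic. A sketch with hypothetical numbers (a 12,000-session monthly benchmark and one week of invented daily values):

```python
import statistics

def daily_benchmark(monthly_benchmark, days_in_month):
    """Step 2: spread last month's benchmark evenly across this month's days."""
    return monthly_benchmark / days_in_month

def beats_benchmark(daily_values, monthly_benchmark, days_in_month):
    """Step 3: compare the median of the daily values to the daily average benchmark."""
    return statistics.median(daily_values) > daily_benchmark(monthly_benchmark, days_in_month)

sessions = [410, 380, 395, 460, 440, 455, 430]      # hypothetical daily sessions
print(daily_benchmark(12_000, 30))                   # 400.0 sessions per day
print(beats_benchmark(sessions, 12_000, 30))         # True -- median 430 > 400
```

Using the median rather than any single day is what makes the comparison robust to the day-to-day fluctuation the post describes.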
Chris Walker
Some big news…. We just launched DemandGPT, a niche LLM trained specifically on experiment data, advertising playbooks, forecasting models, step-by-step implementation guides, and much more from all of the collective learnings at Refine Labs over the past 5 years. Get instant answers to your most important Demand Gen questions and priorities, supported by data and detailed documentation.
- How should I plan my Marketing budget for next year?
- How do I structure my Google Ads account?
- How do I forecast my Demand Gen spend and impact for next year?
- What are the most important KPIs to present at my Marketing QBR?
- What metrics should I use to optimize my LinkedIn ads?
- How do I launch Connected TV ads?
- What are the best creative formats for Reddit ads?
- How do I launch a podcast in less than 30 days?
- How do I implement all of the Demand Gen data and tracking in my CRM?
The questions and answers are literally infinite. And the library of data, research and information is growing every day.
Now you don't need to bet your career or next year’s financial performance on a random blog you found in Google or anecdotal answers you got in a Slack community. Most of the information you find on the internet today about Demand Gen is outdated, written by someone in the Philippines (or now written by AI), anecdotal at best, not backed by data, and likely irrelevant to your situation. DemandGPT gives you instant answers, frameworks, best practices, implementation roadmaps and more with:
- Contributions from more than 100 leading Demand Gen experts like Sidney Waterfall, Sam Kuehnle, Ashley Lewin, Christian Williams, Tara Panu, Chris Walker, and many more.
- Data and learnings published from work at over 200 leading B2B companies.
- Learnings and insights gathered from more than 450,000 collective hours of work, tests, and data analysis.
There’s never been a more comprehensive hub of information to plan, launch, and optimize your Demand Gen strategy.
#marketing #demandgen #AI #strategy
P.S. You can get 15% off all Vault plans powered by DemandGPT using the coupon code "Chris15" at checkout. Offer valid until May 15, 2024 ✌️
Mikko Piippo
I don't think hourly billing is a good idea for clients buying digital analytics services. Shifting from hourly billing to fixed retainers could align the incentives of analytics agencies with the client's goals. Retainers encourage agencies to focus on delivering value and results, not on accumulating billable hours through complex setups. A focus on the lowest possible hourly fee just multiplies the hours needed: a bullet-proof method of creating misaligned incentives, courtesy of procurement. ❤️ Follow me for more posts about #digitalanalytics in the #realworld!
Raphael Paulin-Daigle
Don't let your CRO strategies fall short! If your CRO efforts aren't delivering results, you might be overlooking a critical element: qualitative research. Quantitative data shows you what users are doing, but qualitative insights:
✅ Uncover user motivations & pain points
✅ Identify root causes of behavior
✅ Guide impactful optimizations
Start combining qualitative research techniques (user interviews + usability testing) with your analytics data. It allows you to deeply understand your audience, prioritize high-impact changes, and create user-centric, conversion-driving experiences. How will you implement qualitative research to drive real results?
Michael Kaminsky
Misunderstanding uncertainty can be a real problem. Here’s an example: let’s say that we run a lift test showing that the impact of a Linear TV campaign is statistically significant, but has an ROI range between 0.2x and 15x. This is still potentially useful information, but it's very different from saying “The ROI of Linear TV is 8x and it's statistically significant.” This is why I always recommend reporting uncertainty from any approach to marketing measurement. Whether they’re results from a geographic lift test or from a media mix model, you need to understand uncertainty in order to make good decisions. I’ve found that it’s so, so important for data scientists and marketers to have a deep understanding of what uncertainty means and how it should be used in a decision-making framework – and this is definitely not always the case. Thomas Vladeck and I are putting together a live Office Hours session where we’ll discuss uncertainty in marketing measurement, common challenges we see, and how to communicate uncertainty to various stakeholders. I’d love to see you there – I’ll put the link in the comments below.
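One common way to get an ROI *range* like the 0.2x-15x example from a geo lift test is to bootstrap over geo-level results instead of reporting only the point estimate. A sketch with invented numbers (the post cites no specific dataset):

```python
import random
import statistics

random.seed(0)  # reproducible resampling for this sketch

def bootstrap_roi_interval(lift_per_geo, spend_per_geo, n_boot=2000):
    """Resample geo-level revenue lifts with replacement to get a rough
    90% interval on ROI, rather than a single point estimate."""
    rois = []
    for _ in range(n_boot):
        sample = random.choices(lift_per_geo, k=len(lift_per_geo))
        rois.append(sum(sample) / (spend_per_geo * len(sample)))
    qs = statistics.quantiles(rois, n=20)  # cut points at 5% steps
    return qs[0], qs[-1]                   # ~5th and ~95th percentiles

# Hypothetical incremental revenue lift per test geo, at $10k spend per geo.
lifts = [4_000, 52_000, 9_000, 130_000, 21_000, 6_500, 75_000, 12_000]
lo, hi = bootstrap_roi_interval(lifts, spend_per_geo=10_000)
print(f"ROI 90% interval: roughly {lo:.1f}x to {hi:.1f}x")
```

When geo-level results vary this much, the interval is wide, and that width, not the point estimate, is what should drive the budget decision.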
Mike Black
Last week, I presented this framework for how brands should be using advanced #analytics, automation + #ai to scale PDP optimization. (If you missed it on firstmovr's Future of Content Summit, here's the gist.)
The #ecommerce content game used to be one of "checking the box on compliance," but shoppers and retailers have raised the bar and we must be winning on "effectiveness" (actual impact). ✅ --> 🎖
As such, we must treat #content more like a science than an art -- which is more achievable thanks to #technology. 🎨 --> 🔬
1. Using predictive models, you can reverse engineer the science of what drives conversion and #SEO on most retailer sites, thereby prioritizing optimizations. You can also reverse engineer the keywords shoppers most search to navigate to your brands and competitors. This can all be done at a specific retailer and category level vs. just using Amazon as a proxy. 📏
2. Using science as the engine, you can train #genai to create optimized bullets, titles and descriptions that get you to a retail-ready PDP 6-7x faster. We're talking minutes, not hours. ⏱ This unlocks the ability to scale optimization and more sales across your whole portfolio, not just your "favorite children." ❤️ Since copy is just one element that drives conversion, you must apply a scientific approach to optimizing images too, via tools like Vizit.
3. These new capabilities should tie into your #digitalshelf analytics, both to protect content accuracy and to measure outcomes -- such as increases to organic search, traffic, conversion and sales. Having gen AI point solutions that don't tie into your digital shelf data and connect in an end-to-end way may add unnecessary complexity. ➰
If you'd like to watch the full recording, leave a comment below. 👇
Shout out to Lauren Livak Gilbert, my research partner at The Digital Shelf Institute, who's been instrumental in helping the industry elevate beyond the basics of content.