Phase II of MFI is Chrome - Response to Comments from Pedro Dias

Hey Pedro - Thanks for the thoughtful feedback, and the respectful tone in which it was shared. I definitely respect you and your opinions, and am happy to engage with people like you, who can disagree without being disagreeable! :P My response is too long to be a comment, so I have posted it here as an article: [you are here - on Pedro's post, this is a link.]

You make some good points here, but in my opinion, many of the responses you offer are simply your own claims and perspectives, similarly lacking in absolute proof - like the claims of mine that you are objecting to. That is fine - it leads to a great conversation - but in those cases, I don't think that your perspectives necessarily carry any more weight than mine. We each have perspectives that are fed by our own experiences and backgrounds, and I want to honor those differences - if everyone always agreed, life would be boring.

I knew that this would be a controversial topic that would require people to suspend some disbelief to consider - hence the UFO theme. The metaphorical point of that theme plays out here: I am saying that you shouldn't just listen to what Google wants us to believe, but instead consider what you actually see. Some of your responses are simply an appeal to Google and what Google wants us to believe. The talk suggests that you should avoid simply accepting what Google says and that you should intentionally question it; it suggests that you should actively consider what Google could be doing that they are not saying.

FYI - I do now have at least one person who has designed and launched a test to help prove the theory out further. If other people would like to set up different tests, I would love to talk to them and help. I didn't set up my own test because I am not a developer, and would never test theories on client sites. I don't want to spend my own money to prove that Google is covering something up when Google could add clarity and transparency for free. I have added some quick responses to your points below. My plan is to write a companion article to go along with this video, so I will include responses there too, when it is ready.

Point by point responses - I removed Pedro's comments for brevity and clarity, but please do read them if you get lost and want to understand what I am responding to. Pedro's post is here: https://www.linkedin.com/posts/pedrodias_i-watched-cindy-krums-presentation-and-activity-7247172051080212480-IEUr/?utm_source=share&utm_medium=member_desktop

Direct responses - (with core arguments in bold italics for anyone who is just skimming):

5 min

- Histograms: You say that they are not necessarily used for tracking, but then what are they for? They are certainly tracking something here, even if it is ostensibly for UX and performance rather than user behavior - but why would we assume that the tracking is limited to the things that Google can justify, when it is clear they could be capturing more? Based on the histograms' data labels, it did seem like Google was tracking both Clicks in Chrome and Clicks from search in the histograms.

6 min

- MFI: We simply have different perspectives here. You thought it was a good launch, but then acknowledged that Google struggles to communicate externally. I agree that Google struggles to communicate externally, and my theory is that they struggle because they were trying not to say certain things. Being clear is tough when you don't want people to figure certain things out. This is also inconsequential - whether they did a good job or a bad job, it does not change what I am suggesting.

7 min

- “All of a sudden”: Again, this is your perspective, and we simply disagree. I definitely agree that ‘mobile’ created a significant challenge for Google, and they had tried a few things before launching MFI. I think this point is a bit inconsequential though. Even if it was not all of a sudden, that does not change anything in the core argument.

The Merj/Vercel test did seem a bit suspect, and that is why I tried to verify the methodology. I can't speak to anything beyond that, but I found the reported 100% JS rendering rate quite surprising. I will leave it to them and to Malte to defend the test.

As for the test that I mentioned from Tom Anthony, it has not been published but, as I understand it, is still ongoing. He mentioned the preliminary testing to me at a dinner in early 2024. It is possible that the final results will be different when it is published - I don't know. It was included for contrast, and to illustrate the question I wanted to answer. It prompted the thought experiment behind this investigation: how could these two very smart people have such different results? Is there a way they could both be right?

11 min

- Stateless Rendering: I think that this is an assumption based on how Google crawlers historically worked, but it is not necessarily true now. I think Google rendering with cookies, and potentially personalizing results, is why Google stopped allowing us to see cached pages with the cache link and cache operator. I do think that if a page is rarely or never visited, it rarely gets rendered, because Google deems it less important. It is a natural filtering system for Google - a feature, not a bug. Tom's test said that Googlebot did render JS 2% of the time, so maybe this is where that comes into play. I am not saying that Google can't render JS - I am saying that, in this proposed model, they don't need to waste their resources doing it.
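For anyone who wants to set up such a test, here is a minimal sketch of the kind of client-side probe that could measure how often a crawler actually renders JavaScript. To be clear, this is my own illustration, not Tom's unpublished methodology; the /render-probe endpoint and its parameters are hypothetical.

```typescript
// Minimal rendering-detection probe (hypothetical endpoint and parameters).
// This script only executes if the fetching agent actually renders JavaScript,
// so comparing /render-probe hits against raw HTML fetches in the server logs
// estimates how often a crawler renders a page rather than just crawling it.
const probe = new URL("/render-probe", location.origin);
probe.searchParams.set("page", location.pathname);
probe.searchParams.set("ua", navigator.userAgent);
probe.searchParams.set("t", Date.now().toString());

// sendBeacon is fire-and-forget and survives page unload; fall back to fetch.
if (!(navigator.sendBeacon && navigator.sendBeacon(probe.toString()))) {
  fetch(probe.toString(), { method: "POST", keepalive: true }).catch(() => {});
}
```

Filtering both the probe hits and the raw HTML fetches down to verified Googlebot requests (user agent plus Google's published IP ranges) would yield a rendered-fetch rate directly comparable to the 2% figure above.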

15 min

- Ben Gomes quote: I disagree that it is far-fetched. Did Google have a high number of low-quality computers in 2018? Chrome launching updates to fix security threats does not mean that all Chrome updates are for security reasons. Both can be true - Chrome can be updating to patch security threats and to adjust for Google's data collection/rendering/algorithmic needs. One does not preclude the other.

17 min

- New GSC: I always wonder when software is replaced rather than fixed - what was so broken about Old GSC that it could not be used any more? Google did not offer any explanation here. Google also significantly softened their communication about cloaking - going from “You could get a manual penalty for showing something different to the bot than you show to users” to “As long as you are adapting the page for the benefit of users, it is ok with us.” Robots.txt may still officially be considered a ‘directive,’ but Google has said that if you want it to work, you need on-page robots instructions. Whether or not it is technically a ‘directive’ is inconsequential here; the point is that Google needs the instruction on-page because they are not crawling the way they used to, and are instead fetching rendered pages in a more ‘one-at-a-time’ method. This is inconsequential to the core argument though - just an interesting observation that should prompt us to want more information.
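To make the distinction between the crawl directive and the on-page instruction concrete, here is a minimal Express/TypeScript sketch; the routes and content are hypothetical. Robots.txt asks a crawler not to fetch a URL at all, while the X-Robots-Tag header (the header equivalent of an on-page meta robots tag) tells a search engine not to index a page it has already fetched and rendered - the kind of signal that only matters once pages are being fetched one at a time.

```typescript
// Hypothetical Express app contrasting crawl-level and page-level robots controls.
import express from "express";

const app = express();

// Crawl-level directive: robots.txt asks crawlers not to FETCH these URLs.
app.get("/robots.txt", (_req, res) => {
  res.type("text/plain").send("User-agent: *\nDisallow: /private/\n");
});

// Page-level instruction: X-Robots-Tag asks engines not to INDEX a page
// they have already fetched and rendered.
app.get("/private/report", (_req, res) => {
  res.set("X-Robots-Tag", "noindex, nofollow");
  res.send("<html><body>Internal report</body></html>");
});

app.listen(3000);
```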

18 min

- Core Web Vitals: We agree. Google actively communicated that CWV was using synthetic and field data. The phrase “field data” seems like an intentional choice that goes beyond passively captured RUM metrics - a metric like INP, which combines click data with loading behavior, can only be computed once the user's interaction with the page is complete. That is more active than just capturing ‘Document Complete.’ My point is that opening the door to field data/RUM data from Chrome is an important change that potentially opens the door to other things too.
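For context on what makes INP this kind of active metric, here is a minimal sketch of how field (RUM) data is typically collected with Google's open-source web-vitals library. The /cwv-beacon endpoint is a placeholder of mine; this shows the public collection path available to site owners, not whatever Chrome reports internally.

```typescript
// Minimal field-data (RUM) collection sketch using the web-vitals library.
// The /cwv-beacon endpoint is a placeholder, not a real Google endpoint.
import { onCLS, onINP, onLCP } from "web-vitals";

function sendToAnalytics(metric: { name: string; value: number; id: string }) {
  const body = JSON.stringify(metric);
  // INP is reported late (often only at pagehide), once the full duration of
  // the user's interactions is known - more active than 'Document Complete'.
  if (!(navigator.sendBeacon && navigator.sendBeacon("/cwv-beacon", body))) {
    fetch("/cwv-beacon", { method: "POST", body, keepalive: true }).catch(() => {});
  }
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```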

22 min

- Chrome's RAM problem: We agree. Didn’t you ever wonder why Chrome needed so much RAM?

24 min

- Chrome as rendering resource: The pages were viewable without a login, and they could have been linked to, but the platforms should not have been accessible to a bot at all. Indexing and ranking of private documents is a huge concern, and Google chose not to address it at all. The lack of transparency here is quite problematic. Yes - those pages could have been linked to, or they could have been accidentally captured and cached from logged-in users' computers/phones - until we know more, either is possible.

30 min

- Google collects Chrome data: You don't think that Chrome data is used in the way I laid out in the presentation. This is fine - we disagree; but you offer no more proof than I do, other than appeals to what Google has self-reported about what they are doing. You believe them and I don't - that is fine. Let's see what we can find out.

We are both entitled to our own perspectives, and I appreciate the skepticism. Even if I was not able to convince you beyond a reasonable doubt, the presentation raises interesting and important questions about exactly how Google is using Chrome, and exactly what part of their business is being supported by processing power on our own computers. We know that they are using Chrome for Core Web Vitals - they have been clear about that, as you said - but what other data is being captured and shared? Who is monitoring this to make sure that is all they are taking? Should we just take Google at their word about what they are capturing - especially when they have been found to be so deceptive with evidence in their court proceedings?
