We're taking Codecov beyond Coverage with Flaky Test Detection, AI Test Generation, and AI-powered code review. See how to get started here: https://lnkd.in/gXgwd5yQ Or watch 👇
Beyond Coverage: Flaky Test Detection, AI Test Generation, and More
Transcript
Hi everyone, my name is AJ. I'm a Product Manager working on Codecov here at Sentry. In the pursuit of code quality, developers often waste time in two areas: searching for hidden information and doing mundane but necessary tasks. This year, Codecov is working to help eliminate that wasted time, allowing developers to spend more time building while still maintaining high code quality. We've expanded beyond just code coverage to offer a few new time-saving, code-quality-enhancing tools: Bundle Analysis, Test Analytics, AI PR Review, and AI Test Generation. We spoke about some of these in the spring, and we've been hard at work improving and enhancing them since. This isn't just for paying Codecov users; it's for anyone who uses the product, and everyone will benefit from the improvements we've made this year. So let's dive deeper into what we built for you, starting with Bundle Analysis.

Hardly anything alienates customers or users faster than a slow-loading, unresponsive front end, and the culprit is often a JavaScript bundle that's just too big. That's why we've continued to improve Bundle Analysis: it's specifically designed to help JavaScript and TypeScript developers avoid shipping bundles that would degrade site performance. Let's dive into a demo to show exactly how this works. In a PR comment, Codecov will show you the change in total bundle size. If there are multiple bundles, it also shows which specific bundles changed, their total size, and whether the change is within your configured threshold. Optionally, you can set a threshold for bundle size changes that will trigger an alert, both in the PR comment and in the status checks at the bottom of your pull request. You can set a status check as informational or blocking to keep an eye on bundle changes that might impact your app's performance.
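If you want to try this yourself, the setup is a short plugin configuration in your bundler. Here's a minimal sketch for a Vite project; the plugin and option names follow Codecov's Bundle Analysis docs, but treat the specific values (bundle name, token handling) as illustrative assumptions to check against the current docs:

```typescript
// vite.config.ts - minimal sketch of enabling Codecov Bundle Analysis.
// Assumes @codecov/vite-plugin is installed and CODECOV_TOKEN is set in CI.
import { defineConfig } from "vite";
import { codecovVitePlugin } from "@codecov/vite-plugin";

export default defineConfig({
  plugins: [
    codecovVitePlugin({
      // Upload bundle stats only when a token is present (e.g., in CI)
      enableBundleAnalysis: process.env.CODECOV_TOKEN !== undefined,
      bundleName: "my-app-bundle", // hypothetical label shown in the Bundles tab
      uploadToken: process.env.CODECOV_TOKEN,
    }),
  ],
});
```

The threshold and the informational-versus-blocking behavior mentioned above are configured in codecov.yml. A hedged sketch of what that can look like (the key names are my reading of the docs, so verify them before relying on this):

```yaml
# codecov.yml - hedged sketch of bundle-size alerting
bundle_analysis:
  warning_threshold: "5%"   # alert when a PR grows the bundle by more than 5%
  status: "informational"   # report only; set to true to make the check blocking
```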
Moving from the PR comment to an optimization workflow, you'll see the Bundles tab within Codecov. This tab now shows bundle size changes over time. It displays all the different assets, and the modules within those assets, and it also allows you to filter by different asset and loading types. A bundle can contain a variety of assets, such as JavaScript, CSS, and images, and you can now choose which asset types or loading types you want to focus on and filter specifically for those. For instance, if you want to focus on entry files to optimize your app's initial load speed, this tool allows you to do that. When I filter by an asset type, in this case images, you'll notice that everything changes: the trend chart and the summary bar at the top both update to reflect the selected filter. This makes it really easy to see how much of your total bundle size is taken up by images. For example, you'll notice that at the top, the blurred table image makes up a third of all images within this bundle. This filtering makes it easy to identify the largest assets and where you might want to start if you're looking to optimize bundle size, improve app speed, and boost search result performance. All of these changes we've made to Bundle Analysis make it easier to monitor your bundle size and to optimize it, ensuring your app remains fast and performs well in Google Search rankings. Whether it was the demo that did it, or a desire not to be blamed the next time your site has performance problems, I hope you give Bundle Analysis a try. Moving right along now, Codecov engineer Ajay is going to tell you about Test Analytics.

While testing your code, you've undoubtedly experienced the frustration of wasting time sorting through test failures, trying to figure out which ones point to a code issue and which ones might actually be flaky. Codecov Test Analytics solves that frustration by helping you streamline your CI pipeline and focus on real issues instead of chasing down sporadic test failures. We'll dive into exactly how this works in a demo. Setting up Codecov Test Analytics is as simple as generating a JUnit XML file from your test suite, hooking the Codecov test results action into your CI, and finally running your test suite to generate your results. If you've already been using Test Analytics, there are a few minor changes to Python and Jest setups; check out the docs to learn more.
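As a concrete sketch of that setup, here's roughly what a GitHub Actions job could look like for a Jest project. The codecov/test-results-action step follows Codecov's docs; the rest of the workflow (the jest-junit reporter, file layout, step names) is an illustrative assumption rather than something shown in the video:

```yaml
# .github/workflows/ci.yml - hypothetical Test Analytics setup for a Jest project.
# Assumes jest-junit is installed as a devDependency.
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # jest-junit writes a JUnit XML report that Codecov can ingest
      - run: npx jest --reporters=default --reporters=jest-junit
      - name: Upload test results to Codecov
        # upload even when tests fail, since failures are what we want to report
        if: ${{ !cancelled() }}
        uses: codecov/test-results-action@v1
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
```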
The latest iteration of the pull request comment differentiates between failed and flaky tests, telling you when it's time to rerun a test due to flakiness and when you need to spend time fixing your code because of a legitimate failure. You can click into the dropdown to view three of your failed tests, ordered by the shortest runtime. We've made it easy to copy the test suite and paste it directly into your terminal, and you can view the stack trace of each relevant test failure in the next dropdown.

We've also added some major updates to Test Analytics in the Codecov UI. In our first iteration, we shipped some basic metrics for your test instances, including the ability to view test results across all your repositories' branches. In this iteration, we've added more information to help you identify your biggest testing problems, starting with three additional selectors at the top of the page. Historical trend lets you view your test runs over a period of up to 30 days prior. The test suite selector lets you filter by specific test suites, if your organization has them configured. And flags lets you filter on your Codecov-configured flags. In the middle section, we have several metrics, including total test run time, slowest tests, total test failures, and skipped tests, and, for Pro and Enterprise customers, total flaky tests and average flake rate. The numbers highlighted in blue are clickable, filtering your results to that category. The percentages next to each of the metrics show your trend over time, green signaling a decrease and red an increase. One thing to note: since these metrics are aggregates for your entire repo, the metrics section is only available on your organization's default branch and will disappear when you use the branch selector to navigate to another branch. Finally, the table view below now shows a much cleaner version of your test names, plus a new column for the flake rate of each test, which, when highlighted, shows the total number of passes, fails, and skips for that particular test. So that's Test Analytics. We hope you can see how easily it slots into your existing workflows, speeds up development, and unblocks your CI. Now over to Rohit to talk about our brand new AI tools.

A few months ago, we introduced new AI-powered features to help with code review. Today I get to share some updates and two new features. First, AI PR Review, which is now in open beta, uses AI to provide code improvement suggestions on your PR. Think of it as an automated teammate that catches common issues, making life easier for you and anyone reviewing your PR. Let me show you how this works. Let's start with AI PR Review. Here we have a pull request that proposes a few changes to the UI. It adds an event handler for a form submission and some logic that details what happens when it is invoked. From here, we can ask the Codecov AI assistant to review our PR by simply mentioning it in a comment. Within a few seconds, the reviewer will have feedback on these changes ready to go. It correctly points out logical inconsistencies in this function and notes that we're missing a few edge cases. The neat part: the assistant also suggests a fix, adding validation to check for empty fields and improving error handling. Since the assistant provides code suggestions in its review comments, we can apply suggested fixes with a single click. The workflow is straightforward, and you get quick, insightful reviews without having to bug another developer.

Next, we're rolling out AI Test Generation. This tool ingests the diff from your pull request along with your coverage and test result information, and then creates tests to cover your changes. This will dramatically speed up the process of improving your test coverage over time, making it easier to reach your coverage thresholds and ensure you're shipping quality code. Here we have a PR where we add a function that takes in some text and generates a sentiment for each sentence. Normally, we would write unit tests for this function and push them to the branch, but this time we can use Codecov to do that for us. To do this, we use the Sentry extension for GitHub Copilot: we mention the Sentry bot and ask it to generate unit tests for us. Within a few seconds, we have a full suite of thorough and, more importantly, working tests that can easily be integrated back into our original changes. These tests are created on their own branch, based off of your feature branch, so if you like the tests and want to incorporate them, all you have to do is merge the PR. If you want to change the tests, you can do so on this branch as well; just merge those changes back into your original PR and you're done. This feature is going to make testing and managing code coverage a lot less painful, so I hope you give it a try. Now AJ is going to close us out.

To recap: using all of these new features, you can easily build a robust test suite, meet your coverage needs, ensure a speedy app, and get lightning-fast reviews without ever leaving your PR. We believe this will make you a more efficient and less stressed engineer. Please try them all out and let us know what you think. Thanks again.
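To give a feel for the test-generation demo, here's a hedged sketch in TypeScript. The video doesn't show the actual source, so both the sentiment function and the Jest tests below are hypothetical: the function stands in for the per-sentence sentiment code added in the demo PR, and the tests illustrate the kind of suite the assistant is described as generating.

```typescript
// sentiment.ts - hypothetical stand-in for the function added in the demo PR:
// split text into sentences and assign each a naive sentiment label.
export type Sentiment = "positive" | "negative" | "neutral";

const POSITIVE = ["good", "great", "love"];
const NEGATIVE = ["bad", "terrible", "hate"];

export function sentencesWithSentiment(
  text: string
): Array<{ sentence: string; sentiment: Sentiment }> {
  return text
    .split(/(?<=[.!?])\s+/) // split on sentence-ending punctuation
    .filter((s) => s.trim().length > 0)
    .map((sentence) => {
      const lower = sentence.toLowerCase();
      const score =
        POSITIVE.filter((w) => lower.includes(w)).length -
        NEGATIVE.filter((w) => lower.includes(w)).length;
      const sentiment: Sentiment =
        score > 0 ? "positive" : score < 0 ? "negative" : "neutral";
      return { sentence, sentiment };
    });
}

// sentiment.test.ts - the kind of Jest suite the assistant might generate,
// covering the happy path plus an empty-input edge case.
import { sentencesWithSentiment } from "./sentiment";

test("labels each sentence independently", () => {
  expect(sentencesWithSentiment("I love this. This is terrible.")).toEqual([
    { sentence: "I love this.", sentiment: "positive" },
    { sentence: "This is terrible.", sentiment: "negative" },
  ]);
});

test("returns an empty array for empty input", () => {
  expect(sentencesWithSentiment("")).toEqual([]);
});
```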