
Tools to collect performance metrics #64

Closed
eugene-manuilov opened this issue Dec 22, 2021 · 7 comments
Labels
Needs Discussion Anything that needs a discussion/agreement

Comments

@eugene-manuilov
Contributor

Once we figure out the list of metrics (#63) that we need to measure, we will need to select which tools we want to use to collect metrics. Ideally, all contributors should be able to run these tools on their own computers without any limitations.

Please suggest tools that you think we would need to use, and provide some details about why we should use them.

Tools to track frontend metrics

  • TBD

Tools to track backend metrics

  • TBD
@felixarntz
Member

@lolautruche @josephscott Pinging you here since you both already had some ideas and made progress on potential tools to collect performance metrics. #63 is also related to that.

@lolautruche

Hey there 👋

Thank you @felixarntz.
As for backend measurements, Blackfire can help a lot.
Blackfire provides detailed profiles that show where the bottlenecks are, what we can improve, and so on.
On top of that, we can define metrics in a .blackfire.yaml file, which can then be used for automated testing (tests are evaluated against the metrics in each profile).
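
For illustration, a minimal sketch of what such a file could look like, assuming the standard Blackfire test syntax; the path, thresholds, and metric choices below are placeholders rather than agreed-upon values:

```yaml
# .blackfire.yaml: illustrative sketch only; thresholds and paths are placeholders
tests:
    "Front page should stay fast and lean":
        path: "/"                               # request path (regex) this test applies to
        assertions:
            - "main.wall_time < 100ms"          # overall response time
            - "main.peak_memory < 10mb"         # memory ceiling
            - "metrics.sql.queries.count < 30"  # number of SQL queries executed
```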

Eventually, once we have defined metrics and tests, we can define scenarios, which could be run for each PR or each time we push a commit, for example.

This works best with Platform.sh integration:

  1. A PR is opened
  2. Platform.sh deploys it automatically
  3. Once it's deployed, a Blackfire build is triggered thanks to the integration with Platform.sh
  4. We get a build report and a commit status on the PR
@lolautruche

Regarding metrics, I wrote several articles explaining what we can do with them in Blackfire:

@ecotechie

ecotechie commented Jan 2, 2022

Any thoughts on using Xdebug with profiling enabled? It is free, open source, easy enough to set up locally, and very powerful when its output is viewed in a Cachegrind-compatible tool. I use KCachegrind, but any other viewer would do...
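
For reference, a minimal sketch of the relevant php.ini settings, assuming Xdebug 3; the output directory is just an example:

```ini
; php.ini sketch: enable Xdebug's profiler (Xdebug 3 syntax)
zend_extension=xdebug
xdebug.mode=profile
; only profile when a trigger (e.g. the XDEBUG_TRIGGER cookie/GET/POST parameter) is present,
; so that not every request is profiled
xdebug.start_with_request=trigger
; where the cachegrind.out.* files are written; open them in KCachegrind, QCachegrind, or Webgrind
xdebug.output_dir=/tmp/xdebug
```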

@eclarke1 eclarke1 added Needs Discussion Anything that needs a discussion/agreement and removed [Type] Discussion labels Jan 17, 2022
@dainemawer
Contributor

I can't speak too much to the backend profiling here, but I'm wondering whether we should go barebones on measuring frontend metrics.

I've seen some of the Performance Advocates at Google use the web-vitals library in their own personal projects: https://www.npmjs.com/package/web-vitals#basic-usage

Some motivations:

Pros

  1. Tiny file size (~1 KB)
  2. Accurate and doesn't reinvent the wheel; there's no need for us to create our own way of reporting when this package is maintained by the Google Performance team
  3. Measures CLS, FID, and LCP, as well as FCP and TTFB
  4. Allows you to report/send delta data (the difference between the current value and the last-reported value)

Cons:

  1. No visibility into iframe content, which could lead to discrepancies in the data.
  2. Would require a top-level API so WordPress end users can extend it.
  3. Adds an additional request to the page (and to every page, for that matter).
  4. The reporting format is left open to the end user (it requires some thought about how to derive meaning from the data).

The other benefit is that this library allows us to send the data in a meaningful way to a report, endpoint, or dashboard. That could be really useful not only for reporting within the WordPress admin dashboard, but also for helping users send data to Google Analytics, Google Tag Manager, or even Firebase.
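
As an example, here is a minimal sketch of that reporting pattern, following the package's basic-usage docs; the /analytics endpoint is a placeholder, and the function names are getCLS/getFID/getLCP in v2 of the package (renamed to onCLS/onFID/onLCP in v3):

```js
// Minimal web-vitals reporting sketch (v2-style API); '/analytics' is a placeholder endpoint.
import { getCLS, getFID, getLCP, getFCP, getTTFB } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  // Prefer sendBeacon so the request survives page unload; fall back to fetch.
  if (navigator.sendBeacon) {
    navigator.sendBeacon('/analytics', body);
  } else {
    fetch('/analytics', { body, method: 'POST', keepalive: true });
  }
}

getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);
getFCP(sendToAnalytics);
getTTFB(sendToAnalytics);
```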

The code has great support all the way back to IE 9.

@jjgrainger
Contributor

I have been working on a proposal looking into tools we can use for capturing frontend performance metrics, as well as #63, "Define a list of metrics that we need to measure" (document here).

One potential solution for collecting metrics could be Lighthouse CI. It would run automated Lighthouse audits within the CI pipeline to test performance as features are developed and before they are released.

It comes with a number of useful features, such as configuring how many runs to perform and which URLs to test, and running assertions against a performance budget (budget.json).
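
For illustration, a minimal lighthouserc.js sketch; the URL, run count, and budget file below are placeholders, not proposed values:

```js
// lighthouserc.js: illustrative sketch; URL, run count, and budget file are placeholders
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:8080/'], // local WordPress instance to audit
      numberOfRuns: 3,                 // several runs reduce variance between audits
    },
    assert: {
      budgetsFile: 'budget.json',      // fail the run when the performance budget is exceeded
    },
    upload: {
      target: 'temporary-public-storage', // or a self-hosted Lighthouse CI server
    },
  },
};
```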

It can also be paired with the Lighthouse CI server, which provides a dashboard where developers can compare performance between commits.

There is also a GitHub Action available that we could potentially use as a quick starting point.

@joemcgill
Member

Since this issue was initially opened, we've documented the tools we use for measurement in our handbook: https://make.wordpress.org/performance/handbook/measuring-performance/

Closing this as complete, and we can open new tickets for any follow-up work.
