Lighthouse performance scoring
How Lighthouse calculates your overall Performance score. Sep 19, 2019 (updated Jun 4, 2021). Appears in: Performance audits.

In general, only metrics contribute to your Lighthouse Performance score, not the results of Opportunities or Diagnostics. That said, improving the opportunities and diagnostics likely improves the metric values, so there is an indirect relationship. Below, we've outlined why the score can fluctuate, how it's comprised, and how Lighthouse scores each individual metric.

Why your score fluctuates #

A lot of the variability in your overall Performance score and metric values is not due to Lighthouse. When your Performance score fluctuates, it's usually because of changes in underlying conditions. Common problems include:

  • A/B tests or changes in ads being served
  • Internet traffic routing changes
  • Testing on different devices, such as a high-performance desktop and a low-performance laptop
  • Browser extensions that inject JavaScript and add/modify network requests
  • Antivirus software

Lighthouse's documentation on Variability covers this in more depth. Furthermore, even though Lighthouse can provide you a single overall Performance score, it might be more useful to think of your site performance as a distribution of scores, rather than a single number. See the introduction of User-Centric Performance Metrics to understand why.

How the Performance score is weighted #

The Performance score is a weighted average of the metric scores. Naturally, more heavily weighted metrics have a bigger effect on your overall Performance score. The metric scores are not visible in the report, but are calculated under the hood.


The weightings are chosen to provide a balanced representation of the user’s perception of performance. The weightings have changed over time because the Lighthouse team is regularly doing research and gathering feedback to understand what has the biggest impact on user-perceived performance.

Explore scoring with the Lighthouse scoring calculator webapp.

Lighthouse 8 #

The Lighthouse 8 metric weightings are:

  • First Contentful Paint: 10%
  • Speed Index: 10%
  • Largest Contentful Paint: 25%
  • Time to Interactive: 10%
  • Total Blocking Time: 30%
  • Cumulative Layout Shift: 15%

Lighthouse 6 #

The Lighthouse 6 metric weightings are:

  • First Contentful Paint: 15%
  • Speed Index: 15%
  • Largest Contentful Paint: 25%
  • Time to Interactive: 15%
  • Total Blocking Time: 25%
  • Cumulative Layout Shift: 5%
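As an illustration of the weighted average, here is a minimal sketch in TypeScript. The weights are the Lighthouse 8 weightings; the metric IDs and the example metric scores are hypothetical inputs, not output from a real Lighthouse run.

```typescript
// Lighthouse 8 metric weightings (fractions summing to 1).
const LH8_WEIGHTS: Record<string, number> = {
  "first-contentful-paint": 0.10,
  "speed-index": 0.10,
  "largest-contentful-paint": 0.25,
  "interactive": 0.10,          // Time to Interactive
  "total-blocking-time": 0.30,
  "cumulative-layout-shift": 0.15,
};

// Each metric score is 0-100; the Performance score is their weighted average.
function performanceScore(metricScores: Record<string, number>): number {
  let total = 0;
  for (const [metric, weight] of Object.entries(LH8_WEIGHTS)) {
    total += weight * (metricScores[metric] ?? 0);
  }
  return Math.round(total);
}

// Hypothetical page: strong paint metrics but heavy main-thread blocking.
const score = performanceScore({
  "first-contentful-paint": 95,
  "speed-index": 90,
  "largest-contentful-paint": 85,
  "interactive": 80,
  "total-blocking-time": 40,
  "cumulative-layout-shift": 100,
}); // → 75
```

Note how the 30% weight on Total Blocking Time drags the overall score down despite five strong metrics; this is why heavily weighted metrics dominate the result.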

How metric scores are determined #

Once Lighthouse is done gathering the performance metrics (mostly reported in milliseconds), it converts each raw metric value into a metric score from 0 to 100 by looking at where the metric value falls on its Lighthouse scoring distribution. The scoring distribution is a log-normal distribution derived from the performance metrics of real website performance data on HTTP Archive. For example, Largest Contentful Paint (LCP) measures when a user perceives that the largest content of a page is visible. The metric value for LCP represents the time duration between the user initiating the page load and the page rendering its primary content. Based on real website data, top-performing sites render LCP in about 1,220ms, so that metric value is mapped to a score of 99.

Going a bit deeper, the Lighthouse scoring curve model uses HTTP Archive data to determine two control points that then set the shape of a log-normal curve. The 25th percentile of HTTP Archive data becomes a score of 50 (the median control point), and the 8th percentile becomes a score of 90 (the good/green control point). While exploring the scoring curve plot below, note that between 0.50 and 0.92 there's a near-linear relationship between metric value and score. Around a score of 0.96 is the "point of diminishing returns": above it, the curve flattens out, requiring increasingly more metric improvement to improve an already high score. Explore the scoring curve for TTI.
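The control-point idea above can be sketched as follows. This is an illustrative reimplementation, not Lighthouse's actual source: it parameterizes the curve by the value that should score 0.5 (the median control point) and the value that should score 0.9 (here called `p10`), then reads the score off the complementary log-normal CDF. The example control points (median = 4,000ms, p10 = 2,500ms) are assumed values for illustration.

```typescript
// Abramowitz-Stegun approximation of the error function (max error ~1.5e-7).
function erf(x: number): number {
  const sign = x < 0 ? -1 : 1;
  const ax = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * ax);
  const poly =
    t * (0.254829592 +
    t * (-0.284496736 +
    t * (1.421413741 +
    t * (-1.453152027 +
    t * 1.061405429))));
  return sign * (1 - poly * Math.exp(-ax * ax));
}

// Map a raw metric value (e.g. milliseconds) to a 0-1 score on a log-normal
// curve defined by two control points: the value scoring 0.5 (median) and
// the value scoring 0.9 (p10).
function logNormalScore(median: number, p10: number, value: number): number {
  // Choose sigma so the log-normal CDF is exactly 0.1 at p10;
  // the standard-normal quantile for probability 0.1 is about -1.28155.
  const sigma = Math.log(median / p10) / 1.28155;
  const z = (Math.log(value) - Math.log(median)) / sigma;
  // Score is the complementary CDF: lower metric values score higher.
  return 0.5 * (1 - erf(z / Math.SQRT2));
}

// With assumed control points median=4000ms and p10=2500ms:
const fast = logNormalScore(4000, 2500, 2000);  // well under p10: high score
const mid = logNormalScore(4000, 2500, 4000);   // exactly the median: 0.5
```

Two properties fall out of this construction: a value at the median control point always scores exactly 0.5, and improving an already-fast metric yields smaller and smaller score gains, which is the "point of diminishing returns" described above.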


How desktop vs mobile is handled #

As mentioned above, the score curves are determined from real performance data. Prior to Lighthouse v6, all score curves were based on mobile performance data, but a desktop Lighthouse run would use them as well. In practice, this led to artificially inflated desktop scores. Lighthouse v6 fixed this bug by using specific desktop scoring. While you can certainly expect overall changes in your Performance score from 5 to 6, any scores for desktop will be significantly different.

How scores are color-coded #

The metric scores and the Performance score are colored according to these ranges:

  • 0 to 49 (red): Poor
  • 50 to 89 (orange): Needs Improvement
  • 90 to 100 (green): Good
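The color bands above amount to a simple threshold check; a minimal sketch (the function and type names are illustrative, not Lighthouse API):

```typescript
type Rating = "red" | "orange" | "green";

// Map a 0-100 score to its report color band.
function scoreColor(score: number): Rating {
  if (score >= 90) return "green"; // Good
  if (score >= 50) return "orange"; // Needs Improvement
  return "red"; // Poor
}
```

Note that the bands are inclusive at their lower edge: a score of exactly 50 is orange and exactly 90 is green.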

To provide a good user experience, sites should strive to have a good score (90-100). A "perfect" score of 100 is extremely challenging to achieve and not expected. For example, taking a score from 99 to 100 needs about the same amount of metric improvement that would take a 90 to 94.

What can developers do to improve their performance score? #

First, use the Lighthouse scoring calculator to help understand what thresholds you should be aiming for to achieve a certain Lighthouse Performance score. In the Lighthouse report, the Opportunities section has detailed suggestions and documentation on how to implement them. Additionally, the Diagnostics section lists further guidance that developers can explore to improve their performance.

Last updated: Jun 4, 2021
