Our Ranking Methodology
To surface the best products and services for your search, we keep our data sources legitimate, up to date, and useful. We gather online reviews from reliable websites, weed out suspicious reviews, and apply state-of-the-art machine learning algorithms to extract the positive and negative sentiments expressed about each service. These sentiments are then used to generate representative scores and ranks for every service related to the keywords you search for.
How do we gather reviews?
We research and gather a comprehensive list of vendors providing similar services for each category.
We collect all available reviews about each service vendor by scraping data from genuine online sources, such as the Apple and Google app stores, using platform-specific web scrapers.
Our database is consistently updated with the latest reviews to keep you best informed.
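The update step above can be sketched as a merge that keeps only the newest version of each review. This is a minimal illustration, not our production schema: the `Review` fields and the `(source, review_id)` key are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical unified review record; field names are illustrative.
@dataclass(frozen=True)
class Review:
    vendor: str
    source: str       # e.g. "apple_app_store", "google_play"
    review_id: str
    rating: int
    text: str
    posted_on: date

def merge_latest(existing: dict, scraped: list) -> dict:
    """Keep the newest version of each review, keyed by (source, review_id)."""
    merged = dict(existing)
    for r in scraped:
        key = (r.source, r.review_id)
        old = merged.get(key)
        if old is None or r.posted_on > old.posted_on:
            merged[key] = r
    return merged
```

Re-running the merge with fresh scrapes keeps the database current without duplicating reviews that were already collected.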
How do we filter out suspicious reviews?
A rule-based algorithm analyzes the text features of every scraped review.
The algorithm compares each review's feature profile against the profiles of existing reviews that have been flagged as normal.
Reviews flagged as anomalies or outliers are excluded from the training data for our review-summarization algorithm.
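The comparison against normal reviews can be illustrated with a toy outlier check: extract a handful of features and flag a review whose values deviate far from the baseline. The three features and the z-score cutoff here are stand-ins for the real 63 checkpoints, not the actual rules.

```python
import statistics

def features(text: str) -> dict:
    """A few illustrative text features (the real system uses 63 checkpoints)."""
    words = text.split()
    return {
        "n_words": len(words),
        "exclaim_ratio": text.count("!") / max(len(text), 1),
        "upper_ratio": sum(c.isupper() for c in text) / max(len(text), 1),
    }

def is_outlier(review: str, normal_reviews: list, z_cut: float = 3.0) -> bool:
    """Flag a review whose features deviate strongly from the normal baseline."""
    baseline = [features(r) for r in normal_reviews]
    target = features(review)
    for name, value in target.items():
        vals = [b[name] for b in baseline]
        mean = statistics.mean(vals)
        stdev = statistics.pstdev(vals) or 1e-9  # avoid division by zero
        if abs(value - mean) / stdev > z_cut:
            return True
    return False
```

A review stuffed with exclamation marks and capitals stands far outside the baseline distribution and is flagged, while an ordinary review passes.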
63 checkpoints to weed out a fake review
To count the total occurrences of nouns (singular, singular proper, plural, plural proper), pronouns (personal, possessive), verbs (base form, past tense, past participle), third persons (singular present, participles, possessive ending), adjectives, comparative adjectives, adverbs, prepositions, superlatives, determiners, pre-determiners, modals, and coordinating conjunctions.
To count the total occurrences of full stops, commas, cardinal numbers, upper-case letters, stop words, existential there, interjections, comparatives, negative words, interrogative words, foreign words, difficult words, power words, casual words, tentative words, and emotion words.
Inclusion of selected characters
To count the total occurrences of “Wh” words (pronouns, determiners, adverbs), quotes (double and single), brackets (left and right), colons, symbols, “to”, “$”, “RBS”, and “#”.
To compile and compare the outcomes of the Flesch reading-ease score, Flesch-Kincaid grade level, SMOG index, automated readability index, Dale-Chall readability score, Linsear Write formula, Gunning fog index, and text standard.
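One checkpoint family above is readability scoring. As a sketch, the Flesch reading-ease score combines average sentence length and average syllables per word; the naive vowel-group syllable counter below is an approximation (real scorers use pronunciation dictionaries), not our production code.

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(len(groups), 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch reading-ease: higher scores mean easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / max(len(sentences), 1)
            - 84.6 * syllables / max(len(words), 1))
```

Short, plain sentences score high (easy), while long polysyllabic sentences score low, which gives the filter another signal to compare across reviews.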
How do we cluster and summarize reviews?
Reviews related to similar contexts are clustered into the same group, and a topic-based summary is generated for each cluster using BART, a state-of-the-art machine learning model.
For each topic cluster, the reviews are further processed to determine their sentiments towards the context. Each review is tagged as either a positive or a negative review concerning the topic.
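The grouping step can be illustrated with a toy bag-of-words clusterer: each review joins the first existing cluster whose centroid it resembles, otherwise it starts a new one. This greedy cosine-similarity sketch is only a stand-in for the production pipeline built around BART; the 0.3 threshold is an arbitrary illustrative choice.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_reviews(reviews: list, threshold: float = 0.3) -> list:
    """Greedy single-pass clustering of reviews into topic groups."""
    clusters = []  # list of (centroid Counter, member list) pairs
    for text in reviews:
        bag = Counter(text.lower().split())
        for centroid, members in clusters:
            if cosine(bag, centroid) >= threshold:
                centroid.update(bag)   # fold the review into the centroid
                members.append(text)
                break
        else:
            clusters.append((Counter(bag), [text]))
    return [members for _, members in clusters]
```

Reviews about the same topic (say, battery life) end up in one group, and reviews about another topic (say, customer support) in a separate group, ready for per-topic summarization and sentiment tagging.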
How do we score each product?
We implement another state-of-the-art machine learning algorithm, named RoBERTa, to generate a score between 1 and 5 for each review based on its positive or negative sentiment around a topic.
All ratings concerning a service are aggregated and averaged to generate a final score of each service vendor.
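The aggregation step is a plain average of per-review scores. In this sketch the 1-5 scores are hard-coded stand-ins for the model's per-review sentiment outputs, and the vendor names are hypothetical.

```python
def final_score(review_scores: dict) -> dict:
    """Average per-review scores (1-5) into one final score per vendor.

    The per-review scores stand in for the sentiment model's outputs.
    """
    return {
        vendor: round(sum(scores) / len(scores), 2)
        for vendor, scores in review_scores.items()
        if scores  # skip vendors with no scored reviews
    }
```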
How do we rank products and services?
All service vendors in the same search category are ranked by their final scores; intuitively, services with higher ratings appear nearer the top of the search results.
To generate a fair and reliable ranking for both established and new service vendors (new = fewer than 50 reviews), we place greater weight on the Google search rank of these services than on their pros-and-cons ratings.
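The weighting described above could be sketched as a blended score. The exact weights are not part of the published methodology, so the 0.8/0.2 split, the 20-result search-rank scale, and the field names below are all hypothetical illustrations; only the 50-review cutoff comes from the text.

```python
NEW_VENDOR_CUTOFF = 50  # "new" = fewer than 50 reviews (from the methodology)

def blended_score(vendor: dict) -> float:
    """Hypothetical blend: established vendors keep their review score;
    for new vendors, the Google search rank dominates instead.

    Weights and field names are illustrative, not the actual formula.
    """
    # Map Google rank (1 = best) onto a 0-5 scale, assuming top-20 results.
    search_score = max(0.0, 5.0 * (21 - vendor["google_rank"]) / 20)
    if vendor["review_count"] >= NEW_VENDOR_CUTOFF:
        return 0.8 * vendor["score"] + 0.2 * search_score
    return 0.2 * vendor["score"] + 0.8 * search_score

def rank_vendors(vendors: list) -> list:
    """Sort vendors best-first by their blended score."""
    return sorted(vendors, key=blended_score, reverse=True)
```

Under this blend, a new vendor with few reviews but a strong search presence can still rank competitively, rather than being buried behind vendors whose averages rest on hundreds of reviews.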