When we read reviews of a place on Google Maps, we rely on contributions written by ordinary users. Google moderates these texts before publication, but given the sheer volume of reviews to examine, inappropriate ones sometimes slip through the initial screening. During the pandemic, for example, negative reviews targeted venues simply for complying with Covid-19 containment regulations, and some users posted criticism of social distancing measures.
The Google Maps Review Moderation Process
Google recently explained in detail how review moderation works. First, content policies have been established to ensure that reviews reflect authentic experiences and are free of offensive or irrelevant content. These guidelines are updated continually to reflect social change; many reviews were deleted, for instance, because they criticized venues for complying with health regulations during the pandemic.
The Importance of Machine Learning
Google's machine learning system performs an initial screening of reviews, removing those that do not comply with the guidelines. Google acknowledges, however, that machine learning alone is not enough, since it cannot capture every nuance. Human intervention is therefore essential: human operators can correctly interpret the context of words that a model might misread, such as the term "gay" used in a non-offensive way in the description of a gay-friendly club. This human review also helps correct bias in the machine learning models, ensuring more accurate moderation.
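The two-stage process described above can be sketched in code. This is a hypothetical illustration, not Google's actual system: the keyword-based scorer, the threshold values, and all function names are placeholder assumptions standing in for a trained model. The key idea is the routing logic, where clear-cut cases are handled automatically and ambiguous ones are escalated to a human.

```python
# Hypothetical sketch of a two-stage moderation pipeline: an automated
# classifier handles clear-cut cases, and uncertain reviews are routed
# to human moderators for context-aware judgment.

from dataclasses import dataclass


@dataclass
class Decision:
    review: str
    action: str  # "publish", "remove", or "human_review"


# Placeholder scorer: a real system would use a trained ML model,
# not a keyword list.
FLAGGED_TERMS = {"spam", "scam"}


def toxicity_score(review: str) -> float:
    """Fraction of words that match a flagged term (toy heuristic)."""
    words = review.lower().split()
    if not words:
        return 0.0
    return sum(w in FLAGGED_TERMS for w in words) / len(words)


def moderate(review: str,
             remove_threshold: float = 0.5,
             publish_threshold: float = 0.1) -> Decision:
    """Route a review based on the automated score.

    High-confidence violations are removed, clearly clean reviews are
    published, and everything in between goes to a human operator.
    """
    score = toxicity_score(review)
    if score >= remove_threshold:
        return Decision(review, "remove")        # clear violation
    if score <= publish_threshold:
        return Decision(review, "publish")       # clearly fine
    return Decision(review, "human_review")      # ambiguous: needs context


if __name__ == "__main__":
    print(moderate("Great coffee and friendly staff").action)   # publish
    print(moderate("scam scam scam").action)                    # remove
    print(moderate("this place is a scam but nice").action)     # human_review
```

The design point the article makes is visible in the middle band: the model never decides ambiguous cases on its own, which is where human operators prevent misreadings of context.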
Reporting Inappropriate Reviews
If a user believes a review violates Google's policies, they can report it through the reporting links Google provides for both businesses and users.
In conclusion, the combination of machine learning and human intervention is essential to ensure that reviews on Google Maps are relevant, respectful, and useful for all users.