9.6 Review

Finkelstein and Fishbach (2012) found that as people gain expertise, they become more likely to seek and respond to negative reviews (i.e., feedback), a pattern that is robust across many domains (e.g., language acquisition, environment, marketing). Novices are more likely to respond to positive feedback, while experts seek and respond to negative feedback.

(Büschken and Allenby 2016) Sentence-based text analysis for customer reviews

  • Challenge in analyzing unstructured consumer reviews: making sense of topics expressed

  • Proposed a new model for text analysis using sentence structure in the reviews

  • Model leads to improved inference and prediction of consumer ratings

  • Sentence-based topics found to be more distinct and coherent than those from word-based analysis

  • Data used from www.expedia.com and www.we8there.com to support findings.
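The sentence-level constraint can be sketched with a toy collapsed Gibbs sampler in which each sentence (rather than each word, as in standard LDA) draws a single topic. This is only an illustrative approximation of the idea, not Büschken and Allenby's actual model; the mini-corpus, number of topics `K`, and priors `alpha`/`beta` are invented:

```python
# Toy sentence-constrained topic model: all words in a sentence share one
# topic. Corpus and hyperparameters are made up for illustration.
import random
from collections import Counter

random.seed(0)

docs = [  # each document is a list of tokenized sentences (hotel reviews)
    [["room", "clean", "bed", "comfortable"], ["staff", "friendly", "helpful"]],
    [["staff", "rude", "desk", "slow"], ["bed", "soft", "room", "quiet"]],
    [["food", "tasty", "menu", "varied"], ["staff", "friendly", "service", "fast"]],
]

K, alpha, beta = 2, 0.5, 0.1
vocab = sorted({w for d in docs for s in d for w in s})
V = len(vocab)

# z[d][s] = topic currently assigned to sentence s of document d
z = [[random.randrange(K) for _ in d] for d in docs]
doc_topic = [Counter(zd) for zd in z]           # sentence-topic counts per doc
topic_word = [Counter() for _ in range(K)]      # word counts per topic
topic_total = [0] * K
for d, doc in enumerate(docs):
    for s, sent in enumerate(doc):
        topic_word[z[d][s]].update(sent)
        topic_total[z[d][s]] += len(sent)

def sample_sentence(d, s, sent):
    """Collapsed Gibbs step: resample the topic of one whole sentence."""
    old = z[d][s]
    doc_topic[d][old] -= 1
    topic_word[old].subtract(sent)
    topic_total[old] -= len(sent)
    weights = []
    for t in range(K):
        w = doc_topic[d][t] + alpha
        seen = Counter()  # within-sentence repeats raise the count as we go
        for i, word in enumerate(sent):
            w *= (topic_word[t][word] + seen[word] + beta) / (topic_total[t] + i + beta * V)
            seen[word] += 1
        weights.append(w)
    new = random.choices(range(K), weights=weights)[0]
    z[d][s] = new
    doc_topic[d][new] += 1
    topic_word[new].update(sent)
    topic_total[new] += len(sent)

for _ in range(200):  # Gibbs sweeps over every sentence
    for d, doc in enumerate(docs):
        for s, sent in enumerate(doc):
            sample_sentence(d, s, sent)

for t in range(K):
    print(f"topic {t}:", [w for w, _ in topic_word[t].most_common(4)])
```

Because whole sentences move between topics, the recovered topics tend to be the coherent, well-separated themes (e.g., staff vs. room) that the paper reports, rather than loose bags of co-occurring words.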

Folse et al. (2016) define negatively valenced emotional expressions in online reviews as those featuring intense language, all caps, exclamation points, and emoticons. Reviews with negatively valenced emotions are viewed as more helpful and damage attitude toward the product when written by experts; when used by novices, however, a negative self-reflection is observed and attitude toward the product is unchanged. Language complexity moderates the adverse effect of expertise on trustworthiness.

(Yazdani, Gopinath, and Carson 2018) Effect of Reviews by Rank on Product Sales

  • Objective:

    • Examine how reviews by top-ranked and bottom-ranked reviewers influence product sales.
  • Data Source:

    • Sales data from 182 new music albums released over approximately three months.

    • User review data sourced from Amazon.com.

  • Methodology:

    • Use of instrumental variables to account for potential confounding factors in the measurement of online word-of-mouth impacts.
  • Key Findings:

    1. Greater Impact by Bottom-Ranked Reviewers: Reviews from bottom-ranked reviewers have a more pronounced effect on sales than those from top-ranked reviewers.

    2. Influence of Top-Ranked Reviewers: While they can act as opinion leaders, their effect on sales is predominantly seen in specific cases such as:

      • New product releases.

      • Products with high variability in existing reviews.

    3. Driving Factors for Differences in Influence: The disparity in the influence of top- and bottom-ranked reviewers can be attributed to:

      • Content: The nature and quality of the review written.

      • Identity: The credibility and reputation of the reviewer.

    4. Robustness of Results: The findings remain consistent across:

      • Different product categories like music albums and cameras.

      • Various metrics like sales and sales rank.

  • Implications:

    • For Businesses: It’s crucial for businesses to understand that not all reviews have equal impact. While top-ranked reviewers can have authority, it’s the bottom-ranked reviewers that can potentially sway a larger audience.

    • For Consumers: When making purchasing decisions, consumers should consider the content of reviews and not just the rank of the reviewer.

    • For Platforms: Online platforms might consider re-evaluating their reviewer ranking systems or highlighting reviews that can provide the most accurate and influential information to potential buyers.

Dai, Chan, and Mogilner (2019)

  • Based on Amazon reviews and experiments, consumer reviews are less likely to be trusted for experiential purchases than for material ones.

  • This effect stems from the belief that ratings of experiences are less representative of a purchase’s objective quality than ratings of material goods. These findings reveal not only how word of mouth influences different types of purchases, but also the psychological mechanisms that underpin consumers’ reliance on reviews. As one of the first examinations of how people choose among experiential and material purchase options, they also imply that people are less receptive to being told what to do than what to have.

(X. (Shane) Wang et al. 2021) Attribute Embedding: Learning Hierarchical Representations of Product Attributes

  • Extract and monitor product and attribute information from consumer reviews using machine learning and NLP (to create an embedded representation)

    • The embedded representation characterizes textual data, such as a particular product attribute, by the words that surround it (i.e., its contextual information). Neural networks measure how similar different product attributes are based on what people say about them, revealing similarities and differences in how consumers use the attributes. From this embedded representation, the model then extracts multi-level clusters of product attributes that reflect product benefits at different levels of abstraction.
  • Closes the gap between engineered attributes (i.e., concrete attributes) and meta-attributes (abstract attributes)

  • Surveys, the traditional alternative, can be inconsistent and time-consuming.
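The core intuition of the embedded representation, that an attribute is characterized by the words around it, can be sketched with plain co-occurrence context vectors and cosine similarity. Wang et al. (2021) train neural embeddings; the co-occurrence stand-in below, with an invented mini-corpus and attribute list, is only a rough analogue of that step:

```python
# Sketch: represent each product attribute by its surrounding words and
# compare attributes via cosine similarity of those context vectors.
# Reviews, attribute list, and window size are invented for illustration.
import math
from collections import Counter

reviews = [
    "the battery life is long and the battery charges fast",
    "zoom range is long and the zoom is sharp",
    "battery drains fast but charges fast",
    "the lens is sharp and the lens zoom is smooth",
]
attributes = ["battery", "zoom", "lens"]
window = 2  # words on each side count as context

def context_vector(target, texts):
    """Count every word appearing within `window` positions of `target`."""
    ctx = Counter()
    for text in texts:
        toks = text.split()
        for i, tok in enumerate(toks):
            if tok == target:
                lo, hi = max(0, i - window), min(len(toks), i + window + 1)
                ctx.update(t for j, t in enumerate(toks[lo:hi], lo) if j != i)
    return ctx

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = {a: context_vector(a, reviews) for a in attributes}
for i, a in enumerate(attributes):
    for b in attributes[i + 1:]:
        print(f"sim({a}, {b}) = {cosine(vecs[a], vecs[b]):.2f}")
```

Attributes discussed in similar contexts score higher, and clustering attributes on these similarities (agglomeratively, for example) would yield the multi-level benefit hierarchy the paper describes.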

(Sunder, Kim, and Yorkston 2019) Drivers of Herding Behavior in Online Ratings

  • Objective: Investigate how herding effects, driven by reference groups (crowd and friends), impact online ratings.

  • Background: While post-purchase evaluations are known to influence sales, the nuances of herding in online ratings remain underexplored.

  • Key Insights:

    1. Herding Significance: Herding effects in online ratings are substantial, calling for a detailed understanding of their dynamics.

    2. Rater Experience: As raters become more experienced, the impact of the crowd diminishes, while friends’ influences grow.

    3. Divergent Opinions: Differences in opinions among reference groups lead to varied herding effects based on the reference group and the rater’s experience.

    4. Firm Product Portfolio’s Role: A diverse product range not only boosts perceived quality but also lessens the sway of social influence on ratings.

P. Nguyen et al. (2020) found that greater expertise leads to less extreme evaluations. Reviewing experts have less impact on a brand valence metric (which affects page rank and consumer evaluation), and experts can both benefit and harm service providers with their ratings. Hence, excellent experiences may lead to lower ratings from experts than from novices.

Observational data show that experts are more likely to post negative reviews.

(Schoenmueller, Netzer, and Stahl 2020) found polarity self-selection in online reviews (reviewers disproportionately post extreme ratings), which reduces the informativeness of online reviews.

(Banerjee, Dellarocas, and Zervas 2021) A Q&A section (answered by other consumers and sellers) improves the fit and match between products and consumers and reduces negative reviews about mismatch.

(Nishijima, Rodrigues, and Souza 2021) interestingly found, using the “fresh” versus “rotten” categorization in a regression discontinuity design, that Rotten Tomatoes ratings have no effect on box office performance. They obtain box office performance data from Box Office Mojo.

(S. Park, Shin, and Xie 2021) The Fateful First Consumer Review

  • valence and volume are not independent

  • A positive first review creates a long-term advantage in future WOM valence and volume, while a negative first review creates a long-term disadvantage in future WOM valence and volume.

  • Because of information-availability bias, consumers anchor on the first review, whether positive or negative, which makes it difficult for firms to recover from a negative first review.

(Hoskins et al. 2021) Online Review Ratings: Differences Between Niche and Mainstream Brands

  • Objective:

    • Explore the differences in the drivers of online review ratings between niche and mainstream brands.
  • Data Source:

    • A unique dataset on the U.S. beer product category.
  • Key Factors Examined:

    1. Customer Review Valence: The overall positive or negative sentiment of customer reviews.

    2. Professional Critics Review Valence: Sentiment analysis of reviews by professional critics.

    3. Community Characteristics: The nature and characteristics of the online community reviewing the product.

    4. Location Similarity: How similar or close in location a reviewer is to a brand or other reviewers.

    5. Reviewer Characteristics: Traits, behaviors, and preferences of individual reviewers.

  • Major Findings:

    1. Niche Brand Influence: Niche brands are generally more affected by Online Word of Mouth (OWOM) because consumers typically have less established brand awareness and pre-formed brand imagery.

    2. Local Preference for Niche: Reviewers tend to rate a local niche brand more favorably compared to non-local niche brands.

    3. Professional Critics vs. Online Community: For the average reviewer, the online community’s sentiment has a more profound influence than professional critics.

    4. Influence of Prior Reviews: A review from the online community gains more traction when:

      • The reviewer’s expertise is high.

      • The prior reviewer shares geographic traits with the subsequent reviewer.

    5. Alignment of Reviewer Sentiments:

      • Reviewers engaging more with products/brands tend to have sentiments that align with professional critics.

      • Reviewers engaging more with the online community tend to resonate with that community’s sentiment.

(L. Li, Gopinath, and Carson 2022) Online Reviews on Intergenerational Product Sales

  • Objective:

    • Investigate how online customer reviews of one product generation influence the sales of another generation within the same product series.
  • Data Source:

    • Data from intergenerational pairs of point-and-shoot cameras sold on Amazon.com.
  • Methodology:

    • Joint estimation of the current and previous generation models, with errors clustered at both daily and product levels.

    • Use of instrumental variables to address potential endogeneity concerns related to online word-of-mouth measures.

  • Key Findings:

    1. Positive Influence of Previous Generation Reviews: The valence (or tone) of reviews for the previous product generation positively impacts the sales of the current generation.

    2. Negative Impact on Previous Generation Sales: Interestingly, the valence of current generation reviews negatively affects the sales of the previous generation.

    3. Factors Amplifying the Impact of Previous Generation Valence: The positive effect of previous generation valence on current generation sales intensifies when:

      • Uncertainty (as measured by the standard deviation) in reviews for the current generation is high.

      • The current generation product receives favorable reviews (high valence).

    4. Factors Mitigating the Impact of Previous Generation Valence: The positive effect weakens when:

      • There’s higher uncertainty in reviews for the previous generation.

      • The current generation product has been available in the market for a more extended period.

  • Implications:

    • For Marketers: The legacy of a product series plays a crucial role in shaping consumer perceptions and sales of newer versions. Ensuring consistent quality and addressing issues in earlier generations can bolster the success of future releases.

    • For Online Retailers: Highlighting positive reviews of previous generations, especially when there’s uncertainty around a newer product, can help boost sales.

    • For Consumers: Evaluating reviews of earlier versions of a product can provide valuable insights into the expected performance and reliability of the current generation.

(Ordabayeva, Cavanaugh, and Dahl 2022)

  • Negative online reviews from socially distant (but not socially close) individuals may be less harmful to identity-relevant brands. Because a negative review of an identity-relevant brand can threaten a consumer’s identity, the consumer may respond by strengthening their relationship with the brand.

  • They show that this effect does not appear when the review is positive or when the brand is irrelevant.

(J. Chen et al. 2022) Order between rating and tipping matters

  • If customers rate a service professional before tipping, they will tip less

  • If customers tip before rating, the tipping amount is unchanged.

  • This negative effect arises because customers feel they have already rewarded the service provider by rating, so they compensate with a smaller tip.

  • This negative effect is more pronounced when customers

    • tip from their own pocket

    • have higher categorization flexibility (i.e., considering rating as reward like tipping)

    • think service professionals benefit from rating

  • To mitigate this effect, service professionals can highlight customers’ motivation to be consistent across the rating and tipping decisions.

(He, Hollenbeck, and Proserpio 2022)

  • There is a big online market where fake online reviews are sold and bought.

  • Fake internet reviews do help product vendors get better ratings and make more sales.

  • Most big companies don’t buy fake reviews.

  • Online marketplaces try to stop fake reviews, but enforcement is often delayed.

  • Fake reviews (bought in private Facebook groups) are correlated with a short-term increase in average rating and number of reviews.

  • When firms stop purchasing fake reviews, their average ratings decrease (driven by a rising share of one-star reviews), especially for young and low-quality products.

References

Banerjee, Shrabastee, Chrysanthos Dellarocas, and Georgios Zervas. 2021. “Interacting User-Generated Content Technologies: How Questions and Answers Affect Consumer Reviews.” Journal of Marketing Research 58 (4): 742–61. https://doi.org/10.1177/00222437211020274.
Büschken, Joachim, and Greg M Allenby. 2016. “Sentence-Based Text Analysis for Customer Reviews.” Marketing Science 35 (6): 953–75.
Chen, Jinjie, Alison Jing Xu, Maria A. Rodas, and Xuefeng Liu. 2022. “EXPRESS: Order Matters: Rating Service Professionals Reduces Tipping Amount.” Journal of Marketing, April, 002224292210986. https://doi.org/10.1177/00222429221098698.
Dai, Hengchen, Cindy Chan, and Cassie Mogilner. 2019. “People Rely Less on Consumer Reviews for Experiential Than Material Purchases.” Edited by Darren W Dahl, Margaret C Campbell, and Cait Lamberton. Journal of Consumer Research 46 (6): 1052–75. https://doi.org/10.1093/jcr/ucz042.
Finkelstein, Stacey R., and Ayelet Fishbach. 2012. “Tell Me What I Did Wrong: Experts Seek and Respond to Negative Feedback.” Journal of Consumer Research 39 (1): 22–38. https://doi.org/10.1086/661934.
Folse, Judith Anne Garretson, McDowell Porter III, Mousumi Bose Godbole, and Kristy E. Reynolds. 2016. “The Effects of Negatively Valenced Emotional Expressions in Online Reviews on the Reviewer, the Review, and the Product.” Psychology and Marketing 33 (9): 747–60. https://doi.org/10.1002/mar.20914.
He, Sherry, Brett Hollenbeck, and Davide Proserpio. 2022. “The Market for Fake Reviews.” Marketing Science, February. https://doi.org/10.1287/mksc.2022.1353.
Hoskins, Jake, Shyam Gopinath, J Cameron Verhaal, and Elham Yazdani. 2021. “The Influence of the Online Community, Professional Critics, and Location Similarity on Review Ratings for Niche and Mainstream Brands.” Journal of the Academy of Marketing Science 49: 1065–87.
Li, Linyi, Shyam Gopinath, and Stephen J Carson. 2022. “History Matters: The Impact of Online Customer Reviews Across Product Generations.” Management Science 68 (5): 3878–3903.
Nguyen, Peter, Xin (Shane) Wang, Xi Li, and June Cotte. 2020. “Reviewing Experts Restraint from Extremes and Its Impact on Service Providers.” Edited by J Jeffrey Inman and Andrew T Stephen. Journal of Consumer Research 47 (5): 654–74. https://doi.org/10.1093/jcr/ucaa037.
Nishijima, Marislei, Mauro Rodrigues, and Thaís Luiza Donega Souza. 2021. “Is Rotten Tomatoes Killing the Movie Industry? A Regression Discontinuity Approach.” Applied Economics Letters, April, 1–6. https://doi.org/10.1080/13504851.2021.1918324.
Ordabayeva, Nailya, Lisa A. Cavanaugh, and Darren W. Dahl. 2022. “EXPRESS: The Upside of Negative: Social Distance in Online Reviews of Identity-Relevant Brands.” Journal of Marketing, January, 002224292210747. https://doi.org/10.1177/00222429221074704.
Park, Sungsik, Woochoel Shin, and Jinhong Xie. 2021. “The Fateful First Consumer Review.” Marketing Science 40 (3): 481–507. https://doi.org/10.1287/mksc.2020.1264.
Schoenmueller, Verena, Oded Netzer, and Florian Stahl. 2020. “The Polarity of Online Reviews: Prevalence, Drivers and Implications.” Journal of Marketing Research 57 (5): 853–77. https://doi.org/10.1177/0022243720941832.
Sunder, Sarang, Kihyun Hannah Kim, and Eric A Yorkston. 2019. “What Drives Herding Behavior in Online Ratings? The Role of Rater Experience, Product Portfolio, and Diverging Opinions.” Journal of Marketing 83 (6): 93–112.
Wang, Xin (Shane), Jiaxiu He, David J. Curry, and Jun Hyun (Joseph) Ryoo. 2021. “Attribute Embedding: Learning Hierarchical Representations of Product Attributes from Consumer Reviews.” Journal of Marketing, November, 002224292110478. https://doi.org/10.1177/00222429211047822.
Yazdani, Elham, Shyam Gopinath, and Steve Carson. 2018. “Preaching to the Choir: The Chasm Between Top-Ranked Reviewers, Mainstream Customers, and Product Sales.” Marketing Science 37 (5): 838–51.