How Machine-Generated Ratings and Social Exposure Affect Human Reviewers: Evidence from Initial Coin Offerings

While machine-generated information has become increasingly prevalent in online professional ratings, how human raters use it as a basis for their own ratings remains underexplored. Using online professional ratings of initial coin offering (ICO) projects as the context, this study examines how increased social exposure of human ratings and different types of expert experience influence the ratings of human ICO experts relative to machine-generated ratings. Leveraging an interface design change on a major ICO rating platform, we find that increased social exposure leads human experts with advisor experience to lower their rating levels, makes them less likely to rate above the machine-generated scores, and brings their ratings closer to those scores. We also find that increased social exposure leads human experts with team member experience (on ICO projects) to rate closer to the machine-generated scores, although it does not significantly change their rating levels. Our findings suggest that human experts with advisor experience may strategically rate above the machine-generated scores to overrate projects and impress the project teams, whereas human experts with team member experience show no such tendency to overrate. Overall, increased social exposure drives human experts to conform to machine-generated scores and may correct biases in human ratings.