Determining whether constituent opinion agrees or disagrees with proposed regulation is crucial to improving our understanding of standard-setting practices. However, the feedback mechanisms that regulators provide to constituents produce large-scale unstructured datasets, which makes it difficult to examine differences of opinion between the parties. Using publicly available documents of the FASB, this study trains machine-learning models to efficiently and effectively classify the level of agreement and disagreement between the regulator and its constituent base on proposed regulation. We employ three approaches: a lexicon-based approach using the dictionary method and two participant-based approaches leveraging human raters (AMT and AS). We find that the machine-learning models classify observations more accurately than human raters. Further, the analysis indicates that the machine-learning models built on the participant-based approach and the lexicon-based approach achieve similar accuracy in predicting constituent agreement and disagreement with proposed regulation.
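Because the paper's own code is not reproduced here, the following is a minimal, hypothetical sketch (Python with scikit-learn) contrasting the two families of approaches the abstract describes: a lexicon-based classifier that counts agreement and disagreement terms from a dictionary, and a supervised model trained on human-rater labels. The example texts, labels, and word lists are illustrative only and are not drawn from the study.

```python
# Illustrative sketch only; NOT the authors' implementation.
# The comment-letter excerpts, labels, and word lists are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical comment-letter excerpts and human-rater labels
# (1 = agrees with the proposal, 0 = disagrees).
letters = [
    "We support the proposed amendments and agree with the Board's conclusions.",
    "We disagree with the exposure draft and object to the proposed guidance.",
    "The proposal is appropriate and we concur with the recommended disclosures.",
    "We object to the proposed standard; the costs outweigh the benefits.",
]
labels = [1, 0, 1, 0]

# Lexicon-based (dictionary) approach: count agreement vs. disagreement terms.
AGREE_TERMS = {"support", "agree", "concur", "appropriate"}
DISAGREE_TERMS = {"disagree", "object", "oppose", "concerns"}

def lexicon_classify(text: str) -> int:
    """Classify a letter as agreeing (1) or disagreeing (0) by word counts."""
    tokens = text.lower().replace(";", " ").replace(".", " ").split()
    agree = sum(t in AGREE_TERMS for t in tokens)
    disagree = sum(t in DISAGREE_TERMS for t in tokens)
    return 1 if agree >= disagree else 0

# Participant-based approach: train a classifier on the human-rated labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(letters, labels)

# Compare the two classifications on the same texts.
for letter in letters:
    print(lexicon_classify(letter), model.predict([letter])[0])
```

In practice, either approach would be evaluated on held-out comment letters; the toy data above are far too small for a meaningful comparison and serve only to show how the two pipelines differ in where their labels come from.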

Data Availability: Data available upon request.
