SYNOPSIS
This study examines user interactions with a hypothetical generative AI tool, “TaxAssistAI,” compared with a human tax expert (CPA), across four experimental scenarios. Our findings reveal a persistent preference for human expertise, even as generative AI tools become increasingly prevalent. Notably, neither the cost nor the version of the AI tool significantly influenced user confidence or willingness to act, suggesting that users evaluate AI tools differently from human advisors. Moreover, a complementary human-AI model boosted confidence in the advice provided, reinforcing the potential of collaborative decision-making approaches. However, participants tended to internalize blame after receiving incorrect AI advice, yet continued to rely on the AI despite its errors. These insights contribute to a deeper understanding of trust, confidence, and blame attribution in AI-assisted tax advisory, offering important implications for the integration of generative AI in professional advisory contexts and extending decision support systems (DSS) research.