Algorithmic Bias and Hospitality Justice: Simulating AI Discrimination through Legal and Neurocognitive Lenses


Abstract

As artificial intelligence becomes increasingly integrated into hospitality and tourism service systems, it raises questions of fairness, inclusion, and accountability in algorithmic decision-making. This study presents a simulation-based evaluation model that explores how AI service-delivery decisions affect guests from cognitive, affective, and legal perspectives. Drawing on predictive processing theory, affective neuroscience, and international anti-discrimination law, the model simulates interactions between diverse traveler profiles and distinct AI decision architectures (transparent, opaque, and multi-input systems) commonly deployed in hospitality and tourism contexts that involve AI-based personalization, such as access decisions, eligibility screening, and service prioritization. A total of 108 synthetic interactions were generated across systematically varied profile–AI pairings and diagnostic rules. Each simulated interaction produces two main outputs, a perceptual fairness index and a legal compliance score, which together identify trust deficits that correlate with normative risks. The results consistently show differences in emotional legitimacy and legal adequacy for profiles marked by linguistic or gender-based attributes. Opaque and multi-input systems often heighten dissonance, whereas transparent AI designs can serve as perceptual stabilizers across identity groups. This study demonstrates that algorithmic fairness must evolve beyond merely procedural logic to encompass relational trust and emotional valence. It also supports inclusive design and compliance auditing of hospitality technologies by providing a risk-simulation tool that can be tested in meaningful, real-world settings.
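To make the simulation design concrete, the sketch below shows one way the crossing of traveler profiles, AI decision architectures, and diagnostic rules could be enumerated and scored. It is a minimal illustration, not the authors' implementation: the factor counts (6 profiles × 3 architectures × 6 rules = 108 interactions) are a hypothetical factorization chosen only to match the reported total, and the two scoring functions are placeholders standing in for the perceptual fairness index and legal compliance score described in the abstract.

```python
from itertools import product
import random

# Hypothetical factor levels; counts are illustrative and chosen only so the
# full crossing yields the 108 interactions reported in the abstract.
PROFILES = [f"profile_{i}" for i in range(1, 7)]          # diverse traveler profiles
ARCHITECTURES = ["transparent", "opaque", "multi_input"]  # AI decision architectures
RULES = [f"rule_{i}" for i in range(1, 7)]                # diagnostic rules


def perceptual_fairness_index(profile, architecture, rule, rng):
    """Placeholder: returns a value in [0, 1] standing in for the
    cognitive/affective fairness appraisal of one interaction."""
    return rng.random()


def legal_compliance_score(profile, architecture, rule, rng):
    """Placeholder: returns a value in [0, 1] standing in for the
    anti-discrimination-law adequacy check of one interaction."""
    return rng.random()


def run_simulation(seed=0):
    """Enumerate every profile-architecture-rule pairing and score it."""
    rng = random.Random(seed)
    results = []
    for profile, arch, rule in product(PROFILES, ARCHITECTURES, RULES):
        results.append({
            "profile": profile,
            "architecture": arch,
            "rule": rule,
            "perceptual_fairness_index": perceptual_fairness_index(profile, arch, rule, rng),
            "legal_compliance_score": legal_compliance_score(profile, arch, rule, rng),
        })
    return results


if __name__ == "__main__":
    interactions = run_simulation()
    print(len(interactions))  # 108 simulated interactions under the assumed factorization
```

In an actual study, the placeholder scoring functions would be replaced by the model's cognitive-affective appraisal and legal-adequacy rules, and the resulting pairs of scores could then be compared across profile groups to surface the trust deficits and normative risks the abstract describes.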
