Endeavors have recently been made to leverage the vision transformer (ViT) for the challenging unsupervised domain adaptation (UDA) task. These methods typically adopt cross-attention in ViT for direct domain alignment. However, as the performance of cross-attention highly relies on the quality of pseudo labels for target samples, it becomes less effective when the domain gap is large. We solve this problem from a game-theoretic perspective with the proposed model, dubbed PMTrans, which bridges the source and target domains with an intermediate domain. Specifically, we propose a novel ViT-based module called PatchMix that effectively builds up the intermediate domain, i.e., a probability distribution, by learning to sample patches from both domains based on game-theoretical models. In this way, it learns to mix patches from the source and target domains to maximize the cross entropy (CE), while exploiting two semi-supervised mixup losses in the feature and label spaces to minimize it. As such, we interpret the process of UDA as a min-max CE game with three players, i.e., the feature extractor, the classifier, and PatchMix, to find the Nash Equilibria. Moreover, we leverage the attention maps from ViT to re-weight the label of each patch by its importance, making it possible to obtain more domain-discriminative feature representations. We conduct extensive experiments on four benchmark datasets, and the results show that PMTrans significantly surpasses the ViT-based and CNN-based SoTA methods by +3.6% on Office-Home, +1.4% on Office-31, and +17.7% on DomainNet, respectively.
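To make the patch-mixing idea concrete, the following is a minimal sketch, not the authors' implementation, of mixing a source and a target image at patch granularity with a sampled mixing ratio and mixing their labels by the realized patch proportion. The function name `patchmix`, its arguments, and the fixed Bernoulli sampling are illustrative assumptions; in PMTrans the sampling is learned as part of the game and patch labels are further re-weighted by ViT attention, both of which this sketch omits.

```python
import torch


def patchmix(x_s, x_t, y_s, y_t, lam, patch=16):
    """Illustrative patch-level mixing of source and target images.

    x_s, x_t: (B, C, H, W) source / target image batches
    y_s, y_t: (B, K) soft labels (y_t would come from pseudo labels)
    lam:      scalar in [0, 1], proportion of patches drawn from the source
    """
    B, C, H, W = x_s.shape
    n_h, n_w = H // patch, W // patch

    # Bernoulli mask over the patch grid: 1 -> take the source patch, 0 -> target.
    mask = (torch.rand(B, n_h * n_w, device=x_s.device) < lam).float()

    # Broadcast the per-patch mask to pixel resolution.
    mask_img = mask.view(B, 1, n_h, 1, n_w, 1)
    mask_img = mask_img.expand(B, C, n_h, patch, n_w, patch).reshape(B, C, H, W)

    # Compose the intermediate-domain image.
    x_mix = mask_img * x_s + (1.0 - mask_img) * x_t

    # Mix labels by the realized fraction of source patches.
    ratio = mask.mean(dim=1, keepdim=True)  # (B, 1)
    y_mix = ratio * y_s + (1.0 - ratio) * y_t
    return x_mix, y_mix
```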