In a paper submitted to the arXiv preprint server, researchers at Xiaoduo AI introduced Xmodel-1.5, a 1-billion-parameter multilingual large language model pre-trained on approximately 2 trillion tokens.