Foxconn launched its first large language model, FoxBrain, which was trained on 120 Nvidia H100 GPUs in about four weeks.
Based on Meta’s Llama 3.1 architecture, it is Taiwan’s first LLM with reasoning capabilities optimised for Traditional Chinese and Taiwanese language styles.
Though slightly behind China’s DeepSeek model, Foxconn says FoxBrain’s performance is close to world-class standards.
Initially for internal use, it supports data analysis, decision-making, document collaboration, coding, and problem-solving.
Foxconn plans to collaborate with tech partners, expand applications, and promote AI in manufacturing and supply chain management.
Nvidia’s Taipei-1 supercomputer in Taiwan supported the training, with more details to be revealed at Nvidia’s GTC conference in mid-March.