Apple is developing multiple generative artificial intelligence models, and is reportedly spending millions of dollars per day on compute.
The Information reports that the conversational AI team, named Foundational Models, is made up of just 16 people. There are also two other relatively new teams at the company developing language or image models.
The AI efforts are spearheaded by John Giannandrea, who joined in 2018 as SVP of machine learning and artificial intelligence strategy after a lengthy stint at Google. Within the wider AI group, the Foundational Models team is led by Ruoming Pang, who joined the company in 2021 after spending 15 years at Google.
The group has developed several advanced models, including a large language model (LLM) chatbot that could interact with customers who use AppleCare, the company's after-sales service for warranty and technical support.
Giannandrea has, however, expressed doubts to colleagues about the usefulness of chatbots powered by AI language models, The Information reports, though that view has begun to shift over the past year.
The Siri team separately plans to use LLMs in its voice assistant, and is developing several models. The company believes that its most powerful, Ajax GPT, can do more than OpenAI's GPT-3.5, the LLM that powered the initial version of ChatGPT. OpenAI has since released more powerful models, however.
Because both Giannandrea and Pang came over from Google, the pair helped convince Apple to use Google Cloud, in particular its custom tensor processing unit (TPU) chips for machine-learning training. AXLearn, the machine-learning framework developed to train Ajax GPT, is partly based on Pang's research and is optimized for TPUs.
Apple's Google Cloud contract is believed to be discounted, due to the two companies' wider business arrangements, including Google Search being the default search provider for the Safari browser. Apple is thought to be one of Google's largest cloud customers, but is also a major user of Amazon Web Services.