Job postings "leak the secret": Apple wants to develop AI models that run on phones, not in the cloud

Babbitt

Amid the artificial intelligence wave, AI has become a keyword on every technology company's lips. Apple, however, seems to be an exception: on last week's post-earnings call for calendar Q2 (the third quarter of fiscal year 2023), Apple remained "restrained" about AI.

But this outward restraint cannot mask Apple's appetite for AI talent. Apple may have already begun expanding its talent pool, and may already be developing large AI models that run on phones.

On August 6, the Financial Times reported that, according to job postings Apple published between April and July this year, the company is conducting "a large-scale long-term research project that is expected to affect the future of Apple and related products."

In mid-July, a job posting from Apple said the company was looking for a senior software engineer to "implement the functions of compressing and accelerating large language models in the inference engine on Apple devices," **enabling them to run on mobile devices rather than in the cloud.**

Another posting, published on July 28, said the company hopes to "bring the most advanced foundation models to the phone that fits in your pocket, enabling a new generation of machine-learning-based experiences in a privacy-preserving way."

According to the Financial Times' analysis, Apple's hiring requirements reveal the direction of its research and development in recent years: compressing existing language models so they can run efficiently on the phone itself rather than in the cloud.
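The compression described in the postings is commonly done with weight quantization: storing each weight in fewer bits so the model fits in a phone's memory. Apple's actual inference engine is not public, so the following is only a minimal illustrative sketch of symmetric int8 quantization, not Apple's method:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map each float weight to an integer
    in [-127, 127], storing it in 8 bits instead of 32 (a 4x saving)."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights at inference time."""
    return [v * scale for v in q]

# Toy weight vector standing in for one tensor of a language model.
weights = [0.82, -1.93, 0.05, 1.27, -0.40]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
# Rounding error is bounded by half the quantization step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Production systems typically quantize per channel or per block and may calibrate on sample data, but the principle, trading a small amount of precision for a much smaller memory footprint, is the same.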

A French AI entrepreneur who recently left a large technology company told the Financial Times that Apple is looking to expand its AI talent pool in Paris, and is recruiting more heavily there than other big technology companies:

Apple currently has a small AI research lab in Paris. It has recently recruited many researchers from Meta and plans to expand the team further.

Has Apple long been developing AI models that run on phones?

Cook's attention to artificial intelligence on last week's call was as low-key as ever, making him look like a maverick next to peers such as Microsoft and Alphabet.

When pressed by analysts, Cook said that Apple has been working on generative artificial intelligence and other models for many years:

We view artificial intelligence and machine learning as core foundational technologies. They are an integral part of nearly every product we make. This is absolutely critical to us. Building on our research, we have been working on artificial intelligence and machine learning, including generative artificial intelligence, for several years.

Cook also said that these technologies will continue to be used “responsibly” to advance Apple products, and that Apple tends to “announce them as they become available.”

Back in 2020, Apple spent nearly $200 million to acquire Seattle-based artificial intelligence startup Xnor, beating out bids from other big players including Microsoft, Amazon and Intel, according to two people familiar with the matter. Xnor's main business was researching how to run large AI models on mobile devices.

In mid-July, Wall Street reports mentioned that Apple is developing its own generative AI tools. Last year it created its own framework for building large language models, called **Ajax**, which aims to unify Apple's machine learning development. Using Ajax, Apple has built a ChatGPT-like chat tool that internal engineers call Apple GPT.

The battle over large models on mobile devices

Before Apple's hiring "leaks," Meta had also set its sights on large models for mobile devices.

On July 19, a joint announcement from Qualcomm and Meta said that starting in 2024, Llama 2 will be able to run on flagship smartphones and PCs:

Customers, partners and developers can build use cases such as intelligent virtual assistants, productivity applications, content creation tools and entertainment. These AI features can run without a network connection, even in airplane mode. Running generative AI models such as Llama 2 on devices such as smartphones, PCs, VR/AR headsets and cars starting in 2024 will help developers save on cloud costs and give users a more private, reliable and personalized experience.

Durga Malladi, senior vice president and general manager of Qualcomm Technologies' edge solutions business, said that to effectively push generative artificial intelligence into the mainstream market, AI will need to be available both in the cloud and on edge devices such as smartphones, laptops, cars and IoT terminals.

Qualcomm said that, compared with cloud-based large language models, running models such as Llama 2 on smartphones and other edge devices has many advantages: it can work in environments without a network connection, and it can deliver more personalized and safer AI services.
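A quick back-of-the-envelope calculation shows why compression decides whether a model like Llama 2 fits on a phone at all. The figures below are illustrative weight-storage sizes only; real runtimes need additional memory for activations and the KV cache:

```python
def model_size_gb(params, bits_per_weight):
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params * bits_per_weight / 8 / 1e9

LLAMA2_7B = 7_000_000_000  # parameter count of the smallest Llama 2 model

fp16_gb = model_size_gb(LLAMA2_7B, 16)  # 14.0 GB: beyond most phones' RAM
int8_gb = model_size_gb(LLAMA2_7B, 8)   #  7.0 GB: still a tight fit
int4_gb = model_size_gb(LLAMA2_7B, 4)   #  3.5 GB: plausible on flagship devices
```

This is why the quantization work described in Apple's postings and in Qualcomm's announcement is a precondition for on-device generative AI rather than an optimization afterthought.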
