Apple’s new Vision Pro virtual reality headset was demonstrated at the Apple Worldwide Developers Conference (WWDC) held at Apple Park in Cupertino, California on June 5, 2023.
Josh Adelson | AFP | Getty Images
For years, Apple avoided using the abbreviation AI when talking about its products. No longer.
The generative artificial intelligence boom spurred by OpenAI in late 2022 has become the biggest story in the tech industry in recent years, pushing chipmaker Nvidia’s market cap to $3 trillion and prompting a major shift in priorities at Microsoft, Google and Amazon, all of which are racing to add the technology to their core services.
Investors and customers now want to know what the iPhone maker has to offer.
The new artificial intelligence capabilities will be unveiled Monday at Apple’s Worldwide Developers Conference (WWDC) on the company’s campus in Cupertino, California. Apple CEO Tim Cook has teased “big plans,” a change in approach for a company that doesn’t like talking about products before they’re released.
WWDC is generally not a major investor draw. On the first day, the company announces annual updates to its iOS, iPadOS, WatchOS and MacOS software, typically in a roughly two-hour video keynote hosted by Cook. This year, the keynote will be screened at Apple’s headquarters. App developers then spend the week in meetups and virtual workshops learning about the new Apple software.
Apple fans get an early look at the software coming to their iPhones, and developers can start updating their apps. New hardware products, even when they appear, are not the main attraction.
But this year, everyone will be hearing the hottest acronym in tech.
With more than 1 billion iPhones in use, Wall Street wants to know what artificial intelligence features will make the iPhone more competitive against Android rivals and how the company can justify its investment in developing its own chips.
Investors reward companies that demonstrate a clear AI strategy and vision. Shares of Nvidia, the leading maker of AI processors, have tripled in the past year. Microsoft, which has aggressively integrated OpenAI’s technology into its products, saw its stock rise 28% over the same period. Apple gained only 9%, and both companies have surpassed it in market value.
“This is the most important event for Cook and Cupertino in more than a decade,” Wedbush analyst Dan Ives told CNBC. “The AI strategy is the missing piece in Apple’s growth puzzle, and this event needs to be a showstopper, not a shrug.”
Executives taking the stage, including software chief Craig Federighi, will likely address practical uses of Apple’s artificial intelligence, whether it should run locally or in large cloud clusters, and what should be built into the operating system rather than left to individual apps.
Privacy is also a key question: attendees may wonder how Apple can deploy such data-intensive technology without compromising user privacy, which has been a core part of the company’s marketing for more than five years.
“At WWDC, we expect Apple to unveil its long-term vision for implementing generative AI across its diverse ecosystem of personal devices,” DA Davidson analyst Gil Luria wrote in a report this week. “We believe the impact of generative AI on Apple’s business will be one of the most profound of any technology, since unlike AI innovations that affect developers or enterprises, Apple clearly has the opportunity to reach billions of consumer devices.”
Upgrade Siri
Last month, OpenAI demonstrated a voice mode for GPT-4o, the latest model behind its ChatGPT software.
In a brief demonstration, OpenAI researchers took an iPhone and spoke directly to a bot inside the ChatGPT app, which did impressions, spoke fluently and even sang. The conversation was snappy, the bot gave advice and the voice sounded human. Further demonstrations at the live event showed the bot singing, tutoring trigonometry, translating and telling jokes.
Apple users and experts immediately recognized that OpenAI had demonstrated a preview of what Apple’s Siri could become. Apple’s voice assistant debuted in 2011 and has since gained a reputation for being of limited use. It is rigid and can answer only a small set of well-defined queries, in part because it is built on older machine learning technology.
Apple is expected to work with OpenAI to upgrade Siri next week. The company has also discussed licensing chatbot technology from other firms, including Google and Cohere, according to a report in The New York Times.
Apple declined to comment on the OpenAI partnership.
One possibility is that Apple’s new Siri won’t compete directly with full-featured chatbots, but will instead improve its current capabilities and hand off questions only a chatbot can answer to a partner. That would be close to how Apple’s Spotlight search and Siri work today: Apple’s systems try to answer a query, and if they can’t, they fall back to Google, under an agreement reportedly worth $18 billion a year to Apple.
Apple may also shy away from fully embracing an OpenAI partnership or chatbots. One reason is that chatbot failures can make embarrassing headlines and could undermine the company’s emphasis on user privacy and control over personal data.
“Data security will be a key differentiator for the company, and we hope they will spend time talking about their privacy efforts during WWDC as well,” Citi analyst Atif Malik said in a recent report.
OpenAI’s technology is built on scraping the web, and interactions with ChatGPT are used to improve the model itself, practices that may conflict with some of Apple’s privacy principles.
Large language models like OpenAI’s still suffer from inaccuracies, or “hallucinations,” as when Google’s search AI said last month that former President Barack Obama was the first Muslim president. OpenAI CEO Sam Altman recently found himself mired in a thorny public debate over deepfakes and deception, denying actress Scarlett Johansson’s accusation that OpenAI’s voice mode had copied her voice. Apple executives would like to avoid such conflicts.
Efficient and large
Craig Federighi, Apple’s senior vice president of software engineering, speaks before the opening of Apple’s Worldwide Developers Conference on June 5, 2023 in Cupertino, California. Apple CEO Tim Cook kicked off the annual WWDC23 developer conference.
Justin Sullivan | Getty Images News | Getty Images
Outside of Apple, artificial intelligence has come to rely on large server farms using powerful Nvidia processors and terabytes of memory to crunch numbers.
In contrast, Apple wants its artificial intelligence capabilities to run on iPhones, iPads and Macs, which run on battery power. Cook has emphasized that Apple’s own chips are well suited to running artificial intelligence models.
“We believe in the transformative power and promise of artificial intelligence, and we believe we have the advantages that will make us stand out in this new era, including Apple’s unique combination of seamless integration of hardware, software and services, breakthrough Apple silicon and our industry-leading neural network engine, and our unwavering focus on privacy,” Cook told investors during an earnings call in May.
“We expect Apple’s presentation at the WWDC keynote to focus on features and on-device capabilities, as well as the GenAI models running on the device to enable them,” JPMorgan analyst Samik Chatterjee wrote in a report this month.
In April, Apple released research on AI models it calls “efficient language models,” which can run on phones. Microsoft has published work on the same concept. One of Apple’s “OpenELM” models has 1.1 billion parameters, or weights, far smaller than OpenAI’s 2020 GPT-3 model with 175 billion parameters, and smaller still than the 70 billion-parameter version of Meta’s Llama, one of the most widely used language models.
In the paper, Apple researchers benchmarked the model on a MacBook Pro laptop running Apple’s M2 Max chip, showing that these efficient models don’t necessarily need a cloud connection. That improves response times and adds a layer of privacy, since sensitive questions can be answered on the device itself rather than sent to Apple’s servers.
Features built into Apple’s software could include summaries of missed text messages, image generation for new emojis, code completion in the company’s Xcode development software and drafted email replies, according to Bloomberg.
Apple may also decide to load M2 Ultra chips into its data centers to handle AI queries that need more horsepower, Bloomberg reports.
Green Bubbles and Vision Pro
A customer tries the Apple Vision Pro headset at the Apple Fifth Avenue store in Manhattan, New York City, on February 2, 2024.
Brendan McDermid | Reuters
WWDC won’t be strictly about artificial intelligence.
The company has more than 2.2 billion devices in use, and customers are demanding improved software and new apps.
One potential upgrade could be Apple’s adoption of RCS, an improvement on the older SMS messaging standard. Apple’s Messages app routes texts between iPhones through its own iMessage system, which displays those conversations as blue bubbles. When an iPhone texts an Android phone, the bubbles are green and many features, such as typing indicators, are unavailable.
Google led the development of RCS, which adds encryption and other features to text messaging. Late last year, Apple confirmed it will support RCS alongside iMessage. The debut of iOS 18 would be a natural time to show off that work.
The conference will also mark the first anniversary of the announcement of Apple’s Vision Pro, its virtual and augmented reality headset, which went on sale in the United States in February. Apple may announce an expansion to more countries, including China and the U.K.
Apple said in its WWDC announcement that the Vision Pro will get attention at the event. The headset is still on the first version of its operating system, and core features, such as its Persona videoconferencing avatars, remain in beta.
For those who own a Vision Pro, Apple will offer some of the event’s sessions in a virtual 3D environment.