Massive Data Breach at AT&T Exposed Six Months of Call and SMS Records of Nearly All Customers 

Matt Egan and Sean Lyngaas, reporting for CNN:

The call and text message records from mid-to-late 2022 of tens of millions of AT&T cellphone customers and many non-AT&T customers were exposed in a massive data breach, the telecom company revealed Friday. AT&T said the compromised data includes the telephone numbers of “nearly all” of its cellular customers and the customers of wireless providers that use its network between May 1, 2022 and October 31, 2022.

The stolen logs also contain a record of every number AT&T customers called or texted — including customers of other wireless networks — the number of times they interacted, and the call duration.

Importantly, AT&T said the stolen data did not include the contents of calls and text messages nor the time of those communications.

Of course the breach didn’t contain the content of phone calls and text messages, because carriers don’t record phone calls and, thankfully, don’t log the contents of text messages. This isn’t an important distinction at all. This is a devastating breach.

Hermès’s H08 Watch, the Other Source for Samsung’s Ultra Rip-Off 

I’ve seen a few people arguing that Samsung’s Galaxy Watch Ultra, though clearly inspired by Apple Watch Ultra, isn’t a rip-off, per se, because it’s not an exact clone. Ben Thompson even tried to argue such with me on Dithering this week.

Here, for example, is a literal clone of Apple Watch Ultra that I bought on Temu last year for $16. (I’m linking to the user manual because the watch itself is no longer available, but here’s a thumbnail photo from Temu.) But of course Samsung wasn’t going to go that far and literally clone Apple Watch Ultra. That’s absurd. What they did was rip off as much as they thought they could get away with.

What I neglected to point out, but have since updated the post to mention, is that whatever elements of the Galaxy Watch Ultra weren’t copied from Apple Watch Ultra were clearly ripped off from Hermès’s H08 watch:

Photo of Hermès H08 Watch

That’s a handsome watch in and of itself, but it should be noted that Hermès is a longstanding partner of a smartwatch maker named — checks notes... — Apple.

European Commission Charges X With Breach of DSA 

I guess the European Commission hasn’t taken off for their months-long summer vacation quite yet:

[T]he Commission has issued preliminary findings of non-compliance on three grievances:

  • First, X designs and operates its interface for the “verified accounts” with the “Blue checkmark” in a way that does not correspond to industry practice and deceives users. Since anyone can subscribe to obtain such a “verified” status, it negatively affects users’ ability to make free and informed decisions about the authenticity of the accounts and the content they interact with. There is evidence of motivated malicious actors abusing the “verified account” to deceive users.

  • Second, X does not comply with the required transparency on advertising, as it does not provide a searchable and reliable advertisement repository, but instead put in place design features and access barriers that make the repository unfit for its transparency purpose towards users. In particular, the design does not allow for the required supervision and research into emerging risks brought about by the distribution of advertising online.

  • Third, X fails to provide access to its public data to researchers in line with the conditions set out in the DSA. In particular, X prohibits eligible researchers from independently accessing its public data, such as by scraping, as stated in its terms of service. In addition, X’s process to grant eligible researchers access to its application programming interface (API) appears to dissuade researchers from carrying out their research projects or leave them with no other choice than to pay disproportionally high fees.

I don’t really have an opinion on the second and third points, but the first one seems daft to me. Here’s how commissioner Thierry Breton is quoted in the EC’s press release:

“Back in the day, BlueChecks used to mean trustworthy sources of information. Now with X, our preliminary view is that they deceive users and infringe the DSA. We also consider that X’s ads repository and conditions for data access by researchers are not in line with the DSA transparency requirements. X has now the right of defence — but if our view is confirmed we will impose fines and require significant changes.”

Blue checkmarks were indeed used, “back in the day”, to indicate “verified” accounts. But upon purchasing Twitter, Elon Musk eliminated that program. They don’t advertise it as “Verified” any more; they just call it “X Premium” and make it very clear that blue checkmarks indicate premium account status. That’s illegal under the DSA?

Anyway, here’s Elon Musk, replying to Breton’s announcement of this investigation:

How we know you’re real? 🧐

And:

We look forward to a very public battle in court, so that the people of Europe can know the truth.

And, more intriguingly, replying to Margrethe Vestager:

The European Commission offered X an illegal secret deal: if we quietly censored speech without telling anyone, they would not fine us.

The other platforms accepted that deal.

X did not.

The weapon the EC wields is its ability to fine companies up to 6 percent of global annual revenue under the DSA. Musk is in a unique position to tell them to fuck off. Twitter’s revenue peaked at $5 billion in 2021 — when the company was still publicly held — and has surely declined since then. A $300 million fine is figuratively nothing to Musk. He’d gladly pay that just for the attention a public fight over this will bring to him personally and to X as a platform.

Amid Antitrust Scrutiny, Microsoft Drops OpenAI Board Observer Seat, and Apple, Reversing Course, Will Not Take One 

Camilla Hodgson and George Hammond, reporting for The Financial Times:

Microsoft has given up its seat as an observer on the board of OpenAI while Apple will not take up a similar position, amid growing scrutiny by global regulators of Big Tech’s investments in AI start-ups.

Microsoft, which has invested $13bn in the maker of the generative AI chatbot ChatGPT, said in a letter to OpenAI that its withdrawal from its board role would be “effective immediately”.

Apple had also been expected to take an observer role on OpenAI’s board as part of a deal to integrate ChatGPT into the iPhone maker’s devices, but would not do so, according to a person with direct knowledge of the matter. Apple declined to comment.

OpenAI would instead host regular meetings with partners such as Microsoft and Apple and investors Thrive Capital and Khosla Ventures.

Apple’s board observer seat, set to be taken by Phil Schiller, was never officially announced. But after Mark Gurman broke the story at Bloomberg, it was confirmed by the Financial Times. So it really does seem like a fast reversal. Or as Emily Litella would say, “Never mind”. But I suspect these “regular meetings” will serve the same purpose, and I bet Schiller will be in those meetings representing Apple.

See also: Ina Fried, reporting for Axios, has excerpts from Microsoft’s letter to OpenAI.

M1 MacBook Air Drops from $700 to $650 at Walmart 

Joe Rossignol, MacRumors:

Walmart+ members have early access to the deal as of 12 p.m. Eastern Time today, and it will be available to all Walmart customers starting at 6 p.m. Eastern Time today.

Walmart first began selling the MacBook Air with the M1 chip for $699 in March, marking the first time the retailer ever sold Macs directly. Now, it is available for an even lower $649 heading into the back-to-school shopping season. It is unclear how long the deal will last.

The M1 MacBook Air will turn 4 years old in November, but it remains an excellent laptop, including support for the upcoming Apple Intelligence features in MacOS 15 Sequoia. As I wrote in March, when this partnership started:

And while, yes, these machines are now over three years old, for $700 this is a great deal. That’s 30 percent less than the cheapest MacBook in an Apple Store. I’d bet serious money that a base M1 MacBook Air outperforms any other $700 laptop on the market. Show me another $700 laptop with a retina display. I’ll wait.

Fascinating example of pricing-as-branding that Apple won’t sell this machine in its own stores, but will through Walmart — which doesn’t sell any other Macs.

Pennsylvania Is, Finally, Getting Beautiful License Plates 

I’ve been a big fan of Pennsylvania governor Josh Shapiro since his term as our attorney general. He was absolutely fantastic in the aftermath of the 2020 election, when Trump attempted to steal Pennsylvania.

But as of this week he might be my favorite politician in the entire country. He accomplished what I had long ago given up hope of ever seeing: replacing PA’s fugly-as-sin license plates with a new design that’s among the best I’ve ever seen. Good typography, great colors, and a new slogan and icon that exemplify Pennsylvania’s role as the birthplace of the longest-standing democracy the world has ever seen: the Liberty Bell.

Bravo.

(Next job: Apply this same design language to our god-awful driver’s licenses.)

Samsung Rips Off Apple Watch Ultra, Right Down to the Name 

Quinn Nelson on X:

  • Watch Ultra is the most shameless copy of an Apple product in ages — and it’s hideous
  • Wait, it gets more shameless — Buds3 and Buds3 Pro are clones of AirPods

It’s sad to see Samsung — who once was a leader in design and innovation — start knocking off popular products like some third-rate OEM. Do better.

I agree that the new Buds are AirPod rip-offs, and the new Galaxy Watch Ultra is such a blatant rip-off — the name, the orange accents, the comically slavish copy of Apple’s Ocean Band — that it defies parody. It’s an outright disgrace. Theft, pure and simple. Whatever elements of this watch weren’t ripped off from Apple Watch Ultra were ripped off from Hermès’s H08 watch — and Hermès, of course, has a longstanding partnership with Apple. (Victoria Song at The Verge calls it “not exactly hiding where it got its inspiration from” and “That’s not necessarily a bad thing!”; I doubt she’d consider it “inspiration” and “not necessarily a bad thing” if someone were to rip off her articles to the degree Samsung rips off Apple’s designs. There is no reason to defend this. Call it what it is: theft.)

I disagree that Samsung was ever “a leader in design”. I don’t recall a time when their strategy was anything other than just outright stealing the designs of whoever the current market leader is and undercutting them on price just enough to take the Pepsi position (happy to be in second place, happy to have no shame). Before they started ripping off the iPhone, they ripped off BlackBerry, and called their rip-off lineup of phones “BlackJack”. Really. These new blatant, shameful rip-offs aren’t an aberration; they define the company that Samsung is.

Flight Tracking in Messages (and Anywhere Data Detectors Work) 

Nelson Aguilar and Blake Stimac, writing for CNet:

That’s right. There’s a hidden flight tracker built right into iMessage that you probably would have never noticed unless you threw in the right combination of details within a message. [...]

Although the airline name/flight number format highlighted above is the best way to go, there are other texting options that will lead you to the same result. So let’s say we stick with American Airlines 9707, other options that may bring up the flight tracker include:

  • AmericanAirlines9707 (no spaces)
  • AmericanAirlines 9707 (only one space)
  • AA9707 (airline name is abbreviated and no space)
  • AA 9707 (abbreviated and space)

This is a cool feature — one that dates back to iOS 9 in 2015 — but don’t cancel your Flighty subscription. It’s maddeningly inconsistent. Even some of CNet’s own suggestions don’t work — neither AmericanAirlines1776 nor AmericanAirlines 1776 works, but American Airlines 1776 does.

The abbreviated names work for the major U.S. airlines — AA123 (American), DL123 (Delta), and UA123 (United) are all recognized. But neither B6123 nor JBU123 (JetBlue) works, nor does F9123 or FFT123 (Frontier). JetBlue 123, JetBlue Airways 123, and JetBlue Airlines 123 all work (even Jet Blue 123 works, with the erroneous space), but for most carriers you need to include “Airlines” in the name. None of these work: American 123, Delta 123, United 123, Frontier 123. All of them do work if you include “Airlines”.

(Update: Turns out it’s not about major vs. regional airlines, because JetBlue is classified as a major by the DOT. Instead it seems that only flight codes from airlines whose IATA abbreviation consists of two alphabetic letters work. JetBlue’s B6 and Frontier’s F9 don’t work because they contain numbers. But even with British Airways, whose code is BA, BA123 isn’t recognized. But if you put words like “airline” or “flight” after the flight code — BA123 airline, BA1588 flight — it does, because the data detector picks up the additional context.)

CNet attributes this feature to iMessage, going so far as to claim that it doesn’t work for messages sent using SMS, but that’s wrong. It works just fine for SMS messages. In fact, it’s not even a feature specific to the Messages app. It’s a feature from Apple’s DataDetection framework — the same system-wide feature that recognizes calendar events, postal addresses, URLs, shipment tracking numbers, and more. So you can use this same flight-code trick with, say, Apple Mail. Or if you’re just sending it to yourself, put it in Apple Notes. It even works with text recognized in screenshots.
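
You can poke at these heuristics yourself: the detector behind this is exposed to developers as Foundation’s NSDataDetector, via its transit-information checking type. Here’s a minimal sketch; whether any given string matches is subject to the same quirks described above:

    import Foundation

    let text = "Landing soon, American Airlines 9707, then dinner."

    // NSDataDetector is the Foundation face of the system data detectors;
    // .transitInformation is the checking type that matches flight codes.
    // (The initializer only throws for invalid type masks, hence try!.)
    let detector = try! NSDataDetector(
        types: NSTextCheckingResult.CheckingType.transitInformation.rawValue
    )
    let range = NSRange(text.startIndex..<text.endIndex, in: text)

    detector.enumerateMatches(in: text, options: [], range: range) { match, _, _ in
        // For a recognized flight, the components dictionary carries the
        // airline name and flight number the detector parsed out.
        guard let components = match?.components else { return }
        print(components[.airline] ?? "?", components[.flight] ?? "?")
    }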

Windows Notepad Finally Gets Spellcheck and Autocorrect 

Dave Grochocki, writing for Microsoft’s Windows Insider Blog:

With this update, Notepad will now highlight misspelled words and provide suggestions so that you can easily identify and correct mistakes. We are also introducing autocorrect which seamlessly fixes common typing mistakes as you type.

Getting started with spellcheck in Notepad is easy as misspelled words are automatically underlined in red. To fix a spelling mistake, click, tap, or use the keyboard shortcut Shift + F10 on the misspelled word to see suggested spellings. Selecting a suggestion immediately updates the word. You can also choose to ignore words in a single document or add them to the dictionary, so they are not flagged as a mistake again. Spellcheck in Notepad supports multiple languages.

Better late than never, but it’s kind of wild that Notepad is 41 years old and only now getting these features. It’s been over 20 years since I’ve used a Mac app that doesn’t offer the system’s built-in spellchecking.
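
For comparison, here’s roughly the entire Mac-side story, where the spellchecker has been a shared system service since the early Mac OS X days. A minimal AppKit sketch:

    import AppKit

    // Any NSTextView inherits the system-wide spellchecker; these two
    // properties (both also user-toggleable in the Edit menu) are the
    // whole story.
    let textView = NSTextView(frame: NSRect(x: 0, y: 0, width: 400, height: 300))
    textView.isContinuousSpellCheckingEnabled = true      // red-underline misspellings
    textView.isAutomaticSpellingCorrectionEnabled = true  // autocorrect as you type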

New Cars in the EU Now Equipped With Nagging Speed Limiters 

Kieran Kelly, reporting for LBC:

New cars that are sold in Europe from this week will host automatically-installed speed limiters, following the introduction of a new EU law.

Even though the rule to install the technology does not apply in the UK, many of the cars will have been made in Europe and so will feature the Intelligent Speed Assistance (ISA) anyway.

The technology allows the car to automatically restrict its speed based on GPS location, speed-sign recognition and cameras within the vehicle. This is not done simply by applying the brakes, which could be dangerous, but by gradually reducing the engine’s power. However, drivers will first get a warning that they are driving too fast and be told to slow down before the measure takes effect.

When a friend sent me this link, I thought at first that LBC was some sort of Onion/Babylon Bee-style parody site. But no, this is real. Any politician in the U.S. seeking to end their career should propose similar legislation here.

In the EU, drivers will be able to turn off the system every time they start their car. It cannot be permanently shut off.

I take back my complaint that the EU no longer innovates in technology. They brought the EU cookie-consent web experience to cars. Nonstop pointless nagging and annoyance.

Gurman: Apple Intelligence Powered Siri Won’t Arrive Until iOS 18.4 

Mark Gurman, in his Power On column for Bloomberg:

For the first time, the digital assistant will have precise control over actions inside of Apple’s apps. That means you can ask Siri to, say, edit a photo and then ship it off to a friend. It also will have the ability to understand what you’re looking at on your display, helping Siri determine what you want to do based on the context. But neither of those upgrades will be ready when Apple Intelligence launches this fall, as I’ve previously reported.

Instead, I’m told, Siri features are likely to go into beta testing for developers in January and then debut publicly around the springtime — part of an iOS 18.4 upgrade that’s already in the works. Other Siri features, such as a new design and ChatGPT integration, will be coming later this year.

Recent dates for iOS x.4 updates:

  • iOS 17.4: 7 March 2024
  • iOS 16.4: 27 March 2023
  • iOS 15.4: 14 March 2022
  • iOS 14.4: 26 January 2021
  • iOS 13.4: 24 March 2020

If the usual pattern holds, it’s a safe guess that iOS 18.4 will arrive in mid-to-late March. (iOS 14.4 came in January, but 2020 was, needless to say, an unusual year.) This jibes with my take post-WWDC, when I wrote:

If generative AI weren’t seen as essential — both in terms of consumer marketing and investor confidence — I think much, if not most, of what Apple unveiled in “Apple Intelligence” wouldn’t even have been announced until next year’s WWDC, not last week’s WWDC. Again, none of the features in “Apple Intelligence” are even available in beta yet, and I think all or most of them will be available only under a “beta” label until next year.

BriefLook 

My thanks to BriefLook for sponsoring last week at DF. BriefLook is a clever iPhone app that does one thing and does it well: it summarizes your postal (paper) mail. Just point your iPhone camera at a letter, and boom, a few moments later you get an AI-generated summary. Who it’s from, what it’s about, and what you’re expected to do about it. That’s useful for summarizing long letters in your own native language, but BriefLook can read and translate between dozens of languages. Truly an amazing use case for AI. Not too long ago a “universal mail translator / summarizer” was the stuff of science fiction. Now you can carry one in your pocket.

Download BriefLook and try it free of charge. Super useful, yet so utterly simple.

Shockingly, Apple and Epic Games Still Aren’t Getting Along 

Epic Games, on X two days ago:

Apple has rejected our Epic Games Store notarization submission twice now, claiming the design and position of Epic’s “Install” button is too similar to Apple’s “Get” button and that our “In-app purchases” label is too similar to the App Store’s “In-App Purchases” label.

We are using the same “Install” and “In-app purchases” naming conventions that are used across popular app stores on multiple platforms, and are following standard conventions for buttons in iOS apps. We’re just trying to build a store that mobile users can easily understand, and the disclosure of in-app purchases is a regulatory best practice followed by all stores nowadays.

Apple’s rejection is arbitrary, obstructive, and in violation of the DMA, and we’ve shared our concerns with the European Commission. Barring further roadblocks from Apple, we remain ready to launch in the Epic Games Store and Fortnite on iOS in the EU in the next couple of months.

Later that same day:

Update: Apple has informed us that our previously rejected Epic Games Store notarization submission has now been accepted.

Tim Sweeney:

Epic had supported notarization during Epic v Apple on the basis that Mac’s mandatory malware scanning could add value to iOS. Now it’s disheartening to see Apple twist its once-honest notarization process into another vector to manipulate and thwart competition.

Asked if he would provide screenshots of the rejected screens, Sweeney responded:

We shared screenshots with EC regulators. We want to compete with other stores by having a big exciting product rollout, which means not trickling out media publicly before launch with the Apple fan press doing a teardown using Phil Schiller crafted talking points.

Epic is certainly under no obligation to reveal screenshots of its in-progress iOS games marketplace, but without screenshots, there’s also no reason for anyone to take their own description of the notarization dispute with Apple at face value. Epic Games is an unreliable narrator. Last year the FTC fined Epic $245 million “to settle charges that the company used dark patterns to trick players into making unwanted purchases and let children rack up unauthorized charges without any parental involvement.”

Was Apple’s rejection of Epic’s notarization submission as petty and silly as Epic makes it sound? Maybe! Or perhaps Epic’s Game Store is designed to trick users into thinking it’s Apple’s own official App Store, and there’s nothing silly about the rejection at all. But in that case, it still might be illegal under the DMA for Apple to reject the submission for notarization — the DMA may well require Apple to notarize a third-party marketplace app that is a pixel-for-pixel rip-off of the App Store app, so long as it’s not outright malware.

The point is, we don’t know. And one party, Apple, is barely commenting on the fracas, and the other, Epic, was just fined a quarter of a billion dollars for tricking users, including children, into making unwanted purchases.

Phil Schiller to Represent Apple as Board ‘Observer’ at OpenAI 

Mark Gurman, reporting for Bloomberg last week:

Apple Inc. will get an observer role on OpenAI’s board as part of a landmark agreement announced last month, further tightening ties between the once-unlikely partners. Phil Schiller, the head of Apple’s App Store and its former marketing chief, was chosen for the position, according to people familiar with the situation. As a board observer, he won’t be serving as a full-fledged director, said the people, who asked not to be identified because the matter isn’t public. [...]

The board observer role will put Apple on par with Microsoft Corp., OpenAI’s biggest backer and its main AI technology provider. The job allows someone to attend board meetings without being able to vote or exercise other director powers. Observers, however, do gain insights into how decisions are made at the company.

“Trust, but verify,” goes the adage. This board observer role gives Apple a position to verify that OpenAI remains the trustworthy partner Apple thinks they are. I can think of no one better than Schiller for this position. Perhaps you’re no fan of Schiller’s stewardship of the App Store, but he’s long been a staunch proponent of user privacy at Apple. He’s significantly responsible for Apple’s shift toward making “privacy” a major selling point of their products and services.

Figma Disables ‘Make Design’ AI-Powered Rip-Off Tool 

Sarah Perez, TechCrunch:

Figma CEO Dylan Field says the company will temporarily disable its “Make Design” AI feature that was said to be ripping off the designs of Apple’s own Weather app. The problem was first spotted by Andy Allen, the founder of NotBoring Software, which makes a suite of apps that includes a popular, skinnable Weather app and other utilities. He found by testing Figma’s tool that it would repeatedly reproduce Apple’s Weather app when used as a design aid.

Field is right to pull the feature, but this explanation is sophistry. The feature is clearly fundamentally flawed. It’s not in need of a tweak. It’s in need of being completely scrapped.

Generative AI is really good and truly useful when you say “Here’s a thing, help me tweak it or change it”. But when you say “Make a new thing for me” you’re effectively just getting a rip-off a lot — or perhaps most — of the time.

Figma AI Is a Rip-Off Engine 

Andy Allen:

Figma AI looks rather heavily trained on existing apps.

This is a “weather app” using the new Make Designs feature and the results are basically Apple’s Weather app (left). Tried three times, same results.

This is even more disgraceful than a human rip-off. Figma knows what they trained this thing on, and they know what it outputs. In the case of this utter, shameless, abject rip-off of Apple Weather, they’re even copying Weather’s semi-inscrutable (semi-scrutable?) daily temperature range bars.

“AI” didn’t do this. Figma did this. And they’re handing this feature to designers who trust Figma and are the ones who are going to be on the hook when they present a design that, unbeknownst to them, is a blatant rip-off of some existing app.

Maybe now that the Adobe deal fell through, Figma is looking to sell itself to Samsung?

The Talk Show: ‘Curiously Short Episodes’ 

John Moltz returns to the show for a holiday-week look at the best of recent prestige streaming content, particularly Apple TV+. And, yes, a bit on the latest Apple/EU/DMA drama.

Sponsored by:

  • Squarespace: Make your next move. Use code talkshow for 10% off your first order.

WorkOS

My thanks to WorkOS for sponsoring last week at Daring Fireball. WorkOS is a modern identity platform for B2B SaaS. Start selling to enterprise customers with just a few lines of code. Ship complex features like SSO and SCIM (pronounced skim) provisioning in minutes instead of months.

Today, some of the fastest growing startups are already powered by WorkOS, including Perplexity, Vercel, and Webflow.

For SaaS apps that care deeply about design and user experience, WorkOS is the perfect fit. From high-quality documentation to self-serve onboarding for your customers, it removes all the unnecessary complexity for your engineering team.

European Commission Launches Investigation Against Microsoft for Integrating Teams With Office 

The European Commission:

In particular, the Commission is concerned that Microsoft may have granted Teams a distribution advantage by not giving customers the choice whether or not to acquire access to Teams when they subscribe to their SaaS productivity applications. This advantage may have been further exacerbated by interoperability limitations between Teams’ competitors and Microsoft’s offerings. The conduct may have prevented Teams’ rivals from competing, and in turn innovating, to the detriment of customers in the European Economic Area.

If confirmed, these practices would infringe Article 102 of the Treaty on the Functioning of the European Union (‘TFEU’), which prohibits the abuse of a dominant market position.

After the Commission opened proceedings in July 2023, Microsoft introduced changes in the way it distributes Teams. In particular, Microsoft started offering some suites without Teams. The Commission preliminarily finds that these changes are insufficient to address its concerns and that more changes to Microsoft’s conduct are necessary to restore competition.

I can see the argument from regulatory proponents that unbundling Teams from Office in some packages, after the fact, is too little, too late. That the damage from abusing a dominant position was already done. But still, what more does the EC want?

The sending of a Statement of Objections does not prejudge the outcome of an investigation.

Translation: They’re guilty and we’re just going through the motions of giving them a chance to state their case.

If the Commission concludes, after the company has exercised its rights of defence, that there is sufficient evidence of an infringement, it can adopt a decision prohibiting the conduct and imposing a fine of up to 10% of the company’s annual worldwide turnover. The Commission may also impose on the company any remedies which are proportionate to bring the infringement effectively to an end.

My read on this is that the EC’s stance is that its designated gatekeeping companies — all of which happen, by sheer coincidence I’m repeatedly told, to be from the US or Asia — should be forbidden from evolving their platforms to stay on top. That churn should be mandated by law.

I mean of course Microsoft had an advantage by being able to bundle Teams with Office. But Office needs something like Teams to remain relevant today. If Office had never evolved after achieving a dominant position in the market, it would still be sold in boxes full of floppy disks. Moving from licensed installations to SaaS was inevitable if Office was to remain relevant, and adding a collaborative communication layer like Teams was essential in today’s world.

The EC, to my eyes, is saying that it’s illegal for a successful platform to adapt and evolve. Or at the very least they’re saying they might deem it illegal. And once again it’s the EC itself that is proclaiming its threat to fine Microsoft up to 10 percent of its annual global revenue, and I’ll wager, once again, that the EU itself comprises less than 10 percent of Microsoft’s revenue. They’re threatening fines incommensurate with their market size.

I think the EC expects these companies to capitulate. To bend their entire global strategy to the whims of EC bureaucrats, and just accept being handcuffed. But what’s clearly happening is that these gatekeepers are reading the writing on the wall, and are going to postpone all new features and products in the EU until they have assurances that they’re compliant under EU law. The EC thinks they’re going to handcuff these companies, but instead all they’re doing is setting the entire EU market months, or even years, behind the rest of the world for new products and services. In some cases those products and services will just never come to the EU at all.

Surely the lesson Microsoft is taking from this is not that they were wrong to bundle Teams with Office, but that they were wrong to offer their integrated service in the EU.

Sponsorship Openings at Daring Fireball and the Talk Show, Summer 2024 Edition 

Yours truly back in March:

After being sold out for months, the upcoming sponsorship schedule at DF is unusually open at the moment — including this upcoming week.

Weekly sponsorships have been the top source of revenue for Daring Fireball ever since I started selling them back in 2007. They’ve succeeded, I think, because they make everyone happy. They generate good money. There’s only one sponsor per week and the sponsors are always relevant to at least some sizable portion of the DF audience, so you, the reader, are never annoyed and hopefully often intrigued by them. And, from the sponsors’ perspective, they work. My favorite thing about them is how many sponsors return for subsequent weeks after seeing the results.

If you’ve got a product or service you think would be of interest to DF’s audience of people obsessed with high quality and good design, get in touch.

This is now true once again, for next week. And just like in March, sponsorship spots for The Talk Show are open for the summer months as well.

The Talk Show: ‘150 Million Calculator Apps’ 

Quinn Nelson, esteemed host of Snazzy Labs, returns to the show to recap the highlights of WWDC: Apple Intelligence, platform updates, and the latest salvos from the EC regarding Apple’s compliance with the DMA.

Sponsored by:

  • Trade Coffee: Enjoy 30% off your first month of coffee.
  • Squarespace: Make your next move. Use code talkshow for 10% off your first order.

Wavelength Is Shutting Down at the End of July

Wavelength:

We’re sad to announce that we’re shutting down Wavelength. We’re so grateful to our users and community — you’ve been amazing.

On July 31st we’ll turn off our servers, which means that you’ll no longer be able to sign in, create a group, or send messages. You will continue to have access to your message history as long as you keep the app installed on your device, but we recommend saving or copying anything important out of the app as soon as you can.

Your Wavelength account data will be deleted from our servers at the time of the shutdown. Rest assured that we will not retain, sell, or transfer any user information, and that your messages remain end-to-end encrypted and secure.

You may recall I’ve been an advisor to the team at Wavelength for a little over a year, so I knew this announcement was coming. It’s a bummer, personally, at two levels. First, just knowing the team, particularly cofounders Richard Henry and Marc Bodnick, both of whom I now consider friends. They tried to crack the “privacy-minded social network” nut before with Telepath, and with Wavelength got even closer to pulling it off. So much work went into it, and so much of it was so good.

Second, though, is a more selfish reason: I’m an active participant in a bunch of vibrant groups on Wavelength, and I’m going to miss them. The groups I’m most active in have a higher signal-to-noise ratio than any social networking platform I’ve seen in ages. I’d have to go back to the heyday of Usenet and email mailing lists, literally decades ago, or the very early years of Twitter, to find anything with such a high level of discourse.

But the simple truth is that while Wavelength has been far from a failure, it’s also far from a breakout hit. It’d be an easy decision to shut it down if it were a flop. It was a hard decision to shut it down because it wasn’t. But a social platform really needs to be a breakout hit to succeed, and Wavelength just wasn’t on a path to become one.

So: time to move on. Until the plug gets pulled at the end of next month though, I’ll still be there.

Microsoft Edge Has an ‘Enhanced Security’ Mode That Disables the JIT 

Sergiu Gatlan, writing for Bleeping Computer in 2021 (thanks to Kevin van Haaren):

Microsoft has announced that the Edge Vulnerability Research team is experimenting with a new feature dubbed “Super Duper Secure Mode” and designed to bring security improvements without significant performance losses. When enabled, the new Microsoft Edge Super Duper Secure Mode will remove Just-In-Time Compilation (JIT) from the V8 processing pipeline, reducing the attack surface threat actors can use to hack into Edge users’ systems.

Based on CVE (Common Vulnerabilities and Exposures) data collected since 2019, around 45% of vulnerabilities found in the V8 JavaScript and WebAssembly engine were related to the JIT engine, more than half of all “in the wild” Chrome exploits abusing JIT bugs.

“Super Duper Secure Mode” was a funner name, but they settled on “Enhanced Security Mode”.

This is why Apple considers BrowserEngineKit — which is complex and requires a special entitlement with stringent requirements to use — necessary for complying with the DMA’s mandate to allow third-party browser engines. JITs are inherently vulnerable. It’s not about known bugs — it’s the unknown bugs.

The anti-WebKit peanut gallery responded to my piece on JITs yesterday with a collective response along the lines of “Who’s to say WebKit’s JIT is any more secure than Chrome’s or Gecko’s?” That’s not really the point, but the answer is: Apple is to say. iOS is their platform and they’ve decided that it’s better for the platform to reduce the attack surface to a single browser engine, WebKit, the one they themselves control. And Apple isn’t saying WebKit as a whole, or its JavaScript JIT compiler in particular, is more secure than Chrome or Gecko. They’re saying, implicitly, that it’s safer to have just one that they themselves are fully responsible for. And that the safest way to comply with the DMA’s mandate to allow third-party rendering engines is via a stringent framework like BrowserEngineKit.

You might think it would be just fine for iOS to work just like MacOS, where you can install whatever software you want. But Apple, expressly, does not think so. iOS is designed to be significantly more secure than MacOS.

Reuters: Amazon Is Considering $5 Monthly Charge for Improved Alexa 

Greg Bensinger, reporting for Reuters:

Amazon is planning a major revamp of its decade-old money-losing Alexa service to include a conversational generative AI with two tiers of service and has considered a monthly fee of around $5 to access the superior version, according to people with direct knowledge of the company’s plans.

Known internally as “Banyan,” a reference to the sprawling ficus trees, the project would represent the first major overhaul of the voice assistant since it was introduced in 2014 along with the Echo line of speakers. Amazon has dubbed the new voice assistant “Remarkable Alexa,” the people said.

A bit of a role reversal here. Apple, which is not known for giving away much for free, isn’t charging users for Apple Intelligence, including ChatGPT integration. Amazon, which is known for ruthlessly pursuing low prices, is, according to this report, looking to charge for an LLM-powered version of Alexa. Maybe that new version of Alexa really is that good? But I sort of think that if they gate this new Alexa behind a paywall, it will just be added to the existing package for Prime.

Speaking of Alexa, though, I’m reminded that Apple’s WWDC announcements didn’t include anything about bringing the new Apple-Intelligence-powered Siri to devices like HomePods or Apple Watches. Let’s say you have an iPhone 15 Pro or buy a new iPhone 16 this fall. What happens when you talk to Siri through your Apple Watch? Do you get the new Apple Intelligence Siri, because your watch is paired to your iPhone, which meets the device requirements for Apple Intelligence? Or do you get old dumb Siri on your Watch and only get new Siri when talking directly to your iPhone?

Gurman Just Pantsed the WSJ on Their Report About Apple and Meta Working on an AI Deal 

Salvador Rodriguez, Aaron Tilley, and Miles Kruppa, reporting for The Wall Street Journal Sunday morning (News+):

In its hustle to catch up on AI, Apple has been talking with a longtime rival: Meta. Facebook’s parent has held discussions with Apple about integrating Meta Platforms’ generative AI model into Apple Intelligence, the recently announced AI system for iPhones and other devices, according to people familiar with the matter.

This didn’t make much sense, given Tim Cook’s strident condemnation of Meta and Mark Zuckerberg. E.g. this interview with Kara Swisher, which, though it was six years ago, doesn’t leave much room for a strange bedfellows partnership today: “Asked by Swisher what he would do if he were in Zuckerberg’s position, Cook said pointedly: ‘I wouldn’t be in this situation.’” Cook and Apple’s entire problem with Meta is their approach to privacy and monetizing through targeted advertising based on user profiles. Apple is trying to convince customers that Apple’s approach to AI is completely private and trustworthy; a partnership with Meta would run counter to that. And, quite frankly, Meta’s AI technology is not enviable.

Now here’s Mark Gurman, reporting for Bloomberg yesterday evening (News+):

Apple Inc. rejected overtures by Meta Platforms Inc. to integrate the social networking company’s AI chatbot into the iPhone months ago, according to people with knowledge of the matter.

The two companies aren’t in discussions about using Meta’s Llama chatbot in an AI partnership and only held brief talks in March, said the people, who asked not to be identified because the situation is private. The dialogue about a partnership didn’t reach any formal stage, and Apple has no active plans to integrate Llama. [...]

Apple decided not to move forward with formal Meta discussions in part because it doesn’t see that company’s privacy practices as stringent enough, according to the people. Apple has spent years criticizing Meta’s technology, and integrating Llama into the iPhone would have been a stark about-face.

Spokespeople for Apple and Meta declined to comment. The Wall Street Journal reported on Sunday that the two companies were in talks about an AI partnership.

Delicious, right down to the fact that Bloomberg’s link on “reported on Sunday” points not to the Journal but to Bloomberg’s own regurgitation of the WSJ’s report.

European Commission Dings Apple Over Anti-Steering Provisions in App Store, and Opens New Investigations Into Core Technology Fee, Sideloading Protections, and the Eligibility Requirements to Offer an Alternative Marketplace 

The European Commission:

Today, the European Commission has informed Apple of its preliminary view that its App Store rules are in breach of the Digital Markets Act (DMA), as they prevent app developers from freely steering consumers to alternative channels for offers and content.

I think what they’re saying here is that Apple’s current compliance offering, where developers can remain exclusively in the App Store in the EU under the existing terms or choose the new terms that allow for linking out to the web, isn’t going to pass muster. The EC wants all apps to be able to freely — as in free-of-charge freely — link out to the web for purchases, regardless of whether they’re from the App Store, an alternative marketplace, or directly sideloaded.

The Commission will investigate whether these new contractual requirements for third-party app developers and app stores breach Article 6(4) of the DMA and notably the necessity and proportionality requirements provided therein. This includes:

1. Apple’s Core Technology Fee, under which developers of third-party app stores and third-party apps must pay a €0.50 fee per installed app. The Commission will investigate whether Apple has demonstrated that the fee structure that it has imposed, as part of the new business terms, and in particular the Core Technology Fee, effectively complies with the DMA.

No word on how it doesn’t comply, just that they don’t like it.

2. Apple’s multi-step user journey to download and install alternative app stores or apps on iPhones. The Commission will investigate whether the steps that a user has to undertake to successfully complete the download and installation of alternative app stores or apps, as well as the various information screens displayed by Apple to the user, comply with the DMA.

This sounds like they’re going to insist that Apple make installing sideloaded apps and alternative stores a no-hassle experience. What critics see is Apple putting up obstacles to installing marketplaces or sideloaded apps just to be a dick about it and discouraging their use to keep users in the App Store. What I see are reasonable warnings for potentially dangerous software. We’ll see how that goes.

Perhaps where the EC will wind up is making app store choice like web browser choice. Force Apple to present each user with a screen listing all available app marketplaces in their country in random order, of which Apple’s own App Store is but one, just like Safari in the default browser choice screen.

3. The eligibility requirements for developers related to the ability to offer alternative app stores or directly distribute apps from the web on iPhones. The Commission will investigate whether these requirements, such as the ‘membership of good standing’ in the Apple Developer Program, that app developers have to meet in order to be able to benefit from alternative distribution provided for in the DMA comply with the DMA.

I’m not sure what this is about, given that Apple relented on allowing even Epic Games to open a store. Maybe the financial requirements? Update: OK, this is probably about the other half of the eligibility requirements to offer a marketplace, too. One way to qualify as a marketplace is to provide Apple with a €1,000,000 letter of credit. The other is to “be a member of good standing in the Apple Developer Program for two continuous years or more, and have an app that had more than one million first annual installs on iOS and/or iPadOS in the EU in the prior calendar year.” For sideloading, Apple requires that developers “Be a member in good standing of the Apple Developer Program for two continuous years or more, and have an app that had more than one million first annual installs on iOS and/or iPadOS in the EU in the prior calendar year.” Apple’s requirements are an attempt to prevent fly-by-night scammers from opening marketplaces or offering nefarious apps for sideloading. But the EC sees that as a catch-22, where the only way to become a marketplace or offer sideloading is to already be a longstanding developer in Apple’s own App Store. So the EC is, I guess, saying don’t worry about fly-by-night scammers, Apple needs to allow any new developer to offer their apps outside the App Store or to provide their own marketplace.

In parallel, the Commission will continue undertaking preliminary investigative steps outside of the scope of the present investigation, in particular with respect to the checks and reviews put in place by Apple to validate apps and alternative app stores to be sideloaded.

This pretty clearly is about Apple using notarization as a review for anything other than egregious bugs or security vulnerabilities. I complain as much as anyone about the aspects of the DMA that are vague (or downright inscrutable), but this aspect seems clear-cut. It’s a bit baffling why Apple seemingly sees notarization as an opportunity for content/purpose review, like with last week’s brouhaha over the UTM SE PC emulator. Refusing to notarize an emulator that uses a JIT is something Apple ought to be able to defend under the DMA’s exceptions pertaining to device security; refusing to notarize an emulator that doesn’t use a JIT seems clearly forbidden by the DMA.


Apple Disables WebKit’s JIT in Lockdown Mode, Offering a Hint Why BrowserEngineKit Is Complex and Restricted

Last week I mentioned Apple’s prohibition on JITs — just-in-time compilers — in the context of their rejection of UTM SE, an open source PC emulator. The prohibition, which Apple justifies on security grounds, is a side issue for UTM SE, because UTM SE is the version of UTM that doesn’t use a JIT. But without a JIT it’s so slow that the UTM team doesn’t consider the rejection worth fighting.

On that no-JITs prohibition, though, it’s worth noting that Apple even disables its own trusted JIT in WebKit when you enable Lockdown Mode, which Apple now describes as “an optional, extreme protection that’s designed for the very few individuals who, because of who they are or what they do, might be personally targeted by some of the most sophisticated digital threats. Most people are never targeted by attacks of this nature.” Apple previously described Lockdown Mode as protection for those targeted by “private companies developing state-sponsored mercenary spyware”, but has recently dropped the “state-sponsored” language.

Here’s how Apple describes Lockdown Mode’s effect on web browsing:

Web browsing — Certain complex web technologies are blocked, which might cause some websites to load more slowly or not operate correctly. In addition, web fonts might not be displayed, and images might be replaced with a missing image icon.

JavaScriptCore’s JIT interpreter is one of those “complex web technologies”. Alexis Lours did some benchmarking two years ago, when iOS 16 was in beta, to gauge the effect of disabling the JIT on JavaScript performance (and he also determined a long list of other WebKit features that get disabled in Lockdown Mode, a list I wish Apple would publish and keep up to date). Lours ran several benchmarks, but I suspect Speedometer is most relevant to real-world usage. Lours’s benchmarking indicated roughly a two-thirds reduction in JavaScript performance with Lockdown Mode enabled in Speedometer.

This brings me to BrowserEngineKit, a new framework Apple created specifically for compliance with the EU’s DMA, which requires gatekeeping platforms to allow for third-party browser engines. Apple has permitted third-party browsers on iOS for over a decade, but requires all browsers to use the system’s WebKit rendering engine. One take on Apple’s longstanding prohibition against third-party rendering engines is that they’re protecting their own interests with Safari. More or less that they’re just being dicks about it. But there really is a security angle to it. JavaScript engines run much faster with JIT compilation, but JITs inherently pose security challenges. There’s a whole section in the BrowserEngineKit docs specifically about JIT compilation.
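
To make the tradeoff concrete: a JIT needs memory it can both write generated machine code into and then execute, which is precisely the capability attackers covet most. Here’s a minimal, macOS-flavored sketch of the dance involved. (On Apple silicon Macs this requires the allow-jit entitlement; iOS offers no equivalent to third-party apps at all, which is what BrowserEngineKit’s special entitlement gates.)

    import Darwin

    // A JIT's core requirement: a region it can both write code into and
    // execute. MAP_JIT is Apple's gated mechanism for requesting one.
    let pageSize = sysconf(_SC_PAGESIZE)
    let region = mmap(nil, pageSize,
                      PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANON | MAP_JIT,
                      -1, 0)
    precondition(region != MAP_FAILED, "mmap failed; missing the JIT entitlement?")

    // Apple silicon enforces W^X per thread: flip the region writable to
    // emit code, then back to executable-but-not-writable before running it.
    pthread_jit_write_protect_np(0)   // writable, not executable, on this thread
    // ... emit generated machine code into `region` here ...
    pthread_jit_write_protect_np(1)   // executable again; writes now trap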

As I see it Apple had three choices, broadly speaking, for complying with the third-party browser engine mandate in the DMA:

  1. Disallow third-party browser engines from using JITs. This would clearly be deemed malicious by anyone who actually wants to see Chromium- or Gecko-based browsers on iOS. JavaScript execution would be somewhere between 65 and 90 percent slower compared to WebKit.

  2. Allow third-party browser engines in the EU to just use JIT compilation freely without restrictions. This would open iOS devices running such browsers to security vulnerabilities. The message to users would be, effectively, “If you use one of these browsers you’re on your own.”

  3. Create something like BrowserEngineKit, which adds complexity in the name of allowing for JIT compilation (and other potentially insecure technologies) in a safer way, and limit the use of BrowserEngineKit only to trusted web browser developers.

Apple went with choice 3, and I doubt they gave serious consideration to anything else. Disallowing third-party rendering engines from using JITs wasn’t going to fly, and allowing them to run willy-nilly would be insecure. The use of BrowserEngineKit also requires a special entitlement:

Apple will provide authorized developers access to technologies within the system that enable critical functionality and help developers offer high-performance modern browser engines. These technologies include just-in-time compilation, multiprocess support, and more.

However, as browser engines are constantly exposed to untrusted and potentially malicious content and have visibility of sensitive user data, they are one of the most common attack vectors for bad actors. To help keep users safe online, Apple will only authorize developers to implement alternative browser engines after meeting specific criteria and who commit to a number of ongoing privacy and security requirements, including timely security updates to address emerging threats and vulnerabilities.

BrowserEngineKit isn’t easy, but I genuinely don’t think any good solution would be. Browsers don’t need a special entitlement or complex framework to run on MacOS, true, but iOS is not MacOS. To put it in Steven Sinofsky’s terms, gatekeeping is a fundamental aspect of Apple’s brand promise with iOS. 


WWDC 2024: Apple Intelligence

An oft-told story is that back in 2009 — two years after Dropbox debuted, two years before Apple unveiled iCloud — Steve Jobs invited Dropbox cofounders Drew Houston and Arash Ferdowsi to Cupertino to pitch them on selling the company to Apple. Dropbox, Jobs told them, was “a feature, not a product”.

It’s easy today to forget just how revolutionary a product Dropbox was. A simple installation on your Mac and boom, you had a folder that synced between every Mac you used — automatically, reliably, and quickly. At the time Dropbox had a big sign in its headquarters that read, simply, “It Just Works”, and they delivered on that ideal — at a time when no other sync service did. Jobs, of course, was trying to convince Houston and Ferdowsi to sell, but that doesn’t mean he was wrong that, ultimately, it was a feature, not a product. A tremendously useful feature, but a feature nonetheless.

Leading up to WWDC last week, I’d been thinking that this same description applies, in spades, to LLM generative AI. Fantastically useful, downright amazing at times, but features. Not products. Or at least not broadly universal products. Chatbots are products, of course. People pay for access to the best of them, or for extended use of them. But people pay for Dropbox too.

Chatbots can be useful. There are people doing amazing work through them. But they’re akin to the terminal and command-line tools. Most people just don’t think like that.

What Apple unveiled last week with Apple Intelligence wasn’t so much new products, but new features — a slew of them — for existing products, powered by generative AI.

Safari? Better now, with generative AI page summaries. Messages? More fun, with Genmoji. Notes and Mail and Pages (and any other app that uses the system text frameworks)? Better now, with proofreading and rewriting tools built-in. Photos? Even better recommendations for memories, and automatic categorization of photos into smart collections. Siri? That frustrating, dumb-as-a-rock son of a bitch, Siri? Maybe, actually, pretty useful and kind of smart now. These aren’t new apps or new products. They’re the most used, most important apps Apple makes, the core apps that define the Apple platforms ecosystem, and Apple is using generative AI to make them better and more useful — without, in any way, rendering them unfamiliar.1

We had a lot of questions about Apple’s generative AI strategy heading into WWDC. Now that we have the answers, it all looks very obvious, and mostly straightforward. First, their models are almost entirely based on personal context, by way of an on-device semantic index. In broad strokes, this on-device semantic index can be thought of as a next-generation Spotlight. Apple is focusing on what it can do that no one else can on Apple devices, and not really even trying to compete against ChatGPT et al. for world-knowledge context. They’re focusing on unique differentiation, and eschewing commoditization.
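
Apple hasn’t published an API for that semantic index itself, so the following is only an analogy, but the Spotlight comparison is apt because apps already feed today’s on-device index through the long-standing Core Spotlight framework. A sketch of that existing donation API (the identifiers and strings are invented for illustration):

    import CoreSpotlight
    import UniformTypeIdentifiers

    // Donating an item to the on-device Spotlight index, the existing
    // mechanism apps use to make their content searchable system-wide.
    let attributes = CSSearchableItemAttributeSet(contentType: .text)
    attributes.title = "Dinner with Sam"                      // hypothetical content
    attributes.contentDescription = "Thursday, 7pm, the ramen place on 9th"

    let item = CSSearchableItem(uniqueIdentifier: "note-42",  // hypothetical IDs
                                domainIdentifier: "com.example.notes",
                                attributeSet: attributes)
    CSSearchableIndex.default().indexSearchableItems([item]) { error in
        if let error { print("Indexing failed:", error) }
    }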

Second, they’re doing both on-device processing, for smaller/simpler tasks, and cloud processing (under the name Private Cloud Compute) for more complex tasks. All of this is entirely Apple’s own work: the models, the servers (based on Apple silicon), the entire software stack running on the servers, and the data centers where the servers reside. This is an enormous amount of work, and seemingly puts the lie to reports that Apple executives only even became interested in generative AI 18 months ago. And if they did accomplish all this in just 18 months that’s a remarkable achievement.

Anyone can make a chatbot. (And, seemingly, everyone is — searching for “chatbot” in the App Store is about as useful as searching for “game”.) Apple, conspicuously, has not made one. Benedict Evans keenly observes:

To begin, then: Apple has built an LLM with no chatbot. Apple has built its own foundation models, which (on the benchmarks it published) are comparable to anything else on the market, but there’s nowhere that you can plug a raw prompt directly into the model and get a raw output back — there are always sets of buttons and options shaping what you ask, and that’s presented to the user in different ways for different features. In most of these features, there’s no visible bot at all. You don’t ask a question and get a response: instead, your emails are prioritised, or you press “summarise” and a summary appears. You can type a request into Siri (and Siri itself is only one of the many features using Apple’s models), but even then you don’t get raw model output back: you get GUI. The LLM is abstracted away as an API call.

Instead Apple is doing what no one else can do: integrating generative AI into the frameworks in iOS and MacOS used by developers to create native apps. Apps built on the system APIs and frameworks will gain generative AI features for free, both in the sense that the features come automatically when the app is running on a device that meets the minimum specs to qualify for Apple Intelligence, and in the sense that Apple isn’t charging developers or users to utilize these features.
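
You can see that “for free” quality in the iOS 18 SDK: a stock text view inherits Writing Tools with no code at all, and the only new API surface is a property for opting down or out. A minimal sketch, assuming the writingToolsBehavior property as documented for UITextView:

    import UIKit

    final class NoteViewController: UIViewController {
        private let textView = UITextView()

        override func viewDidLoad() {
            super.viewDidLoad()
            textView.frame = view.bounds
            textView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
            textView.font = .preferredFont(forTextStyle: .body)
            view.addSubview(textView)

            // On Apple Intelligence-eligible devices this text view already
            // offers proofreading and rewriting via the system text
            // frameworks; the developer's only decision is whether to limit it.
            textView.writingToolsBehavior = .complete  // or .limited, or .none
        }
    }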

Apple’s keynote presentation was exceedingly well-structured and paced. But nevertheless it was widely misunderstood, I suspect because expectations were so wrong. Those who believed going in that Apple was far behind the state of the art in generative AI technology wrongly saw the keynote’s coda — the announcement of a partnership with OpenAI to integrate their latest model, GPT-4o, as an optional “world knowledge” layer sitting atop Apple’s own homegrown Apple Intelligence — as an indication that most or even all of the cool features Apple revealed were in fact powered by OpenAI. Quite the opposite. Almost nothing Apple showed in the keynote was from OpenAI.

What I see as the main takeaways:

  • Apple continues to build machine learning and generative AI features across its core platforms, iOS and MacOS. They’ve been adding such features for years, and announced many new ones this year. Nothing Apple announced in the entire first hour of the keynote was part of “Apple Intelligence”. Math Notes (freeform handwritten or typed mathematics, in Apple Notes and the Calculator app, which is finally coming to iPadOS) is coming to all devices running iOS 18 and MacOS 15 Sequoia. Smart Script — the new personalized handwriting feature when using Apple Pencil, which aims to improve the legibility of your handwriting as you write, and simulates your handwriting when pasting text or generating answers in Math Notes — is coming to all iPads with an A14 or better chip. Inbox categorization and smart message summaries are coming to Apple Mail on all devices. Safari web page summaries are coming to all devices. Better background clipping (“greenscreening”) for videoconferencing. None of these features are under the “Apple Intelligence” umbrella. They’re for everyone with devices eligible for this year’s OS releases.

  • The minimum device specs for Apple Intelligence are understandable, but regrettable, particularly the fact that the only current iPhones that are eligible are the iPhone 15 Pro and Pro Max. Even the only-nine-month-old iPhone 15 models don’t make the cut. When I asked John Giannandrea (along with Craig Federighi and Greg Joswiak) about this on stage at The Talk Show Live last week, his answer was simple: lesser devices aren’t fast enough to provide a good experience. That’s the Apple way: better not to offer the feature at all than offer it with a bad (slow) experience. A-series chips before last year’s A17 Pro don’t have enough RAM and don’t have powerful enough Neural Engines. But by the time iOS 18 is released and Apple Intelligence features actually become available — even in beta form (they are not enabled in the current developer OS betas) — the iPhone 15 Pro will surely be joined by all iPhone 16 models, both Pro and non-pro. Apple Intelligence is skating to where the puck is going to be in a few years, not where it is now.

  • Surely Apple is also being persnickety with the device requirements to lessen the load on its cloud compute servers. And if this pushes more people to upgrade to a new iPhone this year, I doubt Tim Cook is going to see that as a problem.

  • One question I’ve been asked repeatedly is why devices that don’t qualify for Apple Intelligence can’t just do everything via Private Cloud Compute. Everyone understands that if a device isn’t fast or powerful enough for on-device processing, that’s that. But why can’t older iPhones (or in the case of the non-pro iPhones 15, new iPhones with two-year-old chips) simply use Private Cloud Compute for everything? From what I gather, that just isn’t how Apple Intelligence is designed to work. The models that run on-device are entirely different from the ones that run in the cloud, and one of those on-device models is the heuristic that determines which tasks can execute with on-device processing and which require Private Cloud Compute or ChatGPT. (For a rough sketch of this routing idea, see the example at the end of this list.) But see also the previous item in this list — surely Apple has scaling concerns as well. As things stand, with only devices using M-series chips or the A17 Pro or later eligible, Apple is going to be on the hook for an enormous amount of server-side computation with Private Cloud Compute. They’d be on the hook for multiples of that scale if they enabled Apple Intelligence for older iPhones, with those older iPhones doing none of the processing on-device. The on-device processing component of Apple Intelligence isn’t just nice-to-have; it’s a keystone to the entire thing.

  • Apple could have skipped, or simply delayed announcing until the fall, the entire OpenAI partnership, and they still would have had an impressive array of generative AI features with broad, practical appeal. And clearly they would have gotten a lot more credit for their achievements in the aftermath of the keynote. I remain skeptical that integrating ChatGPT (and any future world-knowledge LLM chatbot partners) at the OS level will bring any significant practical advantage to users versus just using the chatbot apps from the makers of those LLMs. But perhaps removing a few steps, and eliminating the need to choose, download, and sign up for a third-party chatbot, will expose such features to many more users than are using them currently. Still, I can’t help but feel that integrating these third-party chatbots in the OSes is at least as much a services-revenue play as a user-experience play.

  • The most unheralded aspect of Apple Intelligence is that the data centers Apple is building for Private Cloud Compute are not only carbon neutral, but operating entirely on renewable energy sources. That’s extraordinary, and I believe unique in the entire industry. But it’s gone largely un-remarked-upon — because Apple itself did not mention this during the WWDC keynote. Craig Federighi first mentioned it in a post-keynote interview with Justine Ezarik, and he reiterated it on stage with me at The Talk Show Live From WWDC. In hindsight, I wish I’d asked, on stage, why Apple did not even mention this during the keynote, let alone trumpet it. I suspect the real answer is that Apple felt they couldn’t brag about their own data centers running entirely on renewable energy during the same event in which they announced a partnership with OpenAI, whose data centers can make no such claims. OpenAI’s carbon footprint is a secret, and experts suspect it’s bad. It’s unseemly to throw your own partner under the bus, but staying quiet takes Apple Intelligence’s carbon neutrality off the table as a marketing point. Yet another reason why I feel Apple might have been better off not announcing this partnership last week.

  • If you don’t want or don’t trust Apple Intelligence (or just not yet), you’ll be able to turn it off. And you’ll have to opt in to using the integrated ChatGPT feature, and, each time Apple Intelligence decides to send you to ChatGPT to handle a task, you’ll have to explicitly allow it. As currently designed, no one is going to accidentally interact with, let alone expose personal information to, ChatGPT. If anything, I suspect the more common complaint will come from people who wish to use ChatGPT without confirmation each time. Some people are going to want an “Always allow” option for handing requests to ChatGPT, but according to Apple reps I’ve spoken with, such an option does not yet exist.

  • At a technical level Apple is using indirection to anonymize devices from ChatGPT. OpenAI will never see your IP address or precise location. At a policy level, OpenAI has agreed not to store user data, nor use data for training purposes, unless users have signed into a ChatGPT account. If you want to use Apple Intelligence but not ChatGPT, you can. If you want to use ChatGPT anonymously, you can. And if you do want ChatGPT to keep a history of your interactions, you can do that too, by signing in to your account. Users are entirely in control, as they should be.

  • VisionOS 2 is not getting any Apple Intelligence features, despite the fact that the Vision Pro has an M2 chip. One reason is that VisionOS remains a dripping-wet new platform — Apple is still busy building the fundamentals, like rearranging and organizing apps in the Home view. VisionOS 2 isn’t even getting features like Math Notes, which, as I mentioned above, isn’t even under the Apple Intelligence umbrella. But another reason is that, according to well-informed little birdies, Vision Pro is already making significant use of the M2’s Neural Engine to supplement the R1 chip for real-time processing purposes — occlusion and object detection, things like that. With M-series-equipped Macs and iPads, the Neural Engine is basically sitting there, fully available for Apple Intelligence features. With the Vision Pro, it’s already being used.

  • “Apple Intelligence” is not one thing or one model. Or even two models — local and cloud. It’s an umbrella for dozens of models, some of them very specific. One of the best, potentially, is a new model that will allow Siri to answer technical support questions about Apple products and services. This model has been trained on Apple’s own copious Knowledge Base of support documentation. The age-old gripe is that “no one reads the documentation”, but maybe now that’s no longer a problem because Siri is reading it. Apple’s platforms are so rich and deep, but most users’ knowledge of them is shallow; getting correct answers from Siri to specific how-to questions could be a game-changer. AI-generated slop is polluting web search results for technical help; Apple is using targeted AI trained on its own documentation to avoid the need to search the web in the first place. Technical documentation isn’t sexy, but exposing it all through natural language queries could be one of the sleeper hits of this year’s announcements.

  • Xcode is the one product where Apple was clearly behind on generative AI, having lacked LLM-backed code completion/suggestion/help for the past year. Apple introduced two generative AI features in Xcode 16, and they exemplify the local/cloud distinction in Apple Intelligence in general: predictive code completion runs locally, on your Mac, while Swift Assist is more profound, answering natural language questions and providing entire solutions in working Swift code, and runs entirely in Private Cloud Compute.
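
As for the on-device/cloud routing mentioned above, here is a rough sketch of the idea. It is purely hypothetical: every name and threshold is invented for illustration, and none of this is an actual Apple API.

    // Hypothetical sketch of the routing heuristic described above.
    // All names and thresholds are invented; none of this is Apple's API.
    enum ExecutionTarget {
        case onDevice              // small, fast local models
        case privateCloudCompute   // Apple's own servers
        case chatGPT               // requires explicit per-request consent
    }

    struct RequestRouter {
        // Stand-in for the on-device model that classifies requests.
        func target(forComplexity complexity: Double,
                    needsWorldKnowledge: Bool) -> ExecutionTarget {
            if needsWorldKnowledge {
                return .chatGPT
            }
            return complexity < 0.5 ? .onDevice : .privateCloudCompute
        }
    }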

Take It All With a Grain of Salt

Lastly, it is essential to note that we haven’t been able to try any of these Apple Intelligence features yet. None of them are available in the developer OS betas, and none are slated to be available, even in beta, until “later this summer”. I witnessed multiple live demos of some of these features last week, during press briefings at Apple Park after the keynote. Those demos included the writing tools (“make this email sound more professional”), Xcode predictive code completion, and Swift Assist. But they were conducted by Apple employees; we in the media were not able to try them ourselves.

It all looks very impressive, and almost all these features seem very practical. But it’s all very, very early. None of it counts as real until we’re able to use it ourselves. We don’t know how well it works. We don’t know how well it scales.

If generative AI weren’t seen as essential — both in terms of consumer marketing and investor confidence — I think much, if not most, of what Apple unveiled in “Apple Intelligence” wouldn’t even have been announced until next year’s WWDC, not last week’s WWDC. Again, none of the features in “Apple Intelligence” are even available in beta yet, and I think all or most of them will be available only under a “beta” label until next year.

It’s good to see Apple hustling, though. I continue to believe it’s incorrect to see Apple as “behind”, overall, on generative AI. But clearly they are feeling tremendous competitive pressure on this front, which is good for them, and great for us. 


  1. Image Playground is a new app, and thus definitely counts as a product, but at the moment I’m seeing it as the least interesting part of Apple Intelligence, if only because it’s offering something a dozen other products offer, and it doesn’t seem to do a particularly interesting job of it. ↩︎


Kolide by 1Password 

My thanks to Kolide by 1Password for sponsoring last week at DF. The September 2023 MGM hack is one of the most notorious ransomware attacks in recent years. Journalists and cybersecurity experts rushed to report on the broken slot machines, angry hotel guests, and the fateful phishing call to MGM’s help desk that started it all.

But while it’s true that MGM’s help desk needed better ways of verifying employee identity, there’s another factor that should have stopped the hackers in their tracks. That’s where you should focus your attention. In fact, if you just focus your vision, you’ll find you’re already staring at the security story the pros have been missing.

It’s the device you’re reading this on.

To read more about what they learned after researching the MGM hack — like how hacker groups get their names, the worrying gaps in MGM’s security, and why device trust is the real core of the story — check out the Kolide by 1Password blog.


Training Large Language Models on the Public Web

Yesterday, quoting Anthropic’s announcement of their impressive new model, Claude 3.5 Sonnet, I wrote:

Also, from the bottom of the post, this interesting nugget:

One of the core constitutional principles that guides our AI model development is privacy. We do not train our generative models on user-submitted data unless a user gives us explicit permission to do so. To date we have not used any customer or user-submitted data to train our generative models.

Even Apple can’t say that.

It now seems clear that I misread Anthropic’s statement. I wrongly interpreted this as implying that Claude was not trained on public web data. Here is Anthropic’s FAQ on training data:

Large language models such as Claude need to be “trained” on text so that they can learn the patterns and connections between words. This training is important so that the model performs effectively and safely.

While it is not our intention to “train” our models on personal data specifically, training data for our large language models, like others, can include web-based data that may contain publicly available personal data. We train our models using data from three sources:

  1. Publicly available information via the Internet
  2. Datasets that we license from third party businesses
  3. Data that our users or crowd workers provide

We take steps to minimize the privacy impact on individuals through the training process. We operate under strict policies and guidelines, for instance, that we do not access password protected pages or bypass CAPTCHA controls. We undertake due diligence on the data that we license. And we encourage our users not to use our products and services to process personal data. Additionally, our models are trained to respect privacy: one of our constitutional “principles” at the heart of Claude, based on the Universal Declaration of Human Rights, is to choose the response that is most respectful of everyone’s privacy, independence, reputation, family, property rights, and rights of association.

Here is Apple, from its announcement last week of their on-device and server foundation models:

We train our foundation models on licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot. Web publishers have the option to opt out of the use of their web content for Apple Intelligence training with a data usage control.

We never use our users’ private personal data or user interactions when training our foundation models, and we apply filters to remove personally identifiable information like social security and credit card numbers that are publicly available on the Internet. We also filter profanity and other low-quality content to prevent its inclusion in the training corpus. In addition to filtering, we perform data extraction, deduplication, and the application of a model-based classifier to identify high quality documents.

This puts Apple in the same boat as Anthropic in terms of using public pages on the web as training sources. Some writers and creators object to this — including Federico Viticci, whose piece on MacStories I linked to with my “Even Apple can’t say that” comment yesterday. Dan Moren wrote a good introduction to blocking these crawling bots with robots.txt directives.

The best argument against Apple’s use of public web pages for model training is one of sequence: Apple trained its models first, and only published the instructions for blocking Applebot from AI-training use after announcing Apple Intelligence last week. Apple should clarify whether they plan to rebuild their training corpus, honoring the new opt-outs, before Apple Intelligence ships in beta this summer. Clearly, a website that bans Applebot-Extended shouldn’t have its data in Apple’s training corpus simply because Applebot crawled it before Apple Intelligence was even announced. It’s fair for public data to be excluded on an opt-out basis, rather than included on an opt-in one, but Apple trained its models on the public web before they allowed for opting out.

But other than that chicken/egg opt-out issue, I don’t object to this. The whole point of the public web is that it’s there to learn from — even if the learner isn’t human. Is there a single LLM that was not trained on the public web? To my knowledge there is not, and a model that is ignorant of all information available on the public web would be, well, pretty ignorant of the world. To me the standards for LLMs should be similar to those we hold people to. You’re free to learn from anything I publish, but not free to plagiarize it. If you quote it, attribute and link to the source. That’s my standard for AI bots as well. So at the moment, my robots.txt file bans just one: Perplexity.

(I’d block a second, the hypocrites at Arc, if I could figure out how.) 
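
For the curious, the relevant robots.txt entries are simple. Here’s a minimal sketch using the documented agent names (Applebot-Extended is Apple’s AI-training opt-out token; PerplexityBot is Perplexity’s declared crawler):

    # Opt out of Apple's AI training while still allowing regular
    # Applebot crawling for Siri and Spotlight:
    User-agent: Applebot-Extended
    Disallow: /

    # Ban Perplexity's declared crawler:
    User-agent: PerplexityBot
    Disallow: /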


Reggie Jackson on Willie Mays’s Legacy, and the Abject Racism Faced by Black Baseball Players in the 1960s 

The whole 8-minute clip is excellent and worth your time, but do not miss the second half, starting with a sharp question from Alex Rodriguez at the 4:30 mark. Reggie describes, in heartfelt detail, the abject racism he faced as a minor league player as recently as the 1960s. Restaurants he couldn’t eat at. Hotels he couldn’t stay at. Threats to burn to the ground the apartment building where he was sleeping. The pain, over five decades later, remains searing.

Kudos to Fox Sports for airing this. We can’t celebrate progress without honestly facing society’s dark past. (Kudos too, for putting a box of Reggie Bars at the desk. Respect.)

EU Users Won’t Get Apple Intelligence, iPhone Mirroring, or the New SharePlay Screen Sharing Features This Year, Thanks to the DMA 

The Financial Times:

Apple blamed complexities in making the system compatible with EU rules that have forced it to make key parts of its iOS software and App Store services interoperable with third parties.

“Due to the regulatory uncertainties brought about by the Digital Markets Act,” Apple said on Friday, “we do not believe that we will be able to roll out three of these features — iPhone Mirroring, SharePlay Screen Sharing enhancements, and Apple Intelligence — to our EU users this year.”

Kudos to Apple for breaking this news to the Financial Times, of all outlets. Poetry in media relations. Here’s the full on-the-record statement, provided to me by an Apple spokesperson:

Two weeks ago, Apple unveiled hundreds of new features that we are excited to bring to our users around the world. We are highly motivated to make these technologies accessible to all users. However, due to the regulatory uncertainties brought about by the Digital Markets Act (DMA), we do not believe that we will be able to roll out three of these features — iPhone Mirroring, SharePlay Screen Sharing enhancements, and Apple Intelligence — to our EU users this year.

Specifically, we are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security. We are committed to collaborating with the European Commission in an attempt to find a solution that would enable us to deliver these features to our EU customers without compromising their safety.

None of these features are available yet in the developer beta OS releases, but it is my understanding that the first two — iPhone Mirroring and the new SharePlay Screen Sharing enhancements (where you’ll be able to see and doodle on the screens of others, like, say, if you’re providing remote help or how-to instructions to a friend or family member) — will be in the next developer betas, coming early next week. Apple Intelligence won’t even enter beta until later this summer. But in the meantime, even in beta, none of these features will be available within the EU.

The Mac is not considered a “gatekeeping” platform in the EU, but the iPhone and iPad are, and the iPhone Mirroring and screen sharing features obviously involve those platforms. I think Apple could try to thread a needle here and release Apple Intelligence only on the Mac in the EU, but given how inscrutable the European Commission’s interpretation of the DMA is — where gatekeepers are expected to somehow suss out the “spirit of the law” regardless of what the letter of the law says — I don’t see how Apple can be blamed for pausing the rollout in the EU, no matter the platform.

The EU’s self-induced slide into a technological backwater continues.

Matt Levine on OpenAI’s True Purpose 

Matt Levine, in his Money Stuff column:

OpenAI was founded to build artificial general intelligence safely, free of outside commercial pressures. And now every once in a while it shoots out a new AI firm whose mission is to build artificial general intelligence safely, free of the commercial pressures at OpenAI.

Anthropic Introduces Claude 3.5 Sonnet 

Anthropic:

Claude 3.5 Sonnet sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It shows marked improvement in grasping nuance, humor, and complex instructions, and is exceptional at writing high-quality content with a natural, relatable tone.

Claude 3.5 Sonnet operates at twice the speed of Claude 3 Opus. This performance boost, combined with cost-effective pricing, makes Claude 3.5 Sonnet ideal for complex tasks such as context-sensitive customer support and orchestrating multi-step workflows.

In an internal agentic coding evaluation, Claude 3.5 Sonnet solved 64% of problems, outperforming Claude 3 Opus which solved 38%. Our evaluation tests the model’s ability to fix a bug or add functionality to an open source codebase, given a natural language description of the desired improvement. When instructed and provided with the relevant tools, Claude 3.5 Sonnet can independently write, edit, and execute code with sophisticated reasoning and troubleshooting capabilities. It handles code translations with ease, making it particularly effective for updating legacy applications and migrating codebases.

I’ll take them with a grain of self-promoting salt, but the evaluation tests presented by Anthropic position Claude 3.5 Sonnet as equal to or better than GPT-4o. Again: I don’t think there’s a moat in this game.

Also, from the bottom of the post, this interesting nugget:

One of the core constitutional principles that guides our AI model development is privacy. We do not train our generative models on user-submitted data unless a user gives us explicit permission to do so. To date we have not used any customer or user-submitted data to train our generative models.

Even Apple can’t say that. Follow-up: “Training Large Language Models on the Public Web”.

Warp’s Agent Mode: Natural Language for Terminal Commands 

Michelle Lim, writing for the Warp blog (Warp, I’ll note, has previously sponsored DF):

Agent Mode raises the level of abstraction in the terminal. Using natural language, you can ask the terminal to accomplish any high level task without worrying about the specific commands you need.

Agent Mode knows when it needs more information to help you complete a task. It will ask permission to run commands on your machine and use the outputs to guide you, step-by-step. It’s also self-correcting when it runs into errors.

Because Agent Mode executes tasks through the CLI, it integrates with practically any service with zero configuration. If the service has a CLI, an API, or publicly available docs, you can use Agent Mode for the task. Agent Mode has inherent knowledge of most public CLIs, and you can easily teach it how to use internal CLIs by asking it to read their help content.

Man (no pun intended?), is this cool. I just tried “How do I generate a random number between 0000 and 9999?” and got:

You can generate a random number between 0000 and 9999 using the following shell command in the Fish shell:

  printf "%04d\n" (random 0 9999)

And when I followed up with “How can I use jot to generate a random number between 0000 and 9999?” I got:

You can use the jot command to generate a random number between 0000 and 9999 using the following command:

  jot -w "%04d" -r 1 0 9999

Both answers are correct. For jot — a tool I first learned about, of course, from the inimitable Dr. Drang — I think a simpler, and thus better, answer is:

jot -r 1 0000 9999

but Warp’s Agent Mode suggestion is certainly good enough.

Lacking Votes, EU Postpones Vote on CSAM Law That Would Ban End-to-End Encryption for Messaging 

Clothilde Goujard, reporting for Politico:

A vote scheduled today to amend a draft law that may require WhatsApp and Signal to scan people’s pictures and links for potential child sexual abuse material was removed from European Union countries’ agenda, according to three EU diplomats.

Ambassadors in the EU Council were scheduled to decide whether to back a joint position on an EU regulation to fight child sexual abuse material (CSAM). But many EU countries including Germany, Austria, Poland, the Netherlands and the Czech Republic were expected to abstain or oppose the law over cybersecurity and privacy concerns.

“In the last hours, it appeared that the required qualified majority would just not be met,” said an EU diplomat from the Belgian presidency, which is spearheading negotiations until end June as chair of the EU Council.

Sanity prevails — for now.

‘This $8 Cardboard Knife Will Change Your Life’ 

Matthew Panzarino, writing at The Obsessor:

The cardboard is inescapable if you use Amazon or other online stores, they pile up in the hallways and next to the garbage cans and you triage as you can.

We get so many that I have to break down our boxes in order to fit them in our recycle bin. I’ve used all of the typical tools — scissors, pocket knife, box cutter — and many unconventional ones like drywall saws just trying to make this painful job a bit easier.

The CANARY is uniquely serrated all the way around its edge, like a chainsaw. This makes it incredibly good at cutting cardboard either with or across corrugation with ease. I cannot express how easily this knife cuts cardboard, it’s like slicing through regular old paper, it’s amazing.

Last year when I linked to (and recommended) Studio Neat’s Keen — the world’s best box cutter, but which costs about $100 — at least one reader recommended the Canary. For $8 I figured why not. It truly is an amazing product. I do still love my prototype Keen but for opening and breaking down cardboard boxes, the Canary can’t be beat. It’s both highly effective and very safe.

‘Fast Crimes at Lambda School’ 

Ben Sandofsky went deep on the history of Lambda School, a learn-to-code startup that aimed to disrupt computer science education, and its founder, Austen Allred:

What set his boot camp apart from the others were “Income Share Agreements.” Instead of paying up-front for tuition, students agreed to pay a portion of future income. If you don’t get a job, you pay nothing. It was an idea so clever it became a breakout hit of Y Combinator, the same tech incubator that birthed Stripe, AirBnb, and countless other unicorns.

When Lambda School launched in 2017, critics likened ISAs to indentured servitude, but by 2019 it was Silicon Valley’s golden child. Every day, Austen tweeted jaw-dropping results. [...]

Things got worse from there, and we’ll get to it. First I need to address a common question: what do I have to do with any of this? I have no professional or personal connections to the company or the team. What compelled me to follow this story for the last five years?

On the surface, this is another window into the 2010’s tech bubble, a period where mediocre people could raise ludicrous money amid a venture capitalist echo chamber fueled by low-interest rates. But what makes this any worse than Juicero, Clinkle, or Humane? Why does this rise to the level of Theranos?

These stories hinge on their villains, whose hubris and stupidity end in comeuppance. Theranos had Elizabeth Holmes, Fyre Festival had Billy McFarland, and Lambda School has Austen Allred.

Independent journalism at its best.

Apple ID to Be Renamed to Apple Account 

Adam Engst, TidBITS:

The real problem comes when tech writers document features across multiple versions of Apple’s operating systems. We’ll probably use both terms for a while before slowly standardizing on the new term. Blame Apple if you see awkward sentences like “Continuity features require that you be logged into the same Apple Account (Apple ID in pre-2024 operating systems).” Or maybe writers will compress further to “Continuity features require that you be logged into the same Apple Account/ID.”

I do think “Apple Account” is a better name, so I think the transitional pain is worthwhile.

Perplexity AI Is Lying About Their User Agent 

Robb Knight:

I put up a post about blocking AI bots after the block was in place, so assuming the user agents are sent, there’s no way Perplexity should be able to access my site. So I asked:

What is this post about https://rknight.me/blog/blocking-bots-with-nginx/

I got a perfect summary of the post including various details that they couldn’t have just guessed. Read the full response here. So what the fuck are they doing?

I checked a few sites and this [user agent] is just Google Chrome running on Windows 10. So they’re using headless browsers to scrape content, ignoring robots.txt, and not sending their user agent string. I can’t even block their IP ranges because it appears these headless browsers are not coming from Perplexity’s published IP ranges.

Terrific, succinct write-up documenting that Perplexity has clearly been reading and indexing web pages that it is forbidden, by site owner policy, from reading and indexing — all contrary to Perplexity’s own documentation and public statements.
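
For reference, the kind of server-side block Knight describes is trivial to express. Here’s a minimal nginx sketch (not Knight’s exact configuration), which of course only works when a bot honestly identifies itself:

    # Inside a server block: refuse requests from declared AI crawlers.
    # This only works if the bot sends its real user agent string,
    # which is exactly what Perplexity is failing to do.
    if ($http_user_agent ~* "(PerplexityBot|GPTBot|ClaudeBot)") {
        return 403;
    }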

Wired: ‘Perplexity Is a Bullshit Machine’ 

Dhruv Mehrotra and Tim Marchman, reporting for Wired (News+ link):

A Wired analysis and one carried out by developer Robb Knight suggest that Perplexity is able to achieve this partly through apparently ignoring a widely accepted web standard known as the Robots Exclusion Protocol to surreptitiously scrape areas of websites that operators do not want accessed by bots, despite claiming that it won’t. Wired observed a machine tied to Perplexity — more specifically, one on an Amazon server and almost certainly operated by Perplexity — doing this on wired.com and across other Condé Nast publications.

The Wired analysis also demonstrates that despite claims that Perplexity’s tools provide “instant, reliable answers to any question with complete sources and citations included,” doing away with the need to “click on different links,” its chatbot, which is capable of accurately summarizing journalistic work with appropriate credit, is also prone to bullshitting, in the technical sense of the word.

This paints Perplexity as, effectively, an IP theft engine, and its CEO, Aravind Srinivas, as a degenerate liar. None of this is an oversight or just playing fast and loose. It’s a scheme to deliberately circumvent the plain intention of website owners not to have Perplexity index their sites. Liars and thieves. Utterly shameless.

A Rose by Any Other Name Would Smell as Sweet; An Encryption Back Door by Any Other Name Would Still Smell Like Shit 

Signal president Meredith Whittaker, responding to a new initiative in the EU to ban end-to-end encryption (for some reason published as a PDF despite the fact that Signal has a blog):

In November, the EU Parliament lit a beacon for global tech policy when it voted to exclude end-to-end encryption from mass surveillance orders in the chat control legislation. This move responded to longstanding expert consensus, and a global coalition of hundreds of preeminent computer security experts who patiently weighed in to explain the serious dangers of the approaches on the table — approaches that aimed to subject everyone’s private communications to mass scanning against a government-curated database or AI model of “acceptable” speech and content.

There is no way to implement such proposals in the context of end-to-end encrypted communications without fundamentally undermining encryption and creating a dangerous vulnerability in core infrastructure that would have global implications well beyond Europe.

Instead of accepting this fundamental mathematical reality, some European countries continue to play rhetorical games. They’ve come back to the table with the same idea under a new label. Instead of using the previous term “client-side scanning,” they’ve rebranded and are now calling it “upload moderation.” Some are claiming that “upload moderation” does not undermine encryption because it happens before your message or video is encrypted. This is untrue.

Yes, but it’s a great idea to let these same EU bureaucrats design how mobile software distribution should work.

Copilot Plus PCs, Where the ‘Plus’ Means More Dumb Stickers 

Paul Thurrott on Threads, after getting his new Samsung Galaxy Book4 Edge laptop:

Former Windows head Terry Myerson once told me the goal of partnering with Qualcomm on Windows on Arm was to “get those f#$%ing Intel stickers off of PCs.”

Mission accomplished, Terry. There are no Intel stickers on the new Qualcomm-based Copilot+ PCs.

Still covered with stickers. And as Thurrott’s photo hints at, and this screenshot from Tim Schofield’s unboxing video shows clearly, Samsung can’t even be bothered to apply the stickers straight. Looks like they were applied by a little kid. Screams “premium” experience.

Two of these stickers don’t even make sense. The Snapdragon one is obviously paid for by Qualcomm, the same way Intel pays PC makers to apply their stickers. But why would Samsung booger up its own laptops with stickers promoting their own Dynamic AMOLED 2X display technology? And what’s the deal with the Energy Star stickers? Who pays to put those on laptops and why?