Is it too early to worry about killer robots?

Six robotics companies recently published an open letter pledging not to create weaponized robots. But the need for such a pledge is itself an alarming sign about the future of the industry.

Hello, deputy tech editor Alexei Oreskovic here, filling in for Jeremy this week.

I was struck by something I read a few days ago. A half-dozen robot companies published an open letter pledging not to weaponize their droids and asking others in the industry to follow their example. 

It’s a nice sentiment, but if it was meant to put my mind at ease about armies of Terminators, it didn’t quite do the trick. In fact, while that’s not a scenario I generally lose too much sleep over, the letter’s very existence only served to make me wonder if maybe I should be worried. Why did these companies suddenly feel the need to take a stand against killer robots?  

It seems there have been a few sightings of “makeshift” weaponized robots, including a four-legged mechanical creature with an automatic rifle strapped to its back. The deadly droid, which circulated on Twitter and was featured in Vice, looked similar to a robot made by Boston Dynamics (one of the authors of the letter) except that this version was apparently made by someone in Russia and was rigged with a lethal weapon. 

As it turns out, Boston Dynamics began its life with funding from the U.S. military and designed products for use on the battlefield. Same with iRobot, the company that makes the cute Roomba vacuum cleaner (and which was recently acquired by Amazon). Both companies moved away from the military business years ago in order to focus on selling robots to civilian customers.  

But it’s a reminder that, however shocked companies claim to be by the notion of their products being turned into weapons, it’s a phenomenon that should be neither surprising nor unexpected. There have been weaponized robots in science fiction since before robots ever existed in real life. The same can be said for A.I.

Today, all of these powerful technologies are gradually seeping into everyday life. The benefits, in everything from health care to agriculture, will be huge. But the dangers—both from bad actors who misuse technology and from the proliferation of products designed to be dangerous—are also real. And it shouldn’t be deemed alarmist to warn about the dangers posed by artificial intelligence, self-driving cars, and other cutting-edge tech. After all, even the companies making the robots are warning about the robots. 

Alexei Oreskovic
@lexnfx
alexei.oreskovic@fortune.com

***
If you want to know more about how to use A.I. effectively to supercharge your business, please join us in San Francisco on December 5th and 6th for Fortune’s second annual Brainstorm A.I. conference. Learn how A.I. can help you to Augment, Accelerate, and Automate. Confirmed speakers include such A.I. luminaries as Stanford University’s Fei-Fei Li, Landing AI’s Andrew Ng, Google’s James Manyika, and Darktrace’s Nicole Eagan. Apply to attend today!

***

A.I. IN THE NEWS

A.I. maps for post hurricane help. A nonprofit called GiveDirectly is using A.I. technology developed by Google.org to distribute $700 checks to low-income residents of Puerto Rico and Florida affected by hurricanes Ian and Fiona. The group uses a special A.I. mapping tool that mashes up satellite images with poverty data to find the people most in need of emergency payments, according to Fast Company. The technology can zoom in to affected areas with block-by-block granularity, providing a more accurate and effective way to identify eligible aid recipients. 
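GiveDirectly and Google.org haven’t published the tool’s code, but the core targeting idea—combine a damage estimate derived from satellite imagery with local poverty data, then rank blocks by need—can be sketched in a few lines. The field names and the multiplicative scoring rule below are hypothetical illustrations, not the real system.

```python
# Hypothetical sketch of block-level aid targeting: combine a storm-damage
# estimate (e.g., derived from satellite imagery) with a poverty rate, then
# rank blocks so the most-in-need come first. Illustrative only.

def rank_blocks(blocks):
    """Sort blocks by a simple need score: damage * poverty_rate, descending."""
    return sorted(blocks,
                  key=lambda b: b["damage"] * b["poverty_rate"],
                  reverse=True)

blocks = [
    {"id": "A", "damage": 0.9, "poverty_rate": 0.2},  # heavy damage, lower poverty
    {"id": "B", "damage": 0.6, "poverty_rate": 0.7},  # both high: top priority
    {"id": "C", "damage": 0.1, "poverty_rate": 0.9},  # light damage, high poverty
]
priority = [b["id"] for b in rank_blocks(blocks)]
```

The block-by-block granularity the article describes would come from the resolution of the underlying satellite and census data, not from the ranking step itself.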

Google's latest chip. Google’s latest Tensor chip made its debut in the new Pixel 7 phones, unveiled by the company last week. The new Tensor chip, known as the Tensor G2, boasts 8 Arm CPU cores, a graphics processing engine, and an on-chip A.I. accelerator (dubbed a Tensor Processing Unit in Google parlance), according to Cnet. Thanks to the new A.I. acceleration, Google claims the G2 chip can process text to speech 70% faster than what was possible with its previous phone processor, at half the power consumption. The on-chip A.I. also powers new features like Photo Unblur, which as the name suggests, uses machine learning to remove any blur from your photos.

EYE ON A.I. RESEARCH

A.I. just shattered a math record. DeepMind’s A.I. has made headlines before by beating human players at board games like chess and Go. Its latest feat may seem less exciting to the average person, but it’s sending shockwaves through the computer science community.  

AlphaTensor, a new A.I. created by DeepMind, discovered a faster way to multiply two matrices, the company announced in a paper published in the journal Nature last week. That means an A.I. has figured out an entirely new algorithm that’s more efficient than anything humans had devised.  

According to the paper, AlphaTensor beat the record set by German mathematician Volker Strassen in 1969. It’s not simply an academic milestone—matrix multiplication is a fundamental computer science operation used in thousands of everyday computing tasks, from generating images on your smartphone screen to running weather simulations. 

Here’s how DeepMind CEO Demis Hassabis described it in a tweet: Since 1969 Strassen’s algorithm has famously stood as the fastest way to multiply 2 matrices - but with #AlphaTensor we’ve found a new algorithm that’s faster, with potential to improve efficiency by 10-20% across trillions of calculations per day! 

Read the Nature paper here. 

FORTUNE ON A.I.

Jamie Dimon is sick of bots too — and calls on Elon Musk to ‘clean up Twitter’ — by Chloe Taylor

Horizon Worlds will take years to refine. Can Meta afford to wait that long? — by Jacob Carpenter

BRAINFOOD

Train that tune. The latest attempt to make music with A.I. is here. A new tool called Dance Diffusion, released in September, can produce short audio clips that mimic whatever genre of music, or specific instrument, it’s trained on, be it disco, jazz, piano, or guitar. The tool, from a company called Harmonai, uses the same technology behind A.I. image generator Stable Diffusion (and Stability AI, the company behind Stable Diffusion, is an investor in Harmonai).  

According to an in-depth feature on A.I. music in TechCrunch, Dance Diffusion “destroys and recovers” each piece of music it’s trained on. Kyle Worrall, a UK PhD student studying musical applications of machine learning, tells TechCrunch that “eventually, the trained model can take noise and turn that into music similar to the training data.” 
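The “destroys” half of that process—gradually replacing a training clip with noise until little of the original remains—can be sketched with a few lines of NumPy. Dance Diffusion’s actual training code isn’t shown here; the sine-wave “clip” and the three-step noise schedule below are made up for illustration.

```python
import numpy as np

# Illustrative sketch of the forward ("destroy") step of a diffusion model:
# mix the signal with Gaussian noise, keeping less and less of the signal.
# A trained model learns to run this process in reverse, turning noise back
# into something resembling the training data.

rng = np.random.default_rng(0)
x0 = np.sin(np.linspace(0.0, 8.0 * np.pi, 2000))  # stand-in for an audio clip

def noised(x0, alpha_bar, rng):
    """Forward-diffused sample: sqrt(a)*signal + sqrt(1-a)*noise."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

schedule = [0.99, 0.5, 0.01]  # shrinking alpha_bar => more of the clip destroyed
corrs = [abs(np.corrcoef(x0, noised(x0, a, rng))[0, 1]) for a in schedule]
```

By the last step the noised clip is nearly uncorrelated with the original—the “noise” Worrall describes the trained model learning to turn back into music.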

Achieving that result takes hundreds of hours of training data. And the technology is currently only capable of producing clips that are a few seconds long. While the clips—which you can listen to here—are not about to displace Lizzo or Bad Bunny at the top of the charts anytime soon, the advances in A.I. music are clear—along with a cluster of thorny intellectual property and other ethical issues. 

Harmonai seems aware of the issues. It says the A.I. models it has publicly released were trained on music that is either in the public domain, carries open Creative Commons licenses, or was contributed to the project by artists.  

Our mission to make business better is fueled by readers like you. To enjoy unlimited access to our journalism, subscribe today.