John Patrick Pullen
May 20, 2019
“In the same way a kid can learn to grasp objects throughout their life, we want to have robots who can learn to grasp and manipulate objects,” says Roberto Calandra, who is developing algorithms at Facebook intended to rival human learning.
At the Facebook AI Research lab, the online publisher is teaching robots how to learn. It promises to share the results with its friends.
Daisy, a hexapod robot born in Facebook’s new artificial intelligence lab, scuttles across the verdant roof of the company’s Menlo Park, Calif., headquarters with a message to deliver: The future belongs to those who teach—and learn.
That concept sits at the center of Facebook’s AI Research lab, a previously unrevealed open-source project that launched in late 2018, even as the company endured repeated black eyes over privacy concerns related to its advertising products. The lab’s purpose is to use robotics as a vehicle for developing better A.I. “Having embodied intelligence is a really important problem because it creates constraints to the kinds of algorithms that you can use,” says Roberto Calandra, one of Facebook’s robotics research scientists. “You need to have algorithms that can be robust, efficient, and applicable in the real world.”
Facebook’s AI Research lab, housed in a former conference room that overlooks an entrance to the company’s headquarters, whirs with robot arms, legs, and 3-D printers (to prototype parts and playthings).
That’s why Daisy’s stroll along a dusty path is so significant. Introducing the A.I. to “noise”—like bumps in the road—not only helps the robot walk better but, more important, also helps Daisy learn how to learn.
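The idea of learning amid noise can be illustrated with a toy sketch. This is a hypothetical illustration, not Facebook's actual code; the names `simulate_step` and `train_gait` are invented, and the technique shown, randomizing conditions during training so a controller stays robust, is commonly called domain randomization.

```python
import random

def simulate_step(command, bump=0.0):
    """Toy terrain model: the leg moves by `command`, offset by a bump."""
    return command + bump

def train_gait(noisy=True, steps=1000, lr=0.1, seed=0):
    """Learn a command that produces a target stride of 1.0 despite bumps."""
    rng = random.Random(seed)
    command = 0.0
    for _ in range(steps):
        # Random "bumps in the road" are injected only in noisy training.
        bump = rng.uniform(-0.3, 0.3) if noisy else 0.0
        stride = simulate_step(command, bump)
        command -= lr * (stride - 1.0)  # nudge command toward the target
    return command
```

Training with the bumps switched on, the learned command still settles near the target because the updates average out the noise, which is the point of exposing Daisy to rough terrain rather than a smooth lab floor.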
Touch, posits Calandra, is key to learning. But the lab’s goal isn’t merely to create more tactile robots. This is Facebook, after all. So, beginning at the recent International Conference on Robotics and Automation, what Facebook learns, it shares with others.
Jeans have a natural way of wearing in, but using lasers, Levi’s can now be finished with a worn-in look, right off the rack.
Bart Sights, whose fingernails are stained blue, hand-dyes a pair of jeans. “Indigo is our lifeblood,” he says. “Without the uniqueness of the indigo dye, jeans wouldn’t be what they are, and Levi’s wouldn’t be what they are.”
Levi’s Eureka Innovation Lab in San Francisco uses lasers, pigments, and ingenuity to keep the jeansmaker technologically fashion-forward.
Housed in a small prototyping factory in San Francisco’s Telegraph Hill neighborhood, Levi’s Eureka Innovation Lab churns out not a stitch of denim. Instead, it solves big problems for the 166-year-old apparel maker, which recently relisted its shares on the public markets and returns to the Fortune 500 for the first time in seven years.
For instance, in one corner of the 18,000-square-foot space, a team works on the company’s Screened Chemistry Program, which seeks to replace chemicals that are hazardous to human health and the environment with safer alternatives. In another corner, a crew experiments with lasers to make Levi’s supply chain more agile during the denim’s “finishing” process.
Levi’s Eureka Innovation Lab was one of few neighborhood buildings to survive the 1906 earthquake and fire, primarily because the structure, a grain mill at the time, had an underground waterway to the bay, which helped to fend off the flames.
“Forty years ago, there were only three finishes: dark stonewash, medium stonewash, and light stonewash,” says Bart Sights, Levi’s vice president of technical innovation. “Fast-forward to today, we do about a thousand different finishes every season. Just our company.” Using the new laser-finishing treatment, the company has essentially gone back to the future, producing only the three base styles, then letting far-flung Levi facilities finish the jeans locally.
Eureka’s 30-person crew includes tailors, software developers, and other experts. All have one thing in common: Everyone knows how to produce the company’s legendary 501 jeans.
The 20-acre test track in New Stanton, Pa., can be reconfigured with movable shipping containers to test real-world road scenarios in a controlled environment.
Argo AI’s sensor pod sits on the roof of a prototype Ford Fusion hybrid, with lidar positioned on top and cameras facing in all directions, making up the sensor suite that guides the autonomous vehicle.
At the Pittsburgh-area test track of Argo AI, majority shareholder Ford is running its first self-driving cars through their paces.
A baby stroller rolls into traffic. A blind corner hides a rush of cars around the bend. The blazing, early evening sun outshines a frantically blinking stoplight. At its test-track facility in New Stanton, Pa., Argo AI aims to re-create real-world hazards to get Ford’s autonomous vehicles ready to hit the road—and dodge its dangers—by 2021. That’s when the automaker wants to launch its ambitious autonomous ride-hailing and delivery services in select U.S. cities.
Argo is developing a self-driving technology platform that’s being engineered into cars produced by Ford, which invested $1 billion in 2017 for a majority stake in the private company.
Before the cars begin self-driving, humans drive them manually through test areas, collecting data with Argo AI’s sensors that is used to build high-resolution 3-D maps.
Argo’s 20-acre closed course, located in a semi-decommissioned industrial plant where Sony once built big-screen televisions, is the ideal controlled environment for testing robotic vehicles. And at the company’s depot in nearby Pittsburgh, software gets tweaked, and cars can even be localized to match driving behaviors inherent to particular cities.
Ford’s Argo-powered autonomous cars are currently being tested in five U.S. cities, including on the paved-over colonial-era horse paths around Pittsburgh’s Carnegie Mellon University. Researchers there are helping the company refine its computer vision and machine-learning systems.
A version of this article appears in the June 2019 issue of Fortune with the headline “Skunkworks.”