Scientists Teach Machines How to Learn Like Humans
December 15, 2015 | New York University
To give machines this human-like ability, the researchers developed a “Bayesian Program Learning” (BPL) framework in which concepts are represented as simple computer programs. The letter ‘A,’ for instance, is represented by code, resembling the work of a computer programmer, that generates examples of that letter when run. Yet no human programmer is required during learning: the algorithm programs itself, constructing code to produce the letters it sees. And unlike standard computer programs, which produce the same output every time they run, these probabilistic programs produce different outputs on each execution. This lets them capture the way instances of a concept vary, such as the differences in how two people draw the letter ‘A.’
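The idea of a “probabilistic program” that yields a different drawing on every run can be sketched in a few lines. This is a hypothetical toy illustration, not the BPL model itself: the stroke coordinates, the `letter_A` function, and the Gaussian-jitter noise model are all assumptions made for the example.

```python
import random

def letter_A():
    """Toy probabilistic program for the letter 'A': each stroke is an
    idealized line segment plus random jitter, so every call returns a
    slightly different drawing (a list of strokes, each a list of points)."""
    def jitter(p, sigma=0.03):
        return (p[0] + random.gauss(0, sigma), p[1] + random.gauss(0, sigma))

    # Idealized strokes: two diagonals and a crossbar.
    ideal = [
        [(0.0, 0.0), (0.5, 1.0)],   # left diagonal
        [(0.5, 1.0), (1.0, 0.0)],   # right diagonal
        [(0.25, 0.5), (0.75, 0.5)], # crossbar
    ]
    return [[jitter(p) for p in stroke] for stroke in ideal]

# Two executions of the same program yield two distinct 'A's.
a1, a2 = letter_A(), letter_A()
```

Because the program is stochastic, `a1` and `a2` share the same underlying stroke structure but differ in their exact coordinates, mirroring the variation between two people’s handwriting.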
While standard pattern recognition algorithms represent concepts as configurations of pixels or collections of features, the BPL approach learns “generative models” of processes in the world, making learning a matter of “model building” or “explaining” the data provided to the algorithm. In the case of writing and recognizing letters, BPL is designed to capture both the causal and compositional properties of real-world processes, allowing the algorithm to use data more efficiently. The model also “learns to learn” by using knowledge from previous concepts to speed learning on new ones, for example, using knowledge of the Latin alphabet to learn letters in the Greek alphabet. The authors applied their model to over 1,600 types of handwritten characters from 50 of the world’s writing systems, including Sanskrit, Tibetan, Gujarati, and Glagolitic, as well as invented characters such as those from the television series Futurama.
In addition to testing the algorithm’s ability to recognize new instances of a concept, the authors asked both humans and computers to reproduce a series of handwritten characters after being shown a single example of each character, or in some cases, to create new characters in the style of those they had been shown. The scientists then compared the outputs from both humans and machines through “visual Turing tests.” Here, human judges were given paired examples of human and machine output, along with the original prompt, and asked to identify which of the symbols were produced by the computer.
While judges’ correct responses varied across characters, for each visual Turing test, fewer than 25 percent of judges performed significantly better than chance in assessing whether a machine or a human produced a given set of symbols.
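The “significantly better than chance” criterion can be illustrated with a small calculation. The article does not specify the test the authors used, so this sketch assumes a one-sided exact binomial test against the 50 percent chance level of the two-alternative judging task; the function name and significance threshold are assumptions.

```python
from math import comb

def better_than_chance(correct, trials, alpha=0.05):
    """One-sided exact binomial test: does a judge's accuracy exceed
    the 50% chance level of a two-alternative forced choice?
    Returns True if the p-value falls below alpha."""
    # P(X >= correct) under the null hypothesis p = 0.5.
    p_value = sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials
    return p_value < alpha

# A judge scoring 15/20 clears the threshold; 11/20 does not.
print(better_than_chance(15, 20), better_than_chance(11, 20))
```

Under this criterion, a judge must be well above 50 percent accuracy before their performance counts as distinguishing machine output from human output.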
“Before they get to kindergarten, children learn to recognize new concepts from just a single example, and can even imagine new examples they haven’t seen,” notes study co-author Joshua Tenenbaum of MIT. “I’ve wanted to build models of these remarkable abilities since my own doctoral work in the late nineties. We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts—even simple visual concepts such as handwritten characters—in ways that are hard to tell apart from humans.”
The work was supported by grants from the National Science Foundation to MIT’s Center for Brains, Minds and Machines (CCF-1231216), the Army Research Office (W911NF-08-1-0242, W911NF-13-1-2012), the Office of Naval Research (N000141310333), and the Moore-Sloan Data Science Environment at New York University.