Earlier this month, Apple surprised the artificial intelligence research community when it announced it would begin publishing AI papers.
Apple surprised the community again when it published its first paper only a few weeks after its announcement.
Apple submitted its first AI paper on November 15, and it was published on December 22. The paper describes techniques for improving an algorithm's ability to “recognize images using computer generated images rather than real-world images,” according to Forbes.
The paper was titled “Learning from Simulated and Unsupervised Images Through Adversarial Training.”
According to the paper, in machine learning, using synthetic images, such as those from a video game, to train an AI can be more efficient than using real-world images.
This is because in synthetic (computer-generated) images, the “objects” in the image are already labeled: this is a pencil, a pillow, a bag, a dog, and so on. In real-world images this is not the case, and someone would have to label everything the computer is seeing.
However, this approach has a drawback: the characteristics of computer-generated images do not always carry over to real-world images.
Training on them could teach an AI's neural networks to detect details that appear only in synthetic images, not in real ones.
To address this problem, Apple suggests using what it calls Simulated+Unsupervised learning, in which the realism of the synthetic images is improved before training.
“The task is to learn a model to improve the realism of a simulator’s output using unlabeled real data while preserving the annotation information from the simulator,” as explained by Economic Times.
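The idea quoted above can be sketched as a combined training objective for a “refiner” network: an adversarial term pushes refined images toward realism, while a self-regularization term keeps each refined image close to its synthetic original so the simulator's labels stay valid. The toy NumPy illustration below is my own reading of that description; the function name, the cross-entropy form of the adversarial term, and the `lam` weight are illustrative assumptions, not code from the paper.

```python
import numpy as np

def refiner_loss(refined, synthetic, disc_prob_real, lam=0.5):
    """refined, synthetic: image arrays; disc_prob_real: the discriminator's
    estimated probability (0..1) that each refined image is real."""
    # Adversarial term: the refiner wants the discriminator to say "real"
    adversarial = -np.mean(np.log(disc_prob_real + 1e-8))
    # Self-regularization term: L1 distance from the synthetic original,
    # which preserves the annotation (label) information
    self_reg = np.mean(np.abs(refined - synthetic))
    return adversarial + lam * self_reg

synthetic = np.zeros((4, 8, 8))   # stand-in "synthetic" images
refined = synthetic + 0.1         # refiner nudged pixel values slightly
fooled = np.full(4, 0.9)          # discriminator mostly fooled
not_fooled = np.full(4, 0.1)      # discriminator not fooled
# Fooling the discriminator yields a lower refiner loss:
print(refiner_loss(refined, synthetic, fooled) <
      refiner_loss(refined, synthetic, not_fooled))   # True
```

The point of the second term is the one the quote emphasizes: realism is improved only as far as it can be without drifting from the labeled synthetic input.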
“Apple’s first public research paper was penned by vision expert Ashish Shrivastava and a team of engineers including Tomas Pfister, Oncel Tuzel, Wenda Wang, Russ Webb and Apple Director of Artificial Intelligence Research Josh Susskind,” AppleInsider reported on Tuesday.