Google and Facebook face off with vastly different neural network artwork AIs
Web giants Facebook and Google have been experimenting with creating computers capable of recognizing and categorising images using a neural network.
The algorithms, while differing on many fronts, both task AIs with identifying real-world objects and repurposing them in new images.
Facebook’s scheme is tasked with creating “high quality samples of natural images” the size of a thumbnail. Research from the firm states that humans mistook the AI-created images for real photographs 40 per cent of the time.
Below is an example of its image generation capabilities.
Google, on the other hand, is not chasing realism but easier identification of everyday objects, no doubt to aid its image search capabilities. The AI-generated images, however, while accurate to a degree, are vaguely trippy.
On how its system works, Google Research said: “We train networks by simply showing them many examples of what we want them to learn, hoping they extract the essence of the matter at hand (e.g., a fork needs a handle and 2-4 tines), and learn to ignore what doesn’t matter (a fork can be any shape, size, color or orientation). But how do you check that the network has correctly learned the right features? It can help to visualize the network’s representation of a fork.”
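The visualization Google Research describes — fixing the trained weights and adjusting the *input* until the network’s response is maximised — can be sketched with a simple gradient-ascent loop. The snippet below is a toy illustration only: the linear “class scorer” is a hypothetical stand-in for a real trained network, and all names and sizes are illustrative, but the optimization loop is the same basic idea behind Google’s trippy images.

```python
import numpy as np

# Hypothetical stand-in for a trained classifier: a single linear "class
# score" w . x over a flattened 8x8 image. A real system uses a deep
# network; the visualization trick is identical either way.
rng = np.random.default_rng(0)
w = rng.standard_normal(64)  # fixed "learned" weights (not updated below)

def class_score(x):
    """How strongly the model thinks x is the target class (e.g. 'fork')."""
    return float(w @ x)

def visualize_class(steps=100, lr=0.1):
    """Gradient ascent on the INPUT image, not the weights: nudge the
    pixels toward whatever makes the class score as large as possible."""
    x = rng.standard_normal(64) * 0.01     # start from faint noise
    for _ in range(steps):
        grad = w                            # d(score)/dx for a linear scorer
        x += lr * grad                      # step the image uphill
        x = np.clip(x, -1.0, 1.0)           # keep pixel values in range
    return x

image = visualize_class()
# The optimized "image" now scores far higher than the noise it began as —
# it is the model's exaggerated picture of the class, which is why real
# network visualizations look dreamlike rather than photographic.
```

With a deep network the gradient comes from backpropagation rather than a closed form, but the loop — score, differentiate with respect to the pixels, step, clip — is unchanged.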
Below are some of Google’s AI-generated images.