Here’s a quick one for all of you in SEOville. According to a months-old post from CognitiveSEO, Google has started using its “automatic object recognition” algorithm to analyze images—perhaps all in an effort to ascertain each image’s category and how it relates to the content it’s complementing. But will this new process affect overall site rankings?
Not so fast. Let’s first see how this all works.
Image Object Detection Methods
In Google’s infancy, keyword-rich ALT tags were like SEO arrows pointing to what a page was about—and they could certainly affect your rank. While we’ve evolved past that point, ALT text has remained a tried-and-true method of organizing and optimizing a page. With Google’s new method of object detection, ALT text will likely play a smaller part in a much larger production.
Now, when you include an image on your page, Google will potentially be able to piece together all of the pic’s elements to categorize it; should the image match your content’s main topic, some speculate that your page rank could improve. A page containing many similar images that fall into that Google-identified category may gain even more ground, and identifiable text within the image could prove to be even more helpful.
The Main Challenge
While it’s certainly nifty, object identification isn’t refined enough to be perfect. According to CognitiveSEO, AI supercomputers are first trained on all types of subject matter. They’re shown what certain things look like and given lists of slang terms and synonyms for those things; the cycle then repeats, improving both their speed and their image-memory cache. When that AI comes across an image, it extracts all known features and objects and then classifies them based on its growing database.
The major concern is how Google’s AI can work through layers of complexity. After all, it must accurately decode an image and all of its elements without diluting its focus—something that humans can do quite easily.
Focus: That’s another interesting element to consider. How does Google decipher an image’s focal point? Is it based on the contextual page content it’s attached to or the object ratios in the image itself? Or does ratio matter at all if it’s labeled under all identified categories? Take this image as an example:
Were we to use this image for content about “playful kittens,” it makes sense. Does it also make sense to place this image on a page about “bookshelves”? Or how about “living room media center”? We’ve even got a toy car in this picture, so would it be an appropriate image for content about cars?
The Savior: Google Cloud Vision API
Google+ has been categorizing images for a while now, and Cloud Vision API has been the reason for its success. The search engine very likely uses this feature as a ranking factor for G+ images, and possibly for its media SERPs as well.
The API’s users can add image metadata, moderate images for improper categorization, and even match them to similar images to improve semantic connectivity. When you post those pictures to your page, you’re telling Google that the content is positioned to be in one or more of that image’s categories.
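To make that idea concrete, here’s a hypothetical sketch—not Google’s actual ranking logic, and not a real Cloud Vision API call—of how labels returned by an object-recognition service might be compared against a page’s topic keywords. The label names and confidence scores below are invented for illustration:

```python
# Hypothetical sketch: compare image labels (like those an
# object-recognition API might return) against a page's topic
# keywords. Labels and scores are made up for illustration;
# this is NOT Google's actual ranking logic.

def topic_match_score(labels, topic_keywords):
    """Sum the confidence of every label that overlaps the page topic."""
    topic = {kw.lower() for kw in topic_keywords}
    return sum(score for label, score in labels if label.lower() in topic)

# Invented labels for the kitten photo discussed above.
labels = [("kitten", 0.96), ("bookshelf", 0.81), ("toy car", 0.55)]

print(topic_match_score(labels, ["kitten", "cat"]))    # strong match for "playful kittens" content
print(topic_match_score(labels, ["sedan", "engine"]))  # no overlap with car-specific terms
```

In this toy model, the kitten photo supports a “playful kittens” page far better than a car page—even though a toy car appears in the image—because the highest-confidence label lines up with the page’s topic.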
While it may be simple for us humans to tie an image to the content it’s supposed to accompany, the same cannot be said of our internet robots. Until now? Leave us a comment and describe your experience with Google Vision or image rankings in general. We’re all in this together.