Until now, Twitter automatically cropped an image with the help of face detection, or simply by carving a thumbnail from the middle of the image.
By training neural networks on thousands of images, Twitter can now predict which part of an image a user is most likely interested in seeing.
In an example, the firm showed a tweet with an image of an airplane wing. The previous automatic crop shows little of the wing itself, and mostly focuses on the sky, whereas the neural network-enhanced preview shows almost the entire wing, which helps the user better understand both the content and the context.
The new crop system also works well with faces. That is another scenario where the non-intelligent crop could cut faces out entirely - especially in portrait shots - whereas the neural network-driven crop recognises the face and keeps it in the preview.
To identify the regions, Twitter says that the system analyses images to understand where there is a high concentration of so-called "saliency."
"A region having high saliency means that a person is likely to look at it when freely viewing the image," the post reads. "Academics have studied and measured saliency by using eye trackers, which record the pixels people fixated with their eyes."
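Twitter hasn't published the details of how a crop is chosen from a saliency map, but the general idea can be sketched: score every candidate crop window by the saliency it contains and keep the best one. The sketch below is a hypothetical illustration (the function name, window-scanning strategy, and toy saliency map are all assumptions, not Twitter's implementation); it uses a summed-area table so each window is scored in constant time.

```python
import numpy as np

def best_crop(saliency, crop_h, crop_w):
    """Return (top, left) of the crop window with the highest total saliency.

    saliency: 2D array of per-pixel saliency scores.
    A summed-area (integral) table lets each candidate window
    be scored in O(1) instead of re-summing its pixels.
    """
    h, w = saliency.shape
    sat = np.zeros((h + 1, w + 1))
    sat[1:, 1:] = np.cumsum(np.cumsum(saliency, axis=0), axis=1)

    best, best_pos = -np.inf, (0, 0)
    for top in range(h - crop_h + 1):
        for left in range(w - crop_w + 1):
            # Window sum via four corners of the integral table.
            total = (sat[top + crop_h, left + crop_w]
                     - sat[top, left + crop_w]
                     - sat[top + crop_h, left]
                     + sat[top, left])
            if total > best:
                best, best_pos = total, (top, left)
    return best_pos

# Toy example: a salient 2x2 patch in the bottom-right of a 6x6 map.
smap = np.zeros((6, 6))
smap[4:6, 4:6] = 1.0
print(best_crop(smap, 3, 3))  # -> (3, 3), the window covering the patch
```

In practice a production system would also respect the target aspect ratio and likely evaluate windows on a coarse grid rather than exhaustively, but the principle - crop toward the saliency peak - is the same.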
There are no hard dates, but the new image preview is currently rolling out to Twitter's official apps on Android and iOS, as well as to twitter.com on the web.