This article is quite literally a graphic example of machine learning in practice: in particular, its application to finding insight in unstructured data, and how ML today still requires human intervention.
Here, Google have set out to distort the output by repeatedly emphasising and re-processing the images to enhance the patterns the network recognises, so the results are purposefully extreme and engineered to be wrong. It's fascinating, and the results closely mimic our own imagination (for example, when we're afraid of the dark – every sound can be distorted in our minds into something it isn't, positively reinforcing our fear), but it also highlights the limitations and dangers of ML today – finding patterns and answers where none actually exist, and missing information that would be obvious to a human.
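The "repeatedly emphasising and re-processing" loop described above is, at its core, gradient ascent on an intermediate layer's activations. Below is a minimal sketch of that idea in Python/PyTorch – not Google's original code – assuming a pretrained torchvision GoogLeNet; the layer choice (`inception4c`), step size, and iteration count are illustrative placeholders.

```python
import torch
import torchvision.models as models

# Load a pretrained network; we only optimise the image, not the weights.
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture the activations of an intermediate layer via a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(feat=out)
)

# Start from random noise (a photo tensor of shape (1, 3, 224, 224) works too).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

for _ in range(20):
    model(image)
    # Gradient *ascent*: nudge the image so the layer's activations grow,
    # exaggerating whatever patterns the network "sees" in it. Repeating
    # this is the feedback loop that produces the hallucinated imagery.
    loss = activations["feat"].norm()
    loss.backward()
    with torch.no_grad():
        image += 0.05 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()
        image.clamp_(0, 1)  # keep the result a valid image
```

Each pass amplifies faint pattern responses until they dominate the image, which is exactly why the network ends up "finding" dogs and eyes where none exist.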