Remember going crazy over Pokémon Go in 2016? Turns out, you were training AI all along!
Nov 22, 2024, 15:52 IST
When Pokémon Go hit the app stores in 2016, millions of players took to the streets to capture virtual Pokémon in augmented reality (AR). But behind the scenes, players unknowingly became contributors to something much bigger: a powerful artificial intelligence (AI) model designed to map the physical world.
Niantic's researchers say the technology could make LGMs invaluable in fields ranging from robotics and autonomous vehicles to AR glasses and content creation, enabling machines not only to perceive physical spaces but also to interact with them in groundbreaking ways.
Mapping the world, one PokéStop at a time
Niantic, the developer of Pokémon Go, recently revealed that it has been using data gathered from its AR games to create a Large Geospatial Model (LGM). This AI model will allow robots and other devices to navigate and interact with the physical world, even with limited data.

The announcement, made in a blog post on November 12, disclosed that the company has used over 10 million scanned locations worldwide to train its model, with players contributing 1 million new scans each week.
“In our vision for a Large Geospatial Model (LGM), each of these local networks would contribute to a global large model, implementing a shared understanding of geographic locations, and comprehending places yet to be fully scanned,” wrote Eric Brachmann, Niantic staff scientist, and Victor Adrian Prisacariu, Niantic chief scientist.
This massive AI effort is powered by Niantic’s Visual Positioning System, which can pinpoint the exact position and orientation of an object using just a smartphone camera image — accurate to within 0.4 inches.
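To give a sense of the kind of computation involved (a simplified sketch, not Niantic's actual system), localising a camera against a 3D map can be reduced to rigid alignment: finding the rotation and translation that best map points observed by the camera onto the same points in a pre-built scan. The Kabsch algorithm, shown here with NumPy on entirely made-up point data, solves that alignment in closed form:

```python
import numpy as np

# Toy sketch (not Niantic's pipeline): find the rotation R and translation t
# that map landmarks in the camera's frame onto the same landmarks in a map.

def rigid_align(camera_pts, map_pts):
    """Return (R, t) such that R @ camera_pts + t ≈ map_pts (Kabsch)."""
    cam_centroid = camera_pts.mean(axis=1, keepdims=True)
    map_centroid = map_pts.mean(axis=1, keepdims=True)
    # Cross-covariance of the centred point clouds
    H = (camera_pts - cam_centroid) @ (map_pts - map_centroid).T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = map_centroid - R @ cam_centroid
    return R, t

# Demo: hypothetical landmarks in a map, and the same landmarks as a camera
# would see them after an unknown rotation about the z-axis plus a shift.
rng = np.random.default_rng(0)
map_pts = rng.standard_normal((3, 8))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([[2.0], [-1.0], [0.5]])
camera_pts = R_true.T @ (map_pts - t_true)  # map frame -> camera frame

R_est, t_est = rigid_align(camera_pts, map_pts)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

Real visual positioning starts from 2D image pixels rather than known 3D correspondences, so production systems solve the harder perspective-n-point problem; the sketch only conveys the geometric core.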
From gaming to robotics
The LGM project is no small feat. Just as Large Language Models (LLMs) like ChatGPT use vast amounts of text data to predict words, LGMs use geospatial data to infer what buildings and environments look like, even imagining parts of a scene that haven't been fully scanned.
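As a loose analogy (a toy sketch, nothing resembling Niantic's model), the "fill in the unscanned part" idea can be shown on a tiny made-up height grid, where a never-scanned cell is estimated from its scanned neighbours, much as a language model predicts a missing word from its context:

```python
# Hypothetical "height map" of a city block; one cell was never scanned.
heights = [
    [3.0, 3.0, 4.0],
    [3.0, None, 4.0],   # centre cell is missing
    [2.0, 3.0, 4.0],
]

def infer_missing(grid, row, col):
    """Estimate an unscanned cell from its 4-connected scanned neighbours."""
    neighbours = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] is not None:
            neighbours.append(grid[r][c])
    return sum(neighbours) / len(neighbours)

estimate = infer_missing(heights, 1, 1)
print(estimate)  # mean of 3.0, 3.0, 3.0, 4.0 -> 3.25
```

An actual LGM would learn such completions from millions of scans rather than a hand-coded average; the point is only the parallel with predicting missing content from surrounding context.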
A cause for concern?
While many Pokémon Go players have shrugged off the news, others are more skeptical. Critics fear that such technology could have unintended or harmful applications, including use in surveillance or even military operations.

Elise Thomas, a senior intelligence analyst at the Institute for Strategic Dialogue, voiced her concerns on X: “It’s so incredibly 2020s coded that Pokémon Go is being used to build an AI system which will almost inevitably end up being used by automated weapons systems to kill people.”
Niantic’s LGM reflects a growing trend: the blending of entertainment and AI research. While the technology promises to unlock new possibilities, it also raises ethical questions about privacy, consent, and the future uses of such data.
For Niantic, this data collection was possible because users willingly scanned their surroundings through the game. These scans were part of AR features that players engaged with, from battling at gyms to exploring PokéStops. But few players likely realised they were helping train a system with potential implications far beyond gaming.
The question remains: should users be more aware of how their data is used? And where should companies like Niantic draw the line between innovation and transparency?