Shropshire Star

This AI system can transform 2D football matches into 3D holograms

The technology allows matches to be played back on your tabletop.


Watching football in 3D from the comfort of your own home may still be a distant dream, but a team of US researchers have found a way to create hologram-style reconstructions of match footage.

Scientists from the University of Washington have designed an artificial intelligence system that can convert 2D football videos into 3D reconstructions – which can then be viewed on a flat surface, such as a large kitchen table.

For their study, the researchers used around 12,000 images from the EA video game FIFA 18.

They also gathered the corresponding 3D depth data for those images, training their algorithm to learn how the 2D frames map to 3D.
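
The researchers' full pipeline is considerably more elaborate, but the basic training recipe described here – game frames paired with their depth data – can be sketched roughly as below. This is a toy PyTorch example, not the researchers' code; the network, loss and tensor sizes are invented for illustration.

```python
# A minimal sketch of supervised depth prediction: a small network learns to
# map a single 2D game frame to a per-pixel depth map, using depth captured
# from the game as the label. All sizes and hyperparameters are placeholders.
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Tiny encoder-decoder mapping an RGB frame to a one-channel depth map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, optimizer, frames, depth_gt):
    """One supervised step: frames are RGB game screenshots,
    depth_gt are the matching depth maps captured from the game."""
    optimizer.zero_grad()
    pred = model(frames)
    loss = nn.functional.l1_loss(pred, depth_gt)  # simple per-pixel depth loss
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = DepthNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Stand-in batch: 4 RGB frames and their matching single-channel depth maps.
    frames = torch.rand(4, 3, 256, 256)
    depth_gt = torch.rand(4, 1, 256, 256)
    print(train_step(model, optimizer, frames, depth_gt))
```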

Although real game footage wasn’t used in building the AI system, the hope is that the algorithm, when it is fully developed, will be able to seamlessly generate 3D renderings from 2D YouTube clips.

Explaining why they used Fifa game data, the researchers wrote: “Fifa, similar to most games, uses deferred shading during gameplay. Having access to the GPU [Graphics Processing Unit] calls enables capture of the depth and colour buffers per frame.

“Once depth and colour are captured for a given frame, we process it to extract the players.”
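
As a rough illustration of that extraction step – and not the researchers' actual code – the sketch below separates player pixels from the roughly flat pitch by comparing each pixel's captured depth against an estimated ground depth. The per-row median trick and the threshold value are assumptions made purely for this toy example.

```python
# Hedged sketch: given a colour frame and its depth buffer captured from the
# game engine, flag pixels that sit noticeably in front of the pitch surface
# and group them into per-player blobs.
import numpy as np
from scipy import ndimage

def extract_players(color: np.ndarray, depth: np.ndarray, threshold: float = 0.05):
    """Return a boolean player mask, per-player bounding boxes and colour crops.

    color: HxWx3 RGB frame; depth: HxW depth buffer from the same frame.
    """
    # Approximate the pitch as locally flat: estimate the ground depth per image
    # row with the median, then flag pixels clearly in front of it (players).
    ground_depth = np.median(depth, axis=1, keepdims=True)
    player_mask = (ground_depth - depth) > threshold

    # Split the mask into connected blobs, one per (possibly occluded) player.
    labels, num_players = ndimage.label(player_mask)
    boxes = ndimage.find_objects(labels)          # (row_slice, col_slice) per blob
    crops = [color[box] for box in boxes]         # colour patches for each player
    return player_mask, boxes, crops

if __name__ == "__main__":
    # Toy frame: flat pitch at depth 1.0 with one "player" blob closer to camera.
    depth = np.ones((120, 160))
    depth[40:80, 70:80] = 0.8
    color = np.zeros((120, 160, 3), dtype=np.uint8)
    mask, boxes, crops = extract_players(color, depth)
    print(mask.sum(), boxes)
```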

Football hologram (Konstantinos Rematas/YouTube screenshot)

Microsoft’s HoloLens – the company’s augmented reality headset – was used to test the system.

When shown unseen YouTube videos of real-life matches, the system was able to accurately reconstruct each player in 3D and superimpose them on a virtual football pitch, although there were glitches in recreating the ball and certain aspects of player interaction.
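
The placement step described here can be pictured as a simple pinhole-camera back-projection: the masked player pixels are lifted into a 3D point cloud using the estimated depth, ready to drop onto a virtual pitch. This sketch is an illustration only – the camera parameters are invented placeholders, not values from the actual system.

```python
# Toy back-projection: turn a player's masked depth pixels into 3D points in
# camera coordinates, which could then be placed on a virtual pitch model.
import numpy as np

def backproject_player(depth: np.ndarray, mask: np.ndarray,
                       fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Return an Nx3 point cloud for the masked pixels (standard pinhole model)."""
    v, u = np.nonzero(mask)          # pixel rows/cols belonging to the player
    z = depth[v, u]                  # estimated depth at those pixels
    x = (u - cx) * z / fx            # back-project through the camera model
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

if __name__ == "__main__":
    depth = np.full((120, 160), 12.0)        # toy depth values, in metres
    mask = np.zeros((120, 160), dtype=bool)
    mask[40:80, 70:80] = True                # one player's silhouette
    cloud = backproject_player(depth, mask, fx=200.0, fy=200.0, cx=80.0, cy=60.0)
    print(cloud.shape)                       # (400, 3): ready for the virtual pitch
```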

But the team is hopeful it will be able to smooth out the kinks, saying additional improvements in player detection, tracking and depth estimation, along with reconstruction of the ball on the field, could “enable a more satisfactory viewing of an entire game”.

The research is published as an open-access paper and is being presented at the annual Computer Vision and Pattern Recognition (CVPR) conference in Utah, which takes place from June 18–22.
