As 9to5mac.com reports, Apple's researchers have created an AI model that reconstructs a 3D object from a single image, while "keeping reflections, highlights, and other effects consistent across different viewing angles".
In Apple's new study, titled LiTo: Surface Light Field Tokenization, the researchers "propose a 3D latent representation that jointly models object geometry and view-dependent appearance". In other words, Apple has created a way to reconstruct both a three-dimensional object and the way light interacting with it should appear from different angles.
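To get an intuition for why appearance is "view-dependent" at all, consider classic Phong-style shading, where a specular highlight shifts and fades as the camera moves while the diffuse color stays put. This is only a textbook illustration of the phenomenon the paper models, not Apple's method or anything from the study; all function names and parameters here are made up for the sketch:

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(l, n):
    # Mirror the light direction l about the surface normal n.
    d = dot(l, n)
    return tuple(2 * d * nc - lc for lc, nc in zip(l, n))

def appearance(normal, light_dir, view_dir, kd=0.6, ks=0.4, shininess=32):
    """Toy Phong-style shading for one surface point.
    The diffuse term ignores the camera; the specular term
    depends on the viewing direction, so the same point looks
    different from different angles."""
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    diffuse = kd * max(dot(n, l), 0.0)
    r = normalize(reflect(l, n))
    specular = ks * max(dot(r, v), 0.0) ** shininess
    return diffuse + specular

# Same surface point and light, two different camera angles:
head_on = appearance((0, 0, 1), (0, 0, 1), (0, 0, 1))   # looking into the highlight
oblique = appearance((0, 0, 1), (0, 0, 1), (1, 0, 1))   # looking from the side
```

Viewed head-on, the specular highlight contributes fully (`head_on` is 1.0); from the oblique angle it almost vanishes and only the diffuse term (~0.6) remains. A representation that wants consistent reflections across viewpoints has to capture exactly this kind of angle-dependent variation.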
The researchers also trained the model to do all of this from a single image, rather than relying on the more common approach of capturing images from multiple angles to enable 3D reconstruction. As expected, this required a substantial amount of training.
The full process is highly technical and quite demanding, so anyone interested in the details should read more about the topic right here.