Examples include omnidirectional vision for drones, robots, and autonomous cars, molecular regression problems, and global weather and climate modelling.
A naive application of convolutional networks to a planar projection of the spherical signal is destined to fail, because the space-varying distortions introduced by such a projection will make translational weight sharing ineffective.
While the stylization step transfers the style of the reference photo to the content photo, the smoothing step ensures spatially consistent stylizations.
Each step has a closed-form solution and can be computed efficiently. The results show that the proposed method generates photorealistic stylization outputs that human subjects prefer over those of competing methods, while running much faster.
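To give a flavor of what a closed-form stylization step looks like, here is a minimal sketch of a whitening-coloring transform (WCT) on feature matrices: it matches the channel statistics of the content features to those of the style features in closed form. This is an illustrative stand-in, not the paper's actual implementation, and it omits the smoothing step entirely:

```python
import numpy as np

def whiten_color(content, style, eps=1e-5):
    """Closed-form whitening-coloring transform: make the channel
    covariance of `content` features match that of `style` features.
    content, style: (channels, pixels) feature matrices."""
    mc = content.mean(axis=1, keepdims=True)
    ms = style.mean(axis=1, keepdims=True)
    Xc, Xs = content - mc, style - ms
    n = Xc.shape[0]
    # Whitening: remove the content covariance
    Uc, Sc, _ = np.linalg.svd(Xc @ Xc.T / Xc.shape[1] + eps * np.eye(n))
    white = Uc @ np.diag(Sc ** -0.5) @ Uc.T @ Xc
    # Coloring: impose the style covariance
    Us, Ss, _ = np.linalg.svd(Xs @ Xs.T / Xs.shape[1] + eps * np.eye(n))
    return Us @ np.diag(Ss ** 0.5) @ Us.T @ white + ms
```

Because both steps reduce to matrix decompositions, no iterative optimization is needed, which is what makes the overall pipeline fast.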
However, any planar projection of a spherical signal results in distortions.
To overcome this problem, the group of researchers from the University of Amsterdam introduces the theory of spherical CNNs, networks that can analyze spherical images without being fooled by distortions.
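The distortion is easy to quantify. Under an equirectangular projection, every pixel occupies the same image area, but the spherical area it represents shrinks with the cosine of latitude, so a filter sharing weights across rows covers wildly different patches of the sphere. A back-of-envelope sketch (not from the paper):

```python
import math

def pixel_solid_angle(lat_deg, n_rows=180, n_cols=360):
    """Solid angle (steradians) covered by one equirectangular pixel
    centred at the given latitude: longitude width times the exact
    integral of cos(latitude) over the pixel's latitude band."""
    dlat = math.pi / n_rows
    lat = math.radians(lat_deg)
    band = math.sin(lat + dlat / 2) - math.sin(lat - dlat / 2)
    return (2 * math.pi / n_cols) * band

equator = pixel_solid_angle(0.0)
near_pole = pixel_solid_angle(89.0)
print(f"equator/near-pole area ratio: {equator / near_pole:.1f}x")
# roughly 1/cos(89 deg), i.e. about 57x
```

A pixel near the pole represents roughly 57 times less spherical area than one at the equator, which is exactly the space-varying distortion that defeats translational weight sharing.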
Given the importance and prevalence of computer vision and image generation in applied and enterprise AI, we featured some of the papers below in our previous article summarizing the top overall machine learning papers of 2018. Since you might not have read that piece, we highlight the vision-related research again here.
We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers.
Google Brain researchers seek to answer the question: can adversarial examples that are not model-specific, fooling different computer vision models without access to their parameters and architectures, also fool time-limited humans?
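For readers unfamiliar with how such examples are crafted: gradient-based attacks like the classic fast gradient sign method (FGSM) nudge every input feature a small step in the direction that increases the model's loss. A minimal sketch on a logistic regressor (illustrative only; not the specific attack used in the paper):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast gradient sign method on a logistic regression model:
    step each input feature by +/-eps in the direction that
    increases the cross-entropy loss for the true label y (0 or 1)."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # predicted P(y = 1)
    grad = (p - y) * w             # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad)
```

Because the perturbation is bounded by eps per feature, the change can be nearly imperceptible to a human while still flipping the model's prediction, and perturbations that flip many different models at once are the "strongly transferable" examples the study put in front of time-limited observers.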