
Neural Architecture Search for Deep Image Prior

Kary Ho[1], Andrew Gilbert[1], Hailin Jin[2], John Collomosse[1,2]
[1] Centre for Vision Speech and Signal Processing, University of Surrey
[2] Creative Intelligence Lab, Adobe Research
arXiv:2001.04776

Abstract

We present a neural architecture search (NAS) technique to enhance the performance of image de-noising, in-painting, and super-resolution tasks under the recently proposed Deep Image Prior (DIP). We show that evolutionary search can automatically optimize the encoder-decoder (E-D) structure and meta-parameters of the DIP network, which serves as a content-specific prior to regularize these single image restoration tasks. Our binary representation encodes the design space for an asymmetric E-D network that typically converges to yield a content-specific DIP within 10-20 generations using a population size of 500. The optimized architectures consistently improve upon the visual quality of classical DIP for a diverse range of photographic and artistic content.
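As a rough illustration of the search described above, the sketch below runs a genetic algorithm over a fixed-length binary genome in which groups of bits select per-stage meta-parameters of an encoder-decoder network (kernel size, channel width, skip connections), so the encoder and decoder sides can differ. The genome layout, the stage parameters, and the stand-in fitness function are illustrative assumptions rather than the paper's exact encoding; in NAS-DIP, fitness would be measured by training the decoded DIP network on the single degraded image and scoring the restored output (e.g. by PSNR).

import random

# Illustrative assumptions (not the paper's exact encoding):
# each stage is described by a few bits choosing its meta-parameters.
BITS_PER_STAGE = 6
NUM_STAGES = 6                       # encoder + decoder stages
GENOME_LEN = BITS_PER_STAGE * NUM_STAGES

POP_SIZE = 500                       # population size reported in the abstract
GENERATIONS = 20                     # abstract reports convergence in 10-20 generations
MUTATION_RATE = 1.0 / GENOME_LEN


def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]


def decode(genome):
    """Decode the binary string into a per-stage architecture description."""
    stages = []
    for s in range(NUM_STAGES):
        bits = genome[s * BITS_PER_STAGE:(s + 1) * BITS_PER_STAGE]
        stages.append({
            "kernel": 3 if bits[0] == 0 else 5,
            "channels": 16 * (1 + 2 * bits[1] + bits[2]),
            "skip": bool(bits[3]),
            # remaining bits reserved for further meta-parameters
        })
    return stages


def fitness(genome):
    """Stand-in fitness so the loop runs. In NAS-DIP this step would train
    the decoded DIP network on the single target image and score the result."""
    stages = decode(genome)
    return -abs(sum(st["channels"] for st in stages) - 256)  # dummy objective


def tournament(population, scores, k=3):
    """Pick the fittest of k randomly chosen genomes."""
    contenders = random.sample(range(len(population)), k)
    return population[max(contenders, key=lambda i: scores[i])]


def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]


def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]


def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        scores = [fitness(g) for g in population]
        best_idx = max(range(POP_SIZE), key=lambda i: scores[i])
        elite = population[best_idx]
        print(f"generation {gen}: best fitness {scores[best_idx]:.1f}")
        # Breed the next generation; keep the best genome unchanged (elitism).
        population = [elite] + [
            mutate(crossover(tournament(population, scores),
                             tournament(population, scores)))
            for _ in range(POP_SIZE - 1)
        ]
    return decode(elite)   # architecture of the best genome from the last evaluated generation


if __name__ == "__main__":
    print(evolve())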

Paper


Citation

@article{Ho20,
  title={Neural Architecture Search for Deep Image Prior},
  author={Ho, K. and Gilbert, A. and Jin, H. and Collomosse, J.},
  journal={arXiv preprint arXiv:2001.04776},
  year={2020}
}