Research On Enhancement And Extraction Algorithms Of Printed Quantum Dots Image Using A Generative Adversarial Network

References (8)
Jun 27, 2016 in CVPR (Computer Vision and Pattern Recognition)
Kaiming He (Microsoft), Xiangyu Zhang (Xi'an Jiaotong University), ..., Jian Sun (Microsoft) (4 authors)
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the...
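The residual reformulation described above can be sketched in a few lines. This is an illustrative numpy toy, not the paper's convolutional architecture (which also uses batch normalization): the block learns a residual F(x) and outputs F(x) + x, so an identity mapping only requires driving F toward zero.

```python
import numpy as np

def residual_block(x, w1, w2):
    """Toy residual block: y = F(x) + x, with F = two linear layers and a ReLU.

    A schematic sketch of residual learning; w1 and w2 are hypothetical
    weight matrices standing in for the paper's convolutional layers.
    """
    h = np.maximum(w1 @ x, 0.0)   # first layer + ReLU
    f = w2 @ h                    # second layer: the residual F(x)
    return f + x                  # identity shortcut connection

# With zero weights, F(x) = 0 and the block is an exact identity:
x = np.array([1.0, -2.0, 3.0])
w = np.zeros((3, 3))
print(residual_block(x, w, w))   # -> [ 1. -2.  3.]
```

The shortcut is what makes very deep stacks trainable: gradients flow through the `+ x` path unattenuated even when F is poorly conditioned.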
Jun 27, 2016 in CVPR (Computer Vision and Pattern Recognition)
Wenzhe Shi (ICL: Imperial College London), Jose Caballero, ..., Zehan Wang (ICL: Imperial College London) (8 authors)
Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds comput...
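The alternative this paper proposes is to keep all feature extraction in low-resolution space and produce the HR output with a cheap channel-to-space rearrangement (the "sub-pixel convolution" or pixel shuffle). A minimal numpy sketch of just that rearrangement, assuming a (C·r², H, W) input layout:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) tensor into (C, H*r, W*r).

    Sketch of the sub-pixel rearrangement step only; in the full model a
    convolutional network first produces the C*r^2 low-resolution maps.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)       # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)     # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)  # interleave into the HR grid

x = np.arange(4 * 2 * 2).reshape(4, 2, 2)  # C=1, r=2, 2x2 LR input
print(pixel_shuffle(x, 2).shape)           # -> (1, 4, 4)
```

Because the shuffle is pure memory movement, the expensive convolutions run on H×W grids instead of rH×rW grids, which is where the computational savings over bicubic-first pipelines come from.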
Dec 7, 2015 in ICCV (International Conference on Computer Vision)
Kaiming He (Microsoft), Xiangyu Zhang (Xi'an Jiaotong University), ..., Jian Sun (Microsoft) (4 authors)
Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities....
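The PReLU generalization is a one-line change to the rectifier: the negative-half slope becomes a learned parameter a instead of a fixed zero. A minimal sketch:

```python
import numpy as np

def prelu(x, a):
    """Parametric ReLU: f(x) = x for x > 0, a*x otherwise.

    'a' is a learned per-channel slope; a = 0 recovers plain ReLU, and a
    fixed small constant recovers leaky ReLU.
    """
    return np.where(x > 0, x, a * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(prelu(x, 0.25))   # -> [-0.5 -0.125  0.  1.5]
```

The accompanying initialization result scales weight variance to the rectifier (std = sqrt(2 / fan_in) for ReLU), which is what lets very deep rectifier networks converge from scratch.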
Sergey Ioffe (Google), Christian Szegedy (Google)
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from ...
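The normalization step itself is simple to state: each feature is standardized over the mini-batch, then rescaled by learned parameters gamma and beta so the layer keeps its representational power. A training-time forward-pass sketch in numpy:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch-normalize x of shape (N, D): per-feature statistics over
    the batch dimension, followed by a learned scale and shift.
    """
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit variance
    return gamma * x_hat + beta             # restore expressiveness

x = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
y = batch_norm(x, gamma=1.0, beta=0.0)
print(y.mean(axis=0))   # ~0 per feature
print(y.std(axis=0))    # ~1 per feature
```

At inference time the batch statistics are replaced by running averages collected during training, so outputs no longer depend on the composition of the batch.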
Dec 8, 2014 in NeurIPS (Neural Information Processing Systems)
Ian Goodfellow (UdeM: Université de Montréal), Jean Pouget-Abadie (UdeM: Université de Montréal), ..., Yoshua Bengio (UdeM: Université de Montréal) (8 authors)
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a uniqu...
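The two-player game can be written down concretely. The value function is V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))]; D ascends it, G descends it. A toy numerical sketch (Monte Carlo estimate over sample batches, not a training loop):

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Estimate V(D, G) from discriminator outputs.

    d_real: D's outputs on real samples, d_fake: D's outputs on generated
    samples; both arrays of probabilities in (0, 1).
    """
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# At the unique equilibrium the generator matches the data distribution,
# D outputs 1/2 everywhere, and the value is -log 4:
d = np.full(4, 0.5)
print(gan_value(d, d), -np.log(4.0))   # both ~ -1.386
```

This is the equilibrium the abstract alludes to: in the space of arbitrary functions, G recovering the data distribution and D = 1/2 is the unique solution of the minimax game.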
Sep 4, 2014 (arXiv preprint)
Karen Simonyan (University of Oxford), Andrew Zisserman (University of Oxford)
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our t...
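The appeal of small stacked filters can be checked with a quick parameter count: two 3×3 layers cover the same 5×5 effective receptive field as one 5×5 layer, with fewer weights and an extra nonlinearity between them. A back-of-the-envelope sketch (biases ignored, channel width assumed constant):

```python
def conv_params(k, c_in, c_out):
    """Weight count of one k x k convolution layer, biases ignored."""
    return k * k * c_in * c_out

C = 64  # assumed equal channel width throughout the stack
two_3x3 = 2 * conv_params(3, C, C)   # effective receptive field 5x5
one_5x5 = conv_params(5, C, C)       # same receptive field, one layer
print(two_3x3, one_5x5)              # 73728 vs 102400
```

The gap widens for a 7×7 field (three 3×3 layers vs one 7×7 layer), which is why depth with uniformly small filters is both cheaper and more expressive.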
FU Qing-qing (Yangtze University)
Several classical image restoration algorithms are studied. When the image degradation model is known, the observed images are restored using inverse filtering, Wiener filtering, and a constrained least squares filtering algorithm. A wealth of empirical data on parameter selection for these algorithms is obtained, and the experimental results are analyzed and summarized.
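Of the three filters studied, the Wiener filter is the one that degrades gracefully as noise grows: in the frequency domain it computes F̂ = H* / (|H|² + K) · G, falling back toward zero wherever the blur H has no energy, instead of amplifying noise as the pure inverse filter does. A 1-D numpy sketch, with the noise-to-signal ratio K simplified to a single constant:

```python
import numpy as np

def wiener_deconvolve(g, h, k):
    """Frequency-domain Wiener restoration of a blurred 1-D signal.

    g: observed signal, h: known blur kernel (the degradation model),
    k: noise-to-signal power ratio, treated here as one constant.
    Illustrative sketch only; the paper also covers inverse filtering
    and constrained least squares filtering.
    """
    n = len(g)
    H = np.fft.fft(h, n)
    G = np.fft.fft(g)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft(F_hat))

f = np.zeros(8); f[2] = 1.0                  # impulse "image"
h = np.array([0.25, 0.5, 0.25])              # known blur kernel
g = np.real(np.fft.ifft(np.fft.fft(h, 8) * np.fft.fft(f)))  # degrade
restored = wiener_deconvolve(g, h, k=1e-3)
print(np.argmax(restored))                   # impulse recovered at index 2
```

Note that this kernel has a spectral zero, so the pure inverse filter would divide by zero there; the `+ k` term is exactly what keeps the Wiener estimate finite.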
M.K. Ozkan (UR: University of Rochester), A.T. Erdem, ..., A.M. Tekalp (4 authors)
Computationally efficient multiframe Wiener filtering algorithms that account for both intraframe (spatial) and interframe (temporal) correlations are proposed for restoring image sequences that are degraded by both blur and noise. One is a general computationally efficient multiframe filter, the cross-correlated multiframe (CCMF) Wiener filter, which directly utilizes the power and cross power spectra of only N*N matrices, where N is the number of frames used in the restoration. In certain spec...