Seunghyun Lee
Inha University
Deep learning, Algorithm, Ophthalmology, Machine learning, Data mining, Graph (abstract data type), Convolution, Artificial intelligence, Code (cryptography), Pattern recognition, Distillation, Singular value decomposition, Principal component analysis, Object detection, Knowledge transfer, Multi-task learning, Computer vision, Computer science, Embedding, Computation, Artificial neural network, Medicine, Cluster analysis, Feature (computer vision), Convolutional neural network, Process (computing)
29 Publications
4 H-index
102 Citations
Publications (27)
Recent advances in deep offline reinforcement learning (RL) have made it possible to train strong robotic agents from offline datasets. However, depending on the quality of the trained agents and the application being considered, it is often desirable to fine-tune such agents via further online interactions. In this paper, we observe that state-action distribution shift may lead to severe bootstrap error during fine-tuning, which destroys the good initial policy obtained via offline RL. To address...
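A rough illustration of the bootstrap error the abstract mentions (a toy tabular sketch, not the paper's method): the TD target bootstraps from the max-Q at the next state, so when that max lands on an action the offline data never covered, an unvalidated estimate is propagated.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 3))      # toy tabular Q-table: 5 states x 3 actions

def td_target(reward, next_state, gamma=0.99):
    # Bootstrap target r + gamma * max_a' Q(s', a'). During online
    # fine-tuning, the max can select an action the offline dataset never
    # covered; its Q-value is an unvalidated estimate, and the error is
    # propagated into every state that bootstraps from s'.
    return reward + gamma * Q[next_state].max()

# One fine-tuning update on a freshly collected transition (s, a, r, s').
s, a, r, s_next = 0, 1, 1.0, 2
alpha = 0.1
Q[s, a] += alpha * (td_target(r, s_next) - Q[s, a])
```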
Knowledge distillation (KD) is one of the most useful techniques for light-weight neural networks. Although neural networks have the clear purpose of embedding datasets into a low-dimensional space, existing knowledge has been quite far from this purpose and provides only limited information. We argue that good knowledge should be able to interpret the embedding procedure. This paper proposes a method of generating interpretable embedding procedure (IEP) knowledge based on principal component ana...
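The abstract's core ingredient is a PCA-based view of the embedding procedure. A minimal numpy sketch of that ingredient (the matching loss here is an illustrative assumption, not the paper's IEP construction):

```python
import numpy as np

def pca_embedding(feats, k=2):
    """Project flattened features onto their top-k principal components."""
    X = feats - feats.mean(axis=0, keepdims=True)     # center the batch
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt = PCs
    return X @ Vt[:k].T                               # (N, k) embedding

rng = np.random.default_rng(0)
teacher_feats = rng.normal(size=(32, 256))   # a batch of teacher features
student_feats = rng.normal(size=(32, 256))   # matching student features

# One plausible distillation signal: match the low-dimensional embeddings.
loss = np.mean((pca_embedding(teacher_feats) - pca_embedding(student_feats)) ** 2)
```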
#2 Seunghyun Lee, H-Index: 4
Last: Byung Cheol Song, H-Index: 15
Conventional single-stage object detectors can efficiently detect objects of various sizes using a feature pyramid network. However, because they aggregate feature maps in an overly simple manner, they cannot avoid performance degradation due to information loss. To solve this problem, this paper proposes a new framework for single-stage object detection. The proposed aggregation scheme introduces two independent modules to extract global and local information. First, the globa...
2 Citations
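A hedged sketch of the global-plus-local idea (the module names and the fusion-by-gating choice are my assumptions; the paper's actual modules are not reproduced here):

```python
import torch
import torch.nn as nn

class GlobalLocalAggregation(nn.Module):
    """Illustrative only: a global branch (pooled context broadcast back as a
    per-channel gate) and a local branch (3x3 conv), fused with a residual."""
    def __init__(self, channels):
        super().__init__()
        self.global_fc = nn.Linear(channels, channels)
        self.local_conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):                       # x: (N, C, H, W)
        g = x.mean(dim=(2, 3))                  # global average pooling
        g = torch.sigmoid(self.global_fc(g))    # per-channel context gate
        local = self.local_conv(x)              # local spatial information
        return local * g[:, :, None, None] + x  # fuse global and local

feat = torch.randn(2, 64, 32, 32)
out = GlobalLocalAggregation(64)(feat)          # shape preserved: (2, 64, 32, 32)
```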
#1 Kang Il Lee (Inha University), H-Index: 1
#2 Seunghyun Lee (Inha University), H-Index: 4
Last: Byung Cheol Song (Inha University), H-Index: 15
Knowledge distillation (KD) is one of the most effective neural network light-weighting techniques when training data is available. However, KD is seldom applicable in environments where it is difficult or impossible to access training data. To solve this problem, a complete zero-shot KD (C-ZSKD) based on adversarial learning has been recently proposed, but the so-called biased sample generation problem limits the performance of C-ZSKD. To overcome this limitation, this paper proposes a novel ...
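The general adversarial zero-shot KD loop, sketched below with toy linear models as stand-ins (the paper's fix for biased sample generation is not shown): a generator learns to synthesize inputs on which teacher and student disagree, and the student learns to agree with the teacher on those inputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: a (pretend-pretrained, frozen) teacher and a student.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(784, 10)).requires_grad_(False)
student = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
generator = nn.Sequential(nn.Linear(64, 784), nn.Tanh())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
s_opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def disagreement(x):
    # KL divergence between student and teacher predictions on inputs x.
    return F.kl_div(F.log_softmax(student(x), dim=1),
                    F.softmax(teacher(x), dim=1), reduction="batchmean")

for step in range(100):
    # Generator step: synthesize inputs that maximize disagreement.
    fake = generator(torch.randn(32, 64))
    g_opt.zero_grad()
    (-disagreement(fake)).backward()
    g_opt.step()

    # Student step: minimize the same divergence on fresh synthetic inputs.
    fake = generator(torch.randn(32, 64)).detach()
    s_opt.zero_grad()
    disagreement(fake).backward()
    s_opt.step()
```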
#1 Seunghyun Lee, H-Index: 4
#2 Byeongho Heo (Naver Corporation), H-Index: 11
Last: Byung Cheol Song, H-Index: 15
Filter pruning is prevalent in pruning-based model compression. Most filter pruning methods have two main issues: 1) the capability of the pruned network depends on that of the source pretrained model, and 2) they do not consider that filter weights follow a normal distribution. To address these issues, we propose a new pruning method employing both weight re-initialization and latent space clustering. For latent space clustering, we define filters and their feature maps as vertices and edges to be a gra...
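The paper's graph-based latent-space clustering is not reproduced here; as a rough stand-in, the sketch below clusters flattened filter weights with k-means and keeps one representative filter per cluster, which conveys the clustering-to-prune idea:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
filters = rng.normal(size=(64, 3 * 3 * 3))   # 64 conv filters, flattened

# Cluster filters in weight space; keep the filter nearest each centroid.
k = 16
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(filters)
keep = [int(np.argmin(np.linalg.norm(filters - c, axis=1)))
        for c in km.cluster_centers_]
pruned = filters[sorted(set(keep))]          # surviving representative filters
```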
Knowledge distillation (KD) extracts knowledge from a "teacher" neural network and transfers it to a small student network to improve the student's performance. This method is one of the most popular techniques for lightening convolutional neural networks (CNNs). Many KD algorithms have been proposed recently, but they still cannot properly distill the essential knowledge of the teacher network, and the transfer tends to depend on the spatial shape of the teacher's feature map. To solve...
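One common way to make distillation independent of the teacher's feature-map shape (a relational-KD-style sketch under my own assumptions, not necessarily this paper's loss) is to compare batch-level similarity matrices, which are (N, N) regardless of channel count or spatial size:

```python
import torch
import torch.nn.functional as F

def relation_matrix(feat):
    """Pairwise cosine similarities between samples in a batch. The result
    is (N, N) whatever the feature map's channel count or spatial size."""
    v = F.normalize(feat.flatten(1), dim=1)  # (N, C*H*W) unit vectors
    return v @ v.t()

teacher_feat = torch.randn(8, 256, 14, 14)   # teacher feature map
student_feat = torch.randn(8, 64, 28, 28)    # entirely different shape
loss = F.mse_loss(relation_matrix(student_feat), relation_matrix(teacher_feat))
```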
Oct 1, 2020 in ICIP (International Conference on Image Processing)
#1 Min Kyu Lee (Inha University), H-Index: 2
#2 Seunghyun Lee (Inha University), H-Index: 4
Last: Byung Cheol Song (Inha University), H-Index: 15
Channel pruning for light-weighting networks is very effective at reducing memory footprint and computational cost. Many channel pruning methods assume that the magnitude of a particular element corresponding to each channel reflects the channel's importance. Unfortunately, such an assumption does not always hold. To solve this problem, this paper proposes a new method to measure the importance of channels based on gradients of mutual information. The proposed method computes and measures g...
3 Citations
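The paper scores channels via gradients of mutual information; that criterion is not reproduced here, but the sketch below shows the general gradient-based recipe with the common first-order Taylor score standing in as an illustrative substitute:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny model: one conv layer followed by a linear classifier head.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
head = nn.Linear(16, 10)

x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))

act = conv(x)
act.retain_grad()                        # keep the activation's gradient
logits = head(act.mean(dim=(2, 3)))      # global-average-pool, then classify
F.cross_entropy(logits, y).backward()

# First-order Taylor score per channel: channels whose activations barely
# move the loss score low and become pruning candidates.
importance = (act * act.grad).abs().mean(dim=(0, 2, 3))
prune_order = importance.argsort()       # least important channels first
```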
#1 Seunghyun Lee (Inha University), H-Index: 4
#2 Byung Cheol Song (Inha University), H-Index: 15
Singular value decomposition (SVD) is a popular technique for extracting essential information by reducing the dimension of a feature set. SVD can analyze a vast matrix at relatively low computational cost. However, the singular vectors produced by SVD have seldom been used in convolutional neural networks (CNNs), because inherent properties of singular vectors such as sign ambiguity and manifold features make it difficult for CNNs to learn them. In order to overcome t...
1 Citation
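A small sketch of the sign-ambiguity issue the abstract raises, with one generic fix (the flip rule below is a common convention, not necessarily the paper's): flipping a singular-vector pair (u, v) to (-u, -v) leaves the decomposition unchanged, so a deterministic sign must be fixed before a CNN can regress such targets.

```python
import torch

feat = torch.randn(4, 64, 14 * 14)           # (batch, channels, flattened spatial)

# Truncated SVD of each feature map: keep the top-k singular vector pairs.
U, S, Vh = torch.linalg.svd(feat, full_matrices=False)
k = 8
Uk, Vk = U[..., :k], Vh[:, :k, :]

# Sign ambiguity: (u, v) and (-u, -v) reconstruct the same matrix, so raw
# singular vectors are unstable regression targets. One simple convention:
# flip each pair so the dominant entry of u is positive.
idx = Uk.abs().argmax(dim=1, keepdim=True)   # position of dominant entry in u
sign = torch.gather(Uk, 1, idx).sign()       # its sign, per singular vector
Uk, Vk = Uk * sign, Vk * sign.transpose(1, 2)
```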