• Regularized learning
    • Regularization of complex parametric models via virtual adversarial training, which directly penalizes the non-smoothness of the output with respect to input perturbations.
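
      As a rough sketch of the idea (not the exact training objective), the following hypothetical NumPy example computes a virtual-adversarial smoothness penalty for a toy linear softmax model: an approximately worst-case small perturbation is found with one finite-difference power-iteration step, and the penalty is the KL divergence between the predictions at the clean and perturbed inputs. All function names and constants here are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # row-wise KL divergence KL(p || q)
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def vat_penalty(W, x, epsilon=0.5, xi=1e-3):
    # predictions at the clean input
    p = softmax(x @ W)
    # random initial direction, normalized per sample
    d = np.random.randn(*x.shape)
    d /= np.linalg.norm(d, axis=-1, keepdims=True) + 1e-12
    # one power-iteration step: gradient of the KL at the point x + xi*d,
    # estimated by central finite differences along each input dimension
    grad = np.zeros_like(d)
    for j in range(x.shape[-1]):
        e = np.zeros_like(x)
        e[:, j] = xi
        grad[:, j] = (kl(p, softmax((x + xi * d + e) @ W))
                      - kl(p, softmax((x + xi * d - e) @ W))) / (2 * xi)
    d = grad / (np.linalg.norm(grad, axis=-1, keepdims=True) + 1e-12)
    # penalize non-smoothness: KL between clean and perturbed predictions
    return float(np.mean(kl(p, softmax((x + epsilon * d) @ W))))
```

      In full virtual adversarial training this penalty is added to the supervised loss and minimized with respect to the model parameters.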

    • Reinforcement Learning
    • Efficient exploration considering the uncertainty of the estimates.

      Versatile AI that uses little knowledge of the environment
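
      A classic illustration of exploration that weighs the uncertainty of the estimates is the UCB1 bandit algorithm; the sketch below (with hypothetical arm probabilities) picks the arm maximizing the empirical mean plus a bonus that shrinks as the estimate becomes more certain.

```python
import numpy as np

def ucb1(pull, n_arms, horizon, c=2.0, seed=0):
    # UCB1: choose the arm maximizing (empirical mean + exploration bonus)
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    for t in range(1, horizon + 1):
        if t <= n_arms:                       # initialize: play each arm once
            a = t - 1
        else:
            a = int(np.argmax(means + np.sqrt(c * np.log(t) / counts)))
        r = pull(a, rng)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]   # incremental mean update
    return counts, means

# hypothetical Bernoulli bandit; arm 2 has the highest success probability
probs = [0.2, 0.5, 0.8]
counts, means = ucb1(lambda a, rng: float(rng.random() < probs[a]),
                     n_arms=3, horizon=2000)
```

      Over time the bonus of well-sampled arms decays, so most pulls concentrate on the best arm while poorly-estimated arms are still tried occasionally.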

    • Video object tracking
    • Occlusion-aware video object tracking

      Hyper-parameter tuning using dropout

    • Image Superresolution
    • Superresolution reconstructs a high-resolution image from a single or multiple low-resolution images.

      We developed an edge-preserving image superresolution method based on Bayesian inference that treats the boundaries between segments as hidden variables.
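
      As background for how Bayesian superresolution works (a minimal sketch, not our edge-preserving hidden-variable model), the example below poses 1-D superresolution as MAP estimation: the observation operator blurs and decimates, and a simple Gaussian smoothness prior regularizes the otherwise under-determined inversion. The operator and signal are hypothetical.

```python
import numpy as np

def downsample_blur(n_hi, factor):
    # toy observation operator: average each block of `factor` samples
    H = np.zeros((n_hi // factor, n_hi))
    for i in range(n_hi // factor):
        H[i, i * factor:(i + 1) * factor] = 1.0 / factor
    return H

def map_superresolve(y, H, lam=0.05):
    # MAP estimate under Gaussian noise and a Gaussian smoothness prior:
    # minimize ||H x - y||^2 + lam * ||D x||^2, D = first-difference operator
    n = H.shape[1]
    D = np.eye(n) - np.eye(n, k=1)
    return np.linalg.solve(H.T @ H + lam * D.T @ D, H.T @ y)

# hypothetical 1-D "image": piecewise-constant signal of length 24
x_true = np.repeat([0.0, 1.0, 0.5], 8)
H = downsample_blur(24, 3)            # 8 low-resolution observations
x_hat = map_superresolve(H @ x_true, H)
```

      A Gaussian smoothness prior blurs sharp edges, which is exactly the limitation that motivates treating segment boundaries as hidden variables.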

    • X-ray Computed Tomography
    • CT reconstructs tomographic images from their projections. Detectors are positioned on the opposite side of the X-ray source, and the source-detector pair rotates around the object.

      There are several demands on CT. One is the reduction of X-ray exposure: exposure should be minimized to avoid an overdose of radiation, but limiting it makes the observed data noisy. Another is the reduction of artifacts such as metal artifacts, motion artifacts, and dark-band artifacts. The presence of high-density objects such as metal prostheses and dental fillings causes streak or star artifacts.

      We utilize Bayesian inference to mitigate these artifacts. The Bayesian approach can suppress unwanted solutions by incorporating suitable prior knowledge. The estimation is also robust to probabilistic fluctuations because it accounts for the uncertainty of the unknown random variables.

      I present my proposed model with a hierarchical prior, along with results showing better performance than an existing algorithm.

    • Learning and Estimating Discrete Probability Distribution
    • In probability theory and statistics, a discrete probability distribution is a probability distribution characterized by a probability mass function (ref: Wikipedia).
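
      For concreteness, a tiny example with a hypothetical biased four-sided die: a PMF assigns a non-negative probability to each value of the support, and the probabilities sum to one.

```python
import numpy as np

# PMF of a hypothetical biased four-sided die
support = np.array([1, 2, 3, 4])
pmf = np.array([0.1, 0.2, 0.3, 0.4])

total = pmf.sum()                  # must equal 1 for a valid PMF
mean = np.sum(support * pmf)       # expectation E[X] = sum_k k p(k), here 3.0
```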

      I propose a new learning framework named ‘Detailed Balance Learning’ (DBL) to learn the stationary distribution of a Markov chain, and show that DBL has a close relationship with contrastive divergence learning when applied to restricted Boltzmann machines. A sufficient condition for convergence is also presented.
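
      As standard background (not the DBL algorithm itself): detailed balance requires pi_i T_ij = pi_j T_ji for all states i, j, and any distribution satisfying it is stationary for the chain. The sketch below checks this condition for a small reversible birth-death chain.

```python
import numpy as np

def satisfies_detailed_balance(T, pi, tol=1e-10):
    # detailed balance: pi_i T_ij == pi_j T_ji for all i, j;
    # equivalently, the probability-flow matrix M_ij = pi_i T_ij is symmetric
    M = pi[:, None] * T
    return bool(np.allclose(M, M.T, atol=tol))

# reversible birth-death chain on three states
T = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = np.array([0.25, 0.50, 0.25])
# detailed balance holds here, which implies pi is stationary: pi @ T == pi
```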