This page provides information about the double articulation analyzer (DAA), which can estimate the latent double articulation structure embedded in multidimensional time-series data, e.g., speech signals, human motion data, and driving behavior data. For the purpose and background of the DAA, please see "What's DA?"


  1. Tadahiro Taniguchi, Shogo Nagasaka, Ryo Nakashima, Nonparametric Bayesian Double Articulation Analyzer for Direct Language Acquisition from Continuous Speech Signals, IEEE Transactions on Cognitive and Developmental Systems, Vol. 8, No. 3, pp. 171-185 (2016). DOI: 10.1109/TCDS.2016.2550591 (Open Access) [LINK]
    • The original paper on the nonparametric Bayesian double articulation analyzer (NPB-DAA). To develop the NPB-DAA, this paper presents a novel two-layer hierarchical hidden semi-Markov model called the hierarchical Dirichlet process hidden language model (HDP-HLM).
  2. Tadahiro Taniguchi, Ryo Nakashima, Hailong Liu and Shogo Nagasaka, Double Articulation Analyzer with Deep Sparse Autoencoder for Unsupervised Word Discovery from Speech Signals, Advanced Robotics, Vol. 30, No. 11-12, pp. 770-783 (2016). DOI: 10.1080/01691864.2016.1159981 [LINK]
    • By combining the NPB-DAA with a deep sparse autoencoder (DSAE), an unsupervised deep learning method for feature extraction, the NPB-DAA achieved high performance in the task of unsupervised word discovery from Japanese vowel speech signals.
  3. Tadahiro Taniguchi, Shogo Nagasaka, Double Articulation Analyzer for Unsegmented Human Motion using Pitman-Yor Language model and Infinite Hidden Markov Model, 2011 IEEE/SICE International Symposium on System Integration, pp. 250-255 (2011). [PDF]
    • This is the original paper on the earlier version of the DAA, referred to as the conventional DAA in paper 1. It simply uses the sticky hierarchical Dirichlet process hidden Markov model (sticky HDP-HMM) to segment the time-series data, and the nested Pitman-Yor language model (NPYLM) to segment the resulting letter sequences, i.e., concatenated latent state sequences, into words.
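To make the layered structure concrete, the double articulation assumed by the HDP-HLM in paper 1 (a sentence is a sequence of words, a word is a sequence of letters, and each letter emits several frames of continuous observations) can be sketched as a toy generative process. The vocabularies, emission parameters, and durations below are illustrative assumptions, not values from the paper:

```python
import random

random.seed(0)

# Toy "letters": each latent letter has a Gaussian emission (mean, std) and a
# typical duration in frames (hidden SEMI-Markov: a letter persists for
# several frames). All values here are illustrative assumptions.
letters = {
    "a": {"mean": 0.0, "std": 0.1, "dur": 4},
    "i": {"mean": 1.0, "std": 0.1, "dur": 3},
    "u": {"mean": 2.0, "std": 0.1, "dur": 5},
}

# Toy "words": sequences of letters (the upper layer of the hierarchy).
words = {"ai": ["a", "i"], "ua": ["u", "a"], "iu": ["i", "u"]}

def generate_sentence(n_words):
    """Sample words -> letters -> durations -> continuous observations."""
    word_seq, letter_seq, obs = [], [], []
    for _ in range(n_words):
        w = random.choice(sorted(words))
        word_seq.append(w)
        for l in words[w]:
            letter_seq.append(l)
            p = letters[l]
            d = max(1, p["dur"] + random.randint(-1, 1))  # noisy duration
            obs.extend(random.gauss(p["mean"], p["std"]) for _ in range(d))
    return word_seq, letter_seq, obs

word_seq, letter_seq, obs = generate_sentence(3)
print(word_seq)    # sampled word sequence
print(letter_seq)  # concatenated letter sequence
print(len(obs))    # total number of observed frames
```

The DAA solves the inverse problem: given only the continuous observations `obs`, infer the letter sequence and the word sequence jointly.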
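The feature-extraction step in paper 2 replaces hand-crafted features with codes learned by a deep sparse autoencoder. Below is a minimal single-layer sketch in NumPy, assuming toy data, a sigmoid encoder, and an L1 sparsity penalty; the actual DSAE stacks several such layers and uses its own architecture and hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spectral-like features: 200 frames x 12 dims (illustrative, not MFCCs).
X = rng.normal(size=(200, 12))

# One sparse autoencoder layer (a DSAE stacks several of these).
n_in, n_hid = X.shape[1], 6
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))
b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_in))
b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, lam = 0.1, 1e-3  # learning rate and L1 sparsity weight (assumed values)
for _ in range(200):
    H = sigmoid(X @ W1 + b1)          # hidden code
    R = H @ W2 + b2                   # reconstruction
    err = R - X
    # Gradients of mean squared error plus L1 sparsity on H.
    dR = 2 * err / len(X)
    dW2 = H.T @ dR
    db2 = dR.sum(axis=0)
    dH = dR @ W2.T + lam * np.sign(H)
    dZ = dH * H * (1 - H)             # sigmoid derivative
    dW1 = X.T @ dZ
    db1 = dZ.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

codes = sigmoid(X @ W1 + b1)  # compressed features handed to the NPB-DAA
print(codes.shape)
```

In the paper, the learned codes (not the raw acoustic features) are what the NPB-DAA segments into letters and words.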
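The two-stage pipeline of the conventional DAA in paper 3 can be sketched with toy stand-ins: simple quantization in place of the sticky HDP-HMM, and a frequency-based greedy chunker in place of the NPYLM. Both stand-ins are illustrative assumptions; the real models are Bayesian nonparametric:

```python
import itertools
from collections import Counter

# Toy one-dimensional time series generated by latent "letters" (levels).
series = [0.0, 0.1, 1.0, 1.1, 1.0, 2.1, 2.0, 0.1, 0.0, 1.1, 1.0, 2.0]

# Stage 1 (stand-in for the sticky HDP-HMM): quantize each frame into a
# discrete state and merge repeats, yielding one "letter" per segment.
states = [str(int(round(x))) for x in series]
letter_string = "".join(k for k, _ in itertools.groupby(states))
print(letter_string)  # -> "012012"

# Stage 2 (stand-in for the NPYLM): segment the letter string into "words"
# by greedily taking the longest chunk that occurs at least twice.
def segment_words(s, max_len=3):
    counts = Counter(s[i:i + n] for n in range(2, max_len + 1)
                     for i in range(len(s) - n + 1))
    words, i = [], 0
    while i < len(s):
        best = s[i]  # fall back to a single letter
        for n in range(max_len, 1, -1):
            chunk = s[i:i + n]
            if len(chunk) == n and counts[chunk] >= 2:
                best = chunk
                break
        words.append(best)
        i += len(best)
    return words

print(segment_words(letter_string))  # -> ['012', '012']
```

The two stages mirror the paper's pipeline: continuous data become a letter sequence, and the letter sequence is then segmented into word-like chunks.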