
Running the TIMIT Database Example with the KALDI Speech Recognition Toolkit

TIMIT database introduction:
The TIMIT corpus contains 630 speakers, each reading 10 sentences, and covers the 8 major dialects of American English.
TIMIT s5 example:
First, copy the timit directory from timit.iso into your home folder.
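One way to do this, assuming the ISO sits at ~/timit.iso (a hypothetical path; the directory inside the ISO may also be uppercase TIMIT), is to loop-mount it and copy the corpus out:
sudo mkdir -p /mnt/timit_iso
sudo mount -o loop ~/timit.iso /mnt/timit_iso   # mount the ISO image
cp -r /mnt/timit_iso/timit ~/timit              # copy the corpus into the home folder
sudo umount /mnt/timit_iso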
1. Enter the recipe directory and run the data preparation script:
zhangju@ubuntu:~$ cd kaldi-trunk/egs/timit/s5/
zhangju@ubuntu:~/kaldi-trunk/egs/timit/s5$ sudo local/timit_data_prep.sh /home/zhangju/timit
You will see the following output:
creating coretest set.
mdab0  mwbt0  felc0  mtas1  mwew0  fpas0  mjmp0  mlnt0  fpkt0  mlll0  mtls0  fjlm0  mbpm0  mklt0  fnlp0  mcmj0  mjdh0  fmgd0  mgrt0  mnjm0  fdhc0  mjln0  mpam0  fmld0 
# of utterances in coretest set = 192
creating dev set.
faks0  fdac1  fjem0  mgwt0  mjar0  mmdb1  mmdm2  mpdf0  fcmh0  fkms0  mbdg0  mbwm0  mcsh0  fadg0  fdms0  fedw0  mgjf0  mglb0  mrtk0  mtaa0  mtdt0  mthc0  mwjg0  fnmr0  frew0  fsem0  mbns0  mmjr0  mdls0  mdlf0  mdvc0  mers0  fmah0  fdrw0  mrcs0  mrjm4  fcal1  mmwh0  fjsj0  majc0  mjsw0  mreb0  fgjd0  fjmg0  mroa0  mteb0  mjfc0  mrjr0  fmml0  mrws1 
# of utterances in dev set = 400
finalizing test
finalizing dev
timit_data_prep succeeded.
This creates a new data folder under /home/zhangju/kaldi-trunk/egs/timit/s5, containing a local folder and related files.
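To sanity-check the preparation, you can list what was generated (the exact file names vary across Kaldi versions, so this is only a rough check):
ls data/local   # the prepared transcript and wav lists live here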
2. In the terminal, enter:
local/timit_train_lms.sh data/local   (downloads the LM toolkit and processes the text to build a language model)
local/timit_format_data.sh   (prepares the FST-related files)
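To verify the FST side, OpenFst's fstinfo prints basic statistics; this assumes the lowercase file names that appear in this recipe's logs (newer Kaldi writes L.fst and G.fst):
fstinfo data/lang/l.fst | head    # lexicon transducer
fstinfo data/lang/g.fst | head    # grammar (language-model) transducer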
3. Create the MFCCs for train:
sudo steps/make_mfcc.sh data/train exp/make_mfcc/train mfccs 4
(repeat this for train, dev, and test) You will see:
succeeded creating mfcc features for train
sudo steps/make_mfcc.sh data/test exp/make_mfcc/test mfccs 4
You will see:
succeeded creating mfcc features for test
sudo steps/make_mfcc.sh data/dev exp/make_mfcc/dev mfccs 4
You will see:
succeeded creating mfcc features for dev
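As a quick check, assuming make_mfcc.sh registered the features in data/train/feats.scp, Kaldi's feat-to-dim prints the feature dimension (13 for plain MFCCs):
feat-to-dim scp:data/train/feats.scp -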
4. Train a monophone system:
sudo steps/train_mono.sh data/train data/lang exp/mono
The terminal shows:
computing cepstral mean and variance statistics
initializing monophone system.
compiling training graphs
pass 0
pass 1
aligning data
pass 2
aligning data
pass 3
aligning data
pass 4
aligning data
pass 5
aligning data
pass 6
aligning data
pass 7
aligning data
pass 8
aligning data
pass 9
aligning data
pass 10
aligning data
pass 11
pass 12
aligning data
pass 13
pass 14
pass 15
aligning data
pass 16
pass 17
pass 18
pass 19
pass 20
aligning data
pass 21
pass 22
pass 23
pass 24
pass 25
aligning data
pass 26
pass 27
pass 28
pass 29
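Each "pass" above is one EM iteration over the training data, and "aligning data" marks the passes after which the Viterbi alignments are recomputed with the current model (early passes realign every time, later ones less often). The schedule can be reproduced with a small shell sketch (the iteration list below is read off the log above, not taken from the script itself):
realign_iters="1 2 3 4 5 6 7 8 9 10 12 15 20 25"   # hypothetical schedule matching the log
for x in $(seq 0 29); do
  echo "pass $x"
  if echo " $realign_iters " | grep -q " $x "; then
    echo "aligning data"                            # train_mono.sh realigns here
  fi
done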
This creates the exp/mono folder.
scripts/mkgraph.sh --mono data/lang exp/mono exp/mono/graph   (build the decoding graph)
The terminal shows:
fsttablecompose data/lang/l.fst data/lang/g.fst
fstdeterminizestar --use-log=true
fstminimizeencoded
fstisstochastic data/lang/tmp/lg.fst
-0.000244359 -0.0912761
warning: lg not stochastic.
fstcomposecontext --context-size=1 --central-position=0 --read-disambig-syms=data/lang/tmp/disambig_phones.list --write-disambig-syms=data/lang/tmp/disambig_ilabels_1_0.list data/lang/tmp/ilabels_1_0
fstisstochastic data/lang/tmp/clg_1_0.fst
-0.000244359 -0.0912761
warning: clg not stochastic.
make-h-transducer --disambig-syms-out=exp/mono/graph/disambig_tid.list --transition-scale=1.0 data/lang/tmp/ilabels_1_0 exp/mono/tree exp/mono/final.mdl
fstminimizeencoded
fstdeterminizestar --use-log=true
fsttablecompose exp/mono/graph/ha.fst data/lang/tmp/clg_1_0.fst
fstrmsymbols exp/mono/graph/disambig_tid.list
fstrmepslocal
fstisstochastic exp/mono/graph/hclga.fst
0.000331581 -0.091291
hclga is not stochastic
add-self-loops --self-loop-scale=0.1 --reorder=true exp/mono/final.mdl
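The commands above compose the decoding graph from the HMM topology (H), phonetic context (C), lexicon (L), and grammar (G). Assuming the finished graph is written into exp/mono/graph (as HCLG.fst in modern Kaldi; the file name may be lowercase in the version used here), it can be inspected with:
fstinfo exp/mono/graph/HCLG.fst | head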
5. Decode the test sets (here $test ranges over the dev and test folders under */s5/data):
for test in dev test ; do steps/decode_deltas.sh exp/mono data/$test data/lang exp/mono/decode_$test & done
The terminal prints the background job numbers and PIDs:
[1] 2307
[2] 2308
6. Average the word error rate over the decode directories:
scripts/average_wer.sh exp/mono/decode_*/wer > exp/mono/wer

The terminal shows:
[1]-  Done                  steps/decode_deltas.sh exp/mono data/$test data/lang exp/mono/decode_$test
[2]+  Done                  steps/decode_deltas.sh exp/mono data/$test data/lang exp/mono/decode_$test
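To view the averaged word error rate:
cat exp/mono/wer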
7. Obtain alignments from the monophone system (for train, dev, and test under the mono folder, respectively), to be used for training other systems:
steps/align_deltas.sh data/train data/lang exp/mono exp/mono_ali_train

The terminal shows:
computing cepstral mean and variance statistics
aligning all training data
done.
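To see what the alignments contain, assuming they were written as a single archive exp/mono_ali_train/ali.gz (newer scripts split them into ali.1.gz, ali.2.gz, ...), the state-level alignment of the first utterance can be converted to a phone sequence:
gunzip -c exp/mono_ali_train/ali.gz | ali-to-phones exp/mono/final.mdl ark:- ark,t:- | head -1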
Method 2: edit the TIMIT path in run.sh, then simply run run.sh.
TIMIT s3 example
1. Data preparation. Enter:
local/timit_data_prep.sh /home/zhangju/timit
The terminal shows:
creating coretest set.
mdab0  mwbt0  felc0  mtas1  mwew0  fpas0  mjmp0  mlnt0  fpkt0  mlll0  mtls0  fjlm0  mbpm0  mklt0  fnlp0  mcmj0  mjdh0  fmgd0  mgrt0  mnjm0  fdhc0  mjln0  mpam0  fmld0   (these are speaker IDs; the leading m or f marks a male or female speaker)
# of utterances in coretest set = 192
creating dev set.
faks0  fdac1  fjem0  mgwt0  mjar0  mmdb1  mmdm2  mpdf0  fcmh0  fkms0  mbdg0  mbwm0  mcsh0  fadg0  fdms0  fedw0  mgjf0  mglb0  mrtk0  mtaa0  mtdt0  mthc0  mwjg0  fnmr0  frew0  fsem0  mbns0  mmjr0  mdls0  mdlf0  mdvc0  mers0  fmah0  fdrw0  mrcs0  mrjm4  fcal1  mmwh0  fjsj0  majc0  mjsw0  mreb0  fgjd0  fjmg0  mroa0  mteb0  mjfc0  mrjr0  fmml0  mrws1 
# of utterances in dev set = 400
finalizing test
finalizing dev
timit_data_prep succeeded.
Enter:
local/timit_train_lms.sh data/local
The terminal shows: not installing the kaldi_lm toolkit since it is already there.
(The kaldi_lm toolkit contains:
compute_perplexity (computes perplexity, used to evaluate a language model; lower is better; see the formula after this list)
discount_ngrams (smooths the n-gram model, reserving probability mass for word combinations that can occur but were unseen in the counts)
get_raw_ngrams (extracts the raw n-gram counts)
get_word_map.pl (builds the word mapping table)
interpolate_ngrams (interpolates and adjusts the n-gram models)
finalize_arpa.pl (finalizes the ARPA output; ARPA is a standard language-model file format; called from within interpolate_ngrams)
map_words_in_arpa.pl (maps the words in an ARPA-format model)
merge_ngrams (merges n-gram models)
merge_ngrams_online (merges n-gram models online)
optimize_alpha.pl (optimizes the alpha parameter)
prune_lm.sh (prunes low-frequency entries from the language model)
prune_ngrams (prunes low-frequency n-grams)
scale_configs.pl
train_lm.sh (trains the language model)
uniq_to_ngrams)
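For reference, perplexity over a held-out text of N words is defined in the standard way (this is the usual definition, not something printed by the recipe):
\mathrm{PPL} = \exp\left( -\frac{1}{N} \sum_{i=1}^{N} \ln P(w_i \mid w_1, \dots, w_{i-1}) \right)
A lower perplexity means the model assigns higher probability to the test text.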
creating phones file, and monophone lexicon (mapping phones to itself).
creating biphone model
training biphone language model in folder data/local/lm
creating directory data/local/lm/biphone
getting raw n-gram counts
iteration 1/7 of optimizing discounting parameters
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.900000 phi=2.000000
interpolate_ngrams: 60 words in wordslist
discount_ngrams: for n-gram order 2, d=0.600000, tau=0.900000 phi=2.000000
discount_ngrams: for n-gram order 3, d=0.800000, tau=1.100000 phi=2.000000
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.675000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.600000, tau=0.675000 phi=2.000000
discount_ngrams: for n-gram order 3, d=0.800000, tau=0.825000 phi=2.000000
interpolate_ngrams: 60 words in wordslist
discount_ngrams: for n-gram order 1, d=0.400000, tau=1.215000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.600000, tau=1.215000 phi=2.000000
discount_ngrams: for n-gram order 3, d=0.800000, tau=1.485000 phi=2.000000
interpolate_ngrams: 60 words in wordslist
perplexity over 11412.000000 words is 17.013357
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.460842
real   0m0.021s
user   0m0.012s
sys 0m0.000s
perplexity over 11412.000000 words is 17.016472
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.464985
real   0m0.020s
user   0m0.012s
sys 0m0.000s
perplexity over 11412.000000 words is 17.021475
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.471402
real   0m0.025s
user   0m0.012s
sys 0m0.000s
optimize_alpha.pl: alpha=-2.1628504673 is too negative, limiting it to -0.5
projected perplexity change from setting alpha=-0.5 is 17.016472->17.0106241428571, reduction of 0.00584785714286085
alpha value on iter 1 is -0.5
iteration 2/7 of optimizing discounting parameters
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.600000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 3, d=0.600000, tau=0.550000 phi=2.000000
interpolate_ngrams: 60 words in wordslist
interpolate_ngrams: 60 words in wordslist
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.600000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 3, d=0.800000, tau=0.550000 phi=2.000000
interpolate_ngrams: 60 words in wordslist
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.600000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 3, d=1.080000, tau=0.550000 phi=2.000000
perplexity over 11412.000000 words is 17.011355
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.457880
real   0m0.018s
user   0m0.004s
sys 0m0.008s
perplexity over 11412.000000 words is 17.011355
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.457880
real   0m0.022s
user   0m0.012s
sys 0m0.000s
perplexity over 11412.000000 words is 17.011355
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.457880
real   0m0.019s
user   0m0.008s
sys 0m0.004s
optimize_alpha.pl: objective function is not convex; returning alpha=0.7
projected perplexity change from setting alpha=0.7 is 17.011355->17.011355, reduction of 0
alpha value on iter 2 is 0.7
iteration 3/7 of optimizing discounting parameters
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.600000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.412500 phi=2.000000
interpolate_ngrams: 60 words in wordslist
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.600000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.550000 phi=2.000000
interpolate_ngrams: 60 words in wordslist
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.600000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.742500 phi=2.000000
interpolate_ngrams: 60 words in wordslist
perplexity over 11412.000000 words is 17.011355
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.457880
real   0m0.020s
user   0m0.012s
sys 0m0.000s
perplexity over 11412.000000 words is 17.011355
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.457880
real   0m0.019s
user   0m0.008s
sys 0m0.004s
perplexity over 11412.000000 words is 17.011355
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.457880
real   0m0.021s
user   0m0.012s
sys 0m0.000s
optimize_alpha.pl: objective function is not convex; returning alpha=0.7
projected perplexity change from setting alpha=0.7 is 17.011355->17.011355, reduction of 0
alpha value on iter 3 is 0.7
iteration 4/7 of optimizing discounting parameters
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.600000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.935000 phi=1.750000
interpolate_ngrams: 60 words in wordslist
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.600000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.935000 phi=2.000000
interpolate_ngrams: 60 words in wordslist
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.600000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.935000 phi=2.350000
interpolate_ngrams: 60 words in wordslist
perplexity over 11412.000000 words is 17.011355
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.457880
real   0m0.018s
user   0m0.012s
sys 0m0.000s
perplexity over 11412.000000 words is 17.011355
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.457880
real   0m0.018s
user   0m0.012s
sys 0m0.000s
perplexity over 11412.000000 words is 17.011355
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.457880
real   0m0.023s
user   0m0.012s
sys 0m0.000s
optimize_alpha.pl: objective function is not convex; returning alpha=0.7
projected perplexity change from setting alpha=0.7 is 17.011355->17.011355, reduction of 0
alpha value on iter 4 is 0.7
iteration 5/7 of optimizing discounting parameters
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.450000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.935000 phi=2.700000
interpolate_ngrams: 60 words in wordslist
interpolate_ngrams: 60 words in wordslist
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.600000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.935000 phi=2.700000
interpolate_ngrams: 60 words in wordslist
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.810000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.935000 phi=2.700000
perplexity over 11412.000000 words is 17.008195
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.454326
real   0m0.019s
user   0m0.008s
sys 0m0.004s
perplexity over 11412.000000 words is 17.011355
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.457880
real   0m0.019s
user   0m0.012s
sys 0m0.000s
perplexity over 11412.000000 words is 17.018212
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.465417
real   0m0.021s
user   0m0.012s
sys 0m0.000s
optimize_alpha.pl: alpha=-0.670499383475985 is too negative, limiting it to -0.5
projected perplexity change from setting alpha=-0.5 is 17.011355->17.0064832142857, reduction of 0.00487178571427904
alpha value on iter 5 is -0.5
iteration 6/7 of optimizing discounting parameters
interpolate_ngrams: 60 words in wordslist
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.300000, tau=0.337500 phi=2.000000
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.935000 phi=2.700000
interpolate_ngrams: 60 words in wordslist
discount_ngrams: for n-gram order 2, d=0.300000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.935000 phi=2.700000
interpolate_ngrams: 60 words in wordslist
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.300000, tau=0.607500 phi=2.000000
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.935000 phi=2.700000
perplexity over 11412.000000 words is 17.008198
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.454134
real   0m0.019s
user   0m0.012s
sys 0m0.000s
perplexity over 11412.000000 words is 17.006972
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.452861
real   0m0.020s
user   0m0.012s
sys 0m0.000s
perplexity over 11412.000000 words is 17.006526
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.452349
real   0m0.022s
user   0m0.012s
sys 0m0.000s
projected perplexity change from setting alpha=0.280321158690507 is 17.006972->17.0064966287094, reduction of 0.000475371290633575
alpha value on iter 6 is 0.280321158690507
iteration 7/7 of optimizing discounting parameters
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.300000, tau=0.576145 phi=1.750000
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.935000 phi=2.700000
interpolate_ngrams: 60 words in wordslist
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.300000, tau=0.576145 phi=2.350000
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.935000 phi=2.700000
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.300000, tau=0.576145 phi=2.000000
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.935000 phi=2.700000
interpolate_ngrams: 60 words in wordslist
interpolate_ngrams: 60 words in wordslist
perplexity over 11412.000000 words is 17.006845
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.452750
real   0m0.019s
user   0m0.012s
sys 0m0.000s
perplexity over 11412.000000 words is 17.006575
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.452414
real   0m0.021s
user   0m0.012s
sys 0m0.000s
perplexity over 11412.000000 words is 17.006336
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.452127
real   0m0.022s
user   0m0.012s
sys 0m0.000s
projected perplexity change from setting alpha=0.690827338145686 is 17.006575->17.0062591109755, reduction of 0.000315889024498972
alpha value on iter 7 is 0.690827338145686
final config is:
d=0.4 tau=0.45 phi=2.0
d=0.3 tau=0.576144521410728 phi=2.69082733814569
d=1.36 tau=0.935 phi=2.7
discounting n-grams.
discount_ngrams: for n-gram order 1, d=0.400000, tau=0.450000 phi=2.000000
discount_ngrams: for n-gram order 2, d=0.300000, tau=0.576145 phi=2.690827
discount_ngrams: for n-gram order 3, d=1.360000, tau=0.935000 phi=2.700000
computing final perplexity
building arpa lm (perplexity computation is in background)
interpolate_ngrams: 60 words in wordslist
interpolate_ngrams: 60 words in wordslist
perplexity over 11412.000000 words is 17.006029
perplexity over 10833.000000 words (excluding 579.000000 oovs) is 17.451754
17.006029
Enter:
local/timit_format_data.sh
The terminal shows:
creating l.fst
done creating l.fst
creating l_disambig.fst
done creating l_disambig.fst
creating g.fst
arpa2fst -
\data\
processing 1-grams
processing 2-grams
connected 0 states without outgoing arcs.
remove_oovs.pl: removed 0 lines.
g.fst created. how stochastic is it ?
fstisstochastic data/lang_test/g.fst
0 -0.0900995
fsttablecompose data/lang_test/l_disambig.fst data/lang_test/g.fst
how stochastic is lg.fst.
fstisstochastic data/lang_test/g.fst
0 -0.0900995
fstisstochastic
fsttablecompose data/lang/l.fst data/lang_test/g.fst
0 -0.0900994
how stochastic is lg_disambig.fst.
fsttablecompose data/lang_test/l_disambig.fst data/lang_test/g.fst
fstisstochastic
0 -0.0900994
first few lines of lexicon fst:
0   1   <eps>   <eps>   0.356674939
0   1   sil   <eps>   1.20397282
1   2   aa  aa  1.20397282
1   1   aa  aa  0.356674939
1   1   ae  ae  0.356674939
1   2   ae  ae  1.20397282
1   1   ah  ah  0.356674939
1   2   ah  ah  1.20397282
1   1   ao  ao  0.356674939
1   2   ao  ao  1.20397282
timit_format_data succeeded.
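The arc weights in the lexicon FST dump above are negative natural logarithms of probabilities; the two recurring values correspond to a silence probability of 0.3 (this is just a check of the arithmetic, not recipe output):
awk 'BEGIN { printf "-ln(0.7) = %.9f   -ln(0.3) = %.9f\n", -log(0.7), -log(0.3) }'
# prints 0.356674944 and 1.203972804, matching the two weights above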
Enter:
mfccdir=mfccs
for test in train test dev ; do
  steps/make_mfcc.sh data/$test exp/make_mfcc/$test $mfccdir 4
done
The terminal shows:
succeeded creating mfcc features for train
succeeded creating mfcc features for test
succeeded creating mfcc features for dev
2. Train the monophone system. In the terminal, enter:
steps/train_mono.sh data/train data/lang exp/mono
The terminal shows:
computing cepstral mean and variance statistics
initializing monophone system.
compiling training graphs
pass 0
pass 1
aligning data
pass 2
aligning data
pass 3
aligning data
pass 4
aligning data
pass 5
aligning data
pass 6
aligning data
pass 7
aligning data
pass 8
aligning data
pass 9
aligning data
pass 10
aligning data
pass 11
pass 12
aligning data
pass 13
pass 14
pass 15
aligning data
pass 16
pass 17
pass 18
pass 19
pass 20
aligning data
pass 21
pass 22
pass 23
pass 24
pass 25
aligning data
pass 26
pass 27
pass 28
pass 29
scripts/mkgraph.sh --mono data/lang_test exp/mono exp/mono/graph   (build the decoding graph)
The terminal shows:
fsttablecompose data/lang_test/l_disambig.fst data/lang_test/g.fst
fstminimizeencoded
fstdeterminizestar --use-log=true
fstisstochastic data/lang_test/tmp/lg.fst
0 -0.0901494
warning: lg not stochastic.
fstcomposecontext --context-size=1 --central-position=0 --read-disambig-syms=data/lang_test/tmp/disambig_phones.list --write-disambig-syms=data/lang_test/tmp/disambig_ilabels_1_0.list data/lang_test/tmp/ilabels_1_0
fstisstochastic data/lang_test/tmp/clg_1_0.fst
0 -0.0901494
warning: clg not stochastic.
make-h-transducer --disambig-syms-out=exp/mono/graph/disambig_tid.list --transition-scale=1.0 data/lang_test/tmp/ilabels_1_0 exp/mono/tree exp/mono/final.mdl
fsttablecompose exp/mono/graph/ha.fst data/lang_test/tmp/clg_1_0.fst
fstdeterminizestar --use-log=true
fstminimizeencoded
fstrmsymbols exp/mono/graph/disambig_tid.list
fstrmepslocal
fstisstochastic exp/mono/graph/hclga.fst
0 -0.0901494
hclga is not stochastic
add-self-loops --self-loop-scale=0.1 --reorder=true exp/mono/final.mdl
3. Decode the test sets. Enter:
for test in dev test ; do steps/decode_deltas.sh exp/mono data/$test data/lang exp/mono/decode_$test & done
The terminal shows:
[1] 16368
[2] 16369
3.1 Compute the results. Enter:
scripts/average_wer.sh exp/mono/decode_*/wer > exp/mono/wer

The terminal shows:
[1]-  Done                  steps/decode_deltas.sh exp/mono data/$test data/lang exp/mono/decode_$test
[2]+  Done                  steps/decode_deltas.sh exp/mono data/$test data/lang exp/mono/decode_$test
4. Obtain alignments from the monophone system
Create the alignments to train other systems, such as ANN-HMM.
Enter:
steps/align_deltas.sh data/train data/lang exp/mono exp/mono_ali_train

The terminal shows:
computing cepstral mean and variance statistics
aligning all training data
done.
steps/align_deltas.sh data/dev data/lang exp/mono exp/mono_ali_dev
Method 2: after editing the corresponding TIMIT path, run run.sh directly.
TIMIT s4 example
This script builds a phone recognizer.
workdir=/home/zhangju/ss4   (pick any path with enough free space as the workdir)
mkdir -p $workdir
cp -r conf local utils steps path.sh $workdir
cd $workdir
. path.sh   (in this file, edit the KALDIROOT environment variable to point to your own Kaldi installation, e.g. KALDIROOT=/home/mayuan/kaldi-trunk; I edited it with nano)
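A minimal sketch of what path.sh typically sets (hypothetical paths; the real file in the s4 recipe lists more tool directories, and KALDIROOT must point to your own checkout):
export KALDIROOT=/home/zhangju/kaldi-trunk
export PATH=$KALDIROOT/src/bin:$KALDIROOT/src/fstbin:$KALDIROOT/src/gmmbin:$KALDIROOT/src/featbin:$KALDIROOT/tools/openfst/bin:$PATH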
local/timit_data_prep.sh --config-dir=$PWD/conf --corpus-dir=/home/zhangju/timit --work-dir=$workdir
(note that $PWD must be uppercase; $pwd would expand to an empty string)