An 8-bit machine? Really?? (Score:4, Interesting)
by Anonymous Coward on 2016-05-20 22:51 (#3016253)
The headline of the linked TechCrunch article [techcrunch.com] says something like “...and what's more, it's an 8-bit machine,” but is that really true?
Even looking at the original article [techcrunch.com], all it says is

    Google also manages to speed up the machine learning algorithms with the TPUs because it doesn’t need the high-precision of standard CPUs and GPUs. Instead of 32-bit precision, the algorithms happily run with a reduced precision of 8 bits, so every transaction needs fewer transistors.

and I can't find anything there that actually calls it an 8-bit machine.
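For concreteness, here is a minimal sketch of what “8 bits instead of 32” usually means in this context: linear quantization of float32 values down to int8. This is just an illustration in NumPy, not anything from Google; the quantize_int8/dequantize helpers and the symmetric scaling scheme are made up for the example.

    # Illustration only: symmetric linear int8 quantization.
    import numpy as np

    def quantize_int8(x):
        # Scale so the largest magnitude maps to 127, then round.
        scale = np.max(np.abs(x)) / 127.0
        q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(4, 4).astype(np.float32)    # stand-in for model weights
    q, scale = quantize_int8(w)
    print(np.max(np.abs(w - dequantize(q, scale))))  # small rounding error

Each value then occupies 8 bits rather than 32, which is where the “fewer transistors per operation” claim comes from.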
Re: (Score:1)
http://www.eetimes.com/document.asp?doc_id=1329715 [eetimes.com] says:
The TPUs are “likely optimized for a specific math precision, possibly 16-bit floating point or even lower precision integer math,” Krewell said.
Loose translation: the TPU appears to be specially optimized for 16-bit floating-point precision, or for integer arithmetic at even lower precision, Krewell said.
So I'd say the translation is correct.
It doesn't mean 8-bit, though.
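To put numbers on the “16-bit floating point or even lower precision integer” distinction, here is a quick NumPy comparison (my own sketch, nothing TPU-specific) of the rounding error when the same float32 data is stored as float16 versus int8:

    import numpy as np

    x = np.random.randn(1000).astype(np.float32)

    # 16-bit floating point round trip
    err_fp16 = np.mean(np.abs(x - x.astype(np.float16).astype(np.float32)))

    # 8-bit integer with one shared linear scale
    scale = np.max(np.abs(x)) / 127.0
    x8 = np.round(x / scale).astype(np.int8)
    err_int8 = np.mean(np.abs(x - x8.astype(np.float32) * scale))

    print(err_fp16, err_int8)  # int8 loses more precision, as expected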
Re: An 8-bit machine? Really?? (Score:0)
The speculation is that it isn't even meant for training machine learning models.
Krewell, for context, is an outside analyst.
“It seems the TPU is focused on the inference part of CNN and not the training side,” Krewell said. “Inference only requires less complex math and it appears Google has optimized that part of the equation.
“On the training side, the requirements include very large data sets, which the TPU may not be optimized for. In this regard, Nvidia's Pascal/P100 may still be an appealing product for Google,” he added.
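A rough way to see the inference-versus-training point in code (again an illustrative sketch, not the TPU's actual design): a quantized forward pass can run on int8 operands with int32 accumulation, whereas training updates of the form w -= lr * grad involve increments far smaller than one int8 step, so they would vanish at that precision.

    import numpy as np

    def quantize_int8(x):
        # Symmetric int8 quantization, as in the sketch above.
        scale = np.max(np.abs(x)) / 127.0
        return np.clip(np.round(x / scale), -128, 127).astype(np.int8), scale

    def int8_matmul(aq, wq, sa, sw):
        # int8 x int8 products accumulated in int32, then rescaled to float32.
        acc = aq.astype(np.int32) @ wq.astype(np.int32)
        return acc.astype(np.float32) * (sa * sw)

    a = np.random.randn(2, 8).astype(np.float32)   # activations
    w = np.random.randn(8, 4).astype(np.float32)   # weights
    aq, sa = quantize_int8(a)
    wq, sw = quantize_int8(w)
    # The quantized result roughly matches the full-precision one.
    print(np.max(np.abs(a @ w - int8_matmul(aq, wq, sa, sw))))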