skywire writes: We’ve all followed the recent story of AlphaGo beating a top Go master. Now IBM researchers Tayfun Gokmen and Yurii Vlasov have described what could be a game changer for machine learning: an array of resistive processing units (RPUs) that would use stochastic techniques to dramatically accelerate the backpropagation algorithm, speeding up neural network training by a factor of 30,000. They argue that such an array would be reliable, low-power, and buildable with current CMOS fabrication technology.
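For context on what the RPU array would accelerate: the core of backpropagation training is a rank-one outer-product weight update, which a resistive crossbar could apply to all weights in parallel using stochastic pulse coincidences instead of explicit multiply-accumulates. Below is a minimal digital sketch of that update for a single layer (this is illustrative NumPy, not IBM's design; the function name and learning rate are assumptions).

```python
import numpy as np

def backprop_update(W, x, delta, lr=0.1):
    """One SGD step for a single layer: W <- W - lr * (delta outer x).

    W      -- weight matrix, shape (n_out, n_in)
    x      -- layer input from the forward pass, shape (n_in,)
    delta  -- backpropagated error signal, shape (n_out,)

    On a conventional chip this outer product costs n_out * n_in
    multiplications per sample; the proposed RPU crossbar would update
    every weight cell simultaneously, in place, in analog.
    """
    return W - lr * np.outer(delta, x)

# Toy usage: one layer, one sample.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
delta = rng.standard_normal(3)
W_new = backprop_update(W, x, delta)
print(W_new.shape)  # (3, 4)
```

The claimed speedup comes from replacing this per-element arithmetic with physics: conductance changes at each crosspoint implement the multiply-and-accumulate for free.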
“Even Google’s AlphaGo still needed thousands of chips to achieve its level of intelligence,” adds Tom’s Hardware. “IBM researchers are now working to power that level of intelligence with a single chip, which means thousands of them put together could lead to even more breakthroughs in AI capabilities in the future.”
Read more of this story at Slashdot.