Yeah, I'm finding that there's little book-format info on deep neural networks as well. Similarly for belief networks. Gotta hunt down some journals, and wield some Google-fu.
The most promising approaches for neural nets these days involve learning generative models with Restricted Boltzmann Machines (RBMs). Geoffrey Hinton at the University of Toronto has some seminal papers on these, like http://www.cs.toronto.edu/~hinton/absps/ncfast.pdf, and his page at http://www.cs.toronto.edu/~hinton/ has excellent videos of digit generation / classification.
As a note, this approach outperforms the other standard methods (even without task-specific tweaks), whereas plain neural nets used to get beaten by SVMs and the like.
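To make the RBM idea concrete, here's a toy sketch of the core training loop — one-step contrastive divergence (CD-1), the approximation Hinton's papers popularized. Layer sizes, the learning rate, and all names here are just illustrative, not anything from his code:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Toy binary-unit RBM trained with 1-step contrastive divergence."""

    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases

    def cd1_step(self, v0, lr=0.1):
        # Positive phase: hidden probabilities given the data
        p_h0 = sigmoid(v0 @ self.W + self.b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hiddens
        # Negative phase: one Gibbs step back down and up
        p_v1 = sigmoid(h0 @ self.W.T + self.b_v)
        p_h1 = sigmoid(p_v1 @ self.W + self.b_h)
        # CD-1 gradient approximation: <v h>_data - <v h>_recon
        batch = v0.shape[0]
        self.W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
        self.b_v += lr * (v0 - p_v1).mean(axis=0)
        self.b_h += lr * (p_h0 - p_h1).mean(axis=0)
        return np.mean((v0 - p_v1) ** 2)  # reconstruction error, for monitoring
```

Stacking these — train one RBM on the data, then another on its hidden activations, and so on — is what gives you a deep belief net.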
Actually, the best recent results in this area come from discriminatively trained stacked auto-encoders optimized with second-order algorithms (the code for which, as of Sept 2010, is still private to the group at the University of Toronto), as described in James Martens's paper at this year's ICML: http://www.cs.toronto.edu/~jmartens/docs/Deep_HessianFree.pd...
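For contrast with the RBM route, here's the stacked-auto-encoder idea in miniature. This is emphatically not Martens's method — it uses plain gradient descent where his paper uses Hessian-free (second-order) optimization — just a sketch of greedy layer-wise stacking with tied weights, with every name and hyperparameter made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden, lr=0.5, steps=500):
    """Train one tied-weight sigmoid auto-encoder layer with plain
    gradient descent (a first-order stand-in for the real optimizer)."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, size=(n_in, n_hidden))
    b_h = np.zeros(n_hidden)
    b_v = np.zeros(n_in)
    for _ in range(steps):
        H = sigmoid(X @ W + b_h)        # encode
        R = sigmoid(H @ W.T + b_v)      # decode with the transposed weights
        d_r = (R - X) * R * (1 - R)     # squared-error delta at the output
        d_h = (d_r @ W) * H * (1 - H)   # backprop through the hidden layer
        gW = X.T @ d_h + d_r.T @ H      # tied weights: encoder + decoder terms
        W -= lr * gW / len(X)
        b_h -= lr * d_h.mean(axis=0)
        b_v -= lr * d_r.mean(axis=0)
    return W, b_h

def stack_pretrain(X, layer_sizes):
    """Greedy layer-wise pretraining: each auto-encoder learns to
    reconstruct the codes produced by the layer below it."""
    layers, H = [], X
    for n_hidden in layer_sizes:
        W, b_h = train_autoencoder(H, n_hidden)
        layers.append((W, b_h))
        H = sigmoid(H @ W + b_h)  # codes become the next layer's input
    return layers, H
```

After pretraining like this, the whole stack gets fine-tuned discriminatively on labels — that fine-tuning is where a second-order optimizer like Hessian-free earns its keep.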