Machine Learning: A Probabilistic Perspective - Comprehensive Guide to Adaptive Computation & AI Algorithms | Perfect for Data Scientists & ML Engineers
$51.41 (list price $93.49, 45% off)
Free shipping on all orders over $50
7-15 days international
30-day free returns
Secure checkout
Guaranteed safe checkout
DESCRIPTION
A comprehensive introduction to machine learning that uses probabilistic models and inference as a unifying approach.

Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach.

The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra, as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package, PMTK (probabilistic modeling toolkit), that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
REVIEWS
Verified Buyer - 4.5 out of 5
I love this book. It's cutting-edge and more comprehensive than anything else on the market, and it has the right mathematical level (for me, at any rate; I find this to be about right, and Hastie et al. to be a bit hard). It makes a great desk reference for day-to-day work.

Speaking of comprehensiveness, did you want to know about a symmetric version of KL divergence? It's there in 2.8.2. How about dynamic latent Dirichlet allocation? See 27.4.2. Examples from cognitive science? See his refs to Tenenbaum and Xu's work. How about a reference to the fact that consistent preferences allow one to represent them as utilities? See 5.7.3.2. The discussion on utilities is brief, but I appreciate this sort of connection-making between disciplines (ML and microeconomics, in this case), even if cursory. Please note: I don't mean to suggest that his comprehensiveness is merely a question of making short references, rather that over and above all the deep and detailed material he presents, he still finds the time/space to add these little cross-connections that, to me, make the book a lot more valuable.

Minor complaints: I wonder why he didn't give much attention to newer neural network architectures (e.g. LSTM). It also seems a bit odd that he appears to treat Bayesianism as the "one true way", and I say this as someone who loves the Bayesian philosophy. These are minor quibbles.
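For readers curious about the symmetric KL divergence the reviewer points to (section 2.8.2), here is a minimal Python sketch of one common symmetrization for discrete distributions, namely the sum of the two directed divergences. This is an illustration under that assumption, not necessarily the exact form used in the book.

import numpy as np

def kl_divergence(p, q):
    # Directed KL divergence KL(p || q) for discrete distributions.
    # p and q are probability vectors over the same support; terms where
    # p is zero contribute nothing, and q is assumed positive wherever p is.
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def symmetric_kl(p, q):
    # Symmetrized KL divergence: KL(p || q) + KL(q || p).
    return kl_divergence(p, q) + kl_divergence(q, p)

# Example: two distributions over three outcomes.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(symmetric_kl(p, q))  # small positive number; zero only when p equals q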