Machine Learning: An Algorithmic Perspective (2nd Edition)
There have been some interesting developments in machine learning over the past four years, since the 1st edition of this book came out. One is the rise of Deep Belief Networks as an area of real research interest (and business interest, as large internet-based companies look to snap up every small company working in the area), while another is the continuing work on statistical interpretations of machine learning algorithms. This second one is very good for the field as an area of research, but it does mean that computer science students, whose statistical background can be rather lacking, find it hard to get started in an area that they are sure should be of interest to them. The hope is that this book, focussing on the algorithms of machine learning as it does, will help such students get a handle on the ideas, and that it will start them on a journey towards mastery of the relevant mathematics and statistics as well as the necessary programming and experimentation.

In addition, the libraries available for the Python language have continued to develop, so that there are now many more facilities available for the programmer. This has enabled me to provide a simple implementation of the Support Vector Machine that can be used for experiments, and to simplify the code in a few other places. All of the code that was used to create the examples in the book is available at http://stephenmonika.net/ (in the 'Book' tab), and use and experimentation with any of this code, as part of any study on machine learning, is strongly encouraged.

Some of the changes to the book include:
• the addition of two new chapters on two of those new areas: Deep Belief Networks (Chapter 17) and Gaussian Processes (Chapter 18).
• a reordering of the chapters, and some of the material within the chapters, to make a more natural flow.
• the reworking of the Support Vector Machine material so that there is running code and suggestions of experiments to be performed.
• the addition of Random Forests (as Section 13.3), the Perceptron convergence theorem (Section 3.4.1), a proper consideration of accuracy methods (Section 2.2.4), conjugate gradient optimisation for the MLP (Section 9.3.2), and more on the Kalman filter and particle filter in Chapter 16.
• improved code, including better use of naming conventions in Python.
• various improvements in the clarity of explanation and detail throughout the book.