Top 10 Most Cited Machine Learning Articles


Here we present our list of the Top 10 Most Cited Machine Learning Articles, based on data in the CiteSeer database as of 19 March 2015.

1. Statistical Learning Theory

Author: V Vapnik in 1998

Citations: 9898

Synopsis: Statistical learning theory was introduced in the late 1960s. Until the 1990s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the mid-1990s, new types of learning algorithms (called support vector machines) based on the theory were proposed. This made statistical learning theory not only a tool for theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory, covering both its theoretical and algorithmic aspects. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization that are more general than those discussed in classical statistical paradigms, and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems.
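
To make the "conditions for generalization" concrete, here is one standard form of the VC generalization bound (a textbook formulation, not quoted from the article): with probability at least 1 - η over a training sample of size ℓ, every function in a class of VC dimension h satisfies

\[
R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha) + \sqrt{\frac{h\left(\ln\frac{2\ell}{h} + 1\right) - \ln\frac{\eta}{4}}{\ell}},
\]

where R is the expected risk and R_emp the empirical risk. Because the bound depends on the capacity h of the function class rather than on the dimensionality of the input, it is more general than classical parametric analyses, which is exactly the point the overview develops.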

2. A tutorial on hidden Markov models and selected applications in speech recognition

Author: L R Rabiner in 1989

Citations: 4585

Synopsis: Although initially introduced and studied in the late 1960s and early 1970s, statistical methods of Markov source or hidden Markov modeling have become increasingly popular in the last several years. There are two strong reasons for this. First, the models are very rich in mathematical structure and hence can form the theoretical basis for use in a wide range of applications. Second, when applied properly, the models work very well in practice for several important applications. In this paper we attempt to carefully and methodically review the theoretical aspects of this type of statistical modeling and show how they have been applied to selected problems in machine recognition of speech.
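
To see the kind of computation an HMM supports, here is a minimal sketch of the forward algorithm, the standard dynamic-programming procedure for evaluating the likelihood of an observation sequence; the toy transition, emission, and initial-state matrices are illustrative assumptions, not taken from the paper.

import numpy as np

# Toy two-state, two-symbol HMM (illustrative values only).
A  = np.array([[0.7, 0.3],   # state-transition probabilities a_ij
               [0.4, 0.6]])
B  = np.array([[0.9, 0.1],   # emission probabilities b_j(o)
               [0.2, 0.8]])
pi = np.array([0.6, 0.4])    # initial state distribution

def forward(obs):
    """Return P(obs | A, B, pi) by summing over all hidden state paths."""
    alpha = pi * B[:, obs[0]]          # initialization
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # induction step
    return alpha.sum()                 # termination

print(forward([0, 1, 0]))  # likelihood of observing symbols 0, 1, 0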

3. Reinforcement Learning: An Introduction

Authors: R Sutton, A Barto in 1998

Citations: 4148

Synopsis: In this article, we try to give a basic intuitive sense of what reinforcement learning is and how it differs from and relates to other fields, such as supervised learning and neural networks, genetic algorithms and artificial life, and control theory. Intuitively, RL is trial and error (variation and selection, search) plus learning (association, memory). We argue that RL is the only field that seriously addresses the special features of the problem of learning from interaction to achieve long-term goals.
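
To make "trial and error plus learning" concrete, here is a minimal tabular Q-learning sketch; the toy chain environment and hyperparameters are illustrative assumptions, not drawn from the book.

import random

N_STATES = 5                        # toy chain of states 0..4
alpha, gamma, eps = 0.1, 0.9, 0.1   # step size, discount, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[s][a]; actions: 0=left, 1=right

def step(s, a):
    """Deterministic dynamics: reward 1 for stepping off the right end."""
    if a == 1 and s == N_STATES - 1:
        return 0, 1.0               # reset to the start with a reward
    return max(0, min(N_STATES - 1, s + (1 if a == 1 else -1))), 0.0

s = 0
for _ in range(10000):
    # epsilon-greedy action selection: mostly exploit, sometimes explore
    a = random.randrange(2) if random.random() < eps else int(Q[s][1] > Q[s][0])
    s2, r = step(s, a)
    # Q-learning update: move Q(s, a) toward the bootstrapped target
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
    s = s2

print(Q)  # values grow toward the rewarding right end of the chain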

4. LIBSVM: a library for support vector machines

Authors: C-C Chang, C-J Lin

Citations: 3829

Synopsis: LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all the implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multi-class classification, probability estimates, and parameter selection are discussed in detail.
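
The quickest way to try LIBSVM today is often through a wrapper; for instance, scikit-learn's SVC class is built on LIBSVM. A minimal usage sketch, with an illustrative dataset and parameter choices:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC  # scikit-learn's SVC wraps LIBSVM

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale", probability=True)
clf.fit(X_train, y_train)             # solves the SVM optimization problem
print(clf.score(X_test, y_test))      # multi-class accuracy (one-vs-one)
print(clf.predict_proba(X_test[:3]))  # probability estimates via Platt scaling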

5. Induction of decision trees

Author: J R Quinlan in 1986

Citations: 3634

Synopsis: The technology for building knowledge-based systems by inductive inference from examples has been demonstrated successfully in several practical applications. This paper summarizes an approach to synthesizing decision trees that has been used in a variety of systems, and it describes one such system, ID3, in detail. Results from recent studies show ways in which the methodology can be modified to deal with information that is noisy and/or incomplete. A reported shortcoming of the basic algorithm is discussed, and two means of overcoming it are compared. The paper concludes with illustrations of current research directions.
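
The splitting criterion at the heart of ID3 is information gain: at each node, choose the attribute whose test most reduces the entropy of the class labels. A minimal sketch, using assumed toy data:

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy before the split minus the weighted entropy after it."""
    splits = {}
    for row, label in zip(rows, labels):
        splits.setdefault(row[attr], []).append(label)
    remainder = sum(len(s) / len(labels) * entropy(s) for s in splits.values())
    return entropy(labels) - remainder

# Toy training examples (illustrative, not from the paper).
rows = [{"outlook": "sunny", "windy": True},
        {"outlook": "rain",  "windy": True},
        {"outlook": "rain",  "windy": False},
        {"outlook": "sunny", "windy": False}]
labels = ["no", "no", "yes", "yes"]

# ID3 splits on the attribute with the larger gain:
print(information_gain(rows, labels, "windy"))    # 1.0: a perfect split
print(information_gain(rows, labels, "outlook"))  # 0.0: uninformative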

6. Bagging predictors

Author: L Breiman in 1996

Citations: 2751

Synopsis: Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy. The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy.
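
A minimal sketch of the procedure as described above: fit each model on a bootstrap replicate of the learning set, then aggregate by plurality vote. The base learner and replicate count are illustrative choices.

import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, base=DecisionTreeClassifier(), n_models=25, seed=0):
    """Train n_models copies of the base learner on bootstrap replicates."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
        models.append(clone(base).fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """Plurality vote over the replicates (assumes integer class labels)."""
    votes = np.stack([m.predict(X) for m in models])  # shape (n_models, n)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

For a numerical outcome one would average the replicates' predictions instead of voting, mirroring the paper's description.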

7. A tutorial on support vector machines for pattern recognition

Author: C J C Burges in 1998

Citations: 2486

Synopsis: The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how support vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
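
For reference, the two kernel families for which the tutorial computes the VC dimension are, in one common parameterization, the homogeneous polynomial kernel of degree p and the Gaussian radial basis function kernel of width σ:

\[
K_{\mathrm{poly}}(\mathbf{x}, \mathbf{x}') = (\mathbf{x} \cdot \mathbf{x}')^{p},
\qquad
K_{\mathrm{RBF}}(\mathbf{x}, \mathbf{x}') = \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{x}' \rVert^{2}}{2\sigma^{2}}\right).
\]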

8. Support-vector networks

Authors: C Cortes, V Vapnik in 1995

Citations: 2375

Synopsis: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimensional feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure the high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data.
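
In the now-standard formulation (stated here for concreteness rather than quoted from the paper), the decision surface is linear in feature space but, through a kernel K, nonlinear in the inputs:

\[
f(\mathbf{x}) = \operatorname{sign}\!\left(\sum_{i=1}^{\ell} \alpha_i\, y_i\, K(\mathbf{x}_i, \mathbf{x}) + b\right),
\]

where the coefficients α_i come from a margin-maximization problem over the ℓ training pairs (x_i, y_i); bounding each α_i by a constant C is what extends the method to non-separable training data.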

9. Learning with kernels

Authors: B Schölkopf, A J Smola in 2002

Citations: 2234

Synopsis: Kernel-based algorithms such as support vector machines have achieved considerable success in various problems in the batch setting, where all of the training data is available in advance. Support vector machines combine the so-called kernel trick with the large-margin idea. There has been little use of these methods in an online setting suitable for real-time applications. In this paper we consider online learning in a Reproducing Kernel Hilbert Space. By considering classical stochastic gradient descent within a feature space, together with some straightforward tricks, we build simple and computationally efficient algorithms for a wide range of problems such as classification, regression, and novelty detection. In addition to allowing the exploitation of the kernel trick in an online setting, we examine the value of large margins for classification in the online setting with a drifting target. We derive worst-case loss bounds, and moreover we show the convergence of the hypothesis to the minimiser of the regularised risk functional. We present some experimental results that support the theory and illustrate the power of the new algorithms for online novelty detection.
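
Here is a minimal sketch of stochastic gradient descent in a reproducing kernel Hilbert space for online classification, in the spirit the synopsis describes; it is an illustrative reconstruction with an assumed Gaussian kernel and hinge loss, not the authors' code.

import numpy as np

def rbf(x, z, gamma=1.0):
    """Gaussian kernel between two input vectors."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

class OnlineKernelClassifier:
    """Hypothesis kept as a kernel expansion f(x) = sum_i c_i k(x_i, x)."""

    def __init__(self, eta=0.1, lam=0.01):
        self.eta, self.lam = eta, lam   # learning rate, regularization
        self.centers, self.coefs = [], []

    def decision(self, x):
        return sum(c * rbf(z, x) for z, c in zip(self.centers, self.coefs))

    def update(self, x, y):
        """One SGD step on the regularized hinge loss, with y in {-1, +1}."""
        margin = y * self.decision(x)
        # Gradient of the regularizer shrinks the old coefficients ...
        self.coefs = [c * (1 - self.eta * self.lam) for c in self.coefs]
        # ... and a positive hinge loss adds a new expansion term.
        if margin < 1:
            self.centers.append(x)
            self.coefs.append(self.eta * y)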

10. Text Categorization with Support Vector Machines: Learning with Many Relevant Features

Author: T Joachims in 1998

Citations: 1872

Synopsis: This paper explores the use of Support Vector Machines (SVMs) for learning text classifiers from examples. It analyzes the particular properties of learning with text data and identifies why SVMs are appropriate for this task. Empirical results support the theoretical findings: SVMs achieve substantial improvements over the currently best-performing methods, and they behave robustly over a variety of different learning tasks. Furthermore, they are fully automatic, eliminating the need for manual parameter tuning.
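
A minimal sketch of the recipe the paper motivates: represent documents as sparse TF-IDF vectors (many relevant features) and train a linear SVM. The dataset and parameter choices below are illustrative, using scikit-learn:

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Two illustrative categories; fetch_20newsgroups downloads the data on first use.
train = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
test  = fetch_20newsgroups(subset="test",  categories=["sci.space", "rec.autos"])

clf = make_pipeline(TfidfVectorizer(), LinearSVC(C=1.0))
clf.fit(train.data, train.target)
print(clf.score(test.data, test.target))  # accuracy on held-out documents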

Conclusion

That completes our list of the Top 10 Most Cited Machine Learning Articles, compiled from the CiteSeer database as of 19 March 2015. Citation counts shift over time, so treat the ranking as a snapshot rather than a fixed canon.
