Semi-Supervised Learning

Author: Olivier Chapelle
Publisher: MIT Press
Total Pages: 525
Release: 2010-01-22
Genre: Computers
ISBN: 0262514125

A comprehensive review of an area of machine learning that deals with the use of unlabeled data in classification problems: state-of-the-art algorithms, a taxonomy of the field, applications, benchmark experiments, and directions for future research. In the field of machine learning, semi-supervised learning (SSL) occupies the middle ground between supervised learning (in which all training examples are labeled) and unsupervised learning (in which no labeled data are given). Interest in SSL has increased in recent years, particularly because of application domains in which unlabeled data are plentiful, such as images, text, and bioinformatics. This first comprehensive overview of SSL presents state-of-the-art algorithms, a taxonomy of the field, selected applications, benchmark experiments, and perspectives on ongoing and future research. Semi-Supervised Learning first presents the key assumptions and ideas underlying the field: smoothness, cluster or low-density separation, manifold structure, and transduction. The core of the book is the presentation of SSL methods, organized according to algorithmic strategies. After an examination of generative models, the book describes algorithms that implement the low-density separation assumption, graph-based methods, and algorithms that perform two-step learning. The book then discusses SSL applications and offers guidelines for SSL practitioners by analyzing the results of extensive benchmark experiments. Finally, the book looks at interesting directions for SSL research. The book closes with a discussion of the relationship between semi-supervised learning and transduction.
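As an illustration of the generative-model strategy the book opens with, the following sketch fits a Gaussian mixture to all points, labeled and unlabeled, and then uses a handful of labels to name the components. It is a minimal illustrative sketch assuming scikit-learn, NumPy, and synthetic data, not code from the book.

```python
# Generative-model SSL sketch: unlabeled points shape the density estimate,
# the few labels only assign classes to mixture components. Illustrative only.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Toy two-class problem; keep labels for only 10 of 400 points.
X, y = make_blobs(n_samples=400, centers=2, random_state=0)
rng = np.random.default_rng(0)
labeled = rng.choice(len(X), size=10, replace=False)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)  # fit on ALL points
comp = gmm.predict(X)

# Map each mixture component to the majority class among its labeled members.
comp_to_class = {c: np.bincount(y[labeled][comp[labeled] == c]).argmax()
                 for c in np.unique(comp[labeled])}
y_pred = np.array([comp_to_class.get(c, 0) for c in comp])  # fall back to 0 if a component saw no labels
print("agreement with true blob labels:", (y_pred == y).mean())
```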

Introduction to Semi-Supervised Learning

Author: Xiaojin Zhu
Publisher: Springer Nature
Total Pages: 116
Release: 2022-05-31
Genre: Computers
ISBN: 3031015487

Semi-supervised learning is a learning paradigm concerned with the study of how computers and natural systems such as humans learn in the presence of both labeled and unlabeled data. Traditionally, learning has been studied either in the unsupervised paradigm (e.g., clustering, outlier detection) where all the data are unlabeled, or in the supervised paradigm (e.g., classification, regression) where all the data are labeled. The goal of semi-supervised learning is to understand how combining labeled and unlabeled data may change the learning behavior, and design algorithms that take advantage of such a combination. Semi-supervised learning is of great interest in machine learning and data mining because it can use readily available unlabeled data to improve supervised learning tasks when the labeled data are scarce or expensive. Semi-supervised learning also shows potential as a quantitative tool to understand human category learning, where most of the input is self-evidently unlabeled. In this introductory book, we present some popular semi-supervised learning models, including self-training, mixture models, co-training and multiview learning, graph-based methods, and semi-supervised support vector machines. For each model, we discuss its basic mathematical formulation. The success of semi-supervised learning depends critically on some underlying assumptions. We emphasize the assumptions made by each model and give counterexamples when appropriate to demonstrate the limitations of the different models. In addition, we discuss semi-supervised learning for cognitive psychology. Finally, we give a computational learning theoretic perspective on semi-supervised learning, and we conclude the book with a brief discussion of open questions in the field. Table of Contents: Introduction to Statistical Machine Learning / Overview of Semi-Supervised Learning / Mixture Models and EM / Co-Training / Graph-Based Semi-Supervised Learning / Semi-Supervised Support Vector Machines / Human Semi-Supervised Learning / Theory and Outlook
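To make the self-training model mentioned above concrete, here is a bare-bones self-training loop: train on the labeled pool, pseudo-label the unlabeled points the classifier is most confident about, and repeat. This is an illustrative sketch assuming scikit-learn; the 0.95 confidence threshold and five-round cap are arbitrary choices, not the book's prescription, and the method can fail when confident mistakes reinforce themselves (the kind of counterexample the book discusses).

```python
# Minimal self-training loop on synthetic data (sketch only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
rng = np.random.default_rng(0)
labeled_mask = np.zeros(len(X), dtype=bool)
labeled_mask[rng.choice(len(X), size=20, replace=False)] = True  # 20 labels, rest unlabeled

X_lab, y_lab = X[labeled_mask], y[labeled_mask]
X_unl = X[~labeled_mask]

for _ in range(5):                                   # a few self-training rounds
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    if len(X_unl) == 0:
        break
    proba = clf.predict_proba(X_unl)
    confident = proba.max(axis=1) > 0.95             # pseudo-label only confident points
    if not confident.any():
        break
    X_lab = np.vstack([X_lab, X_unl[confident]])
    y_lab = np.concatenate([y_lab, clf.classes_[proba[confident].argmax(axis=1)]])
    X_unl = X_unl[~confident]

print("final labeled-set size:", len(X_lab))
```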

Semi-supervised Learning

Author: Guoqiang Zhong
Publisher:
Total Pages: 0
Release: 2018
Genre: Supervised learning (Machine learning)
ISBN: 9781536135565

Semi-supervised learning is an important area of machine learning. It deals with problems that involve a lot of unlabeled data and very scarce labeled data. The book focuses on some state-of-the-art research on semi-supervised learning. In the first chapter, Weng, Dornaika and Jin introduce a graph construction algorithm named the constrained data self-representative graph construction (CSRGC). In the second chapter, to reduce the graph construction complexity, Zhang et al. use anchors, a special subset chosen from the original data, to construct the full graph, while injecting randomness into the graphs to improve classification accuracy and deal with the high-dimensionality issue. In the third chapter, Dornaika et al. introduce a kernel version of the Flexible Manifold Embedding (KFME) algorithm. In the fourth chapter, Zhang et al. present an efficient and robust graph-based transductive classification method known as the minimum tree cut (MTC), for large-scale applications. In the fifth chapter, Salazar, Safont and Vergara investigate the performance of semi-supervised learning methods in two-class classification problems with a scarce population of one of the classes. In the sixth chapter, by breaking the identically and independently distributed (i.i.d.) sample assumption, a novel framework called the field support vector machine (F-SVM), with both classification (F-SVC) and regression (F-SVR) variants, is introduced. In the seventh chapter, Gong employs the curriculum learning methodology by investigating the difficulty of classifying every unlabeled example. As a result, an optimized classification sequence is generated during the iterative propagations, and the unlabeled examples are classified from simple to difficult. In the eighth chapter, Tang combines semi-supervised learning with geo-tagged photo streams and concept detection to explore situation recognition. This book is suitable for university students (undergraduate or graduate) in computer science, statistics, electrical engineering, or anyone else who would potentially use machine learning algorithms; professors who research artificial intelligence, pattern recognition, machine learning, data mining and related fields; and engineers who apply machine learning models in their products.
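Most of the chapters above start from some similarity graph over the data. The sketch below builds a plain k-nearest-neighbor graph with Gaussian edge weights as a generic baseline; it is not the CSRGC or anchor-graph construction from the chapters, just a common starting point, and it assumes scikit-learn and SciPy.

```python
# Generic k-NN similarity graph of the kind graph-based SSL methods build on.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import kneighbors_graph

X, _ = make_moons(n_samples=300, noise=0.1, random_state=0)

# Binary k-NN connectivity, symmetrized so the graph is undirected.
A = kneighbors_graph(X, n_neighbors=10, mode="connectivity", include_self=False)
A = A.maximum(A.T)

# Gaussian (heat-kernel) edge weights on the connected pairs.
rows, cols = A.nonzero()
dists = np.linalg.norm(X[rows] - X[cols], axis=1)
sigma = np.median(dists)                  # a simple, commonly used bandwidth choice
weights = np.exp(-dists**2 / (2 * sigma**2))
print("edges:", len(weights), "median neighbor distance:", sigma)
```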

Handbook on Neural Information Processing

Author: Monica Bianchini
Publisher: Springer Science & Business Media
Total Pages: 547
Release: 2013-04-12
Genre: Technology & Engineering
ISBN: 3642366570

This handbook presents some of the most recent topics in neural information processing, covering both theoretical concepts and practical applications. The contributions include: deep architectures; recurrent, recursive, and graph neural networks; cellular neural networks; Bayesian networks; approximation capabilities of neural networks; semi-supervised learning; statistical relational learning; kernel methods for structured data; multiple classifier systems; self-organisation and modal learning; and applications to content-based image retrieval, text mining in large document collections, and bioinformatics. This book is intended particularly for graduate students, researchers, and practitioners willing to deepen their knowledge of more advanced connectionist models and related learning paradigms.

Semisupervised Learning for Computational Linguistics

Author: Steven Abney
Publisher: CRC Press
Total Pages: 322
Release: 2007-09-17
Genre: Business & Economics
ISBN: 1420010808

The rapid advancement in the theoretical understanding of statistical and machine learning methods for semisupervised learning has made it difficult for nonspecialists to keep up to date in the field. Semisupervised Learning for Computational Linguistics offers a broad, accessible treatment of the theory as well as linguistic applications.

Machine Learning and Big Data

Author: Uma N. Dulhare
Publisher: John Wiley & Sons
Total Pages: 544
Release: 2020-09-01
Genre: Computers
ISBN: 1119654742

This book is intended for academic and industrial developers exploring and developing applications in the area of big data and machine learning, including those that are solving technology requirements, evaluation of methodology advances, and algorithm demonstrations. The intent of this book is to provide awareness of the algorithms used for machine learning and big data in the academic and professional community. The 17 chapters are divided into 5 sections: Theoretical Fundamentals; Big Data and Pattern Recognition; Machine Learning: Algorithms & Applications; Machine Learning's Next Frontier; and Hands-On and Case Study. While it dwells on the foundations of machine learning and big data as a part of analytics, it also focuses on contemporary topics for research and development. In this regard, the book covers machine learning algorithms and their modern applications in developing automated systems. Subjects covered in detail include: mathematical foundations of machine learning with various examples. An empirical study of supervised learning algorithms like Naïve Bayes and KNN, and semi-supervised learning algorithms viz. S3VM, graph-based, and multiview. A study of unsupervised learning algorithms like GMM, K-means clustering, the Dirichlet process mixture model, and X-means, and reinforcement learning algorithms such as Q learning, R learning, TD learning, SARSA learning, and so forth. Hands-on machine learning open-source tools, viz. Apache Mahout and H2O. Case studies for readers to analyze the prescribed cases and present their solutions or interpretations, with intrusion detection in MANETs using machine learning. A showcase of novel use cases: implications of electronic governance as well as a pragmatic study of BD/ML technologies for agriculture, healthcare, social media, industry, banking, insurance, and so on.
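As a small taste of the reinforcement-learning material listed above, the following sketch contrasts the tabular Q-learning and SARSA update rules. It is a toy illustration with made-up states and actions, not an example taken from the book.

```python
# Tabular Q-learning (off-policy) vs. SARSA (on-policy) updates, sketch only.
import numpy as np

n_states, n_actions = 5, 2
alpha, gamma = 0.1, 0.9          # learning rate and discount factor
Q = np.zeros((n_states, n_actions))

def q_learning_update(s, a, r, s_next):
    # Bootstraps from the greedy (max-value) action in the next state.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def sarsa_update(s, a, r, s_next, a_next):
    # Bootstraps from the action the current policy actually takes next.
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])

q_learning_update(s=0, a=1, r=1.0, s_next=2)
sarsa_update(s=2, a=0, r=0.0, s_next=3, a_next=1)
print(Q)
```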

Graph-Based Semi-Supervised Learning

Author: Amarnag Subramanya
Publisher: Springer Nature
Total Pages: 111
Release: 2022-05-31
Genre: Computers
ISBN: 3031015711

While labeled data is expensive to prepare, ever-increasing amounts of unlabeled data are becoming widely available. In order to adapt to this phenomenon, several semi-supervised learning (SSL) algorithms, which learn from labeled as well as unlabeled data, have been developed. In a separate line of work, researchers have started to realize that graphs provide a natural way to represent data in a variety of domains. Graph-based SSL algorithms, which bring together these two lines of work, have been shown to outperform the state-of-the-art in many applications in speech processing, computer vision, natural language processing, and other areas of Artificial Intelligence. Recognizing this promising and emerging area of research, this synthesis lecture focuses on graph-based SSL algorithms (e.g., label propagation methods). Our hope is that after reading this book, the reader will walk away with the following: (1) an in-depth knowledge of the current state-of-the-art in graph-based SSL algorithms, and the ability to implement them; (2) the ability to decide on the suitability of graph-based SSL methods for a problem; and (3) familiarity with different applications where graph-based SSL methods have been successfully applied. Table of Contents: Introduction / Graph Construction / Learning and Inference / Scalability / Applications / Future Work / Bibliography / Authors' Biographies / Index
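For a feel of the label-propagation methods this lecture covers, here is a compact iterative propagation sketch: build a k-NN graph, row-normalize it, and repeatedly diffuse the seed labels while clamping the labeled rows. It assumes scikit-learn and NumPy on synthetic two-moons data and omits many refinements (class mass normalization, convergence checks, weighted kernels), so treat it as an illustrative sketch rather than the book's algorithm.

```python
# Iterative label propagation on a k-NN graph (illustrative sketch).
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import kneighbors_graph

X, y = make_moons(n_samples=300, noise=0.1, random_state=0)
labeled = np.zeros(len(X), dtype=bool)
labeled[[0, 150]] = True                       # one seed per class (positions are arbitrary)

W = kneighbors_graph(X, n_neighbors=10, mode="connectivity", include_self=False)
W = W.maximum(W.T).toarray()                   # symmetrize
P = W / W.sum(axis=1, keepdims=True)           # row-normalized transition matrix

F = np.zeros((len(X), 2))
F[labeled, y[labeled]] = 1.0                   # clamp the seed labels
for _ in range(100):
    F = P @ F                                  # diffuse labels over the graph
    F[labeled] = 0.0
    F[labeled, y[labeled]] = 1.0               # re-clamp labeled rows each sweep

pred = F.argmax(axis=1)
print("agreement with true moons labels:", (pred == y).mean())
```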

Semi-Supervised Learning and Domain Adaptation in Natural Language Processing

Author: Anders Søgaard
Publisher: Springer Nature
Total Pages: 93
Release: 2022-05-31
Genre: Computers
ISBN: 3031021495

This book introduces basic supervised learning algorithms applicable to natural language processing (NLP) and shows how the performance of these algorithms can often be improved by exploiting the marginal distribution of large amounts of unlabeled data. One reason for that is data sparsity, i.e., the limited amounts of data we have available in NLP. However, in most real-world NLP applications our labeled data is also heavily biased. This book introduces extensions of supervised learning algorithms to cope with data sparsity and different kinds of sampling bias. This book is intended to be both readable by first-year students and interesting to the expert audience. My intention was to introduce what is necessary to appreciate the major challenges we face in contemporary NLP related to data sparsity and sampling bias, without wasting too much time on details about supervised learning algorithms or particular NLP applications. I use text classification, part-of-speech tagging, and dependency parsing as running examples, and limit myself to a small set of cardinal learning algorithms. I have worried less about theoretical guarantees ("this algorithm never does too badly") than about useful rules of thumb ("in this case this algorithm may perform really well"). In NLP, data is so noisy, biased, and non-stationary that few theoretical guarantees can be established and we are typically left with our gut feelings and a catalogue of crazy ideas. I hope this book will provide its readers with both. Throughout the book we include snippets of Python code and empirical evaluations, when relevant.
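One simple way to exploit the unlabeled marginal distribution in text classification, in the spirit of the book's running examples, is to estimate the TF-IDF vocabulary and document-frequency statistics from all documents and then train on the few labeled ones. The sketch below does that with scikit-learn; the tiny labeled_texts and unlabeled_texts lists are placeholder data, and this is not a snippet from the book.

```python
# Use unlabeled text to estimate feature statistics, then train on scarce labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_texts = ["great movie", "terrible plot", "loved it", "boring and slow"]   # placeholder data
labels = [1, 0, 1, 0]
unlabeled_texts = ["an enjoyable, great film", "slow, boring, terrible pacing"]   # placeholder corpus

vectorizer = TfidfVectorizer()
vectorizer.fit(labeled_texts + unlabeled_texts)   # marginal statistics from ALL text

X_lab = vectorizer.transform(labeled_texts)       # train only on the labeled documents
clf = LogisticRegression().fit(X_lab, labels)
print(clf.predict(vectorizer.transform(["a great, enjoyable film"])))
```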

Artificial Neural Networks – ICANN 2009

Author: Cesare Alippi
Publisher: Springer
Total Pages: 1062
Release: 2009-09-16
Genre: Computers
ISBN: 3642042740

This volume is part of the two-volume proceedings of the 19th International Conference on Artificial Neural Networks (ICANN 2009), which was held in Cyprus during September 14–17, 2009. The ICANN conference is an annual meeting sponsored by the European Neural Network Society (ENNS), in cooperation with the International Neural Network Society (INNS) and the Japanese Neural Network Society (JNNS). ICANN 2009 was technically sponsored by the IEEE Computational Intelligence Society. This series of conferences has been held annually since 1991 in various European countries and covers the field of neurocomputing, learning systems and related areas. Artificial neural networks provide an information-processing structure inspired by biological nervous systems. They consist of a large number of highly interconnected processing elements, with the capability of learning by example. The field of artificial neural networks has evolved significantly in the last two decades, with active participation from diverse fields, such as engineering, computer science, mathematics, artificial intelligence, system theory, biology, operations research, and neuroscience. Artificial neural networks have been widely applied for pattern recognition, control, optimization, image processing, classification, signal processing, etc.