Topics: Artificial intelligence / Lossless data compression / Feature selection / Lossy compression / Kullback–Leibler divergence / K-nearest neighbor algorithm / Statistics / Data compression / Machine learning
Date: 2010-07-07 01:06:51

Source URL: dml.utdallas.edu

File Size: 264.41 KB

Similar Documents

Efficient Sparse Group Feature Selection via Nonconvex Optimization

DocID: 1vj7e

Schema-summarization in Linked-Data-based feature selection for recommender systems. Azzurra Ragone, Paolo Tomeo, Corrado Magarelli, Tommaso Di Noia, Matteo Palmonari, Andrea Maurino, Eugenio Di Sciascio

DocID: 1uPCf

Stat 928: Statistical Learning Theory, Lecture 8: Feature Selection in the Non-Orthogonal Case. Instructor: Sham Kakade

DocID: 1uzPH

WISP 2007 Special Issue of IEEE Transactions on Instrumentation and Measurement: A Feature Selection Algorithm for the Regularization of Neuron Models

DocID: 1uxYY

Job ter Burg, ACE, NCE, film editor. FEATURE FILMS (selection)

DocID: 1usIg