CS-644B: Pattern Recognition

"We don't see things as they are. We see them as we are." - Anaïs Nin

Detailed Course Contents

Introduction to Pattern Recognition via Character Recognition

  1. Transducers
  2. Preprocessing
  3. Feature extraction (feature-space representation)
  4. Classification (decision regions)
  5. Grids (square, triangular, hexagonal)
  6. Connectivity
  7. Contour tracing (square & Moore neighborhood tracing)
  8. M.I.T. reading machine for the blind
  9. Hysteresis smoothing (digital filtering)
  10. Types of input to pattern recognition programs
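
The contour-tracing topic (item 7) is easy to make concrete with the simplest tracer, square tracing. Below is a minimal sketch, assuming a binary image stored as a list of lists (1 = object pixel) containing a single 4-connected blob; the function name is illustrative.

```python
def square_trace(img):
    """Square tracing (the 'turtle' tracer): turn left on a black pixel,
    turn right on a white one, and stop on returning to the start pixel.
    Pixels outside the image are treated as white (background)."""
    h, w = len(img), len(img[0])
    # Start pixel: first black pixel in raster-scan order.
    start = next((y, x) for y in range(h) for x in range(w) if img[y][x])
    contour = {start}
    y, x = start
    dy, dx = -1, 0          # head up: a left turn from the raster direction
    y, x = y + dy, x + dx
    while (y, x) != start:
        if 0 <= y < h and 0 <= x < w and img[y][x]:
            contour.add((y, x))
            dy, dx = -dx, dy    # black: turn left
        else:
            dy, dx = dx, -dy    # white: turn right
        y, x = y + dy, x + dx
    return contour
```

Moore-neighborhood tracing replaces the blind left/right turns with a scan of all eight neighbors, which lets it follow 8-connected contours that square tracing can miss.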

Spatial Smoothing

  1. Regularization
  2. Logical smoothing (salt-and-pepper noise)
  3. Local averaging
  4. Median filtering
  5. Polygonal approximation
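
Median filtering (item 4) removes salt-and-pepper noise while preserving edges better than local averaging does. A minimal sketch, assuming a grayscale image as a list of lists, with out-of-range coordinates clamped to the border (replicated borders):

```python
def median_filter(img, k=3):
    """Replace each pixel by the median of its k x k neighborhood."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the window, clamping coordinates at the image border.
            window = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            window.sort()
            out[y][x] = window[len(window) // 2]  # median of k*k (odd) values
    return out
```

An isolated impulse ("salt" pixel) never survives, since it can be at most one of the k*k sorted values.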

Spatial Differentiation

  1. Sobel operator
  2. Roberts cross operator
  3. Laplacian
  4. Unsharp masking
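
The Sobel operator (item 1) estimates the image gradient with two 3x3 convolution kernels. A minimal sketch computing the gradient magnitude at interior pixels only (border handling is omitted for brevity):

```python
def sobel(img):
    """Gradient magnitude sqrt(gx^2 + gy^2) via the Sobel kernels."""
    KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal derivative
    KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical derivative
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(KX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(KY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```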

Spatial Moments

  1. Moments of distributions
  2. Moments of area & perimeter
  3. Moments for feature extraction
  4. Moments for preprocessing
  5. Moments as predictors of discrimination performance
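
The raw moments m_pq = sum over pixels of x^p * y^q * f(x, y), and the central moments mu_pq taken about the centroid, are the basic quantities behind all five items above. A minimal sketch for binary or grayscale images as lists of lists:

```python
def raw_moment(img, p, q):
    """Raw moment m_pq; m_00 is the total mass (area for binary images)."""
    return sum(x ** p * y ** q * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def central_moment(img, p, q):
    """Central moment mu_pq, computed about the centroid (translation-invariant)."""
    m00 = raw_moment(img, 0, 0)
    xc = raw_moment(img, 1, 0) / m00   # centroid x
    yc = raw_moment(img, 0, 1) / m00   # centroid y
    return sum((x - xc) ** p * (y - yc) ** q * v
               for y, row in enumerate(img) for x, v in enumerate(row))
```

By construction mu_10 = mu_01 = 0, and mu_20, mu_02, mu_11 describe the spread and orientation of the pattern.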

Medial Axis Transformations

  1. Distance between sets
  2. Medial Axis (prairie-fire transformation)
  3. Skeletonization
  4. Hilditch's algorithm
  5. Rosenfeld's algorithm
  6. Minkowski metrics
  7. Distance transforms
  8. Skeleton clean-up via distance transforms
  9. Medial axes via distance transforms
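
The distance transform (item 7) can be computed with the classic two-pass sequential algorithm: one raster-order pass propagating distances from the top and left, one reverse pass from the bottom and right. A minimal sketch for the city-block (L1) metric on a binary image as a list of lists:

```python
def distance_transform(img):
    """Two-pass city-block distance transform: each object pixel (img == 1)
    gets its L1 distance to the nearest background pixel (img == 0)."""
    h, w = len(img), len(img[0])
    INF = h + w  # larger than any achievable in-image distance
    d = [[0 if img[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    for y in range(h):                  # forward pass: top/left neighbors
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):      # backward pass: bottom/right neighbors
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d
```

Local maxima of the result approximate the medial axis, which is one way the items on skeleton clean-up and medial axes via distance transforms fit together.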

Topological Feature Extraction

  1. Convex hulls, concavities and enclosures
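
As one concrete example of this topic, the convex hull of a planar point set can be computed with Andrew's monotone-chain algorithm in O(n log n). A sketch assuming 2-D points as (x, y) tuples:

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def half(seq):
        # Build one hull chain, popping points that make a non-left turn.
        chain = []
        for p in seq:
            while len(chain) >= 2:
                (ox, oy), (ax, ay) = chain[-2], chain[-1]
                cross = (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox)
                if cross > 0:   # strict left turn: keep the chain
                    break
                chain.pop()
            chain.append(p)
        return chain

    lower = half(pts)
    upper = half(pts[::-1])
    return lower[:-1] + upper[:-1]  # drop duplicated endpoints
```

Concavities are then the regions between the hull and the pattern itself, which is how the hull serves as a reference shape for feature extraction.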

Processing Line Drawings

  1. Square, circular, and grid-intersect quantization
  2. Probability of obtaining diagonal elements
  3. Geometric probability (Bertrand's paradox)
  4. Difference encoding & chain correlation functions
  5. Minkowski metric quantization
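
Difference encoding (item 4) takes the first differences of a Freeman chain code, making the description invariant to rotations by multiples of 45 degrees. A sketch assuming an ordered list of 8-connected contour points in (row, column) coordinates:

```python
def chain_code(points):
    """8-direction Freeman chain code of an ordered contour point list
    (0 = east, counterclockwise; image rows grow downward)."""
    DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
            (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}
    return [DIRS[(y2 - y1, x2 - x1)]
            for (y1, x1), (y2, x2) in zip(points, points[1:])]

def difference_code(code):
    """First difference of a chain code, mod 8.  For a closed curve the
    wrap-around difference (first minus last) would also be included."""
    return [(b - a) % 8 for a, b in zip(code, code[1:])]
```

A square traced clockwise yields the constant difference code 6 (three right turns), regardless of which side the trace starts on.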

Detection of Structure in Noisy Pictures and Dot Patterns

  1. Point-to-curve transformations (Hough transform)
  2. Line and circle detection
  3. Hypothesis testing approach
  4. Maximum-entropy quantization
  5. Proximity graphs and perception
  6. Triangulations and Voronoi diagrams
  7. The shape of a set of points
  8. Relative neighbourhood graphs
  9. Sphere-of-influence graphs
  10. Alpha hulls & Beta skeletons
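
The point-to-curve idea behind the Hough transform (items 1-2): each point votes for every line rho = x cos(theta) + y sin(theta) passing through it, so collinear points pile up in one accumulator cell. A minimal sketch with a dictionary accumulator; the bin sizes are illustrative:

```python
import math

def hough_lines(points, n_theta=180, rho_step=1.0):
    """Vote in (theta, rho) space; return the peak cell and the accumulator.
    Cells are keyed by (theta index, quantized rho)."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / rho_step)
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return max(acc, key=acc.get), acc
```

Circle detection works the same way with a three-parameter accumulator (center and radius) instead of (theta, rho).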

Neural Networks and Bayesian Decision Theory

  1. Formal neurons, linear machines & perceptrons
  2. Continuous and discrete measurements
  3. Minimum risk classification
  4. Minimum error classification
  5. Discriminant functions
  6. The multivariate Gaussian probability density function
  7. Mahalanobis distance classifiers
  8. Parametric decision rules
  9. Independence and the discrete case
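
With equal class covariance matrices and equal priors, the minimum-error Gaussian classifier reduces to assigning x to the class whose mean is nearest in Mahalanobis distance (item 7). A 2-D sketch, assuming the inverse covariance matrices are supplied precomputed:

```python
def mahalanobis_sq(x, mean, cov_inv):
    """Squared Mahalanobis distance (x - mean)^T Sigma^-1 (x - mean), 2-D."""
    dx = [x[0] - mean[0], x[1] - mean[1]]
    return (dx[0] * (cov_inv[0][0] * dx[0] + cov_inv[0][1] * dx[1])
            + dx[1] * (cov_inv[1][0] * dx[0] + cov_inv[1][1] * dx[1]))

def classify(x, classes):
    """Assign x to the class (keyed by name, valued (mean, cov_inv))
    with the smallest Mahalanobis distance."""
    return min(classes, key=lambda c: mahalanobis_sq(x, classes[c][0],
                                                     classes[c][1]))
```

With identity covariance this degenerates to the Euclidean minimum-distance classifier, which makes the role of the covariance term easy to see.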

Independence of Measurements, Redundancy, and Synergism

  1. Conditional and unconditional independence
  2. Dependence and correlation
  3. The best k measurements are not the k best
  4. Information theory and feature evaluation criteria
  5. Feature selection methods

Neural Networks and Non-parametric Learning

  1. Perceptrons
  2. Non-parametric training of linear machines
  3. Error-correction procedures
  4. The fundamental learning theorem
  5. Multi-layer networks

Estimation of Parameters and Classifier Performance

  1. Properties of estimators
  2. Dimensionality and sample size
  3. Estimation of the probability of misclassification

Nearest Neighbor Decision Rules

  1. The k-nearest neighbor rule
  2. Efficient search methods for nearest neighbors
  3. Decreasing space requirements
  4. Editing training sets
  5. Error bounds
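
The k-nearest-neighbor rule (item 1) in its plainest form: take a majority vote among the k training samples closest to the query. A brute-force sketch (the efficient search and set-editing methods above exist precisely to avoid this linear scan):

```python
def knn_classify(query, training, k=3):
    """Majority label among the k training samples nearest to the query.
    training is a list of (feature tuple, label) pairs."""
    by_dist = sorted(training,
                     key=lambda s: sum((a - b) ** 2
                                       for a, b in zip(s[0], query)))
    votes = {}
    for _, label in by_dist[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```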

Using Contextual Information in Pattern Recognition

  1. Markov methods
  2. Forward dynamic programming and the Viterbi algorithm
  3. Combined bottom-up and top-down algorithms
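
The Viterbi algorithm (item 2) recovers the most probable hidden-state sequence of a Markov model by forward dynamic programming: extend the best path into each state one observation at a time, then backtrack. A sketch assuming the model is given as probability dictionaries:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable state sequence for an observation sequence.
    V[t][s] holds (best path probability ending in s at time t, predecessor)."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))
```

The test below uses the common rainy/sunny toy model; the probabilities are illustrative, not from the course.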

Cluster Analysis and Unsupervised Learning

  1. Decision-directed learning
  2. Graph-theoretic methods
  3. Agglomerative and divisive methods
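
Agglomerative clustering (item 3) starts with every point in its own cluster and repeatedly merges the closest pair. A brute-force single-linkage sketch, where inter-cluster distance is the gap between the two closest members:

```python
def single_linkage(points, n_clusters):
    """Merge clusters under single linkage until n_clusters remain."""
    clusters = [[p] for p in points]

    def gap(a, b):
        # Single linkage: squared distance between the closest pair of members.
        return min(sum((u - v) ** 2 for u, v in zip(p, q))
                   for p in a for q in b)

    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: gap(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return clusters
```

Divisive methods run the same idea in reverse, starting from one cluster and splitting; graph-theoretic methods (item 2) obtain single-linkage clusters by deleting long edges from the minimum spanning tree.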
