Slides: 42

EECS 274 Computer Vision Model Fitting

Fitting
• Choose a parametric object/some objects to represent a set of points
• Three main questions:
  – what object represents this set of points best?
  – which of several objects gets which points?
  – how many objects are there?
  (you could read line for object here, or circle, or ellipse, or . . . )
• Reading: FP Chapter 15

Fitting and the Hough transform
• Purports to answer all three questions
  – in practice, the answer isn’t usually all that much help
• We do this for lines only
• A line is the set of points (x, y) such that x cos θ + y sin θ = d
• Different choices of θ, d > 0 give different lines
• For any (x, y) there is a one-parameter family of lines through this point, given by d = x cos θ + y sin θ
• Each point gets to vote for each line in the family; if there is a line that has lots of votes, that should be the line passing through the points

Tokens and votes
• 20 points
• 200 bins in each direction
• # of votes is indicated by the pixel brightness
• Maximum number of votes is 20
• Note that most cells in the vote array are very dark, because they get only one vote

Hough transform
• Construct an array representing (θ, r)
• For each point, render the curve (θ, r) into this array, adding one at each cell
• Difficulties
  – Quantization error: how big should the cells be? (too big, and we cannot distinguish between quite different lines; too small, and noise causes lines to be missed)
  – Difficulty with noise
• How many lines?
  – count the peaks in the Hough array
• Who belongs to which line?
  – tag the votes
• Hardly ever satisfactory in practice, because problems with noise and cell size defeat it
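The voting scheme above can be sketched in a few lines of NumPy. This is a minimal illustration, not the lecture's code: the function name `hough_lines`, the 200×200 grid, and the r-range are all arbitrary choices.

```python
import numpy as np

def hough_lines(points, n_theta=200, n_r=200, r_max=2.0):
    """Vote for lines x*cos(theta) + y*sin(theta) = r.
    Returns the (n_theta, n_r) accumulator and the theta bin centres."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_r), dtype=int)
    rows = np.arange(n_theta)
    for x, y in points:
        # the one-parameter family of lines through (x, y)
        r = x * np.cos(thetas) + y * np.sin(thetas)
        cols = np.round((r + r_max) / (2 * r_max) * (n_r - 1)).astype(int)
        ok = (cols >= 0) & (cols < n_r)
        acc[rows[ok], cols[ok]] += 1
    return acc, thetas

# 20 noise-free points on the line y = 0.5
pts = np.column_stack([np.linspace(-1, 1, 20), np.full(20, 0.5)])
acc, thetas = hough_lines(pts)
print(acc.max())  # -> 20: all 20 collinear points vote into one cell
```

With noise added to the points, as on the following slides, the 20 votes smear over neighbouring cells and the peak drops.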
![Votes for points with random noise in [0, 0.05] added to each point](https://slidetodoc.com/presentation_image_h2/68fd8f2ac4511be6eb0c1c6ee2da5c73/image-6.jpg)
Points
• Add random noise ([0, 0.05]) to each point
• Maximum vote is now 6 votes

Points (left) and votes (right)

As noise increases, the maximum number of votes decreases, making it difficult to use the Hough transform robustly.

• As noise increases, the # of votes in the right bucket goes down, and it is more likely that a large spurious vote appears in the accumulator
• It can be quite difficult to find a line in noise with the Hough transform, as the # of votes for the true line may be comparable with the # of votes for a spurious line

Choice of model
• Least squares: fit y = ax + b by minimizing Σᵢ (yᵢ − axᵢ − b)², but this assumes error appears only in y
• Total least squares: fit ax + by = d (with a² + b² = 1) by minimizing the perpendicular distances, Σᵢ (axᵢ + byᵢ − d)²
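The two fits can be contrasted concretely. This is a sketch under the formulation above; `fit_ls` and `fit_tls` are illustrative names, and the total-least-squares normal is taken as the direction of least variance from an SVD of the centred points.

```python
import numpy as np

def fit_ls(points):
    """Ordinary least squares: y = a*x + b, errors assumed only in y."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, np.ones_like(x)])
    a, b = np.linalg.lstsq(A, y, rcond=None)[0]
    return a, b

def fit_tls(points):
    """Total least squares: minimise perpendicular distance to the line
    a*x + b*y = d with a^2 + b^2 = 1."""
    mean = points.mean(axis=0)
    # the normal (a, b) is the direction of least variance of the
    # centred points, i.e. the last right singular vector
    _, _, vt = np.linalg.svd(points - mean)
    a, b = vt[-1]
    return a, b, a * mean[0] + b * mean[1]

# points exactly on the line y = 2x + 1
pts = np.column_stack([np.linspace(0, 1, 10),
                       2 * np.linspace(0, 1, 10) + 1])
slope, intercept = fit_ls(pts)
a, b, d = fit_tls(pts)
```

On exact data both recover the same line; they differ once the x-coordinates are also noisy, where ordinary least squares is biased.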

Who came from which line?
• Assume we know how many lines there are, but which lines are they?
  – easy, if we know who came from which line
• Three strategies
  – Incremental line fitting
  – K-means
  – Probabilistic (later!)
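The K-means strategy alternates between allocating each point to its closest line and refitting each line to its points. A rough sketch, assuming the total-least-squares line representation (a, b, d); `kmeans_lines` is a hypothetical helper, and initialising each line from a random pair of points is one arbitrary choice:

```python
import numpy as np

def fit_line(points):
    """Total-least-squares line (a, b, d) with a*x + b*y = d, a^2+b^2=1."""
    mean = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - mean)
    a, b = vt[-1]
    return a, b, a * mean[0] + b * mean[1]

def kmeans_lines(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # initialise each line from a random pair of points
    lines = [fit_line(points[rng.choice(len(points), 2, replace=False)])
             for _ in range(k)]
    for _ in range(iters):
        # allocate: each point goes to the line it is closest to
        dists = np.abs(np.stack(
            [a * points[:, 0] + b * points[:, 1] - d for a, b, d in lines]))
        labels = dists.argmin(axis=0)
        # refit each line to the points allocated to it
        for j in range(k):
            members = points[labels == j]
            if len(members) >= 2:
                lines[j] = fit_line(members)
    return lines, labels
```

Like ordinary K-means, this only converges to a local optimum, so in practice it is restarted from several random initialisations.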



Fitting curves other than lines • In principle, an easy generalisation – The probability of obtaining a point, given a curve, is given by a negative exponential of distance squared • In practice, rather hard – It is generally difficult to compute the distance between a point and a curve
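The closest-point difficulty is concrete even for simple curves. A brute-force sketch by dense sampling (the function names are illustrative; a real implementation would solve for the closest point numerically):

```python
import numpy as np

def dist_to_curve(point, curve_fn, t_lo, t_hi, n=10000):
    """Approximate the distance from a point to a parametric curve by
    dense sampling -- a brute-force stand-in for the closest-point
    computation, which is generally hard to do in closed form."""
    t = np.linspace(t_lo, t_hi, n)
    xs, ys = curve_fn(t)
    return np.hypot(xs - point[0], ys - point[1]).min()

# the unit circle as a parametric curve
circle = lambda t: (np.cos(t), np.sin(t))
d = dist_to_curve(np.array([2.0, 0.0]), circle, 0.0, 2 * np.pi)
# distance from (2, 0) to the unit circle is exactly 1
```

For the circle the answer is available in closed form; for a general polynomial or implicit curve it is not, which is the point of the next slide.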

Implicit curves
• A point (u, v) is on the curve, i.e., φ(u, v) = 0
• For a data point (dx, dy), the vector s = (dx, dy) − (u, v) from the closest curve point (u, v) is normal to the curve

Robustness
• As we have seen, squared error can be a source of bias in the presence of noise points
  – One fix is EM - we’ll do this shortly
  – Another is an M-estimator
    • Square nearby, threshold far away
  – A third is RANSAC
    • Search for good points

Missing data
• So far we have assumed we know which points belong to the line
• In practice, we may have a set of measured points
  – some of which come from a line,
  – and others of which are noise
• Which points belong to the line is missing data (a missing label)

Least squares fits the data well

Single outlier (x-coordinate is corrupted) affects the least-squares result

Single outlier (y-coordinate is corrupted) affects the least-squares result

Bad fit

Heavy tail, light tail
• The red line represents the frequency curve of a long-tailed distribution
• The blue line represents the frequency curve of a short-tailed distribution
• The black line is the standard bell curve

M-estimators
• Often used in robust statistics
• A point that is several σ away from the fitted curve will have little effect on the coefficients

Other M-estimators
• Defined by an influence function
• Nonlinear; solved iteratively
• Iterative strategy
  – Draw a subset of samples randomly
  – Fit the subset using least squares
  – Use the remaining points to assess the fit
• Need to pick a sensible σ, which is referred to as the scale
• Estimate the scale at each iteration
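One common way to realise this iterative strategy is iteratively reweighted least squares with a Huber-style weight and the scale σ re-estimated each iteration from the median absolute deviation. A sketch, not the lecture's algorithm: `irls_line`, the MAD scale estimate, and the tuning constant k = 1.345 are conventional choices assumed here.

```python
import numpy as np

def irls_line(points, iters=20, k=1.345):
    """Fit y = a*x + b robustly: alternate a weighted least-squares
    step with re-estimating the scale sigma and the Huber weights."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    for _ in range(iters):
        sw = np.sqrt(w)
        a, b = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
        r = y - (a * x + b)
        # robust scale estimate: median absolute deviation (MAD)
        sigma = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        u = np.abs(r) / (k * sigma)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)  # Huber weights
    return a, b

# 20 points on y = 2x + 1, one grossly corrupted
x = np.linspace(0, 1, 20)
y = 2 * x + 1
y[5] += 10.0
a, b = irls_line(np.column_stack([x, y]))
```

The next three slides show why the scale matters: too small and everything looks like an outlier, too large and the estimator degenerates to least squares.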

Appropriate σ

small σ

large σ

Matching features What do we do about the “bad” matches? Szeliski

RANdom SAmple Consensus: select one match, count inliers

RANdom SAmple Consensus: select one match, count inliers

Least squares fit: find the “average” translation vector

RANSAC
• RANdom SAmple Consensus
• Choose a small subset uniformly at random
• Fit the model to that subset
• Anything that is close to the result is signal; all other points are noise
• Refit
• Do this many times and choose the best result
• Issues
  – How many times?
    • Often enough that we are likely to have a good line
  – How big a subset?
    • Smallest possible
  – What does “close” mean?
    • Depends on the problem
  – What is a good line?
    • One where the number of nearby points is so big it is unlikely to be all outliers
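For lines, the minimal subset is two points. A sketch of the loop above, under assumed illustrative names (`ransac_line`) and arbitrary defaults for the iteration count and the inlier threshold:

```python
import numpy as np

def ransac_line(points, n_iters=200, thresh=0.05, seed=0):
    """RANSAC for a line: fit to a minimal 2-point sample, count points
    within `thresh`, keep the largest consensus set, then refit to it."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), 2, replace=False)
        p, q = points[i], points[j]
        t = q - p
        n = np.array([-t[1], t[0]])     # normal of the line through p, q
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                     # degenerate (coincident) sample
        n /= norm
        dists = np.abs((points - p) @ n)
        inliers = dists < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit with total least squares on the consensus set
    members = points[best_inliers]
    mean = members.mean(axis=0)
    _, _, vt = np.linalg.svd(members - mean)
    a, b = vt[-1]
    return (a, b, a * mean[0] + b * mean[1]), best_inliers
```

"How many times" can be derived from the desired success probability and the expected inlier fraction; here it is just fixed at 200.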


Descriptor vector
• Orientation = blurred gradient
• Similarity-invariant frame
  – Scale-space position (x, y, s) + orientation (θ)
(Richard Szeliski, Image Stitching)

RANSAC for Homography

RANSAC for Homography

RANSAC for Homography
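For a homography, the minimal RANSAC sample is four correspondences. The estimation step run on each sample is usually the direct linear transform (DLT); a sketch under that assumption, with illustrative names and no RANSAC loop or normalisation:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: estimate the 3x3 H with dst ~ H @ src
    (up to scale) from >= 4 point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence gives two linear constraints on h
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)   # null vector of A, reshaped
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply homography H to Nx2 points (homogeneous divide)."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

Inside RANSAC, each 4-point sample gives a candidate H, and inliers are matches whose reprojection error under `apply_h` is below a threshold.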

Probabilistic model for verification

Finding the panoramas

Finding the panoramas

Finding the panoramas

Results