3D Point Cloud Processing (6): Model Fitting



Contents:

- Least Squares: loss functions, non-linear LSQ
- Hough Transform: advantages, disadvantages
- Random Sample Consensus (RANSAC): advantages, disadvantages

Least Squares

Definition of "fit": minimize the sum of squared perpendicular distances

$$E=\sum_{i=1}^{n}(ax_i+by_i+c)^2$$

Stacking one row $[x_i, y_i, 1]$ per point into a matrix $A$, this becomes a homogeneous linear LSQ problem:

$$\hat{x}=\arg\min_x\|Ax\|_2^2 \quad \mathrm{s.t.}\ \|x\|_2=1, \qquad A\in\mathbb{R}^{n\times m},\ x\in\mathbb{R}^m$$

The solution is the eigenvector of $A^TA$ associated with its smallest eigenvalue (equivalently, the right singular vector of $A$ with the smallest singular value), so $[a,b,c]^T$ is that eigenvector.

The inhomogeneous linear LSQ problem $Ax=b$,

$$\hat{x}=\arg\min_x\|Ax-b\|_2^2, \qquad A\in\mathbb{R}^{n\times m},\ x\in\mathbb{R}^m,\ b\in\mathbb{R}^n,$$

has the closed-form solution

$$\hat{x}=(A^TA)^{-1}A^Tb$$
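A minimal NumPy sketch of the homogeneous fit (the function name and test data are illustrative, not from the notes): build one row $[x_i, y_i, 1]$ per point and take the smallest-eigenvalue eigenvector of $A^TA$.

```python
import numpy as np

def fit_line_homogeneous(points):
    """Fit a 2D line ax + by + c = 0 as homogeneous LSQ:
    min ||Ax||^2 s.t. ||x|| = 1."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    _, eigvecs = np.linalg.eigh(A.T @ A)   # eigh sorts eigenvalues ascending
    return eigvecs[:, 0]                   # eigenvector of the smallest eigenvalue

# Illustrative data: noisy samples of the line x + 2y - 3 = 0.
rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, 100)
y = (3 - x) / 2 + rng.normal(scale=0.05, size=100)
a, b, c = fit_line_homogeneous(np.column_stack([x, y]))
print(a / b, c / b)   # ≈ 0.5 and -1.5, i.e. [a, b, c] ∝ [1, 2, -3]
```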

    Loss functions

Common robust loss functions $\rho(s)$, where $s$ is the residual:

- L1: $\rho=|s|$
- L2: $\rho=s^2$
- Cauchy: $\rho=\log(1+|s|)$
- Huber:

$$\rho=\begin{cases}s^2, & |s|<\delta\\ 2\delta\left(|s|-\frac{1}{2}\delta\right), & \text{otherwise}\end{cases}$$
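These losses are one-liners in NumPy; a direct transcription for reference (here $s$ is the residual, and the $\delta$ default is an arbitrary illustrative choice):

```python
import numpy as np

def rho_l1(s):
    return np.abs(s)

def rho_l2(s):
    return s ** 2

def rho_cauchy(s):
    return np.log1p(np.abs(s))   # log(1 + |s|)

def rho_huber(s, delta=1.0):
    # Quadratic near zero, linear in the tails, matching the piecewise form above.
    return np.where(np.abs(s) < delta, s ** 2, 2 * delta * (np.abs(s) - 0.5 * delta))
```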

    Non-Linear LSQ

A general formulation of LSQ:

$$\hat{x}=\arg\min_x\|f(x)\|^2$$

Here $f$ may be non-linear; in particular, coupling a robust loss function with a linear LSQ model already yields a non-linear problem. Common optimization methods (see the sketch below):

- Gradient descent
- Gauss-Newton
- Levenberg-Marquardt
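A sketch of how these pieces combine in practice (assuming SciPy is available; the model, data, and values are illustrative): SciPy's `least_squares` couples a residual function with a robust loss and solves the resulting non-linear problem with a trust-region method.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, x, y):
    """Residuals of the line model y = a*x + b."""
    a, b = params
    return a * x + b - y

# Illustrative data: a line plus a few gross outliers.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)
y[::20] += 8.0

# loss='huber' (or 'cauchy') plugs a robust loss into the LSQ objective;
# method='lm' would select Levenberg-Marquardt, but only with a linear loss.
fit = least_squares(residuals, x0=[0.0, 0.0], args=(x, y), loss='huber', f_scale=1.0)
print(fit.x)   # ≈ [2.0, 1.0] despite the outliers
```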

    Hough Transform

Model parameterization matters. E.g., for a line, $y=ax+b$ is non-uniform and cannot represent vertical lines ($a$ goes to infinity); $x\cos\theta+y\sin\theta=r$ is a better model, with parameters $\{\theta,r\}$.

The procedure (a sketch follows below):

1. Discretize the parameter space into bins
2. For each data point, vote for the bins that could generate this data point
3. Find the bins with the most votes

Selection of resolution is a tradeoff between speed and precision. Applying smoothing (e.g., a Gaussian smooth) to the parameter space before searching for the highest vote reduces the effect of noise.
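A minimal vote-and-peak sketch of the procedure (the function name and bin counts are illustrative assumptions):

```python
import numpy as np

def hough_line(points, n_theta=180, n_r=200):
    """Vote in (theta, r) space for x*cos(theta) + y*sin(theta) = r;
    return the (theta, r) of the bin with the most votes."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    r_max = np.max(np.linalg.norm(points, axis=1))
    r_edges = np.linspace(-r_max, r_max, n_r + 1)   # bin edges for r
    acc = np.zeros((n_theta, n_r))                  # accumulator of votes
    for x, y in points:
        # Each data point votes along a sinusoid in parameter space.
        r = x * np.cos(thetas) + y * np.sin(thetas)
        r_idx = np.clip(np.digitize(r, r_edges) - 1, 0, n_r - 1)
        acc[np.arange(n_theta), r_idx] += 1
    # Optional: smooth the accumulator before the peak search, e.g.
    # from scipy.ndimage import gaussian_filter; acc = gaussian_filter(acc, 1.0)
    t, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[t], 0.5 * (r_edges[ri] + r_edges[ri + 1])

# The horizontal line y = 1 should come out as theta ≈ pi/2, r ≈ 1.
pts = np.column_stack([np.linspace(-2, 2, 50), np.ones(50)])
print(hough_line(pts))
```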

Advantages

- Robust to noise
- Robust to missing points of the shape
- Can be extended to many models

Disadvantages

- Doesn't scale well with complicated models
- Usually works for models with fewer than 3 unknown parameters

    Random Sample Consensus (RANSAC)

The algorithm:

1. Randomly select a minimal subset of points required to solve the model
2. Solve the model from this subset
3. Compute the error function for each point; e.g., for a line/plane with normal $n$ passing through $p_0$, the distance of $p_i=(x_i,y_i)$ is

$$d_i=\frac{|n^T(p_i-p_0)|}{\|n\|_2}$$

4. Count the points consistent with the model, i.e., $d_i<\tau$ (inliers)
5. Repeat steps 1-4 for $N$ iterations and choose the model with the most inliers

Distance threshold $\tau$: usually chosen empirically, or derived from the chi-square distribution $\chi^2$ of the measurement noise.

Number of iterations $N$: choose $N$ so that, with probability $p$, at least one random sample is free of outliers (e.g., $p=0.99$), where

- $e$: outlier ratio (probability that a point is an outlier)
- $s$: number of points in a minimal sample (e.g., in line fitting a sample contains 2 points)
- $N$: number of RANSAC iterations
- $p$: confidence that we get at least one sample free from outliers

$$(1-(1-e)^s)^N=1-p \;\Rightarrow\; N=\frac{\log(1-p)}{\log(1-(1-e)^s)}$$

For example, with $p=0.99$, $e=0.5$, $s=2$: $N=\log(0.01)/\log(0.75)\approx 17$ iterations suffice.

Terminate early once the inlier count reaches the expected number of inliers:

$$T=(1-e)\cdot(\text{total number of data points})$$

Finally, run LSQ on the inliers to refine the selected model. A minimal sketch is given below.
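A sketch of the loop, including the adaptive update of $N$ (the function name, thresholds, and test data are illustrative; the final LSQ refinement is left as a comment):

```python
import numpy as np

def ransac_line(points, tau=0.05, p=0.99, max_iters=1000, rng=None):
    """RANSAC for a 2D line (minimal sample size s = 2).
    Returns ((p0, n), inlier_mask): a point on the line and its unit normal."""
    rng = rng or np.random.default_rng()
    best_mask, best_model = np.zeros(len(points), bool), None
    N, it = max_iters, 0
    while it < N:
        it += 1
        # Steps 1-2: draw a minimal sample and solve the model.
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        t = p2 - p1
        if np.linalg.norm(t) < 1e-12:
            continue
        n = np.array([-t[1], t[0]]) / np.linalg.norm(t)   # unit normal
        # Steps 3-4: point-to-line distances d_i; inliers satisfy d_i < tau.
        d = np.abs((points - p1) @ n)
        mask = d < tau
        if mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (p1, n)
            # Re-estimate N from the current outlier-ratio estimate e.
            e = np.clip(1.0 - mask.mean(), 1e-9, 1 - 1e-9)
            N = min(max_iters,
                    int(np.ceil(np.log(1 - p) / np.log(1 - (1 - e) ** 2))))
    # Step 5 refinement: refit by LSQ on best_mask (omitted here).
    return best_model, best_mask

# Illustrative data: 80 points near y = x plus 40 uniform outliers.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 80)
pts = np.vstack([np.column_stack([t, t]) + rng.normal(scale=0.01, size=(80, 2)),
                 rng.uniform(0, 1, size=(40, 2))])
model, mask = ransac_line(pts, tau=0.03, rng=rng)
print(mask.sum(), "inliers out of", len(pts))
```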

    Advantages

- Simple and general
- Usually works well in practice, even with a low inlier ratio like 10%

    Disadvantages

- Need to determine the inlier threshold $\tau$
- Need a large number of samples when the inlier ratio is low