Random Walks for Mesh Denoising. Xianfang Sun, Paul L. Rosin, Ralph R. Martin, Frank C. Langbein. Cardiff University, UK

Outline
- Introduction
- Random Walks
- Normal Filtering
- Vertex Position Updating
- Experimental Results
- Conclusion

Introduction
Mesh models generated by 3D scanners always contain noise, so it is necessary to remove the noise from the meshes. We distinguish mesh denoising, mesh smoothing, and mesh fairing:
- Mesh denoising: removes noise while preserving features
- Mesh smoothing: removes high-frequency information
- Mesh fairing: smooths the mesh into an aesthetically pleasing surface

Introduction (cont.)
Mesh denoising can be performed in one step or two steps:
- One-step: directly move the vertices
- Two-step: first adjust the face normals, then move the vertices based on the new normals
When the result of a single pass of vertex displacement is not good, iteration is necessary. There are two schemes for the iterative two-step method:
- (Step 1 + Step 2) repeated n times
- (Step 1 repeated n1 times) + (Step 2 repeated n2 times)
where Step 1 is face normal filtering and Step 2 is vertex position updating.

Introduction (cont.)
Our algorithm is an iterative two-step method:
- Face normal filtering (iterated n1 times)
- Vertex position updating (iterated n2 times)
We use random walks for face normal filtering and the conjugate gradient descent method for vertex position updating.

Markov Chains and Random Walks
Random walks are closely related to Markov chains.
- Markov chain: a sequence of random variables with the property that, given the present state, the future state is conditionally independent of any earlier state.
- Random walk: a special Markov chain with a sparse transition probability matrix.
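As a concrete illustration of these definitions, a minimal sketch of a random walk on a hypothetical 4-node path graph: the row-normalised adjacency matrix is the (sparse) transition probability matrix, and each step of the walk depends only on the current distribution.

```python
import numpy as np

# A random walk on a small graph, viewed as a Markov chain.
# Hypothetical 4-node path graph: each state steps to a uniformly
# chosen neighbour, so the transition matrix is sparse and row-stochastic.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)   # normalise rows -> transition probabilities

pi = np.array([1.0, 0.0, 0.0, 0.0])    # initial distribution: start at node 0
for _ in range(3):                     # three steps of the walk
    pi = pi @ P                        # Markov property: depends only on current state

print(pi)
```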

Markov Chains and Random Walks (cont.)
Notation:
- Initial probability distribution: π(0)
- nth-step probability distribution: π(n)
- n-step transition probability matrix: P(n), whose (i, j)th element is p_ij(n)
- kth-step transition probability matrix: P_k, whose (i, j)th element is p_ij(k)

Normal Filtering
Motivation:
- If the probabilities of stepping from one triangle to another depend on how similar their normals are, and we average normals according to the final probabilities of the random walk, we give greater weight to triangles with similar normals and lower weight to those that are less similar.
- Similar ideas were used by Smolka and Wojciechowski [2001] for image denoising.
- This idea may also be used in other mesh processing problems.

Normal Filtering
Normal updating formula: the updated normal of face i is the probability-weighted average of the face normals,
n_i' = Σ_{j ∈ F} p_ij(n) n_j,
where n_i and n_i' are the current and updated normals of face i, respectively; F is the face set; and p_ij(n) is the probability of going from face i to face j after n steps of the random walk. p_ij(n) depends on the single-step probabilities p_ij(k), k = 1, …, n, where p_ij(k) is the probability of stepping from face i to face j at the kth step.
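The updating step above can be sketched as follows; the transition probabilities and toy normals are illustrative assumptions, not values from the slides, and the result is renormalised to unit length.

```python
import numpy as np

# Sketch of the normal updating step: the new normal of face i is the
# probability-weighted average of all face normals, renormalised to unit
# length.  P holds assumed n-step random-walk probabilities p_ij(n).
def update_normals(P, normals):
    """P: (F, F) row-stochastic matrix; normals: (F, 3) unit face normals."""
    averaged = P @ normals                            # weighted average per face
    lengths = np.linalg.norm(averaged, axis=1, keepdims=True)
    return averaged / lengths                         # back to unit normals

# Toy example with three faces (hypothetical probabilities):
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
normals = np.array([[0.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
print(update_normals(P, normals))
```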

Normal Filtering (cont.)
Choice of the weight function: it is a decreasing function of the difference between the normals of face i and its neighbours, where the neighbours are taken from the 1-ring neighbourhood of face i.
Choice of n: when the non-iterative scheme is chosen, n must be large enough to guarantee good results; when the iterative scheme is chosen, n can be small.
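The slide's weight function was an image and did not survive export; a minimal sketch, assuming a Gaussian-type decreasing function of the normal difference with a parameter beta (playing the role of the β quoted in the experiments):

```python
import numpy as np

# A plausible decreasing weight function of the normal difference
# (assumed Gaussian form; the exact function on the slide was an image).
def weight(n_i, n_j, beta=5.0):
    diff = np.linalg.norm(np.asarray(n_i, float) - np.asarray(n_j, float))
    return np.exp(-beta * diff ** 2)

# Identical normals get the largest weight; dissimilar ones decay quickly,
# which is what preserves sharp features across edges.
print(weight([0, 0, 1], [0, 0, 1]))   # identical normals
print(weight([0, 0, 1], [0, 1, 0]))   # normals across a sharp edge
```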

Normal Filtering (cont.)
Two computational schemes. In each iteration, we compute the updated normals sequentially from i = 1 to the number of faces:
- Batch scheme: always use the old normals obtained in the last iteration.
- Progressive scheme: use the newly updated normals obtained in the current iteration, as soon as the new values are available.
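The two schemes can be sketched as a Jacobi-style versus a Gauss-Seidel-style update; the toy probability matrix below is a hypothetical example, not data from the slides.

```python
import numpy as np

def normalise(v):
    return v / np.linalg.norm(v)

# Batch scheme: every face averages the normals frozen from the previous
# iteration (Jacobi-style).
def filter_batch(P, normals):
    old = normals.copy()
    return np.array([normalise(P[i] @ old) for i in range(len(old))])

# Progressive scheme: faces processed in order reuse normals already
# updated in the current iteration (Gauss-Seidel-style).
def filter_progressive(P, normals):
    cur = normals.copy()
    for i in range(len(cur)):
        cur[i] = normalise(P[i] @ cur)   # rows < i are already new values
    return cur

P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])
normals = np.eye(3)                      # toy unit "normals"
print(filter_batch(P, normals))
print(filter_progressive(P, normals))
```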

Normal Filtering (cont.)
Computational tip: for n > 1, P(n) becomes non-sparse; as n grows, the computational cost grows quickly, and additional memory is required to store the whole matrix P(n). We therefore propose not to compute P(n) and then use it to update the normals, but to update the normals sequentially, applying the sparse single-step transition n times.
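A minimal sketch of this tip: apply the sparse single-step transition to the normal vectors n times instead of materialising P(n). The matrix values are hypothetical; the point is that the cost stays proportional to n times the number of non-zeros.

```python
import numpy as np
from scipy.sparse import csr_matrix

# For n > 1, forming P(n) = P^n densifies the matrix.  Instead, apply the
# sparse one-step transition to the normals n times.
def filter_n_steps(P_sparse, normals, n):
    out = normals
    for _ in range(n):          # n sparse matrix-vector products
        out = P_sparse @ out    # never materialises P^n
    return out

# Hypothetical sparse one-step transition matrix for 4 faces:
P = csr_matrix(np.array([[0.8, 0.2, 0.0, 0.0],
                         [0.1, 0.8, 0.1, 0.0],
                         [0.0, 0.1, 0.8, 0.1],
                         [0.0, 0.0, 0.2, 0.8]]))
normals = np.eye(4)[:, :3]      # toy (4, 3) array standing in for face normals

# Same result as using the dense matrix power, without ever storing it:
dense_power = np.linalg.matrix_power(P.toarray(), 3) @ normals
print(np.allclose(filter_n_steps(P, normals, 3), dense_power))
```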

Normal Filtering (cont.)
Adaptive parameter adjustment: because choosing a suitable parameter value affects the quality of the results, we need to adjust the parameter dynamically. We minimise a cost function that compares the filtered normals with the initial noisy normal of each face i, and we use a stochastic gradient-descent algorithm to update the parameter.
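A sketch of a stochastic gradient-descent parameter update. The slide's cost function was an image, so the per-face cost below is a stand-in with a known minimum; the "stochastic" part is sampling one face per step and estimating the gradient by finite differences.

```python
import numpy as np

# Illustrative stochastic gradient-descent update for the parameter beta.
# cost_of_face(i, beta) is an assumed per-face cost comparing the filtered
# normal of face i with its initial noisy normal.
def sgd_step(beta, cost_of_face, n_faces, lr=0.05, h=1e-4, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    i = rng.integers(n_faces)            # sample one face at random
    grad = (cost_of_face(i, beta + h) - cost_of_face(i, beta - h)) / (2 * h)
    return beta - lr * grad              # gradient-descent update

# Toy cost with a minimum at beta = 6 for every face (assumption):
cost = lambda i, b: (b - 6.0) ** 2
beta = 8.0
rng = np.random.default_rng(1)
for _ in range(200):
    beta = sgd_step(beta, cost, n_faces=100, rng=rng)
print(round(beta, 2))   # -> 6.0
```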

Normal Filtering (cont.)
Feature-preserving property:
- The features are related to the face normals.
- The updated normal is a weighted average of its neighbouring normals.
- The weight function is a decreasing function of the normal difference.
- Adaptive adjustment of the parameter further improves the feature-preserving property.

Vertex Position Updating
- Enforce orthogonality between the normal and the three edges of each face on the mesh.
- Minimise the error function given by the sum of squared dot products between each face normal and the face's edge vectors.
- Solve by the conjugate gradient descent algorithm.
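A minimal sketch of this step on a toy triangle, assuming the error function sums squared dot products between each face's filtered normal and its edge vectors, and minimising with SciPy's conjugate-gradient method:

```python
import numpy as np
from scipy.optimize import minimize

# Toy mesh: one triangle with a filtered target normal (0, 0, 1) and
# noisy vertex z-coordinates.  These values are illustrative assumptions.
faces = [(0, 1, 2)]
target_normals = np.array([[0.0, 0.0, 1.0]])
v0 = np.array([[0.0, 0.0, 0.05],
               [1.0, 0.0, -0.03],
               [0.0, 1.0, 0.02]])

def error(x):
    """Sum of squared normal/edge dot products over all faces."""
    v = x.reshape(-1, 3)
    e = 0.0
    for f, (i, j, k) in enumerate(faces):
        n = target_normals[f]
        for a, b in ((i, j), (j, k), (k, i)):  # the three edges of the face
            e += (n @ (v[b] - v[a])) ** 2      # orthogonality residual
    return e

# Conjugate gradient descent drives the residuals towards zero.
res = minimize(error, v0.ravel(), method="CG")
print(error(v0.ravel()), "->", error(res.x))
```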

Experimental Results: Choice of Parameters

Experimental Results: Adaptive Parameter
[Figure panels: original; noisy; β=8, NA (non-adaptive); β=5, NA; β=8, A (adaptive); β=5, A]

Experimental Results: Quality Comparison
[Figure panels: original; noisy; MF (median); FF (fuzzy); BF (bilateral); RF (random walks)]

Experimental Results: Quality Comparison (cont.)
[Further side-by-side comparison figures on additional models, each showing the original model and the MF, FF, BF, and RF results]

Experimental Results: Timing Comparison

Conclusions
- The random walks approach is introduced into mesh denoising; it may also be used in other mesh processing problems.
- Adaptively adjusting the parameter and progressively updating the face normals is the best implementation of our approach.
- It is a fast and effective feature-preserving approach:
  - Our approach is as fast as the bilateral filtering (BF) approach, yet preserves sharp edges better than BF.
  - Compared to the fuzzy vector median filtering (FF) approach, our approach is over ten times faster, yet produces a final surface quality similar to or better than that approach.

Thank You! Questions?