This paper proposes a new neural algorithm that segments an observed scene into regions corresponding to different moving objects by analyzing a time-varying image sequence. The method consists of a classification step, in which the motion of small patches is characterized through an optimization approach, and a segmentation step, which merges neighboring patches characterized by the same motion. Motion classification is performed without computing the optical flow: only the spatial and temporal image gradients enter an appropriate energy function, which is minimized with a Hopfield-like neural network whose output directly provides the 3D motion parameter estimates. Network convergence is accelerated by integrating the quantitative estimation of the motion parameters with a qualitative estimate of the dominant motion obtained using the geometric theory of differential equations.
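The sketch below is only an illustration of the general idea and not the paper's method: it models each patch's motion as a 2D translation (u, v) rather than the paper's 3D motion parameters, builds an energy from the spatial and temporal gradients via the brightness-constancy constraint, and minimizes it by plain gradient descent, which stands in for the Hopfield-like dynamics (a continuous Hopfield network performs gradient descent on its energy function). The functions `estimate_patch_motion` and `segment_by_motion`, the patch size, the step size, and the merging threshold are all hypothetical choices; the qualitative dominant-motion estimate based on the geometric theory of differential equations is not illustrated.

```python
import numpy as np

# Hypothetical illustration, not the paper's formulation: per-patch motion is a
# 2D translation (u, v) instead of 3D motion parameters, and gradient descent
# stands in for the Hopfield-like network minimizing the energy.

def estimate_patch_motion(Ix, Iy, It, steps=200, lr=1e-3):
    """Minimize E(u, v) = sum((Ix*u + Iy*v + It)^2) over one patch."""
    u, v = 0.0, 0.0
    for _ in range(steps):
        r = Ix * u + Iy * v + It      # brightness-constancy residual
        du = 2.0 * np.sum(r * Ix)     # dE/du
        dv = 2.0 * np.sum(r * Iy)     # dE/dv
        u -= lr * du
        v -= lr * dv
    return np.array([u, v])

def segment_by_motion(frame0, frame1, patch=8, tol=0.5):
    """Classify each patch's motion, then merge 4-neighbors with similar motion."""
    It = frame1 - frame0              # temporal gradient
    Iy, Ix = np.gradient(frame0)      # spatial gradients
    H, W = frame0.shape
    ph, pw = H // patch, W // patch
    motions = np.zeros((ph, pw, 2))
    for i in range(ph):
        for j in range(pw):
            s = (slice(i * patch, (i + 1) * patch),
                 slice(j * patch, (j + 1) * patch))
            motions[i, j] = estimate_patch_motion(Ix[s], Iy[s], It[s])
    # Greedy region merging: neighboring patches with close motion share a label.
    labels = -np.ones((ph, pw), dtype=int)
    next_label = 0
    for i in range(ph):
        for j in range(pw):
            if labels[i, j] < 0:
                labels[i, j] = next_label
                next_label += 1
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < ph and nj < pw and labels[ni, nj] < 0:
                    if np.linalg.norm(motions[i, j] - motions[ni, nj]) < tol:
                        labels[ni, nj] = labels[i, j]
    return motions, labels
```

Under these assumptions, `segment_by_motion(frame0, frame1)` applied to two consecutive grayscale frames (as float arrays) returns the per-patch motion estimates and a coarse label map; the step size and number of iterations would need tuning to the gradient magnitudes of the actual sequence.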