This work proposes a model of visual bottom-up attention for dynamic scene analysis. Our work adds motion saliency calculations to a neural network model with realistic temporal dynamics (e.g., building motion salience on top of De Brecht and Saiki, Neural Networks 19:1467–1474, 2006). The resulting network elicits strong transient responses to moving objects and reaches stability within a biologically plausible time interval. The responses are statistically different between earlier and later motion neural activity, and between moving and non-moving objects. We demonstrate the network on a number of synthetic and real dynamic movie examples. We show that the model captures the motion saliency asymmetry phenomenon. In addition, the motion salience computation enables sudden-onset moving objects that are less salient in the static scene to rise above others. Finally, we give strong consideration to the neural latencies, Lyapunov stability, and the neural properties reproduced by the model.
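The idea sketched in the abstract can be illustrated with a minimal toy model. This is not the paper's actual network (which builds on De Brecht and Saiki's dynamics); it is a hypothetical stand-in that uses a simple leaky-integrator update and a frame-difference motion cue, with assumed parameters (`dt`, `tau`, `w_motion`) and an invented 8×8 scene containing one static object and one sudden-onset moving object:

```python
import numpy as np

def step(static_map, motion_map, activity, dt=0.01, tau=0.05, w_motion=2.0):
    """One leaky-integrator update: activity relaxes toward the combined
    static + motion drive, so a sudden motion cue produces a transient
    response that decays once the input stops changing."""
    drive = static_map + w_motion * motion_map
    return activity + (dt / tau) * (-activity + drive)

def run(w_motion):
    """Simulate a toy 8x8 scene: a static bright object at (2, 2) and a
    dimmer object that appears at frame 5 and moves rightward along row 5.
    Returns the activity trace at the mover's onset cell (5, 2)."""
    frames = np.zeros((20, 8, 8))
    frames[:, 2, 2] = 1.0                       # static object, always present
    for t in range(5, 20):                      # sudden-onset mover
        frames[t, 5, min(2 + (t - 5), 7)] = 0.6
    activity = np.zeros((8, 8))
    trace = []
    for t in range(1, 20):
        motion = np.abs(frames[t] - frames[t - 1])  # frame-difference motion cue
        activity = step(frames[t], motion, activity, w_motion=w_motion)
        trace.append(activity[5, 2])
    return np.array(trace)

with_motion = run(2.0)   # network with the motion-salience term
static_only = run(0.0)   # same network, motion term disabled
```

In this toy setting the motion term boosts the sudden-onset object well above its static salience, and the response is transient: it peaks just after onset and then decays, loosely mirroring the transient responses and eventual stability the abstract describes.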