In this article, we investigate the effect on synchrony of adding feedback loops and adaptation to a large class of
feedforward networks. We obtain relatively complete results on synchrony for identical cell networks
with additive input structure and feedback from the final layer to the initial layer of the network.
These results extend previous work on synchrony in feedforward networks by Aguiar, Dias and Ferreira (2017).
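To fix ideas, additive input structure refers, roughly, to cell dynamics in which the inputs to a cell combine by summation. A minimal sketch, written with generic placeholder symbols $f$ (internal dynamics), $g$ (coupling), and weights $w_{ij}$ rather than the notation of the paper, is
\[
\dot{x}_i \;=\; f(x_i) \;+\; \sum_{j \to i} w_{ij}\, g(x_i, x_j),
\]
where the sum runs over the cells $j$ that input to cell $i$; adding feedback from the final layer to the initial layer amounts to including last-layer cells among the inputs of the first-layer cells.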
We also describe additive and multiplicative adaptation schemes that preserve synchrony, and we briefly
comment on dynamical protocols for running the feedforward network that relate to unsupervised
learning in neural nets and neuroscience.
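As an indicative sketch only (the symbols $w_{ij}$, $\varepsilon$, and $\Phi$ are generic placeholders, not the precise schemes studied in the paper), additive adaptation evolves a connection weight by adding a state-dependent increment, while multiplicative adaptation rescales the weight:
\[
\dot{w}_{ij} \;=\; \varepsilon\,\Phi(x_i, x_j) \quad \text{(additive)}, \qquad
\dot{w}_{ij} \;=\; \varepsilon\,\Phi(x_i, x_j)\,w_{ij} \quad \text{(multiplicative)}.
\]
A scheme of this type is synchrony preserving if, roughly, cells that are synchronized and have equal corresponding weights remain so under the coupled cell-and-weight dynamics.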