I recently reported here on feed-forward models of spiking neurons. Here is a follow-up about an interesting recurrent system.
The computational power of a reciprocally connected group of neurons is likely to rest on population codes rather than on single neurons encoding stimuli. Because a spiking neuron is either firing or silent at any given instant, it is harder to decode at a specific moment in time than a rate-based model, whose output at any moment already averages information spread over time.

Hosaka et al. demonstrate a recurrent network that organizes itself to generate synchronous firing in step with the cycle of repeated external inputs. The timing of the synchrony depends on both the spatio-temporal input pattern and the structure of the network. They conclude that the network self-organizes a transformation from spatio-temporal to purely temporal information: spike-timing-dependent plasticity (STDP) makes the recurrent network behave as a filter, so that only the one learned spatio-temporal pattern passes through the network in synchronous form (for more information on synchrony read here). Although their work includes a Monte-Carlo significance test for the synchrony, the synchrony is assessed with a single global metric. Clearly, distributed synchrony, in which different cell assemblies in the network synchronize at different times under the influence of different stimuli, would have to be considered if the network is to respond to multiple stimuli.
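To make the STDP mechanism mentioned above concrete, here is a minimal sketch of the standard pair-based STDP rule: a presynaptic spike shortly before a postsynaptic spike strengthens the synapse, and the reverse ordering weakens it. This is the generic textbook form of the rule, not the exact model of Hosaka et al., and the amplitude and time-constant values are assumptions chosen for illustration.

```python
import math

# Illustrative pair-based STDP parameters (assumed values, not from the paper).
A_PLUS = 0.01    # potentiation amplitude
A_MINUS = 0.012  # depression amplitude (slightly larger, a common stability choice)
TAU_MS = 20.0    # exponential time constant in milliseconds

def stdp_dw(t_pre_ms, t_post_ms):
    """Weight change for a single pre/post spike pair.

    dt = t_post - t_pre:
      dt > 0 (pre before post) -> potentiation, decaying with |dt|
      dt <= 0 (post before pre) -> depression, decaying with |dt|
    """
    dt = t_post_ms - t_pre_ms
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_MS)
    return -A_MINUS * math.exp(dt / TAU_MS)

# A pre spike 5 ms before a post spike strengthens the synapse;
# the opposite ordering weakens it.
print(stdp_dw(t_pre_ms=10.0, t_post_ms=15.0) > 0)  # True
print(stdp_dw(t_pre_ms=15.0, t_post_ms=10.0) < 0)  # True
```

Repeatedly applying such an asymmetric rule is what lets causally ordered input patterns carve out strengthened pathways, which is the sense in which the recurrent network comes to act as a filter for the one learned pattern.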