The vertebrate (and mammalian) visual system is not as simple as a camera or a computer, and it has no shutter or synchronization system for the rod and cone detector cells. Rods and cones don't actually fire action potentials; they produce graded responses, and in darkness they continuously release neurotransmitter (glutamate) to the bipolar cells they synapse with. So their resting state is effectively to say "I see black, I see black, I see black..." When a photon is absorbed, the receptor cell hyperpolarizes and its neurotransmitter release drops, and the ON bipolar cell reads that drop as "we have light." In aggregate, the signals from the rods and cones are processed within the retina by bipolar and horizontal cells that perform edge detection and contrast enhancement (much like high-acutance developers with edge effects, or unsharp mask in Photoshop). More complex processing happens in the visual cortex at the back of your mammalian head: shape recognition, horizontal vs. vertical orientation detection, movement detection, focus control, etc.
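The edge-detection/contrast-enhancement idea above can be sketched in a few lines of code. This is a toy illustration, not a physiological model: it approximates retinal center-surround processing (lateral inhibition from horizontal cells) as "center minus surrounding average" on a 1D luminance profile, which is the same basic operation as an unsharp mask. The function name and window size are my own choices for the demo.

```python
def center_surround(signal, surround=2):
    """Toy lateral-inhibition filter: for each point, subtract the
    mean of the local neighborhood from the center value.
    Flat regions come out near zero; edges produce over/undershoots
    (Mach-band-like contrast enhancement)."""
    out = []
    n = len(signal)
    for i in range(n):
        lo = max(0, i - surround)
        hi = min(n, i + surround + 1)
        neighborhood = signal[lo:hi]
        surround_mean = sum(neighborhood) / len(neighborhood)
        out.append(signal[i] - surround_mean)
    return out

# A step edge: dark on the left, bright on the right.
luminance = [0.0] * 6 + [1.0] * 6
response = center_surround(luminance)
# Uniform regions map to ~0; the response dips just before the edge
# and peaks just after it, exaggerating the brightness transition.
```

Running this on the step profile, the interior of each flat region gives a response near zero, while the samples flanking the edge show a negative dip on the dark side and a positive peak on the bright side; that exaggerated transition is what "edge effects" in high-acutance developers and unsharp masking exploit.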
Funnily enough, in vertebrates the bipolar and horizontal cells sit in front of the rods and cones, which causes some light scattering and loss of photons, so evolution didn't give us the best design for low light; on the other hand, the layered structure of the vertebrate retina may offer the rods and cones some protection against overly bright light. Squid and octopus have a more practical arrangement, with the photosensitive cells in the front layer of the retina. That matters, because sea water absorbs a lot of light, and eyes with good low-light sensitivity give the cephalopods a big survival advantage.