Solinar said:
Actually, that is not the whole story. There is a statistical error rate in both the reading and writing of files on all storage media.
There are some fairly simple ways to verify data transfers; that's what CRCs (cyclic redundancy checks) and checksums are for, among other things. Basically, if they detect an error in a block of data, that block gets transferred again.
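To make that concrete, here's a minimal Python sketch of the idea (not any particular protocol's real framing, just the principle): the sender tacks a CRC-32 onto each block, and the receiver recomputes it and asks for a resend if it doesn't match.

```python
import zlib

def make_block(payload: bytes) -> bytes:
    """Append a CRC-32 of the payload so the receiver can verify it."""
    crc = zlib.crc32(payload).to_bytes(4, "big")
    return payload + crc

def verify_block(block: bytes):
    """Return the payload if the CRC matches, otherwise None (i.e. request a resend)."""
    payload, received_crc = block[:-4], block[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") == received_crc:
        return payload
    return None

# Simulate a transfer where a single bit gets flipped in transit.
block = make_block(b"billing record #42: $1,000,000.00")
corrupted = bytes([block[0] ^ 0x01]) + block[1:]

print(verify_block(block) is not None)      # True  -> accept the block
print(verify_block(corrupted) is not None)  # False -> ask the sender to retransmit
```

Real transports layer this stuff everywhere (Ethernet frames, TCP, disk controllers), which is why a flipped bit almost never survives long enough for you to see it.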
Even a one-bit error could cost millions of dollars in environments like the Fannie Mae servers, which handle billions of dollars in transactions daily, so error detection on data transmissions is extremely thorough.
Besides, most of the error detection methods in use today came into existence during a time when hardware reliability was weak (they were punching holes in paper, for crying out loud), so for normal everyday use these days some of them are actually overkill.
In other words, unless you're doing something silly like decompressing lossy files and recompressing them with lossy codecs like JPG, you're not going to lose any data from moving stuff around, at least not with modern computer systems.
You can even be confident in your data's integrity when you use lossless
compression codecs over and over again.
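If you want to see that for yourself, here's a quick sketch using Python's zlib (the file name is just a placeholder for whatever you care about): run the data through a lossless compress/decompress cycle as many times as you like and hash it before and after.

```python
import hashlib
import zlib

# Hypothetical file name -- substitute any file you want to test.
with open("archive_master.wav", "rb") as f:
    original = f.read()

data = original
# Compress and decompress with a lossless codec (zlib here) ten times over.
for _ in range(10):
    data = zlib.decompress(zlib.compress(data, level=9))

# The bytes, and therefore the hash, come out identical every time.
print(hashlib.sha256(original).hexdigest() == hashlib.sha256(data).hexdigest())  # True
```

That's the whole point of "lossless": the round trip is bit-for-bit exact, no matter how many times you do it.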
TonyK is right about one thing, though: if there were enough data lost in transfers to affect your friend the archivist, the digital revolution would simply not have happened. Computers would be so flaky that Fannie Mae would either have consumed the world's money and killed off the economy, or lost so much money that it went out of business and took the US housing market out with it.
We would not have spacecraft that make it through launch without crashing and burning, and there would be no possibility of instrument-only landings of commercial aircraft, let alone an unmanned rover wandering around Mars, sending back incredible photography AND GOING WHERE WE TELL IT TO GO.
Show that to someone from the 1900's and they'll prove Arthur C. Clarke right
yet again -- they'll think it could only be possible through magic.
🙂