MIT researchers will soon present a file system they say is mathematically guaranteed not to lose data during a crash.
While building it, they wrote and rewrote the file system over and
over, finding that most of their development time was spent
defining the system components and the relationships between them. As one of the researchers put it, "With
all these logics and proofs, there are so many ways to write them down,
and each one of them has subtle implications down the line that we
didn’t really understand." The file system is slow compared to other
modern examples, but the researchers say their formal verification can
also work with faster designs. Associate professor Nickolai Zeldovich
said, "Making sure that the file system can recover from a crash at any
point is tricky because there are so many different places that you
could crash. You literally have to consider every instruction or every
disk operation and think, ‘Well, what if I crash now? What now? What
now?’ And so empirically, people have found lots of bugs in file systems
that have to do with crash recovery, and they keep finding them, even
in very well-tested file systems, because it's just so hard to do."
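The combinatorial difficulty Zeldovich describes can be made concrete with a toy sketch. This is not the MIT system itself; the two-block "disk" and the consistency invariant are invented purely for illustration. The idea is to enumerate every point at which a crash could interrupt a naive multi-block update and check which resulting states violate the invariant:

```python
# Toy illustration of crash-point enumeration (not the MIT file system).
# Invariant we want to preserve across crashes: blocks "a" and "b" agree.

def apply_ops(ops, crash_after):
    """Replay disk writes, simulating a crash after `crash_after` ops."""
    disk = {"a": 0, "b": 0}
    for key, value in ops[:crash_after]:
        disk[key] = value
    return disk

def consistent(disk):
    return disk["a"] == disk["b"]

# Naive update: write the two blocks one after the other, no journaling.
naive = [("a", 1), ("b", 1)]

# Consider every crash point: before any write, between writes, after all.
states = [apply_ops(naive, i) for i in range(len(naive) + 1)]
bad = [s for s in states if not consistent(s)]

# Crashing between the two writes leaves a == 1, b == 0: inconsistent.
print(bad)
```

Even this two-operation example has a crash window that breaks the invariant; a real file system performs thousands of such operations, which is why checking every interleaving by hand is infeasible and why the researchers turned to machine-checked proofs instead.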