RE: [boost] [filesystem] large file support test added

Beman Dawes <bdawes@acm.org> wrote:
> <snip> Normally we would have a regression test that identifies the systems
> which fail. But we don't test large file support directly because to do so
> would be a burden on those who run the regression tests; such tests would
> chew up gigabytes of disk space and might also be very slow. <snip>

Why so? Unix filesystems support sparse files, as does NTFS.
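(For illustration only: a minimal C++ sketch of the sparse-file approach Ben describes. The filename and size are placeholders, and whether the file is actually stored sparsely depends on the filesystem; it assumes std::streamoff is 64 bits wide.)

    #include <fstream>
    #include <iostream>

    int main()
    {
        // Nominal size just past the 4 GiB boundary.
        const std::streamoff size = (std::streamoff(1) << 32) + 1;

        std::ofstream out("sparse_test.dat", std::ios::binary);
        if (!out)
            return 1;

        // Seek past 4 GiB and write a single byte. On a sparse-capable
        // filesystem only the final block is allocated, not the whole 4 GiB.
        out.seekp(size - 1);
        out.put('\0');

        if (!out)
        {
            std::cerr << "seek or write failed\n";
            return 1;
        }
        std::cout << "nominal file size: " << size << " bytes\n";
        return 0;
    }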

At 11:49 AM 9/10/2004, Ben Hutchings wrote:
> Beman Dawes <bdawes@acm.org> wrote:
> > <snip> Normally we would have a regression test that identifies the systems
> > which fail. But we don't test large file support directly because to do so
> > would be a burden on those who run the regression tests; such tests would
> > chew up gigabytes of disk space and might also be very slow. <snip>
>
> Why so? Unix filesystems support sparse files, as does NTFS.

I don't want to introduce a requirement that the regression tests be run on file systems which either support sparse files or have lots of space available. If someone wants to run the tests on a FAT file system, I'd like that to be practical. If there were an overwhelming benefit to some test needing a particular environment, that might be another matter. But it doesn't seem to me that an actual large-file test offers much advantage over a surrogate, as long as the surrogate accurately reflects reality. We will see if that is the case.

Thanks,

--Beman
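(For illustration of the surrogate idea only, not the actual Boost.Filesystem test: one possible surrogate is a compile-time check that the platform's offset types are wide enough for files larger than 2 GiB, which costs no disk space or run time on any file system.)

    #include <boost/static_assert.hpp>
    #include <ios>            // std::streamoff
    #ifndef _WIN32
    #  include <sys/types.h>  // off_t on POSIX
    #endif

    // A stream offset narrower than 64 bits cannot address bytes past 2 GiB.
    BOOST_STATIC_ASSERT(sizeof(std::streamoff) >= 8);

    #ifndef _WIN32
    // On POSIX, off_t should also be 64 bits (e.g. built with
    // -D_FILE_OFFSET_BITS=64); otherwise stat()-based sizes can wrap or fail.
    BOOST_STATIC_ASSERT(sizeof(off_t) >= 8);
    #endif

    int main() { return 0; }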