
Peter Dimov wrote:
Vladimir Prus wrote:
The Response Files section shows example code for parsing a response file using Tokenizer. Why shouldn't one be able to use split_command_line instead? There doesn't seem to be any difference between a command line in string format and a response file.
Good question. The problem is that split_winmain uses Windows-style quoting rules, while tokenizer uses more Unix-like rules. I'm not sure which syntax the response files should use.
My instinctive reaction is to provide both via an options argument to split_command_line (a name that would now be more appropriate). But I haven't devoted much time to thinking this through, so I may be wrong. :-)
In any event, the tokenization isn't much fun. I'd expect the library to provide a convenient mechanism for parsing a response file.
Noted.
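
For reference, the tokenizer-based approach under discussion looks roughly like the sketch below. It is not the exact example from the docs; the helper name parse_response_file is made up here, and it uses plain whitespace splitting, which is exactly where the Unix-vs-Windows quoting question comes in.

    #include <boost/program_options.hpp>
    #include <boost/tokenizer.hpp>
    #include <fstream>
    #include <sstream>
    #include <stdexcept>
    #include <string>
    #include <vector>

    namespace po = boost::program_options;

    // Read a response file, split its contents into whitespace-separated
    // tokens, and feed them through the regular command-line parser.
    void parse_response_file(const std::string& filename,
                             const po::options_description& desc,
                             po::variables_map& vm)
    {
        std::ifstream ifs(filename.c_str());
        if (!ifs)
            throw std::runtime_error("cannot open response file: " + filename);

        std::stringstream ss;
        ss << ifs.rdbuf();

        // Plain whitespace splitting: no quoting rules of either style
        // are honoured here.
        boost::char_separator<char> sep(" \t\n\r");
        std::string content = ss.str();
        boost::tokenizer<boost::char_separator<char> > tok(content, sep);
        std::vector<std::string> args(tok.begin(), tok.end());

        po::store(po::command_line_parser(args).options(desc).run(), vm);
    }
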
Also, the response file support doesn't seem right. Response files should work like this:
@file1 --option=value @file2
If file1 contains --option, it will be overridden by the command line. Similarly, all options in file2 will override both file1 and the command line --option=value. From a cursory look at the example, this doesn't seem to be the case, but I may be wrong.
In the example, options on the command line will override options in both response files. If that behaviour is not correct, I'll have to modify the example to walk over the vector of parsed options returned by parse_command_line and expand the @ options. Do you think that specifying the same option in both @file1 and @file2 is OK, and that the definition in @file2 should override the previous one? On the command line, two assignments to the same scalar option result in an error.
It should certainly be possible to override @file1 from the command line and from @file2 - this has many valid uses.
Could you explain the uses for overriding @file1 from @file2? - Volodya
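
For what it's worth, the override behaviour in the current example falls out of the order of the store() calls. A minimal sketch, assuming the usual variables_map semantics where the first explicitly stored value for an option wins, so sources are stored in decreasing priority (the response-file tokens below are hard-coded stand-ins for whatever tokenization is eventually chosen):

    #include <boost/program_options.hpp>
    #include <iostream>
    #include <string>
    #include <vector>

    namespace po = boost::program_options;

    int main(int argc, char* argv[])
    {
        po::options_description desc("Options");
        desc.add_options()
            ("option", po::value<std::string>(), "some option");

        po::variables_map vm;

        // Highest-priority source stored first: store() does not overwrite
        // an option value that an earlier call has already set explicitly.
        po::store(po::parse_command_line(argc, argv, desc), vm);

        // Pretend these tokens came out of a response file; if the command
        // line already set --option, this value is ignored.
        std::vector<std::string> response_tokens;
        response_tokens.push_back("--option=from-response-file");
        po::store(po::command_line_parser(response_tokens)
                      .options(desc).run(), vm);

        po::notify(vm);

        if (vm.count("option"))
            std::cout << "option = "
                      << vm["option"].as<std::string>() << "\n";
        return 0;
    }

To get the precedence Peter describes for "@file1 --option=value @file2" (later sources win), the parsed sources would have to be stored in the reverse of their appearance on the command line, or the expansion of @ options would have to happen in place before a single store() call.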