
Thorsten Ottosen wrote:
"David B. Held" <dheld@codelogicconsulting.com> wrote in message news:cngf20$cmb$1@sea.gmane.org... [...] | If not, are you suggesting that we construct | such types partially, and then fill in the rest later, so that we can | avoid functions with many arguments?
that's pretty close.
That's a pretty silly policy. What Noah means by "one shot" is that some classes have little things called "invariants", and deferring initialization till after the c'tor may well break those.
[...] But to be fair, this idea is far from new and so I'm not the person that should be given credit.
Oh, please...take the credit. ;)
[...] function leafNode2( $name1, $name2, $link1, $link2, $target1 = selfTarget, $target2 = selfTarget, $title1 = "", $title2 = "", $root = root );
I'm not proud of it, but I just haven't had the time to refactor it yet. It should be obvious that all arguments ending with 1 and 2 should be grouped somehow, maybe as
class Link { $name; $url; $target = selfTarget; $title = ""; ... }
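Carried through, the grouping Thorsten sketches might look something like this. (A hypothetical C++ rendering, not his actual PHP; the `"_self"` default stands in for `selfTarget`, and the field names are assumptions.)

```cpp
#include <string>

// The paired name/url/target/title parameters become one Link value.
struct Link {
    std::string name;
    std::string url;
    std::string target = "_self";  // stands in for selfTarget (assumed)
    std::string title;             // defaults to ""
};

// leafNode2's eight-plus scalar parameters collapse to two Links plus a root.
std::string leafNode2(const Link& a, const Link& b,
                      const std::string& root = "root")
{
    return root + ": " + a.name + " -> " + a.url
                + ", " + b.name + " -> " + b.url;
}
```

The call sites then pass two small aggregates instead of eight positional strings, which is the whole appeal of the parameter-object refactoring.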
Maybe so. But suppose you didn't need to pass two links? Then is it so obvious that those arguments need to be encapsulated into a single *EXTRA* class?
[...] Doing such a refactoring takes time, but it will surely make the abstraction level of my application much higher.
That does not imply a benefit to me. If it did, then I would argue that a pointer-to-a-pointer is better than a pointer, because it has a higher level of abstraction (and gives you more things to do with the intermediate pointer). By that reasoning, a pointer-to-an-X+1 is "better" than a pointer-to-an-X, and therefore, we should all use pointers that have an infinite level of nesting, to write truly divine code. Of course, the flaw in this argument is the notion of "overgeneralization."
Just for fun, I would like to see what functions you have which you claim should not be refactored. Maybe you're right that doing a refactoring would simply not be good.
Consider *gasp* data entry applications, where you have records with many fields, all of which should be initialized at once. Here's one for some mail tracking software:

bool TInputForm::Add(AnsiString ID, double Weight, AnsiString Zip,
    byte Zone, AnsiString Extra, bool MakeVisible);

Now, you might say: "But you should put all those arguments in some kind of package class before passing it to the Add() function." However, that is exactly the *point* of the Add() function! These pieces of data are collected elsewhere, and it is the responsibility of the Add() function to integrate them into a single package object. Yes, it is possible to refactor it into classes that only take a few fields at a time, but I can guarantee that such a refactoring would not be an improvement. I could give examples from lots of other business apps that need to deal with records, some of which have far more fields than this.

void __fastcall TInputForm::OnInsert(int ManifestID, MDTP::TMailClass Class,
    AnsiString ID, double Weight, AnsiString Zip, byte Zone, double Postage,
    AnsiString Extra, AnsiString Username, TDateTime Entered);

Here's an event handler that gets called *before* the Add() function above, and thus gets the raw data, not the packaged object. It wouldn't even make sense to refactor this function. There are no sensible defaults, since the data is always supposed to be passed from the data entry source. There are 10 arguments, for none of which I make apology. It has never occurred to me to attempt to reduce the number of arguments for this function. Doing so would not improve the quality of the code, because there is absolutely no reason to generalize it. It has a very specific and prescribed purpose, and there is no anticipation of needing to expand its repertoire.

Now, in this case, we are looking at functions that only get called from a small number of places (or even just one). Thus, named parameters don't really help any.
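The shape Dave describes can be sketched like so. (A minimal, self-contained C++ approximation; `MailRecord`, the reduced field set, and the vector-backed form are invented for illustration and are not his actual VCL `TInputForm`.)

```cpp
#include <cstddef>
#include <string>
#include <vector>

// The packaged object that Add() is responsible for assembling.
struct MailRecord {
    std::string id;
    double weight;
    std::string zip;
    int zone;
    std::string extra;
};

class InputForm {
public:
    // The long argument list is the point: the raw fields are collected
    // elsewhere, and Add() is the one place that integrates them into
    // a single record. A parameter object here would just move the
    // packaging work to every caller.
    bool Add(const std::string& id, double weight,
             const std::string& zip, int zone,
             const std::string& extra)
    {
        records_.push_back(MailRecord{id, weight, zip, zone, extra});
        return true;
    }

    std::size_t Count() const { return records_.size(); }

private:
    std::vector<MailRecord> records_;
};
```

The event handler would sit upstream of `Add()`, receiving the same raw fields straight from the data entry source, which is why packaging them earlier buys nothing.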
But the fact is, long argument lists are not intrinsically evil. I could give numerous other examples, but I hope I've made my point.

Dave