
Consider the following code, very simple in nature:

    static const boost::gregorian::date windows_epoch_base(1601, 1, 1);
    date_time d(windows_epoch_base, microseconds(12908391903991608));

This should resolve to sometime in early 2010.

Now, the problem is that this number is already very large, only a few factors of 10 away from the int64 overflow boundary. microseconds() is really just the subsecond_duration<> template, and on my machine it is being parameterized as subsecond_duration<boost::posix_time::time_duration, 1000000>. The code for the constructor is as follows:

    explicit subsecond_duration(boost::int64_t ss) :
        base_duration(0,0,0, ss*traits_type::res_adjust() / frac_of_second)
    {}

res_adjust() is also resolving to 1000000, so the expression computes 12908391903991608 * res_adjust() first, which overflows int64. I think it should be doing

    ss * (traits_type::res_adjust() / frac_of_second)

instead.

I don't know a whole lot about the date_time internals, so I thought I would ask here before just submitting a bug. Is the proposed fix correct? I made the change locally, and it does indeed appear to resolve the problem.

Zach
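
P.S. In case it helps, here is a small standalone sketch of the arithmetic using only the raw numbers from the example above. It is not the Boost code itself; the variable names just mirror res_adjust()/frac_of_second, and the overflow is detected with a bounds check rather than actually performing the overflowing multiply:

    #include <cstdint>
    #include <iostream>
    #include <limits>

    int main()
    {
        const std::int64_t ss             = 12908391903991608; // microseconds since 1601-01-01, early 2010
        const std::int64_t res_adjust     = 1000000;           // time_duration ticks per second (microsecond resolution)
        const std::int64_t frac_of_second = 1000000;           // microseconds per second

        // Current evaluation order: ss * res_adjust is computed first.
        // Check whether that product even fits in int64 before attempting it.
        const std::int64_t max64 = std::numeric_limits<std::int64_t>::max();
        if (ss > max64 / res_adjust)
            std::cout << "ss * res_adjust would overflow int64\n";

        // Proposed evaluation order: divide first, then multiply.
        // Here res_adjust / frac_of_second == 1, so the tick count is just ss.
        const std::int64_t ticks = ss * (res_adjust / frac_of_second);
        std::cout << "ticks with proposed order: " << ticks << "\n";
    }

With these numbers the check fires (the product would be about 1.3e22, well past the int64 maximum of about 9.2e18), while the reordered expression gives the expected tick count.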