tokenizer bug? core dump

With '\n' it works fine, but with '\r\n' it core dumps. What happened?

{
    const string test_string =
        "\"message ; \\r\\n message\" \"quote \\\"\" 'escape \\\\'";
    typedef tokenizer<escaped_list_separator<char> > Tok;
    escaped_list_separator<char> sep("\\", " ;", "\"\'");
    Tok tok(test_string, sep);
    for (Tok::iterator beg = tok.begin(); beg != tok.end(); ++beg)
        cout << "[" << *beg << "]" << endl;
}

(gdb) bt
#0  0x281047b4 in kill () from /usr/lib/libc.so.4
#1  0x28144b26 in abort () from /usr/lib/libc.so.4
#2  0x28097444 in __terminate () from /usr/lib/libstdc++.so.3
#3  0x28097461 in __terminate () from /usr/lib/libstdc++.so.3
#4  0x280977fb in __sjthrow () from /usr/lib/libstdc++.so.3
#5  0x804c5ea in boost::iterator_adaptor<boost::detail::token_iterator_base<char const *>, boost::detail::tokenizer_policy<basic_string<char, string_char_traits<char>, __default_alloc_template<false, 0> >, boost::escaped_list_separator<char, string_char_traits<char> > >, basic_string<char, string_char_traits<char>, __default_alloc_template<false, 0> >, basic_string<char, string_char_traits<char>, __default_alloc_template<false, 0> > const &, basic_string<char, string_char_traits<char>, __default_alloc_template<false, 0> > const *, forward_iterator_tag, int>::iterator_adaptor ()
#6  0x804bdd9 in boost::tokenizer<boost::escaped_list_separator<char, string_char_traits<char> >, char const *, basic_string<char, string_char_traits<char>, __default_alloc_template<false, 0> > >::begin ()
#7  0x804a64f in main ()
#8  0x804a4b5 in _start ()

Oh, I see: I did not catch the exception. Looking at the source, I found that the tokenizer supports only '\n' as a letter escape, apart from the characters the separator itself needs (the escape and quote characters); any other escape sequence throws. Will more escape sequences be added? (Sorry, my English is not good.)
participants (1)
- wugui