How do I tokenize a string in C++?
Java has a convenient split method:
String str = "The quick brown fox";
String[] results = str.split(" ");
Is there an easy way to do this in C++?
The Boost tokenizer class can make this sort of thing quite simple:
#include <iostream>
#include <string>
#include <boost/foreach.hpp>
#include <boost/tokenizer.hpp>

using namespace std;
using namespace boost;

int main(int, char**)
{
    string text = "token, test   string";

    char_separator<char> sep(", ");
    tokenizer< char_separator<char> > tokens(text, sep);
    BOOST_FOREACH (const string& t, tokens) {
        cout << t << "." << endl;
    }
}
Update for C++11:
#include <iostream>
#include <string>
#include <boost/tokenizer.hpp>

using namespace std;
using namespace boost;

int main(int, char**)
{
    string text = "token, test   string";

    char_separator<char> sep(", ");
    tokenizer<char_separator<char>> tokens(text, sep);
    for (const auto& t : tokens) {
        cout << t << "." << endl;
    }
}
Here's a real simple one:
#include <vector>
#include <string>

using namespace std;

vector<string> split(const char *str, char c = ' ')
{
    vector<string> result;

    do
    {
        const char *begin = str;

        while (*str != c && *str)
            str++;

        result.push_back(string(begin, str));
    } while (0 != *str++);

    return result;
}
Use strtok. In my opinion, there isn't a need to build a class around tokenizing unless strtok doesn't provide what you need. It might not, but in 15+ years of writing various parsing code in C and C++, I've always used strtok. Here is an example:
char myString[] = "The quick brown fox";
char *p = strtok(myString, " ");
while (p) {
    printf("Token: %s\n", p);
    p = strtok(NULL, " ");
}
A couple of caveats (which might not suit your needs): the string is "destroyed" in the process, meaning that EOS characters are placed inline at the delimiter points. Correct usage might require you to make a non-const version of the string. You can also change the list of delimiters mid-parse.
In my opinion, the above code is far simpler and easier to use than writing a separate class for it. To me, this is one of those functions that the language provides, and it does it well and cleanly. It's simply a "C based" solution. It's appropriate, it's easy, and you don't have to write a lot of extra code :-)
Another quick way is to use getline. Something like:
stringstream ss("bla bla");
string s;

while (getline(ss, s, ' ')) {
    cout << s << endl;
}
If you want, you can make a simple split() method returning a vector<string>, which is really useful.
You can use streams, iterators, and the copy algorithm to do this fairly directly.
#include <string>
#include <vector>
#include <iostream>
#include <istream>
#include <ostream>
#include <iterator>
#include <sstream>
#include <algorithm>

int main()
{
    std::string str = "The quick brown fox";

    // construct a stream from the string
    std::stringstream strstr(str);

    // use stream iterators to copy the stream to the vector as whitespace separated strings
    std::istream_iterator<std::string> it(strstr);
    std::istream_iterator<std::string> end;
    std::vector<std::string> results(it, end);

    // send the vector to stdout.
    std::ostream_iterator<std::string> oit(std::cout, "\n");
    std::copy(results.begin(), results.end(), oit);
}
No offense folks, but for such a simple problem, you are making things way too complicated. There are a lot of reasons to use Boost. But for something this simple, it's like hitting a fly with a 20# sledge.
void split( vector<string> & theStringVector,  /* Altered/returned value */
       const  string  & theString,
       const  string  & theDelimiter)
{
    UASSERT( theDelimiter.size(), >, 0); // My own ASSERT macro.

    size_t  start = 0, end = 0;

    while ( end != string::npos)
    {
        end = theString.find( theDelimiter, start);

        // If at end, use length=maxLength.  Else use length=end-start.
        theStringVector.push_back( theString.substr( start,
                       (end == string::npos) ? string::npos : end - start));

        // If at end, use start=maxSize.  Else use start=end+delimiter.
        start = (   ( end > (string::npos - theDelimiter.size()) )
                  ?  string::npos  :  end + theDelimiter.size());
    }
}
E.g. (for Doug's case),
#define SHOW(I,X)   cout << "[" << (I) << "]\t" # X " = \"" << (X) << "\"" << endl

int main()
{
    vector<string> v;

    split( v, "A:PEP:909:Inventory Item", ":" );

    for (unsigned int i = 0;  i < v.size();  i++)
        SHOW( i, v[i] );
}
And yes, we could have split() return a new vector rather than passing one in. It's trivial to wrap and overload. But depending on what I'm doing, I often find it better to re-use pre-existing objects rather than always creating new ones. (Just as long as I don't forget to empty the vector in between!)
Reference: http://www.cplusplus.com/reference/string/string/.
(I originally wrote this as a response to Doug's question: C++ Strings Modifying and Extracting based on Separators (closed). But since Martin York closed that question with a pointer over here... I'll just generalize my code.)
Boost has a strong split function: boost::algorithm::split.
Sample program:
#include <iostream>
#include <vector>
#include <boost/algorithm/string.hpp>

int main()
{
    auto s = "a,b, c ,,e,f,";
    std::vector<std::string> fields;
    boost::split(fields, s, boost::is_any_of(","));
    for (const auto& field : fields)
        std::cout << "\"" << field << "\"\n";
    return 0;
}
Output:
1 2 3 4 5 6 7 | "a" "b" " c" "" "e" "f" "" |
A solution using std::regex:
#include <iostream>
#include <regex>
#include <string>
#include <vector>

using namespace std;

int main()
{
    string str("The quick brown fox");

    regex reg("\\s+");

    sregex_token_iterator iter(str.begin(), str.end(), reg, -1);
    sregex_token_iterator end;

    vector<string> vec(iter, end);

    for (auto a : vec)
    {
        cout << a << endl;
    }
}
I know you asked for a C++ solution, but you might find this helpful:
QT
#include <QString>

...

QString str = "The quick brown fox";
QStringList results = str.split(" ");
The advantage over Boost in this example is that it's a direct one-to-one mapping to your post's code.
See more at the Qt documentation.
Here is a sample tokenizer class that might do what you want:
//Header file
class Tokenizer
{
    public:
        static const std::string DELIMITERS;
        Tokenizer(const std::string& str);
        Tokenizer(const std::string& str, const std::string& delimiters);
        bool NextToken();
        bool NextToken(const std::string& delimiters);
        const std::string GetToken() const;
        void Reset();
    protected:
        size_t m_offset;
        const std::string m_string;
        std::string m_token;
        std::string m_delimiters;
};

//CPP file
const std::string Tokenizer::DELIMITERS(" \t\n\r");

Tokenizer::Tokenizer(const std::string& s) :
    m_string(s),
    m_offset(0),
    m_delimiters(DELIMITERS) {}

Tokenizer::Tokenizer(const std::string& s, const std::string& delimiters) :
    m_string(s),
    m_offset(0),
    m_delimiters(delimiters) {}

bool Tokenizer::NextToken()
{
    return NextToken(m_delimiters);
}

bool Tokenizer::NextToken(const std::string& delimiters)
{
    size_t i = m_string.find_first_not_of(delimiters, m_offset);
    if (std::string::npos == i)
    {
        m_offset = m_string.length();
        return false;
    }

    size_t j = m_string.find_first_of(delimiters, i);
    if (std::string::npos == j)
    {
        m_token = m_string.substr(i);
        m_offset = m_string.length();
        return true;
    }

    m_token = m_string.substr(i, j - i);
    m_offset = j;
    return true;
}
Example:
std::vector<std::string> v;
Tokenizer s("split this string", " ");
while (s.NextToken())
{
    v.push_back(s.GetToken());
}
This is a simple STL-only solution (~5 lines!) using std::string::find and std::string::find_first_not_of that handles repetitions of the delimiter as well as leading and trailing delimiters:
#include <string>
#include <vector>

static const char DELIMITER = ' '; // defined here so the snippet is self-contained

void tokenize(std::string str, std::vector<std::string> &token_v){
    size_t start = str.find_first_not_of(DELIMITER), end = start;

    while (start != std::string::npos){
        // Find next occurence of delimiter
        end = str.find(DELIMITER, start);
        // Push back the token found into vector
        token_v.push_back(str.substr(start, end - start));
        // Skip all occurences of the delimiter to find new start
        start = str.find_first_not_of(DELIMITER, end);
    }
}
Try it live!
pystring is a small library that implements a bunch of Python's string functions, including the split method:
1 2 3 4 5 6 7 8 9 | #include <string> #include <vector> #include"pystring.h" std::vector<std::string> chunks; pystring::split("this string", chunks); // also can specify a separator pystring::split("this-string", chunks,"-"); |
I posted this answer for a similar question. Don't reinvent the wheel. I've used a number of libraries, and the fastest and most flexible I have come across is the C++ String Toolkit Library.
Here is an example of its use that I've posted elsewhere on Stack Overflow.
#include <iostream>
#include <vector>
#include <string>
#include <strtk.hpp>

const char *whitespace = " \t\r\n\f";
const char *whitespace_and_punctuation = " \t\r\n\f;,=";

int main()
{
    { // normal parsing of a string into a vector of strings
        std::string s("Somewhere down the road");
        std::vector<std::string> result;
        if( strtk::parse( s, whitespace, result ) )
        {
            for(size_t i = 0; i < result.size(); ++i )
                std::cout << result[i] << std::endl;
        }
    }

    { // parsing a string into a vector of floats with other separators
      // besides spaces
        std::string t("3.0, 3.14; 4.0");
        std::vector<float> values;
        if( strtk::parse( t, whitespace_and_punctuation, values ) )
        {
            for(size_t i = 0; i < values.size(); ++i )
                std::cout << values[i] << std::endl;
        }
    }

    { // parsing a string into specific variables
        std::string u("angle = 45; radius = 9.9");
        std::string w1, w2;
        float v1, v2;
        if( strtk::parse( u, whitespace_and_punctuation, w1, v1, w2, v2) )
        {
            std::cout << "word " << w1 << ", value " << v1 << std::endl;
            std::cout << "word " << w2 << ", value " << v2 << std::endl;
        }
    }

    return 0;
}
Check this example. It might help you...
#include <iostream>
#include <sstream>

using namespace std;

int main ()
{
    string tmps;
    istringstream is ("the dellimiter is the space");
    while (is.good ()) {
        is >> tmps;
        cout << tmps << " ";
    }
    return 0;
}
You can simply use a regular expression library and solve this with regular expressions.
Use the expression (\w+) and the variable in \1 (or $1, depending on the library implementation of regular expressions).
MFC/ATL has a very nice tokenizer. From MSDN:
CAtlString str( "%First Second#Third" );
CAtlString resToken;
int curPos = 0;

resToken = str.Tokenize("% #", curPos);
while (resToken != "")
{
    printf("Resulting token: %s\n", resToken);
    resToken = str.Tokenize("% #", curPos);
};

Output

Resulting Token: First
Resulting Token: Second
Resulting Token: Third
I think this is what the >> operator on string streams is for:
string word;
sin >> word;
For simple stuff I just use the following:
unsigned TokenizeString(const std::string& i_source,
                        const std::string& i_seperators,
                        bool i_discard_empty_tokens,
                        std::vector<std::string>& o_tokens)
{
    unsigned prev_pos = 0;
    unsigned pos = 0;
    unsigned number_of_tokens = 0;
    o_tokens.clear();
    pos = i_source.find_first_of(i_seperators, pos);
    while (pos != std::string::npos)
    {
        std::string token = i_source.substr(prev_pos, pos - prev_pos);
        if (!i_discard_empty_tokens || token != "")
        {
            o_tokens.push_back(i_source.substr(prev_pos, pos - prev_pos));
            number_of_tokens++;
        }

        pos++;
        prev_pos = pos;
        pos = i_source.find_first_of(i_seperators, pos);
    }

    if (prev_pos < i_source.length())
    {
        o_tokens.push_back(i_source.substr(prev_pos));
        number_of_tokens++;
    }

    return number_of_tokens;
}
Cowardly disclaimer: I write real-time data processing software where the data comes in through binary files, sockets, or some API call (I/O cards, cameras). I never use this function for anything more complicated or time-critical than reading external configuration files on startup.
Many overly complicated suggestions here. Try this simple std::string solution:
using namespace std;

string someText = ...

string::size_type tokenOff = 0, sepOff = tokenOff;
while (sepOff != string::npos)
{
    sepOff = someText.find(' ', sepOff);
    string::size_type tokenLen = (sepOff == string::npos) ? sepOff : sepOff++ - tokenOff;
    string token = someText.substr(tokenOff, tokenLen);
    if (!token.empty())
        /* do something with token */;
    tokenOff = sepOff;
}
If you're willing to use C, you can use the strtok function. You should pay attention to multi-threading issues when using it.
Adam Pierce's answer provides a hand-spun tokenizer taking in a const char*. It's a bit more problematic to do with iterators, because incrementing a string's end iterator is undefined. That said, given string str{ "The quick brown fox" }, we can certainly accomplish this:
auto start = find(cbegin(str), cend(str), ' ');
vector<string> tokens{ string(cbegin(str), start) };

while (start != cend(str)) {
    const auto finish = find(++start, cend(str), ' ');

    tokens.push_back(string(start, finish));
    start = finish;
}
Live Example
If you're looking to abstract complexity by using standard functionality, as On Freund suggests, strtok is a simple option:
vector<string> tokens;

for (auto i = strtok(data(str), " "); i != nullptr; i = strtok(nullptr, " "))
    tokens.push_back(i);
If you don't have access to C++17, you'll need to substitute data(str), as in this example:
Though not demonstrated in the example, strtok need not use the same delimiter for each token. Along with this advantage, though, it has several drawbacks: it cannot be used on multiple strings (or on multiple threads) at the same time, and it modifies the string it operates on, so it cannot be used on a const string.
Neither of the previous methods can generate a tokenized vector in-place, meaning that without abstracting them into a helper function they cannot initialize a const vector<string> tokens. That functionality, and the ability to accept any whitespace delimiter, can be harnessed using an istream_iterator. For example, given const string str{ "The quick brown fox" }, we can do this:
istringstream is{ str };
const vector<string> tokens{ istream_iterator<string>(is), istream_iterator<string>() };
Live Example
The required construction of an istringstream for this option has a far greater cost than the previous two options; however, this cost is typically hidden in the expense of string allocation.
If none of the above options is flexible enough for your tokenization needs, the most flexible option is a regex_token_iterator. Of course, with this flexibility comes greater expense, but again this is likely hidden in the string allocation cost. Say, for example, we want to tokenize on non-escaped commas, also eating whitespace; we can do this:
const regex re{ "\\s*((?:[^\\\\,]|\\\\.)*?)\\s*(?:,|$)" };
const vector<string> tokens{ sregex_token_iterator(cbegin(str), cend(str), re, 1), sregex_token_iterator() };
Live Example
It seems strange to me that, with all the speed-conscious nerds here, no one has presented a version that uses a compile-time generated lookup table for the delimiters (example implementation further down). Using a lookup table and iterators should beat std::regex in efficiency; if you don't need to beat regex, just use it: it's standard as of C++11 and super flexible.
Some have already suggested regex, but for the noobs here is a packaged example that should do exactly what the OP expects:
#include <iostream>
#include <regex>
#include <string>
#include <vector>

std::vector<std::string> split(std::string::const_iterator it, std::string::const_iterator end, std::regex e = std::regex{"\\w+"}){
    std::smatch m{};
    std::vector<std::string> ret{};
    while (std::regex_search (it,end,m,e)){
        ret.emplace_back(m.str());
        std::advance(it, m.position() + m.length()); //next start position = match position + match length
    }
    return ret;
}

std::vector<std::string> split(const std::string &s, std::regex e = std::regex{"\\w+"}){ //comfort version calls flexible version
    return split(s.cbegin(), s.cend(), std::move(e));
}

int main ()
{
    std::string str {"Some people, excluding those present, have been compile time constants - since puberty."};
    auto v = split(str);
    for(const auto&s:v){
        std::cout << s << std::endl;
    }
    std::cout << "crazy version:" << std::endl;
    v = split(str, std::regex{"[^e]+"}); //using e as delim shows flexibility
    for(const auto&s:v){
        std::cout << s << std::endl;
    }
    return 0;
}
If we need more speed, and accept the constraint that all chars must be 8 bits, we can make the lookup table at compile time using metaprogramming:
template<bool...> struct BoolSequence{};        //just here to hold bools
template<char...> struct CharSequence{};        //just here to hold chars
template<typename T, char C> struct Contains;   //generic
template<char First, char... Cs, char Match>    //not first specialization
struct Contains<CharSequence<First, Cs...>,Match> :
    Contains<CharSequence<Cs...>, Match>{};     //strip first and increase index
template<char First, char... Cs>                //is first specialization
struct Contains<CharSequence<First, Cs...>,First>: std::true_type {};
template<char Match>                            //not found specialization
struct Contains<CharSequence<>,Match>: std::false_type{};

template<int I, typename T, typename U>
struct MakeSequence;                            //generic
template<int I, bool... Bs, typename U>
struct MakeSequence<I,BoolSequence<Bs...>, U>:  //not last
    MakeSequence<I-1, BoolSequence<Contains<U,I-1>::value,Bs...>, U>{};
template<bool... Bs, typename U>
struct MakeSequence<0,BoolSequence<Bs...>,U>{   //last
    using Type = BoolSequence<Bs...>;
};

template<typename T> struct BoolASCIITable;
template<bool... Bs> struct BoolASCIITable<BoolSequence<Bs...>>{
    /* could be made constexpr but not yet supported by MSVC */
    static bool isDelim(const char c){
        static const bool table[256] = {Bs...};
        return table[static_cast<int>(c)];
    }
};

using Delims = CharSequence<'.',',',' ',':','\n'>; //list your custom delimiters here
using Table = BoolASCIITable<typename MakeSequence<256,BoolSequence<>,Delims>::Type>;
With that in place, making a getNextToken function is easy:
template<typename T_It>
std::pair<T_It,T_It> getNextToken(T_It begin, T_It end){
    begin = std::find_if(begin,end,std::not1(Table{})); //find first non delim or end
    auto second = std::find_if(begin,end,Table{});      //find first delim or end
    return std::make_pair(begin,second);
}
Using it is also easy:
int main() {
    std::string s{"Some people, excluding those present, have been compile time constants - since puberty."};
    auto it = std::begin(s);
    auto end = std::end(s);
    while(it != std::end(s)){
        auto token = getNextToken(it,end);
        std::cout << std::string(token.first,token.second) << std::endl;
        it = token.second;
    }
    return 0;
}
Here is a live example: http://ideone.com/gktklq
I know this question has already been answered, but I want to contribute. Maybe my solution is a bit simple, but this is what I came up with:
vector<string> get_words(string const& text)
{
    vector<string> result;
    string tmp = text;

    size_t first_pos = 0;
    size_t second_pos = tmp.find(" ");

    while (second_pos != string::npos)
    {
        if (first_pos != second_pos)
        {
            string word = tmp.substr(first_pos, second_pos - first_pos);
            result.push_back(word);
        }
        tmp = tmp.substr(second_pos + 1);
        second_pos = tmp.find(" ");
    }

    result.push_back(tmp);

    return result;
}
Please comment if there is a better approach or if something is wrong with my code.
Here is an approach that allows you to control whether empty tokens are included (like strsep) or excluded (like strtok).
#include <string.h> // for strchr and strlen

/*
 * want_empty_tokens==true  : include empty tokens, like strsep()
 * want_empty_tokens==false : exclude empty tokens, like strtok()
 */
std::vector<std::string> tokenize(const char* src,
                                  char delim,
                                  bool want_empty_tokens)
{
    std::vector<std::string> tokens;

    if (src and *src != '\0') // defensive
        while( true ) {
            const char* d = strchr(src, delim);
            size_t len = (d)? d-src : strlen(src);

            if (len or want_empty_tokens)
                tokens.push_back( std::string(src, len) ); // capture token

            if (d) src += len+1; else break;
        }

    return tokens;
}
There is no direct way to do this. Refer to this Code Project source code to find out how to build a class for this.
You can make use of boost::make_find_iterator. Something similar to this:
template<typename CH>
inline vector< basic_string<CH> > tokenize(
    const basic_string<CH> &Input,
    const basic_string<CH> &Delimiter,
    bool remove_empty_token)
{
    typedef typename basic_string<CH>::const_iterator string_iterator_t;
    typedef boost::find_iterator< string_iterator_t > string_find_iterator_t;

    vector< basic_string<CH> > Result;
    string_iterator_t it = Input.begin();
    string_iterator_t it_end = Input.end();
    for(string_find_iterator_t i = boost::make_find_iterator(Input, boost::first_finder(Delimiter, boost::is_equal()));
        i != string_find_iterator_t();
        ++i)
    {
        if(remove_empty_token){
            if(it != i->begin())
                Result.push_back(basic_string<CH>(it,i->begin()));
        }
        else
            Result.push_back(basic_string<CH>(it,i->begin()));
        it = i->end();
    }
    if(it != it_end)
        Result.push_back(basic_string<CH>(it,it_end));

    return Result;
}
Here's my Swiss Army knife of string tokenizers, for splitting strings by whitespace, accounting for single- and double-quote-wrapped strings, and stripping those characters from the results. I used RegexBuddy 4.x to generate most of the code snippet, but I added custom handling for stripping quotes and a few other things.
#include <string>
#include <locale>
#include <regex>
#include <vector>

std::vector<std::wstring> tokenize_string(std::wstring string_to_tokenize) {
    std::vector<std::wstring> tokens;
    std::wregex re(LR"(("[^"]*"|'[^']*'|[^"' ]+))", std::regex_constants::collate);

    std::wsregex_iterator next( string_to_tokenize.begin(),
                                string_to_tokenize.end(),
                                re,
                                std::regex_constants::match_not_null );

    std::wsregex_iterator end;
    const wchar_t single_quote = L'\'';
    const wchar_t double_quote = L'\"';
    while ( next != end ) {
        std::wsmatch match = *next;
        const std::wstring token = match.str( 0 );
        next++;

        if (token.length() > 2 && (token.front() == double_quote || token.front() == single_quote))
            tokens.emplace_back( std::wstring(token.begin()+1, token.begin()+token.length()-1) );
        else
            tokens.emplace_back(token);
    }
    return tokens;
}
#include <iostream>
#include <boost/tokenizer.hpp>
#include <string>

using namespace std;
using namespace boost;

typedef tokenizer<char_separator<wchar_t>,
                  wstring::const_iterator, wstring> Tok;

int main()
{
    wstring s;
    while (getline(wcin, s)) {
        char_separator<wchar_t> sep(L" "); // list of separator characters
        Tok tok(s, sep);
        for (Tok::iterator beg = tok.begin(); beg != tok.end(); ++beg) {
            wcout << *beg << L"\t"; // output (or store in vector)
        }
        wcout << L"\n";
    }
    return 0;
}
Simple C++ code (standard C++98) that accepts multiple delimiters (specified in a std::string) and uses only vectors, strings and iterators.
#include <iostream>
#include <vector>
#include <string>
#include <stdexcept>

std::vector<std::string> split(const std::string& str, const std::string& delim){
    std::vector<std::string> result;
    if (str.empty())
        throw std::runtime_error("Can not tokenize an empty string!");
    std::string::const_iterator begin, str_it;
    begin = str_it = str.begin();
    do {
        while (delim.find(*str_it) == std::string::npos && str_it != str.end())
            str_it++; // find the position of the first delimiter in str
        std::string token = std::string(begin, str_it); // grab the token
        if (!token.empty()) // empty token only when str starts with a delimiter
            result.push_back(token); // push the token into a vector<string>
        while (delim.find(*str_it) != std::string::npos && str_it != str.end())
            str_it++; // ignore the additional consecutive delimiters
        begin = str_it; // process the remaining tokens
    } while (str_it != str.end());
    return result;
}

int main() {
    std::string test_string = ".this is.a.../.simple;;test;;;END";
    std::string delim = "; ./"; // string containing the delimiters
    std::vector<std::string> tokens = split(test_string, delim);
    for (std::vector<std::string>::const_iterator it = tokens.begin();
         it != tokens.end(); it++)
        std::cout << *it << std::endl;
}
/// split a string into multiple sub strings, based on a separator string
/// for example, if separator="::",
///
/// s = "abc" -> "abc"
///
/// s = "abc::def xy::st:" -> "abc", "def xy" and "st:",
///
/// s = "::abc::" -> "abc"
///
/// s = "::" -> NO sub strings found
///
/// s = "" -> NO sub strings found
///
/// then append the sub-strings to the end of the vector v.
///
/// the idea comes from the findUrls() function of "Accelerated C++", chapt7,
/// findurls.cpp
///
void split(const string& s, const string& sep, vector<string>& v)
{
    typedef string::const_iterator iter;
    iter b = s.begin(), e = s.end(), i;
    iter sep_b = sep.begin(), sep_e = sep.end();

    // search through s
    while (b != e){
        i = search(b, e, sep_b, sep_e);

        // no more separator found
        if (i == e){
            // it's not an empty string
            if (b != e)
                v.push_back(string(b, e));
            break;
        }
        else if (i == b){
            // the separator is found and right at the beginning
            // in this case, we need to move on and search for the
            // next separator
            b = i + sep.length();
        }
        else{
            // found the separator
            v.push_back(string(b, i));
            b = i;
        }
    }
}
The Boost libraries are good, but they are not always available. Doing this sort of thing by hand is also a good brain exercise. Here we just use the std::search() algorithm from the STL; see the code above.
I was looking for a way to split a string by a separator of any length, so I started writing it from scratch, as the existing solutions didn't suit me.
Here is my little algorithm, using only the STL:
//use like this
//std::vector<std::wstring> vec = Split<std::wstring> (L"Hello##world##!", L"##");

template <typename valueType>
static std::vector <valueType> Split (valueType text, const valueType& delimiter)
{
    std::vector <valueType> tokens;
    size_t pos = 0;
    valueType token;

    while ((pos = text.find(delimiter)) != valueType::npos)
    {
        token = text.substr(0, pos);
        tokens.push_back (token);
        text.erase(0, pos + delimiter.length());
    }
    tokens.push_back (text);

    return tokens;
}
It can be used with a separator of any length and form, as far as I've tested. Instantiate it with either the string or wstring type.
All the algorithm does is search for the delimiter, take the part of the string up to the delimiter, delete the delimiter, and search again until there are no more occurrences.
I hope it helps.
I made a lexer/tokenizer before using only standard libraries. Here's the code:
#include <iostream>
#include <string>
#include <vector>
#include <sstream>

using namespace std;

string seps(string& s) {
    if (!s.size()) return "";
    stringstream ss;
    ss << s[0];
    for (int i = 1; i < s.size(); i++) {
        ss << '|' << s[i];
    }
    return ss.str();
}

void Tokenize(string& str, vector<string>& tokens, const string& delimiters = " ")
{
    seps(str);

    // Skip delimiters at beginning.
    string::size_type lastPos = str.find_first_not_of(delimiters, 0);

    // Find first "non-delimiter".
    string::size_type pos = str.find_first_of(delimiters, lastPos);

    while (string::npos != pos || string::npos != lastPos)
    {
        // Found a token, add it to the vector.
        tokens.push_back(str.substr(lastPos, pos - lastPos));

        // Skip delimiters.  Note the "not_of"
        lastPos = str.find_first_not_of(delimiters, pos);

        // Find next "non-delimiter"
        pos = str.find_first_of(delimiters, lastPos);
    }
}

int main(int argc, char *argv[])
{
    vector<string> t;
    string s = "Tokens for everyone!";

    Tokenize(s, t, "|");

    for (auto c : t)
        cout << c << endl;

    system("pause");
    return 0;
}
If the maximum length of the input string to be tokenized is known, one can exploit this to implement a very fast version. I am sketching the basic idea below, which was inspired both by strtok() and by the "suffix array" data structure described in Jon Bentley's "Programming Pearls", 2nd edition, chapter 15. The C++ class in this case only gives some organization and convenience of use. The implementation shown can easily be extended to remove leading and trailing whitespace characters from the tokens.
Basically, one can replace the separator characters with string-terminating '\0' characters and set pointers to the tokens within the modified string. In the extreme case, when the string consists only of separators, one gets string-length-plus-one resulting empty tokens. It is practical to duplicate the string to be modified.
Header file:
class TextLineSplitter
{
public:

    TextLineSplitter( const size_t max_line_len );

    ~TextLineSplitter();

    void            SplitLine( const char *line,
                               const char sep_char = ',' );

    inline size_t   NumTokens( void ) const
    {
        return mNumTokens;
    }

    const char *    GetToken( const size_t token_idx ) const
    {
        assert( token_idx < mNumTokens );
        return mTokens[ token_idx ];
    }

private:
    const size_t    mStorageSize;

    char           *mBuff;
    char          **mTokens;
    size_t          mNumTokens;

    inline void     ResetContent( void )
    {
        memset( mBuff, 0, mStorageSize );
        // mark all items as empty:
        memset( mTokens, 0, mStorageSize * sizeof( char* ) );
        // reset counter for found items:
        mNumTokens = 0L;
    }
};
Implementation file:
TextLineSplitter::TextLineSplitter( const size_t max_line_len ):
    mStorageSize ( max_line_len + 1L )
{
    // allocate memory
    mBuff   = new char  [ mStorageSize ];
    mTokens = new char* [ mStorageSize ];

    ResetContent();
}

TextLineSplitter::~TextLineSplitter()
{
    delete [] mBuff;
    delete [] mTokens;
}


void TextLineSplitter::SplitLine( const char *line,
                                  const char sep_char /* = ',' */ )
{
    assert( sep_char != '\0' );

    ResetContent();
    strncpy( mBuff, line, mStorageSize );

    size_t idx = 0L; // running index for characters

    do
    {
        assert( idx < mStorageSize );

        const char chr = line[ idx ]; // retrieve current character

        if( mTokens[ mNumTokens ] == NULL )
        {
            mTokens[ mNumTokens ] = &mBuff[ idx ];
        } // if

        if( chr == sep_char || chr == '\0' )
        { // item or line finished
            // overwrite separator with a 0-terminating character:
            mBuff[ idx ] = '\0';
            // count-up items:
            mNumTokens ++;
        } // if

    } while( line[ idx++ ] );
}
A scenario of usage would be:
// create an instance capable of splitting strings up to 1000 chars long:
TextLineSplitter spl( 1000 );
spl.SplitLine( "Item1,,Item2,Item3" );
for( size_t i = 0; i < spl.NumTokens(); i++ )
{
    printf( "%s\n", spl.GetToken( i ) );
}
Output:
Item1

Item2
Item3
This simple loop tokenises with only standard library files:
#include <iostream>
#include <stdio.h>
#include <string.h>

using namespace std;

class word
{
public:
    char w[20];
    word()
    {
        for (int j = 0; j < 20; j++)
        {
            w[j] = '\0';
        }
    }
};

int main()
{
    int i = 0, n = 0, j = 0, k = 0;
    char input[100];
    word ww[100];
    gets(input);
    n = strlen(input);

    for (i = 0; i <= n; i++)
    {
        if (input[i] != ' ' && input[i] != '\0')
        {
            ww[k].w[j] = input[i];
            j++;
        }
        else
        {
            k++;
            j = 0;
        }
    }
    return 0;
}