How to tweak the NLTK sentence tokenizer
I'm using NLTK to analyze a few classic texts, and I'm running into trouble tokenizing the text sentence by sentence. For example, here's what I get for a snippet from Moby Dick:
```python
import nltk
sent_tokenize = nltk.data.load('tokenizers/punkt/english.pickle')

'''
(Chapter 16)
A clam for supper? a cold clam; is THAT what you mean, Mrs. Hussey?" says I, "but
that's a rather cold and clammy reception in the winter time, ain't it, Mrs. Hussey?"
'''
sample = 'A clam for supper? a cold clam; is THAT what you mean, Mrs. Hussey?" says I, "but that\'s a rather cold and clammy reception in the winter time, ain\'t it, Mrs. Hussey?"'

print "\n-----\n".join(sent_tokenize.tokenize(sample))

'''
OUTPUT
"A clam for supper?
-----
a cold clam; is THAT what you mean, Mrs.
-----
Hussey?
-----
" says I, "but that's a rather cold and clammy reception in the winter time, ain't it, Mrs.
-----
Hussey?
-----
"
'''
```
I don't expect perfection here, considering that Melville's syntax is a bit dated, but NLTK ought to be able to handle terminal double quotes and titles like "Mrs." Since the tokenizer is the result of an unsupervised training algorithm, however, I can't figure out how to tinker with it.
Does anyone have recommendations for a better sentence tokenizer? I'd prefer a simple heuristic that I can hack rather than having to train my own parser.
You need to supply the tokenizer with a list of abbreviations, like so:
```python
from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters

punkt_param = PunktParameters()
punkt_param.abbrev_types = set(['dr', 'vs', 'mr', 'mrs', 'prof', 'inc'])
sentence_splitter = PunktSentenceTokenizer(punkt_param)
text = "is THAT what you mean, Mrs. Hussey?"
sentences = sentence_splitter.tokenize(text)
```
`sentences` is now:
```python
['is THAT what you mean, Mrs. Hussey?']
```
UPDATE: This does not work if the last word of the sentence has an apostrophe or a quotation mark attached to it (like Hussey?'). So a quick-and-dirty way around this is to put spaces in front of apostrophes and quotes that follow sentence-end symbols (.!?):
```python
text = text.replace('?"', '? "').replace('!"', '! "').replace('."', '. "')
```
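Putting the two together, here's a minimal sketch that reuses the `sentence_splitter` built above on a made-up snippet (the exact split Punkt produces can vary by NLTK version):

```python
# Quick-and-dirty fix: detach quotes from the sentence-final punctuation
text = 'is THAT what you mean, Mrs. Hussey?" says I.'
text = text.replace('?"', '? "').replace('!"', '! "').replace('."', '. "')

# 'Hussey?' now ends with a bare '?', so Punkt can place the boundary there
# instead of dragging the quote into the middle of the split
print sentence_splitter.tokenize(text)
```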
You can modify NLTK's pre-trained English sentence tokenizer to recognize more abbreviations by adding them to the set `_params.abbrev_types`. For example:
```python
import nltk

extra_abbreviations = ['dr', 'vs', 'mr', 'mrs', 'prof', 'inc', 'i.e']
sentence_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
sentence_tokenizer._params.abbrev_types.update(extra_abbreviations)
```
Note that the abbreviations must be specified without the final period, but do include any internal periods, as in `'i.e'` above.
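As a quick sanity check, a sketch with a made-up sample sentence (whether the stock pickle already handles a given abbreviation can vary by NLTK version, so treat the "before" behavior as illustrative):

```python
import nltk

sentence_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
sample = 'We tested several classics, i.e. Moby Dick and others.'

print sentence_tokenizer.tokenize(sample)  # may split after 'i.e.'

# 'i.e' is listed without its final period, with the internal one kept
sentence_tokenizer._params.abbrev_types.update(['i.e'])
print sentence_tokenizer.tokenize(sample)  # should now stay one sentence
```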
You can tell the `tokenize` method to include a "terminal" double quote with the rest of the sentence by setting the `realign_boundaries` parameter to `True`; see the code below for an example.
I do not know of a clean way to prevent text like `Mrs. Hussey` from being split into two sentences. However, here is a hack which

- mangles all occurrences of `Mrs. Hussey` into `Mrs._Hussey`,
- then splits the text into sentences with `sent_tokenize.tokenize`,
- then, for each sentence, unmangles `Mrs._Hussey` back into `Mrs. Hussey`.
I wish I knew a better way, but this might work in a pinch.
```python
import nltk
import re
import functools

mangle = functools.partial(re.sub, r'([MD]rs?[.]) ([A-Z])', r'\1_\2')
unmangle = functools.partial(re.sub, r'([MD]rs?[.])_([A-Z])', r'\1 \2')

sent_tokenize = nltk.data.load('tokenizers/punkt/english.pickle')

sample = '''"A clam for supper? a cold clam; is THAT what you mean, Mrs. Hussey?" says I, "but that\'s a rather cold and clammy reception in the winter time, ain\'t it, Mrs. Hussey?"'''

sample = mangle(sample)
sentences = [unmangle(sent) for sent in sent_tokenize.tokenize(
    sample, realign_boundaries=True)]

print u"\n-----\n".join(sentences)
```
This yields
```
"A clam for supper?
-----
a cold clam; is THAT what you mean, Mrs. Hussey?"
-----
says I, "but that's a rather cold and clammy reception in the winter time, ain't it, Mrs. Hussey?"
```
So I had a similar problem and tried out vpekar's solution above.
Perhaps mine is some kind of edge case, but I observed the same behavior after applying the replacements. However, when I tried replacing the punctuation with the quotation marks placed before it, I got the output I was looking for. Presumably, lack of adherence to MLA is less important than keeping the original quote as a single sentence.
To be more clear:
```python
text = text.replace('?"', '"?').replace('!"', '"!').replace('."', '".')
```
If MLA is important, though, you could always go back and reverse these changes wherever it matters.
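For example, a minimal sketch of that reversal (the helper name is made up; `sentence_splitter` and `text` are assumed from the earlier snippets):

```python
def restore_quote_order(sentence):
    # Undo the swap so the punctuation sits back inside the closing quote
    return sentence.replace('"?', '?"').replace('"!', '!"').replace('".', '."')

swapped = text.replace('?"', '"?').replace('!"', '"!').replace('."', '".')
sentences = [restore_quote_order(s) for s in sentence_splitter.tokenize(swapped)]
```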