Python Data Science Tutorial

Python - Word Tokenization

Word tokenization is the process of splitting a large sample of text into words. This is a requirement in natural language processing tasks where each word needs to be captured and subjected to further analysis, such as classifying it or counting its occurrences for a particular sentiment. The Natural Language Toolkit (NLTK) is a library used to achieve this. Install NLTK before proceeding with the Python program for word tokenization.

conda install -c anaconda nltk
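
The tokenizers used below also rely on NLTK's pretrained Punkt models. If the examples raise a LookupError, download the models once (note that in recent NLTK versions the resource is named punkt_tab rather than punkt):

import nltk
nltk.download('punkt')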

Next, we use the word_tokenize method to split the paragraph into individual words.

import nltk

word_data = "It originated from the idea that there are readers who prefer learning new skills from the comforts of their drawing rooms"
# Split the text into individual word and punctuation tokens
nltk_tokens = nltk.word_tokenize(word_data)
print(nltk_tokens)

When we execute the above code, it produces the following result.

['It', 'originated', 'from', 'the', 'idea', 'that', 'there', 'are', 'readers',
'who', 'prefer', 'learning', 'new', 'skills', 'from', 'the',
'comforts', 'of', 'their', 'drawing', 'rooms']
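
Once the text is tokenized, the counting mentioned above becomes straightforward. As a minimal sketch, NLTK's FreqDist class can tally how often each token occurs in the list produced by the previous example:

import nltk

word_data = "It originated from the idea that there are readers who prefer learning new skills from the comforts of their drawing rooms"
nltk_tokens = nltk.word_tokenize(word_data)
# Count how many times each token appears; 'from' and 'the' each occur twice
freq = nltk.FreqDist(nltk_tokens)
print(freq.most_common(3))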

Tokenizing Sentences

We can also tokenize the sentences in a paragraph, just as we tokenized the words. We use the sent_tokenize method to achieve this. Below is an example.

import nltk

sentence_data = "Sun rises in the east. Sun sets in the west."
# Split the paragraph into individual sentences
nltk_tokens = nltk.sent_tokenize(sentence_data)
print(nltk_tokens)

When we execute the above code, it produces the following result.

['Sun rises in the east.', 'Sun sets in the west.']
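
The two tokenizers can also be combined. As a small sketch, the paragraph can first be split into sentences with sent_tokenize, and each sentence can then be split into words with word_tokenize:

import nltk

sentence_data = "Sun rises in the east. Sun sets in the west."
# Tokenize sentence by sentence, then word by word within each sentence
for sentence in nltk.sent_tokenize(sentence_data):
    print(nltk.word_tokenize(sentence))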