PostgreSQL 中文操作指南
12.8. Testing and Debugging Text Search
The behavior of a custom text search configuration can easily become confusing. The functions described in this section are useful for testing text search objects. You can test a complete configuration, or test parsers and dictionaries separately.
12.8.1. Configuration Testing
The function ts_debug allows easy testing of a text search configuration.
ts_debug([ config regconfig, ] document text,
OUT alias text,
OUT description text,
OUT token text,
OUT dictionaries regdictionary[],
OUT dictionary regdictionary,
OUT lexemes text[])
returns setof record
ts_debug displays information about every token of document as produced by the parser and processed by the configured dictionaries. It uses the configuration specified by config, or default_text_search_config if that argument is omitted.
ts_debug returns one row for each token identified in the text by the parser. The columns returned are:

- alias text: short name of the token type
- description text: description of the token type
- token text: text of the token
- dictionaries regdictionary[]: the dictionaries selected by the configuration for this token type
- dictionary regdictionary: the dictionary that recognized the token, or NULL if none did
- lexemes text[]: the lexemes produced by the dictionary that recognized the token, or NULL if none did; an empty array ({}) means the token was recognized as a stop word
Here is a simple example:
SELECT * FROM ts_debug('english', 'a fat cat sat on a mat - it ate a fat rats');
alias | description | token | dictionaries | dictionary | lexemes
-----------+-----------------+-------+----------------+--------------+---------
asciiword | Word, all ASCII | a | {english_stem} | english_stem | {}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | fat | {english_stem} | english_stem | {fat}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | cat | {english_stem} | english_stem | {cat}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | sat | {english_stem} | english_stem | {sat}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | on | {english_stem} | english_stem | {}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | a | {english_stem} | english_stem | {}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | mat | {english_stem} | english_stem | {mat}
blank | Space symbols | | {} | |
blank | Space symbols | - | {} | |
asciiword | Word, all ASCII | it | {english_stem} | english_stem | {}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | ate | {english_stem} | english_stem | {ate}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | a | {english_stem} | english_stem | {}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | fat | {english_stem} | english_stem | {fat}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | rats | {english_stem} | english_stem | {rat}
For a more extensive demonstration, we first create a public.english configuration and Ispell dictionary for the English language:
CREATE TEXT SEARCH CONFIGURATION public.english ( COPY = pg_catalog.english );
CREATE TEXT SEARCH DICTIONARY english_ispell (
TEMPLATE = ispell,
DictFile = english,
AffFile = english,
StopWords = english
);
ALTER TEXT SEARCH CONFIGURATION public.english
ALTER MAPPING FOR asciiword WITH english_ispell, english_stem;
SELECT * FROM ts_debug('public.english', 'The Brightest supernovaes');
alias | description | token | dictionaries | dictionary | lexemes
-----------+-----------------+-------------+-------------------------------+----------------+-------------
asciiword | Word, all ASCII | The | {english_ispell,english_stem} | english_ispell | {}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | Brightest | {english_ispell,english_stem} | english_ispell | {bright}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | english_stem | {supernova}
In this example, the word Brightest was recognized by the parser as an ASCII word (alias asciiword). For this token type the dictionary list is english_ispell and english_stem. The word was recognized by english_ispell, which reduced it to the noun bright. The word supernovaes is unknown to the english_ispell dictionary so it was passed to the next dictionary, and, fortunately, was recognized (in fact, english_stem is a Snowball dictionary which recognizes everything; that is why it was placed at the end of the dictionary list).
The word The was recognized by the english_ispell dictionary as a stop word (Section 12.6.1) and will not be indexed. The spaces are discarded too, since the configuration provides no dictionaries at all for them.
You can reduce the width of the output by explicitly specifying which columns you want to see:
SELECT alias, token, dictionary, lexemes
FROM ts_debug('public.english', 'The Brightest supernovaes');
alias | token | dictionary | lexemes
-----------+-------------+----------------+-------------
asciiword | The | english_ispell | {}
blank | | |
asciiword | Brightest | english_ispell | {bright}
blank | | |
asciiword | supernovaes | english_stem | {supernova}
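Since ts_debug returns an ordinary rowset, you can also filter its output with a WHERE clause. For instance, the blank rows add noise when inspecting longer documents; a small convenience query (not a feature of ts_debug itself) suppresses them:

```sql
SELECT alias, token, dictionary, lexemes
FROM ts_debug('public.english', 'The Brightest supernovaes')
WHERE alias <> 'blank';
```

This leaves only the asciiword rows from the example above.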
12.8.2. Parser Testing
The following functions allow direct testing of a text search parser.
ts_parse(parser_name text, document text,
OUT tokid integer, OUT token text) returns setof record
ts_parse(parser_oid oid, document text,
OUT tokid integer, OUT token text) returns setof record
ts_parse parses the given document and returns a series of records, one for each token produced by parsing. Each record includes a tokid showing the assigned token type and a token which is the text of the token. For example:
SELECT * FROM ts_parse('default', '123 - a number');
tokid | token
-------+--------
22 | 123
12 |
12 | -
1 | a
12 |
1 | number
ts_token_type(parser_name text, OUT tokid integer,
OUT alias text, OUT description text) returns setof record
ts_token_type(parser_oid oid, OUT tokid integer,
OUT alias text, OUT description text) returns setof record
ts_token_type returns a table which describes each type of token the specified parser can recognize. For each token type, the table gives the integer tokid that the parser uses to label a token of that type, the alias that names the token type in configuration commands, and a short description. For example:
SELECT * FROM ts_token_type('default');
tokid | alias | description
-------+-----------------+------------------------------------------
1 | asciiword | Word, all ASCII
2 | word | Word, all letters
3 | numword | Word, letters and digits
4 | email | Email address
5 | url | URL
6 | host | Host
7 | sfloat | Scientific notation
8 | version | Version number
9 | hword_numpart | Hyphenated word part, letters and digits
10 | hword_part | Hyphenated word part, all letters
11 | hword_asciipart | Hyphenated word part, all ASCII
12 | blank | Space symbols
13 | tag | XML tag
14 | protocol | Protocol head
15 | numhword | Hyphenated word, letters and digits
16 | asciihword | Hyphenated word, all ASCII
17 | hword | Hyphenated word, all letters
18 | url_path | URL path
19 | file | File or path name
20 | float | Decimal notation
21 | int | Signed integer
22 | uint | Unsigned integer
23 | entity | XML entity
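Because ts_parse reports only the numeric tokid, it can be convenient to join its output against ts_token_type to see the token-type alias beside each token. One possible query (joining the two functions on tokid):

```sql
SELECT p.token, t.alias, t.description
FROM ts_parse('default', '123 - a number') AS p
JOIN ts_token_type('default') AS t ON p.tokid = t.tokid;
```

For the sample text above, this would label 123 as uint, the hyphen and spaces as blank, and a and number as asciiword.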
12.8.3. Dictionary Testing
The ts_lexize function facilitates dictionary testing.
ts_lexize(dict regdictionary, token text) returns text[]
ts_lexize returns an array of lexemes if the input token is known to the dictionary, or an empty array if the token is known to the dictionary but it is a stop word, or NULL if it is an unknown word.
Examples:
SELECT ts_lexize('english_stem', 'stars');
ts_lexize
-----------
{star}
SELECT ts_lexize('english_stem', 'a');
ts_lexize
-----------
{}
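The examples above show the known-word and stop-word cases. To see the NULL result for an unknown word, you need a dictionary that does not accept everything. Assuming the english_ispell dictionary created in Section 12.8.1, which did not recognize supernovaes:

```sql
SELECT ts_lexize('english_ispell', 'supernovaes');
```

Since english_ispell does not know the word, the result is NULL, which psql displays as an empty value.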
Note
The ts_lexize function expects a single token, not text. Here is a case where this can be confusing:
SELECT ts_lexize('thesaurus_astro', 'supernovae stars') is null;
?column?
----------
t
The thesaurus dictionary thesaurus_astro does know the phrase supernovae stars, but ts_lexize fails since it does not parse the input text but treats it as a single token. Use plainto_tsquery or to_tsvector to test thesaurus dictionaries, for example:
SELECT plainto_tsquery('supernovae stars');
plainto_tsquery
-----------------
'sn'