Table 9-37, Table 9-38, and Table 9-39 summarize the functions and operators that are provided for full text searching. See Chapter 12 for a detailed explanation of PostgreSQL's text search facility.
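As a quick orientation before the operator tables, a typical full-text search combines to_tsvector and to_tsquery with the @@ match operator in a WHERE clause. The table and column names in this sketch are hypothetical:

```sql
-- Hypothetical table (documents) and columns (title, body), shown only
-- to illustrate how the @@ operator is commonly used in a query.
SELECT title
FROM documents
WHERE to_tsvector('english', body) @@ to_tsquery('english', 'cat & rat');
```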
Table 9-37. Text Search Operators

  @@    tsvector matches tsquery?
        Example: to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat')
        Result:  t

  @@@   deprecated synonym for @@
        Example: to_tsvector('fat cats ate rats') @@@ to_tsquery('cat & rat')
        Result:  t

  ||    concatenate tsvectors
        Example: 'a:1 b:2'::tsvector || 'c:1 d:2 b:3'::tsvector
        Result:  'a':1 'b':2,5 'c':3 'd':4

  &&    AND tsquerys together
        Example: 'fat | rat'::tsquery && 'cat'::tsquery
        Result:  ( 'fat' | 'rat' ) & 'cat'

  ||    OR tsquerys together
        Example: 'fat | rat'::tsquery || 'cat'::tsquery
        Result:  ( 'fat' | 'rat' ) | 'cat'

  !!    negate a tsquery
        Example: !! 'cat'::tsquery
        Result:  !'cat'

  @>    tsquery contains another?
        Example: 'cat'::tsquery @> 'cat & rat'::tsquery
        Result:  f

  <@    tsquery is contained in?
        Example: 'cat'::tsquery <@ 'cat & rat'::tsquery
        Result:  t
Note: The tsquery containment operators consider only the lexemes
listed in the two queries, ignoring the combining operators.
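For instance, since only the lexemes are compared, containment holds whenever one query's lexemes are a subset of the other's, regardless of how they are combined. A minimal sketch, using the same values as the operator table:

```sql
-- Containment compares lexeme sets only; & , | and ! are ignored.
SELECT 'cat'::tsquery <@ 'cat & rat'::tsquery;  -- t: {cat} is a subset of {cat,rat}
SELECT 'cat'::tsquery @> 'cat & rat'::tsquery;  -- f: 'rat' is not in {cat}
```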
In addition to the operators shown in the table, the ordinary B-tree comparison operators (=, <, etc.) are defined for types tsvector and tsquery. These are not very useful for text searching but allow, for example, unique indexes to be built on columns of these types.
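For example, the B-tree operators make it possible to declare a unique index on such a column. The table and index names in this sketch are hypothetical:

```sql
-- Hypothetical table: the unique index relies on the B-tree = operator
-- for tsquery; the ordering itself has no linguistic meaning.
CREATE TABLE saved_searches (
    label text PRIMARY KEY,
    query tsquery
);
CREATE UNIQUE INDEX saved_searches_query_idx ON saved_searches (query);
```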
Note: All the text search functions that accept an optional regconfig
argument will use the configuration specified by
default_text_search_config
when that argument is omitted.
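As a sketch of that fallback behavior, the two calls below should produce the same result once the configuration parameter is set:

```sql
-- With no regconfig argument, to_tsvector() falls back to
-- default_text_search_config, making these two calls equivalent.
SET default_text_search_config = 'pg_catalog.english';
SELECT to_tsvector('The quick brown foxes');            -- uses the default
SELECT to_tsvector('english', 'The quick brown foxes'); -- explicit config
```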
The functions in
Table 9-39
are listed separately because they are not usually used in everyday text
searching operations. They are helpful for development and debugging
of new text search configurations.
Table 9-39. Text Search Debugging Functions

  ts_debug([ config regconfig, ] document text, OUT alias text, OUT description text, OUT token text, OUT dictionaries regdictionary[], OUT dictionary regdictionary, OUT lexemes text[])
      Return Type: setof record
      Description: test a configuration
      Example: ts_debug('english', 'The Brightest supernovaes')
      Result:  (asciiword,"Word, all ASCII",The,{english_stem},english_stem,{}) ...

  ts_lexize(dict regdictionary, token text)
      Return Type: text[]
      Description: test a dictionary
      Example: ts_lexize('english_stem', 'stars')
      Result:  {star}

  ts_parse(parser_name text, document text, OUT tokid integer, OUT token text)
      Return Type: setof record
      Description: test a parser
      Example: ts_parse('default', 'foo - bar')
      Result:  (1,foo) ...

  ts_parse(parser_oid oid, document text, OUT tokid integer, OUT token text)
      Return Type: setof record
      Description: test a parser
      Example: ts_parse(3722, 'foo - bar')
      Result:  (1,foo) ...

  ts_token_type(parser_name text, OUT tokid integer, OUT alias text, OUT description text)
      Return Type: setof record
      Description: get token types defined by parser
      Example: ts_token_type('default')
      Result:  (1,asciiword,"Word, all ASCII") ...

  ts_token_type(parser_oid oid, OUT tokid integer, OUT alias text, OUT description text)
      Return Type: setof record
      Description: get token types defined by parser
      Example: ts_token_type(3722)
      Result:  (1,asciiword,"Word, all ASCII") ...

  ts_stat(sqlquery text, [ weights text, ] OUT word text, OUT ndoc integer, OUT nentry integer)