[Groonga-commit] groonga/groonga at 9824c95 [master] doc: fix label


Kouhei Sutou null+****@clear*****
Mon Mar 16 16:23:35 JST 2015


Kouhei Sutou	2015-03-16 16:23:35 +0900 (Mon, 16 Mar 2015)

  New Revision: 9824c95f7d65903f5f8f6ab79ee1ae76a5c2c0cc
  https://github.com/groonga/groonga/commit/9824c95f7d65903f5f8f6ab79ee1ae76a5c2c0cc

  Message:
    doc: fix label

  Modified files:
    doc/source/reference/tables.rst
    doc/source/reference/tokenizers.rst

  Modified: doc/source/reference/tables.rst (+4 -4)
===================================================================
--- doc/source/reference/tables.rst    2015-03-16 16:09:18 +0900 (bfc0317)
+++ doc/source/reference/tables.rst    2015-03-16 16:23:35 +0900 (9b75925)
@@ -69,7 +69,7 @@ prefix is omitted in the table.)
 | search       |            |              |             |             |
 +--------------+------------+--------------+-------------+-------------+
 
-.. _token-no-key
+.. _token-no-key:
 
 ``TABLE_NO_KEY``
 ^^^^^^^^^^^^^^^^
@@ -81,7 +81,7 @@ You cannot use ``TABLE_NO_KEY`` for lexicon for fulltext search
 because lexicon stores tokens as key. ``TABLE_NO_KEY`` is useful for
 no key records such as log.
 
-.. _token-hash-key
+.. _token-hash-key:
 
 ``TABLE_HASH_KEY``
 ^^^^^^^^^^^^^^^^^^
@@ -92,7 +92,7 @@ functions such as common prefix search and predictive search.
 ``TABLE_HASH_KEY`` is useful for index for exact search such as tag
 search.
 
-.. _token-pat-key
+.. _token-pat-key:
 
 ``TABLE_PAT_KEY``
 ^^^^^^^^^^^^^^^^^
@@ -102,7 +102,7 @@ search.
 ``TABLE_PAT_KEY`` is useful for lexicon for fulltext search and
 index for range search.
 
-.. _token-dat-key
+.. _token-dat-key:
 
 ``TABLE_DAT_KEY``
 ^^^^^^^^^^^^^^^^^
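
For context on the fix itself: reStructuredText only recognizes an
explicit hyperlink target when the label line ends with a colon, so a
line like ``.. _token-no-key`` (no colon) is silently parsed as a
comment and any ``:ref:`` pointing at it fails to resolve. A minimal
sketch of the corrected pattern; the referencing sentence at the end is
hypothetical and not part of this commit:

    .. The trailing colon makes this an explicit Sphinx label.

    .. _token-no-key:

    ``TABLE_NO_KEY``
    ^^^^^^^^^^^^^^^^

    .. Hypothetical cross-reference from another page; Sphinx renders
       the section title as the link text.

    See :ref:`token-no-key` for tables that store records without keys.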

  Modified: doc/source/reference/tokenizers.rst (+14 -14)
===================================================================
--- doc/source/reference/tokenizers.rst    2015-03-16 16:09:18 +0900 (6b7a470)
+++ doc/source/reference/tokenizers.rst    2015-03-16 16:23:35 +0900 (5e0e12e)
@@ -122,7 +122,7 @@ Here is a list of built-in tokenizers:
   * ``TokenMecab``
   * ``TokenRegexp``
 
-.. _token-bigram
+.. _token-bigram:
 
 ``TokenBigram``
 ^^^^^^^^^^^^^^^
@@ -220,7 +220,7 @@ for non-ASCII characters.
 .. include:: ../example/reference/tokenizers/token-bigram-non-ascii-with-normalizer.log
 .. tokenize TokenBigram "日本語の勉強" NormalizerAuto
 
-.. _token-bigram-split-symbol
+.. _token-bigram-split-symbol:
 
 ``TokenBigramSplitSymbol``
 ^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -233,7 +233,7 @@ tokenizes symbols by bigram tokenize method:
 .. include:: ../example/reference/tokenizers/token-bigram-split-symbol-with-normalizer.log
 .. tokenize TokenBigramSplitSymbol "100cents!!!" NormalizerAuto
 
-.. _token-bigram-split-symbol-alpha
+.. _token-bigram-split-symbol-alpha:
 
 ``TokenBigramSplitSymbolAlpha``
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -247,7 +247,7 @@ alphabets by bigram tokenize method:
 .. include:: ../example/reference/tokenizers/token-bigram-split-symbol-alpha-with-normalizer.log
 .. tokenize TokenBigramSplitSymbolAlpha "100cents!!!" NormalizerAuto
 
-.. _token-bigram-split-symbol-alpha-digit
+.. _token-bigram-split-symbol-alpha-digit:
 
 ``TokenBigramSplitSymbolAlphaDigit``
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -261,7 +261,7 @@ symbols, alphabets and digits by bigram tokenize method:
 .. include:: ../example/reference/tokenizers/token-bigram-split-symbol-alpha-digit-with-normalizer.log
 .. tokenize TokenBigramSplitSymbolAlphaDigit "100cents!!!" NormalizerAuto
 
-.. _token-bigramIgnoreBlank
+.. _token-bigram-ignore-blank:
 
 ``TokenBigramIgnoreBlank``
 ^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -285,7 +285,7 @@ Here is a result by ``TokenBigramIgnoreBlank``:
 .. include:: ../example/reference/tokenizers/token-bigram-ignore-blank-with-white-spaces.log
 .. tokenize TokenBigramIgnoreBlank "日 本 語 ! ! !" NormalizerAuto
 
-.. _token-bigramIgnoreBlank-split-symbol
+.. _token-bigram-ignore-blank-split-symbol:
 
 ``TokenBigramIgnoreBlankSplitSymbol``
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -317,7 +317,7 @@ Here is a result by ``TokenBigramIgnoreBlankSplitSymbol``:
 .. include:: ../example/reference/tokenizers/token-bigram-ignore-blank-split-symbol-with-white-spaces-and-symbol.log
 .. tokenize TokenBigramIgnoreBlankSplitSymbol "日 本 語 ! ! !" NormalizerAuto
 
-.. _token-bigramIgnoreBlank-split-symbol-alpha
+.. _token-bigram-ignore-blank-split-symbol-alpha:
 
 ``TokenBigramIgnoreBlankSplitSymbolAlpha``
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -349,7 +349,7 @@ Here is a result by ``TokenBigramIgnoreBlankSplitSymbolAlpha``:
 .. include:: ../example/reference/tokenizers/token-bigram-ignore-blank-split-symbol-with-white-spaces-and-symbol-and-alphabet.log
 .. tokenize TokenBigramIgnoreBlankSplitSymbolAlpha "Hello 日 本 語 ! ! !" NormalizerAuto
 
-.. _token-bigramIgnoreBlank-split-symbol-alpha-digit
+.. _token-bigram-ignore-blank-split-symbol-alpha-digit:
 
 ``TokenBigramIgnoreBlankSplitSymbolAlphaDigit``
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -382,7 +382,7 @@ Here is a result by ``TokenBigramIgnoreBlankSplitSymbolAlphaDigit``:
 .. include:: ../example/reference/tokenizers/token-bigram-ignore-blank-split-symbol-with-white-spaces-and-symbol-and-alphabet-digit.log
 .. tokenize TokenBigramIgnoreBlankSplitSymbolAlphaDigit "Hello 日 本 語 ! ! ! 777" NormalizerAuto
 
-.. _token-unigram
+.. _token-unigram:
 
 ``TokenUnigram``
 ^^^^^^^^^^^^^^^^
@@ -395,7 +395,7 @@ token. ``TokenUnigram`` uses 1 character per token.
 .. include:: ../example/reference/tokenizers/token-unigram.log
 .. tokenize TokenUnigram "100cents!!!" NormalizerAuto
 
-.. _token-trigram
+.. _token-trigram:
 
 ``TokenTrigram``
 ^^^^^^^^^^^^^^^^
@@ -408,22 +408,22 @@ token. ``TokenTrigram`` uses 3 characters per token.
 .. include:: ../example/reference/tokenizers/token-trigram.log
 .. tokenize TokenTrigram "10000cents!!!!!" NormalizerAuto
 
-.. _token-delimit
+.. _token-delimit:
 
 ``TokenDelimit``
 ^^^^^^^^^^^^^^^^
 
-.. _token-delimit-null
+.. _token-delimit-null:
 
 ``TokenDelimitNull``
 ^^^^^^^^^^^^^^^^^^^^
 
-.. _token-mecab
+.. _token-mecab:
 
 ``TokenMecab``
 ^^^^^^^^^^^^^^
 
-.. _token-regexp
+.. _token-regexp:
 
 ``TokenRegexp``
 ^^^^^^^^^^^^^^^
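
Note that the ``TokenBigramIgnoreBlank*`` labels change spelling as well
as gaining the colon: ``token-bigramIgnoreBlank`` becomes the fully
hyphenated ``token-bigram-ignore-blank``, so any existing cross-reference
must be updated to the new form. A short sketch under that assumption;
the referencing sentence is hypothetical:

    .. _token-bigram-ignore-blank:

    ``TokenBigramIgnoreBlank``
    ^^^^^^^^^^^^^^^^^^^^^^^^^^

    .. A reference elsewhere must match the renamed label exactly.

    :ref:`token-bigram-ignore-blank` ignores white-spaces while tokenizing.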