[Groonga-commit] groonga/groonga at a4367a0 [master] doc: fix wrong file path

Yasuhiro Horimoto null+****@clear*****
Tue Jan 8 10:28:58 JST 2019


Yasuhiro Horimoto	2019-01-08 10:28:58 +0900 (Tue, 08 Jan 2019)

  Revision: a4367a0b6ce9374d10d0a98f6cc4317568f218b2
  https://github.com/groonga/groonga/commit/a4367a0b6ce9374d10d0a98f6cc4317568f218b2

  Message:
    doc: fix wrong file path

  Modified files:
    doc/source/reference/tokenizer/summary.rst
    doc/source/reference/tokenizers/token_bigram.rst
    doc/source/reference/tokenizers/token_bigram_ignore_blank_split_symbol.rst

  Modified: doc/source/reference/tokenizer/summary.rst (+1 -1)
===================================================================
--- doc/source/reference/tokenizer/summary.rst    2019-01-08 10:05:24 +0900 (ba299fe9d)
+++ doc/source/reference/tokenizer/summary.rst    2019-01-08 10:28:58 +0900 (fdb4e23cb)
@@ -39,7 +39,7 @@ try :ref:`token-bigram` tokenizer by
 :doc:`/reference/commands/tokenize`:
 
 .. groonga-command
-.. include:: ../example/reference/tokenizers/tokenize-example.log
+.. include:: ../../example/reference/tokenizers/tokenize-example.log
 .. tokenize TokenBigram "Hello World"
 
 "tokenize" is the process that extracts zero or more tokens from a

  Modified: doc/source/reference/tokenizers/token_bigram.rst (+1 -1)
===================================================================
--- doc/source/reference/tokenizers/token_bigram.rst    2019-01-08 10:05:24 +0900 (dbe2c431c)
+++ doc/source/reference/tokenizers/token_bigram.rst    2019-01-08 10:28:58 +0900 (40d2c0220)
@@ -56,7 +56,7 @@ If no normalizer is used, ``TokenBigram`` uses pure bigram (all tokens
 except the last token have two characters) tokenize method:
 
 .. groonga-command
-.. include:: ../example/reference/tokenizers/token-bigram-no-normalizer.log
+.. include:: ../../example/reference/tokenizers/token-bigram-no-normalizer.log
 .. tokenize TokenBigram "Hello World"
 
 If normalizer is used, ``TokenBigram`` uses white-space-separate like

  Modified: doc/source/reference/tokenizers/token_bigram_ignore_blank_split_symbol.rst (+2 -2)
===================================================================
--- doc/source/reference/tokenizers/token_bigram_ignore_blank_split_symbol.rst    2019-01-08 10:05:24 +0900 (5e86071a9)
+++ doc/source/reference/tokenizers/token_bigram_ignore_blank_split_symbol.rst    2019-01-08 10:28:58 +0900 (8bb797b0b)
@@ -41,11 +41,11 @@ has symbols and non-ASCII characters.
 Here is a result by :ref:`token-bigram` :
 
 .. groonga-command
-.. include:: ../example/reference/tokenizers/token-bigram-with-white-spaces-and-symbol.log
+.. include:: ../../example/reference/tokenizers/token-bigram-with-white-spaces-and-symbol.log
 .. tokenize TokenBigram "日 本 語 ! ! !" NormalizerAuto
 
 Here is a result by ``TokenBigramIgnoreBlankSplitSymbol``:
 
 .. groonga-command
-.. include:: ../example/reference/tokenizers/token-bigram-ignore-blank-split-symbol-with-white-spaces-and-symbol.log
+.. include:: ../../example/reference/tokenizers/token-bigram-ignore-blank-split-symbol-with-white-spaces-and-symbol.log
 .. tokenize TokenBigramIgnoreBlankSplitSymbol "日 本 語 ! ! !" NormalizerAuto
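For reference, Sphinx resolves ``.. include::`` paths relative to the including ``.rst`` file. Since these files sit two directories below ``doc/source``, a single ``../`` lands in a nonexistent ``doc/source/reference/example/`` directory, while ``../../`` reaches the real ``doc/source/example/`` tree. A minimal sketch of the resolution (paths taken from the diff above):

```python
import posixpath

# The including .rst file lives two levels below doc/source.
rst_dir = "doc/source/reference/tokenizers"

# Old path: climbs only one directory, so it resolves under reference/.
old = posixpath.normpath(posixpath.join(
    rst_dir, "../example/reference/tokenizers/tokenize-example.log"))

# Fixed path: climbs two directories back to doc/source.
new = posixpath.normpath(posixpath.join(
    rst_dir, "../../example/reference/tokenizers/tokenize-example.log"))

print(old)  # doc/source/reference/example/... (no such directory)
print(new)  # doc/source/example/... (the actual example tree)
```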