Piro / YUKI Hiroshi
null+****@clear*****
Tue Sep 23 19:05:17 JST 2014
Piro / YUKI Hiroshi	2014-09-23 19:05:17 +0900 (Tue, 23 Sep 2014)

  New Revision: 246466deab8df80a0517eb48d783671f42dd412c
  https://github.com/droonga/droonga.org/commit/246466deab8df80a0517eb48d783671f42dd412c

  Message:
    Don't run regular commands as the root

  Modified files:
    tutorial/1.0.6/dump-restore/index.md
    tutorial/1.0.6/groonga/index.md

  Modified: tutorial/1.0.6/dump-restore/index.md (+26 -26)
===================================================================
--- tutorial/1.0.6/dump-restore/index.md    2014-09-23 18:57:14 +0900 (6b90d60)
+++ tutorial/1.0.6/dump-restore/index.md    2014-09-23 19:05:17 +0900 (a4e280d)
@@ -43,7 +43,7 @@ First, install a command line tool named `drndump` via rubygems:
 
 After that, establish that the `drndump` command has been installed successfully:
 
-    # drndump --version
+    $ drndump --version
     drndump 1.0.0
 
 ### Dump all data in a Droonga cluster
@@ -104,15 +104,15 @@ For example, if your cluster is constructed from two nodes `node0` (`192.168.100
 Note to these things:
 
-  * You must specify valid host name or IP address of one of nodes in the cluster, via the option `--host`.
+  * You must specify valid host name of one of nodes in the cluster, via the option `--host`.
   * You must specify valid host name or IP address of the computer you are logged in, via the option `--receiver-host`.
-    It is used by the Droonga cluster, to send messages.
+    It is used by the Droonga cluster, to send response messages.
   * The result includes complete commands to construct a dataset, same to the source.
     The result is printed to the standard output.
 To save it as a JSONs file, you'll use a redirection like:
 
-    # drndump --host=node0 \
+    $ drndump --host=node0 \
         --receiver-host=node2 \
         > dump.jsons
 
@@ -130,7 +130,7 @@ Install the command included in the package `droonga-client`, via rubygems:
 
 After that, establish that the `droonga-send` command has been installed successfully:
 
-    # droonga-send --version
+    $ droonga-send --version
     droonga-send 0.1.9
 
 ### Prepare an empty Droonga cluster
@@ -141,8 +141,8 @@ If you are reading this tutorial sequentially, you'll have an existing cluster a
 Make it empty with these commands:
 
 ~~~
-# endpoint="http://node0:10041"
-# curl "$endpoint/d/table_remove?name=Location" | jq "."
+$ endpoint="http://node0:10041"
+$ curl "$endpoint/d/table_remove?name=Location" | jq "."
 [
   [
     0,
@@ -151,7 +151,7 @@ Make it empty with these commands:
   ],
   true
 ]
-# curl "$endpoint/d/table_remove?name=Store" | jq "."
+$ curl "$endpoint/d/table_remove?name=Store" | jq "."
 [
   [
     0,
@@ -160,7 +160,7 @@ Make it empty with these commands:
   ],
   true
 ]
-# curl "$endpoint/d/table_remove?name=Term" | jq "."
+$ curl "$endpoint/d/table_remove?name=Term" | jq "."
 [
   [
     0,
@@ -174,8 +174,8 @@ Make it empty with these commands:
 After that the cluster becomes empty. Confirm it:
 
 ~~~
-# endpoint="http://node0:10041"
-# curl "$endpoint/d/table_list" | jq "."
+$ endpoint="http://node0:10041"
+$ curl "$endpoint/d/table_list" | jq "."
 [
   [
     0,
@@ -219,7 +219,7 @@ After that the cluster becomes empty. Confirm it:
     ]
   ]
 ]
-# curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
+$ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
 [
   [
     0,
@@ -245,7 +245,7 @@ You just have to pour the contents of the dump file to an empty cluster, by the
 To restore the cluster from the dump file, run a command line like:
 
 ~~~
-# droonga-send --server=node0 \
+$ droonga-send --server=node0 \
     dump.jsons
 ~~~
 
@@ -258,7 +258,7 @@ Note to these things:
 Then the data is completely restored.
 Confirm it:
 
 ~~~
-# curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
+$ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
 [
   [
     0,
@@ -333,16 +333,16 @@ Construct two clusters by `droonga-engine-catalog-modify` and make one cluster e
     # droonga-engine-catalog-modify --source=~/droonga/catalog.json \
         --update \
         --replica-hosts=node1
-    # endpoint="http://node1:10041"
-    # curl "$endpoint/d/table_remove?name=Location"
-    # curl "$endpoint/d/table_remove?name=Store"
-    # curl "$endpoint/d/table_remove?name=Term"
+    $ endpoint="http://node1:10041"
+    $ curl "$endpoint/d/table_remove?name=Location"
+    $ curl "$endpoint/d/table_remove?name=Store"
+    $ curl "$endpoint/d/table_remove?name=Term"
 
 After that there are two clusters: one contains `node0` with data, another contains `node1` with no data.
 
 Confirm it:
 
 ~~~
-# curl "http://node0:10041/droonga/system/status" | jq "."
+$ curl "http://node0:10041/droonga/system/status" | jq "."
 {
   "nodes": {
     "node0:10031/droonga": {
@@ -350,7 +350,7 @@ After that there are two clusters: one contains `node0` with data, another conta
     }
   }
 }
-# curl "http://node0:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
+$ curl "http://node0:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
 [
   [
     0,
@@ -401,7 +401,7 @@ After that there are two clusters: one contains `node0` with data, another conta
     ]
   ]
 ]
-# curl "http://node1:10041/droonga/system/status" | jq "."
+$ curl "http://node1:10041/droonga/system/status" | jq "."
 {
   "nodes": {
     "node1:10031/droonga": {
@@ -409,7 +409,7 @@ After that there are two clusters: one contains `node0` with data, another conta
     }
   }
 }
-# curl "http://node1:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
+$ curl "http://node1:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
 [
   [
     0,
@@ -436,7 +436,7 @@ To copy data between two clusters, run the `droonga-engine-absorb-data` command
 
 ~~~
 (on node0 or node1)
-# droonga-engine-absorb-data --source-host=node0 \
+$ droonga-engine-absorb-data --source-host=node0 \
     --destination-host=node1
 Start to absorb data from node0 to node1
@@ -452,7 +452,7 @@ Done.
 After that contents of these two clusters are completely synchronized. Confirm it:
 
 ~~~
-# curl "http://node0:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
+$ curl "http://node0:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
 [
   [
     0,
@@ -503,7 +503,7 @@ After that contents of these two clusters are completely synchronized. Confirm i
     ]
   ]
 ]
-# curl "http://node1:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
+$ curl "http://node1:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
 [
   [
     0,
@@ -573,7 +573,7 @@ Run following command lines to unite these two clusters:
 After that there is just one cluster - yes, it's the initial state.
 
 ~~~
-# curl "http://node0:10041/droonga/system/status" | jq "."
+$ curl "http://node0:10041/droonga/system/status" | jq "."
 {
   "nodes": {
     "node0:10031/droonga": {

  Modified: tutorial/1.0.6/groonga/index.md (+16 -16)
===================================================================
--- tutorial/1.0.6/groonga/index.md    2014-09-23 18:57:14 +0900 (9d5e5d2)
+++ tutorial/1.0.6/groonga/index.md    2014-09-23 19:05:17 +0900 (da1216b)
@@ -226,7 +226,7 @@ Let's make sure that the cluster works, by a Droonga command, `system.status`.
 You can see the result via HTTP, like:
 
 ~~~
-# curl "http://node0:10041/droonga/system/status" | jq "."
+$ curl "http://node0:10041/droonga/system/status" | jq "."
 {
   "nodes": {
     "node0:10031/droonga": {
@@ -243,7 +243,7 @@ The result says that two nodes are working correctly.
 Because it is a cluster, another endpoint returns same result.
 
 ~~~
-# curl "http://node1:10041/droonga/system/status" | jq "."
+$ curl "http://node1:10041/droonga/system/status" | jq "."
 {
   "nodes": {
     "node0:10031/droonga": {
@@ -273,8 +273,8 @@ Requests are completely same to ones for a Groonga server.
 To create a new table `Store`, you just have to send a GET request for the `table_create` command, like:
 
 ~~~
-# endpoint="http://node0:10041"
-# curl "$endpoint/d/table_create?name=Store&flags=TABLE_PAT_KEY&key_type=ShortText" | jq "."
+$ endpoint="http://node0:10041"
+$ curl "$endpoint/d/table_create?name=Store&flags=TABLE_PAT_KEY&key_type=ShortText" | jq "."
 [
   [
     0,
@@ -293,7 +293,7 @@ All requests will be distributed to suitable nodes in the cluster.
 Next, create new columns `name` and `location` to the `Store` table by the `column_create` command, like:
 
 ~~~
-# curl "$endpoint/d/column_create?table=Store&name=name&flags=COLUMN_SCALAR&type=ShortText" | jq "."
+$ curl "$endpoint/d/column_create?table=Store&name=name&flags=COLUMN_SCALAR&type=ShortText" | jq "."
 [
   [
     0,
@@ -302,7 +302,7 @@ Next, create new columns `name` and `location` to the `Store` table by the `colu
   ],
   true
 ]
-# curl "$endpoint/d/column_create?table=Store&name=location&flags=COLUMN_SCALAR&type=WGS84GeoPoint" | jq "."
+$ curl "$endpoint/d/column_create?table=Store&name=location&flags=COLUMN_SCALAR&type=WGS84GeoPoint" | jq "."
 [
   [
     0,
@@ -316,7 +316,7 @@ Next, create new columns `name` and `location` to the `Store` table by the `colu
 Create indexes also.
 
 ~~~
-# curl "$endpoint/d/table_create?name=Term&flags=TABLE_PAT_KEY&key_type=ShortText&default_tokenizer=TokenBigram&normalizer=NormalizerAuto" | jq "."
+$ curl "$endpoint/d/table_create?name=Term&flags=TABLE_PAT_KEY&key_type=ShortText&default_tokenizer=TokenBigram&normalizer=NormalizerAuto" | jq "."
 [
   [
     0,
@@ -325,7 +325,7 @@ Create indexes also.
   ],
   true
 ]
-# curl "$endpoint/d/column_create?table=Term&name=store_name&flags=COLUMN_INDEX|WITH_POSITION&type=Store&source=name" | jq "."
+$ curl "$endpoint/d/column_create?table=Term&name=store_name&flags=COLUMN_INDEX|WITH_POSITION&type=Store&source=name" | jq "."
 [
   [
     0,
@@ -334,7 +334,7 @@ Create indexes also.
   ],
   true
 ]
-# curl "$endpoint/d/table_create?name=Location&flags=TABLE_PAT_KEY&key_type=WGS84GeoPoint" | jq "."
+$ curl "$endpoint/d/table_create?name=Location&flags=TABLE_PAT_KEY&key_type=WGS84GeoPoint" | jq "."
 [
   [
     0,
@@ -343,7 +343,7 @@ Create indexes also.
   ],
   true
 ]
-# curl "$endpoint/d/column_create?table=Location&name=store&flags=COLUMN_INDEX&type=Store&source=location" | jq "."
+$ curl "$endpoint/d/column_create?table=Location&name=store&flags=COLUMN_INDEX&type=Store&source=location" | jq "."
 [
   [
     0,
@@ -362,7 +362,7 @@ OK, now the table has been created successfully.
 Let's see it by the `table_list` command:
 
 ~~~
-# curl "$endpoint/d/table_list" | jq "."
+$ curl "$endpoint/d/table_list" | jq "."
 [
   [
     0,
@@ -421,7 +421,7 @@ Let's see it by the `table_list` command:
 Because it is a cluster, another endpoint returns same result.
 
 ~~~
-# curl "http://node1:10041/d/table_list" | jq "."
+$ curl "http://node1:10041/d/table_list" | jq "."
 [
   [
     0,
@@ -533,7 +533,7 @@ stores.json:
 Then, send it as a POST request of the `load` command, like:
 
 ~~~
-# curl --data "@stores.json" "$endpoint/d/load?table=Store" | jq "."
+$ curl --data "@stores.json" "$endpoint/d/load?table=Store" | jq "."
 [
   [
     0,
@@ -555,7 +555,7 @@ OK, all data is now ready.
 As the starter, let's select initial ten records with the `select` command:
 
 ~~~
-# curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
+$ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
 [
   [
     0,
@@ -611,7 +611,7 @@ As the starter, let's select initial ten records with the `select` command:
 Of course you can specify conditions via the `query` option:
 
 ~~~
-# curl "$endpoint/d/select?table=Store&query=Columbus&match_columns=name&output_columns=name&limit=10" | jq "."
+$ curl "$endpoint/d/select?table=Store&query=Columbus&match_columns=name&output_columns=name&limit=10" | jq "."
 [
   [
     0,
@@ -638,7 +638,7 @@ Of course you can specify conditions via the `query` option:
     ]
   ]
 ]
-# curl "$endpoint/d/select?table=Store&filter=name@'Ave'&output_columns=name&limit=10" | jq "."
+$ curl "$endpoint/d/select?table=Store&filter=name@'Ave'&output_columns=name&limit=10" | jq "."
 [
   [
     0,
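
The whole patch comes down to the standard shell prompt convention: `#` marks a root shell, `$` a regular user's shell, and none of these tutorial commands actually need root. As a minimal illustrative sketch (not part of the commit; the `prompt_char` helper is hypothetical), a script can report which prompt applies to the current user via `id -u`:

```shell
#!/bin/sh
# Prompt convention the commit enforces in the tutorials:
#   "# command" -> run as root
#   "$ command" -> run as a regular user
# prompt_char echoes the conventional prompt character for the
# current effective user: uid 0 is root, anything else is regular.
prompt_char() {
  if [ "$(id -u)" -eq 0 ]; then
    echo "#"
  else
    echo "\$"
  fi
}

prompt_char
```

Following this convention, documentation readers can tell at a glance whether a command requires `sudo` or a root login before they copy it.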