Getting Started with Consul

Environment

node1 10.86.40.247 server
node2 10.86.43.135 server
node3 10.86.40.193 server
node4 10.86.42.205 client

Setting Up the Cluster

Run the following commands on node1, node2, and node3 respectively (-bootstrap-expect 3 tells each server to wait until three servers are available before attempting to bootstrap the cluster):
node1: sudo ./consul agent -server -bootstrap-expect 3 -data-dir consul.d -node=node1 -bind=10.86.40.247 -dc=dc1 -config-dir consul.d
Log output:

[node1 /home/q/workspace]$ sudo ./consul agent -server -bootstrap-expect 3 -data-dir consul.d -node=node1 -bind=10.86.40.247 -dc=dc1
WARNING: 'dc' is deprecated. Use 'datacenter' instead
==> WARNING: Expect Mode enabled, expecting 3 servers
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v0.9.3'
           Node ID: 'd1e34000-7d12-5371-97b5-c3456cef7585'
         Node name: 'node1'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: false)
       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 10.86.40.247 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
    2017/09/18 16:07:30 [INFO] raft: Initial configuration (index=0): []
    2017/09/18 16:07:30 [INFO] raft: Node at 10.86.40.247:8300 [Follower] entering Follower state (Leader: "")
    2017/09/18 16:07:30 [INFO] serf: EventMemberJoin: node1.dc1 10.86.40.247
    2017/09/18 16:07:30 [INFO] serf: EventMemberJoin: node1 10.86.40.247
    2017/09/18 16:07:30 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2017/09/18 16:07:30 [INFO] consul: Adding LAN server node1 (Addr: tcp/10.86.40.247:8300) (DC: dc1)
    2017/09/18 16:07:30 [INFO] consul: Handled member-join event for server "node1.dc1" in area "wan"
    2017/09/18 16:07:30 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2017/09/18 16:07:30 [INFO] agent: Started HTTP server on 127.0.0.1:8500
    2017/09/18 16:07:37 [ERR] agent: failed to sync remote state: No cluster leader
    2017/09/18 16:07:39 [WARN] raft: no known peers, aborting election
    2017/09/18 16:08:04 [ERR] agent: Coordinate update error: No cluster leader
    2017/09/18 16:08:13 [ERR] agent: failed to sync remote state: No cluster leader

node2: sudo ./consul agent -server -bootstrap-expect 3 -data-dir consul.d -node=node2 -bind=10.86.43.135 -ui-dir ./dist -dc=dc1 -config-dir consul.d
Log output:

[node2 /home/q/workspace]$ sudo ./consul agent -server -bootstrap-expect 3 -data-dir consul.d -node=node2 -bind=10.86.43.135 -ui-dir ./dist  -dc=dc1
WARNING: 'dc' is deprecated. Use 'datacenter' instead
==> WARNING: Expect Mode enabled, expecting 3 servers
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v0.9.3'
           Node ID: '904c120c-a1e5-488d-ca87-1f5ca80f2b3e'
         Node name: 'node2'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: false)
       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 10.86.43.135 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
    2017/09/18 16:11:42 [INFO] raft: Initial configuration (index=0): []
    2017/09/18 16:11:42 [INFO] raft: Node at 10.86.43.135:8300 [Follower] entering Follower state (Leader: "")
    2017/09/18 16:11:42 [INFO] serf: EventMemberJoin: node2.dc1 10.86.43.135
    2017/09/18 16:11:42 [INFO] serf: EventMemberJoin: node2 10.86.43.135
    2017/09/18 16:11:42 [INFO] consul: Handled member-join event for server "node2.dc1" in area "wan"
    2017/09/18 16:11:42 [INFO] consul: Adding LAN server node2 (Addr: tcp/10.86.43.135:8300) (DC: dc1)
    2017/09/18 16:11:42 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2017/09/18 16:11:42 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2017/09/18 16:11:42 [INFO] agent: Started HTTP server on 127.0.0.1:8500
    2017/09/18 16:11:47 [WARN] raft: no known peers, aborting election
    2017/09/18 16:11:49 [ERR] agent: failed to sync remote state: No cluster leader
    2017/09/18 16:12:12 [ERR] agent: Coordinate update error: No cluster leader
    2017/09/18 16:12:26 [ERR] agent: failed to sync remote state: No cluster leader

node3: sudo ./consul agent -server -bootstrap-expect 3 -data-dir consul.d -node=node3 -bind=10.86.40.193 -ui-dir ./dist -dc=dc1 -config-dir consul.d
Log output:

[node3 /home/q/workspace]$ sudo ./consul agent -server -bootstrap-expect 3 -data-dir consul.d -node=node3 -bind=10.86.40.193 -ui-dir ./dist  -dc=dc1
WARNING: 'dc' is deprecated. Use 'datacenter' instead
==> WARNING: Expect Mode enabled, expecting 3 servers
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v0.9.3'
           Node ID: 'd6152326-865c-0f43-3ecf-f880123e0505'
         Node name: 'node3'
        Datacenter: 'dc1' (Segment: '<all>')
            Server: true (Bootstrap: false)
       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 10.86.40.193 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
    2017/09/18 16:12:02 [INFO] raft: Initial configuration (index=0): []
    2017/09/18 16:12:02 [INFO] raft: Node at 10.86.40.193:8300 [Follower] entering Follower state (Leader: "")
    2017/09/18 16:12:02 [INFO] serf: EventMemberJoin: node3.dc1 10.86.40.193
    2017/09/18 16:12:02 [INFO] serf: EventMemberJoin: node3 10.86.40.193
    2017/09/18 16:12:02 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2017/09/18 16:12:02 [INFO] consul: Adding LAN server node3 (Addr: tcp/10.86.40.193:8300) (DC: dc1)
    2017/09/18 16:12:02 [INFO] consul: Handled member-join event for server "node3.dc1" in area "wan"
    2017/09/18 16:12:02 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2017/09/18 16:12:02 [INFO] agent: Started HTTP server on 127.0.0.1:8500
    2017/09/18 16:12:10 [ERR] agent: failed to sync remote state: No cluster leader
    2017/09/18 16:12:11 [WARN] raft: no known peers, aborting election
    2017/09/18 16:12:31 [ERR] agent: Coordinate update error: No cluster leader
    2017/09/18 16:12:37 [ERR] agent: failed to sync remote state: No cluster leader
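
A side note before we continue: all three servers are started with -config-dir consul.d, so the command-line flags above could just as well live in a JSON file inside that directory. A minimal sketch for node1, assuming the standard Consul configuration keys (server, bootstrap_expect, node_name, bind_addr, datacenter, data_dir):

{
  "server": true,
  "bootstrap_expect": 3,
  "node_name": "node1",
  "bind_addr": "10.86.40.247",
  "datacenter": "dc1",
  "data_dir": "consul.d"
}

With such a file in place, the agent could be started with just sudo ./consul agent -config-dir consul.d.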

Looking at the startup logs of the three nodes above, we can see that they are not yet aware of each other. For example, running this on node1:

[node1 /home/q/workspace]$ ./consul members
Node   Address            Status  Type    Build  Protocol  DC   Segment
node1  10.86.40.247:8301  alive   server  0.9.3  2         dc1  <all>

shows only node1 itself. Running:

[node1 /home/q/workspace]$ ./consul info
agent:
	check_monitors = 0
	check_ttls = 0
	checks = 0
	services = 0
build:
	prerelease =
	revision = 112c060
	version = 0.9.3
consul:
	bootstrap = false
	known_datacenters = 1
	leader = false
	leader_addr =
	server = true
raft:
	applied_index = 0
	commit_index = 0
	fsm_pending = 0
	last_contact = never
	last_log_index = 0
	last_log_term = 0
	last_snapshot_index = 0
	last_snapshot_term = 0
	latest_configuration = []
	latest_configuration_index = 0
	num_peers = 0
	protocol_version = 2
	protocol_version_max = 3
	protocol_version_min = 0
	snapshot_version_max = 1
	snapshot_version_min = 0
	state = Follower
	term = 0
runtime:
	arch = amd64
	cpu_count = 4
	goroutines = 68
	max_procs = 4
	os = linux
	version = go1.9
serf_lan:
	coordinate_resets = 0
	encrypted = false
	event_queue = 0
	event_time = 1
	failed = 0
	health_score = 0
	intent_queue = 0
	left = 0
	member_time = 1
	members = 1
	query_queue = 0
	query_time = 1
serf_wan:
	coordinate_resets = 0
	encrypted = false
	event_queue = 0
	event_time = 1
	failed = 0
	health_score = 0
	intent_queue = 0
	left = 0
	member_time = 1
	members = 1
	query_queue = 0
	query_time = 1

Note bootstrap = false and state = Follower in the output. If you check node2 and node3 at this point, they are in the same state, which means the cluster has not completed the bootstrap process: each server was started with -bootstrap-expect 3, but none of them knows how to reach the others, so no election can start.
To trigger the bootstrap, we can join the nodes manually.
On node1, run:

[node1 /home/q/workspace]$ ./consul join 10.86.43.135
Successfully joined cluster by contacting 1 nodes.
[node1 /home/q/workspace]$ ./consul join 10.86.40.193
Successfully joined cluster by contacting 1 nodes.

With the consul join <ip> command we have joined node2 and node3 into the cluster. Running consul members on any of the three servers now shows three members (a note on automating this join step follows the output below):

[node3 /home/q/workspace]$ ./consul members
Node   Address            Status  Type    Build  Protocol  DC   Segment
node1  10.86.40.247:8301  alive   server  0.9.3  2         dc1  <all>
node2  10.86.43.135:8301  alive   server  0.9.3  2         dc1  <all>
node3  10.86.40.193:8301  alive   server  0.9.3  2         dc1  <all>
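
As promised above, a note on automating the join: instead of running consul join by hand, each agent can be told where to find a peer at startup. A minimal sketch for node2, assuming the -retry-join flag supported by this Consul version (it keeps retrying until the join succeeds):

sudo ./consul agent -server -bootstrap-expect 3 -data-dir consul.d -node=node2 -bind=10.86.43.135 -dc=dc1 -config-dir consul.d -retry-join=10.86.40.247

With -retry-join (or the equivalent retry_join setting in a configuration file) on each server, the cluster bootstraps and elects a leader without any manual consul join.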

If we now look at the log of node1, the first server we started, we will see output like the following:

    2017/09/18 16:13:54 [INFO] agent: (LAN) joining: [10.86.43.135]
    2017/09/18 16:13:54 [INFO] serf: EventMemberJoin: node2 10.86.43.135
    2017/09/18 16:13:54 [INFO] agent: (LAN) joined: 1 Err: <nil>
    2017/09/18 16:13:54 [INFO] consul: Adding LAN server node2 (Addr: tcp/10.86.43.135:8300) (DC: dc1)
    2017/09/18 16:13:54 [INFO] serf: EventMemberJoin: node2.dc1 10.86.43.135
    2017/09/18 16:13:54 [INFO] consul: Handled member-join event for server "node2.dc1" in area "wan"
    2017/09/18 16:13:58 [ERR] agent: failed to sync remote state: No cluster leader
    2017/09/18 16:14:07 [INFO] agent: (LAN) joining: [10.86.40.193]
    2017/09/18 16:14:07 [INFO] serf: EventMemberJoin: node3 10.86.40.193
    2017/09/18 16:14:07 [INFO] agent: (LAN) joined: 1 Err: <nil>
    2017/09/18 16:14:07 [INFO] consul: Adding LAN server node3 (Addr: tcp/10.86.40.193:8300) (DC: dc1)
    2017/09/18 16:14:07 [INFO] consul: Existing Raft peers reported by node3, disabling bootstrap mode
    2017/09/18 16:14:07 [INFO] serf: EventMemberJoin: node3.dc1 10.86.40.193
    2017/09/18 16:14:07 [INFO] consul: Handled member-join event for server "node3.dc1" in area "wan"
2017/09/18 16:14:12 [DEBUG] raft-net: 10.86.40.247:8300 accepted connection from: 10.86.40.193:60676
    2017/09/18 16:14:12 [WARN] raft: Failed to get previous log: 1 log not found (last: 0)
    2017/09/18 16:14:12 [INFO] consul: New leader elected: node3
2017/09/18 16:14:13 [DEBUG] raft-net: 10.86.40.247:8300 accepted connection from: 10.86.40.193:48461
    2017/09/18 16:14:14 [INFO] agent: Synced node info

The log shows that node2 and node3 have joined the cluster, and that node3 was elected leader. On node3, run:

[node3 /home/q/workspace]$ ./consul info
agent:
	check_monitors = 0
	check_ttls = 0
	checks = 0
	services = 0
build:
	prerelease =
	revision = 112c060
	version = 0.9.3
consul:
	bootstrap = false
	known_datacenters = 1
	leader = true
	leader_addr = 10.86.40.193:8300
	server = true
raft:
	applied_index = 22
	commit_index = 22
	fsm_pending = 0
	last_contact = 0
	last_log_index = 22
	last_log_term = 2
	last_snapshot_index = 0
	last_snapshot_term = 0
	latest_configuration = [{Suffrage:Voter ID:10.86.40.193:8300 Address:10.86.40.193:8300} {Suffrage:Voter ID:10.86.40.247:8300 Address:10.86.40.247:8300} {Suffrage:Voter ID:10.86.43.135:8300 Address:10.86.43.135:8300}]
	latest_configuration_index = 1
	num_peers = 2
	protocol_version = 2
	protocol_version_max = 3
	protocol_version_min = 0
	snapshot_version_max = 1
	snapshot_version_min = 0
	state = Leader
	term = 2
runtime:
	arch = amd64
	cpu_count = 4
	goroutines = 94
	max_procs = 4
	os = linux
	version = go1.9
serf_lan:
	coordinate_resets = 0
	encrypted = false
	event_queue = 0
	event_time = 2
	failed = 0
	health_score = 0
	intent_queue = 0
	left = 0
	member_time = 3
	members = 3
	query_queue = 0
	query_time = 1
serf_wan:
	coordinate_resets = 0
	encrypted = false
	event_queue = 0
	event_time = 1
	failed = 0
	health_score = 0
	intent_queue = 0
	left = 0
	member_time = 5
	members = 3
	query_queue = 0
	query_time = 1

The output above shows state = Leader, confirming that node3 was indeed elected leader.

When the Leader Leaves the Cluster

Run the following on node3:

[node3 /home/q/workspace]$ ./consul leave
Graceful leave complete

node3's log then shows:

    2017/09/18 16:32:53 [INFO] consul: server starting leave
    2017/09/18 16:32:53 [INFO] raft: Updating configuration with RemoveServer (10.86.40.193:8300, ) to [{Suffrage:Voter ID:10.86.40.247:8300 Address:10.86.40.247:8300} {Suffrage:Voter ID:10.86.43.135:8300 Address:10.86.43.135:8300}]
    2017/09/18 16:32:53 [INFO] raft: Removed ourself, transitioning to follower
    2017/09/18 16:32:53 [INFO] raft: Node at 10.86.40.193:8300 [Follower] entering Follower state (Leader: "")
    2017/09/18 16:32:53 [INFO] consul: cluster leadership lost
    2017/09/18 16:32:53 [INFO] raft: aborting pipeline replication to peer {Voter 10.86.40.247:8300 10.86.40.247:8300}
    2017/09/18 16:32:53 [INFO] raft: aborting pipeline replication to peer {Voter 10.86.43.135:8300 10.86.43.135:8300}
    2017/09/18 16:32:53 [WARN] consul.coordinate: Batch update failed: node is not the leader
    2017/09/18 16:32:54 [INFO] serf: EventMemberLeave: node3.dc1 10.86.40.193
    2017/09/18 16:32:54 [INFO] consul: Handled member-leave event for server "node3.dc1" in area "wan"
    2017/09/18 16:32:55 [INFO] serf: EventMemberLeave: node3 10.86.40.193
    2017/09/18 16:32:55 [INFO] consul: Removing LAN server node3 (Addr: tcp/10.86.40.193:8300) (DC: dc1)
    2017/09/18 16:32:55 [INFO] agent: Requesting shutdown
    2017/09/18 16:32:55 [INFO] consul: shutting down server
    2017/09/18 16:32:55 [INFO] manager: shutting down
    2017/09/18 16:32:55 [INFO] agent: consul server down
    2017/09/18 16:32:55 [INFO] agent: shutdown complete
    2017/09/18 16:32:55 [INFO] agent: Stopping DNS server 127.0.0.1:8600 (tcp)
    2017/09/18 16:32:55 [INFO] agent: Stopping DNS server 127.0.0.1:8600 (udp)
    2017/09/18 16:32:55 [INFO] agent: Stopping HTTP server 127.0.0.1:8500
    2017/09/18 16:32:55 [INFO] agent: Waiting for endpoints to shut down
    2017/09/18 16:32:55 [INFO] agent: Endpoints down
    2017/09/18 16:32:55 [INFO] Exit code:  0

Next, look at node1's log:

    2017/09/18 16:32:54 [INFO] serf: EventMemberLeave: node3.dc1 10.86.40.193
    2017/09/18 16:32:54 [INFO] consul: Handled member-leave event for server "node3.dc1" in area "wan"
    2017/09/18 16:32:55 [INFO] serf: EventMemberLeave: node3 10.86.40.193
    2017/09/18 16:32:55 [INFO] consul: Removing LAN server node3 (Addr: tcp/10.86.40.193:8300) (DC: dc1)
    2017/09/18 16:33:01 [WARN] raft: Heartbeat timeout from "10.86.40.193:8300" reached, starting election
    2017/09/18 16:33:01 [INFO] raft: Node at 10.86.40.247:8300 [Candidate] entering Candidate state in term 3
2017/09/18 16:33:02 [DEBUG] raft-net: 10.86.40.247:8300 accepted connection from: 10.86.43.135:47716
    2017/09/18 16:33:02 [INFO] raft: Duplicate RequestVote for same term: 3
    2017/09/18 16:33:10 [INFO] raft: Node at 10.86.40.247:8300 [Follower] entering Follower state (Leader: "")
    2017/09/18 16:33:10 [INFO] consul: New leader elected: node2
2017/09/18 16:33:10 [DEBUG] raft-net: 10.86.40.247:8300 accepted connection from: 10.86.43.135:59184

We can see that node3 has been removed and node2 has been elected as the new leader.
Running consul members on node1 at this point:

[node1 /home/q/workspace]$ ./consul members
Node   Address            Status  Type    Build  Protocol  DC   Segment
node1  10.86.40.247:8301  alive   server  0.9.3  2         dc1  <all>
node2  10.86.43.135:8301  alive   server  0.9.3  2         dc1  <all>
node3  10.86.40.193:8301  left    server  0.9.3  2         dc1  <all>

The result shows node3 in the left state.
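
Note that this is the graceful case: consul leave removes the server from the Raft configuration before shutting it down, so it ends up as left. If a server instead disappears without leaving (for example, the process is killed), it shows up as failed, and it can be pruned from the member list manually, for example:

./consul force-leave node3

This is only a side note for the ungraceful case; in the walkthrough above, node3 left cleanly on its own.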

Restarting node3

After restarting node3, but before rejoining it to the cluster, running this on node3:

[node3 /home/q/workspace]$ ./consul members
Node   Address            Status  Type    Build  Protocol  DC   Segment
node3  10.86.40.193:8301  alive   server  0.9.3  2         dc1  <all>
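
shows only node3 itself. To rejoin, the same join command used earlier can be run from node3 against any live member, for example (assuming node1 is reachable):

[node3 /home/q/workspace]$ ./consul join 10.86.40.247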

Once node3 has rejoined the cluster:

[node3 /home/q/workspace]$ ./consul members
Node   Address            Status  Type    Build  Protocol  DC   Segment
node1  10.86.40.247:8301  alive   server  0.9.3  2         dc1  <all>
node2  10.86.43.135:8301  alive   server  0.9.3  2         dc1  <all>
node3  10.86.40.193:8301  alive   server  0.9.3  2         dc1  <all>

Starting a Client

On node4, run:

[node4 /home/q/workspace]$ sudo ./consul agent -data-dir=consul.d  -node=node4 -bind=10.86.42.205 -dc=dc1 -config-dir consul.d
WARNING: 'dc' is deprecated. Use 'datacenter' instead
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v0.9.3'
           Node ID: '4dab5b13-ef3b-63c4-c690-373040c9a795'
         Node name: 'node4'
        Datacenter: 'dc1' (Segment: '')
            Server: false (Bootstrap: false)
       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 10.86.42.205 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
    2017/09/18 16:50:00 [INFO] serf: EventMemberJoin: node4 10.86.42.205
    2017/09/18 16:50:00 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2017/09/18 16:50:00 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2017/09/18 16:50:00 [INFO] agent: Started HTTP server on 127.0.0.1:8500
    2017/09/18 16:50:00 [WARN] manager: No servers available
    2017/09/18 16:50:00 [ERR] agent: failed to sync remote state: No known Consul servers

The first time I started node4 I got its datacenter wrong, passing -dc=dc instead of -dc=dc1, so joining node4 from node1 failed with an error:

./consul join 10.86.42.205
Error joining address '10.86.42.205': Unexpected response code: 500 (1 error(s) occurred:
* Failed to join 10.86.42.205: Member 'node4' part of wrong datacenter 'dc')
Failed to join any nodes.

and the agent log showed:

* Failed to join 10.86.42.205: Member 'node4' part of wrong datacenter 'dc'
    2017/09/18 16:49:33 [ERR] http: Request PUT /v1/agent/join/10.86.42.205, error: 1 error(s) occurred:
* Failed to join 10.86.42.205: Member 'node4' part of wrong datacenter 'dc' from=127.0.0.1:48340

After fixing node4's datacenter, running the join again on node1:

[node1 /home/q/workspace]$ ./consul join 10.86.42.205
Successfully joined cluster by contacting 1 nodes.
[node1 /home/q/workspace]$ ./consul members
Node   Address            Status  Type    Build  Protocol  DC   Segment
node1  10.86.40.247:8301  alive   server  0.9.3  2         dc1  <all>
node2  10.86.43.135:8301  alive   server  0.9.3  2         dc1  <all>
node3  10.86.40.193:8301  alive   server  0.9.3  2         dc1  <all>
node4  10.86.42.205:8301  alive   client  0.9.3  2         dc1  <default>

node4 has now joined the cluster correctly, and its type is client.
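
Now that node4 has joined, its local agent answers cluster-wide queries by forwarding them to the servers. As a quick check (assuming the default HTTP address of localhost:8500), we can list all known nodes from node4:

[node4 /home/q/workspace]$ curl http://localhost:8500/v1/catalog/nodes

which should return node1 through node4.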

Service Registration and Discovery

Consul supports two ways of registering services:
- a service can register itself after startup by calling Consul's service registration HTTP API, or
- the service can be defined in a configuration file that the agent reads.
The Consul documentation recommends the second approach for service configuration and registration; a sketch of the HTTP API approach follows below.
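
For completeness, here is a minimal sketch of the first approach, registering a service through the local agent's HTTP API. The service name, tags and port below are placeholders, and the agent is assumed to be listening on its default HTTP address of localhost:8500:

curl -X PUT --data '{"Name": "web1", "Tags": ["rails"], "Port": 8080}' http://localhost:8500/v1/agent/service/register

A service registered this way lives on the agent it was registered against, much like a service defined in that agent's configuration directory.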
In the example above, node1, node2 and node3 are servers and node4 is a client.
When starting node1 we passed -config-dir consul.d to specify the configuration directory (-data-dir, also pointed at consul.d here, only sets the agent's data directory), so every .json file under consul.d is read by the Consul agent as configuration.
On node1, run:

echo '{"service": {"name": "db1", "tags": ["rails", "mysql", "dev"], "port": 9997}}' | sudo tee consul.d/db1.json
{"service": {"name": "db1", "tags": ["rails", "mysql", "dev"], "port": 9997}}
./consul reload
Configuration reload triggered
curl http://localhost:8500/v1/catalog/service/db1
[{"ID":"d1e34000-7d12-5371-97b5-c3456cef7585","Node":"node1","Address":"10.86.40.247","Datacenter":"dc1","TaggedAddresses":{"lan":"10.86.40.247","wan":"10.86.40.247"},"NodeMeta":{"consul-network-segment":""},"ServiceID":"db1","ServiceName":"db1","ServiceTags":["rails","mysql","dev"],"ServiceAddress":"","ServicePort":9997,"ServiceEnableTagOverride":false,"CreateIndex":736,"ModifyIndex":736}]

Then run the same query on node4:

[node4 /home/q/workspace]$ curl http://localhost:8500/v1/catalog/service/db1
[{"ID":"d1e34000-7d12-5371-97b5-c3456cef7585","Node":"node1","Address":"10.86.40.247","Datacenter":"dc1","TaggedAddresses":{"lan":"10.86.40.247","wan":"10.86.40.247"},"NodeMeta":{"consul-network-segment":""},"ServiceID":"db1","ServiceName":"db1","ServiceTags":["rails","mysql","dev"],"ServiceAddress":"","ServicePort":9997,"ServiceEnableTagOverride":false,"CreateIndex":736,"ModifyIndex":736}]

The service is discoverable from node4 as well. However, if you look at the consul.d directories on node2, node3 and node4, you will find that the db1.json file created on node1 has not been copied over: the registration is replicated through the catalog, not by synchronizing configuration files.
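
Besides the catalog HTTP API, a registered service can also be looked up through the agent's DNS interface on port 8600 (the DNS server shown in the startup logs). A quick example, assuming dig is available on the node:

dig @127.0.0.1 -p 8600 db1.service.consul SRV

The SRV answer carries the node and port (9997) registered for the db1 service.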
