# 110. Building a 3-Master, 3-Slave Redis Cluster and Scaling It Out and In
With the theory covered, it's time to actually build a Redis cluster and then demonstrate scaling it out and back in.
# Building the cluster
Start 6 Redis instances with the following commands:
docker run -d --name redis-node-1 --net host --privileged=true -v /data/redis/share/redis-node-1:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6381
docker run -d --name redis-node-2 --net host --privileged=true -v /data/redis/share/redis-node-2:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6382
docker run -d --name redis-node-3 --net host --privileged=true -v /data/redis/share/redis-node-3:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6383
docker run -d --name redis-node-4 --net host --privileged=true -v /data/redis/share/redis-node-4:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6384
docker run -d --name redis-node-5 --net host --privileged=true -v /data/redis/share/redis-node-5:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6385
docker run -d --name redis-node-6 --net host --privileged=true -v /data/redis/share/redis-node-6:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6386
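Since the six commands differ only in the container name and the port, you could equally generate them with a small shell loop (an equivalent sketch of the commands above, assuming bash):
for i in 1 2 3 4 5 6; do
  docker run -d --name redis-node-$i --net host --privileged=true \
    -v /data/redis/share/redis-node-$i:/data redis:6.0.8 \
    --cluster-enabled yes --appendonly yes --port 638$i
done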
Parameter notes:
- --net host: use the host machine's network (its IP and ports directly); more on this later
- --cluster-enabled yes: start the node in Redis Cluster mode
- --appendonly yes: enable AOF persistence
- --port 6386: the Redis port (6381 through 6386 for the six nodes)
After starting them, remember to check that all six containers are up (for example with docker ps):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
81e86792f202 redis:6.0.8 "docker-…" 3 seconds ago Up 3 seconds redis-node-6
7068824d11f1 redis:6.0.8 "docker-…" 4 seconds ago Up 3 seconds redis-node-5
320ab7278e23 redis:6.0.8 "docker-…" 5 seconds ago Up 4 seconds redis-node-4
e3434e59134c redis:6.0.8 "docker-…" 5 seconds ago Up 4 seconds redis-node-3
0ece6a0184f6 redis:6.0.8 "docker-…" 6 seconds ago Up 5 seconds redis-node-2
576590ff1efb redis:6.0.8 "docker-…" 7 seconds ago Up 6 seconds redis-node-1
# Establishing the master-slave relationships
First, enter node 1's container and create the cluster:
$ docker exec -it redis-node-1 bash
$ redis-cli --cluster create 192.168.2.242:6381 192.168.2.242:6382 192.168.2.242:6383 192.168.2.242:6384 192.168.2.242:6385 192.168.2.242:6386 --cluster-replicas 1
--cluster-replicas 1 tells Redis to create one slave for each master. Be sure to replace 192.168.2.242 with your own host IP.
Output:
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.2.242:6385 to 192.168.2.242:6381
Adding replica 192.168.2.242:6386 to 192.168.2.242:6382
Adding replica 192.168.2.242:6384 to 192.168.2.242:6383
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 64ca2b283f305bd9b138c070fc572f4ad82c19de 192.168.2.242:6381
slots:[0-5460] (5461 slots) master
M: 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 192.168.2.242:6382
slots:[5461-10922] (5462 slots) master
M: c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383
slots:[10923-16383] (5461 slots) master
S: 93f1a7c9c011e08c051db9b80ed1c058c42187ef 192.168.2.242:6384
replicates c58465a41e9bab6a60e0307389d723116982fccc
S: d48a488e849f587975c6fe7eecb21bd02ecb53cb 192.168.2.242:6385
replicates 64ca2b283f305bd9b138c070fc572f4ad82c19de
S: 03d4a2159587b46964c6fe51a7f2c84931fc01d9 192.168.2.242:6386
replicates 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207
Can I set the above configuration? (type 'yes' to accept):
The proposed configuration matches the architecture we want, so we type yes:
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.2.242:6381)
M: 64ca2b283f305bd9b138c070fc572f4ad82c19de 192.168.2.242:6381
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 93f1a7c9c011e08c051db9b80ed1c058c42187ef 192.168.2.242:6384
slots: (0 slots) slave
replicates c58465a41e9bab6a60e0307389d723116982fccc
S: 03d4a2159587b46964c6fe51a7f2c84931fc01d9 192.168.2.242:6386
slots: (0 slots) slave
replicates 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207
S: d48a488e849f587975c6fe7eecb21bd02ecb53cb 192.168.2.242:6385
slots: (0 slots) slave
replicates 64ca2b283f305bd9b138c070fc572f4ad82c19de
M: c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 192.168.2.242:6382
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
In the cluster check above, the entries starting with M are the masters (6381, 6383 and 6382), each followed by its IP, port and slot range;
the entries starting with S (6384, 6386 and 6385) are the slaves.
At this point the 3-master, 3-slave Redis cluster is configured; let's verify it.
Inside the 6381 node's container, cluster info shows:
> redis-cli -p 6381
127.0.0.1:6381> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:183
cluster_stats_messages_pong_sent:182
cluster_stats_messages_sent:365
cluster_stats_messages_ping_received:177
cluster_stats_messages_pong_received:183
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:365
In this output, cluster_slots_assigned:16384 shows that all 16384 slots have been assigned,
and cluster_known_nodes:6 shows the cluster knows about 6 nodes.
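If you just want to keep an eye on those few fields, you can filter the output from the host shell (a small convenience sketch; the pipe runs on the host, so only grep there is needed):
docker exec redis-node-1 redis-cli -p 6381 cluster info | grep -E 'cluster_state|cluster_slots_assigned|cluster_known_nodes|cluster_size'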
cluster nodes shows:
> cluster nodes
64ca2b283f305bd9b138c070fc572f4ad82c19de 192.168.2.242:6381@16381 myself,master - 0 1694077036000 1 connected 0-5460
93f1a7c9c011e08c051db9b80ed1c058c42187ef 192.168.2.242:6384@16384 slave c58465a41e9bab6a60e0307389d723116982fccc 0 1694077035000 3 connected
03d4a2159587b46964c6fe51a7f2c84931fc01d9 192.168.2.242:6386@16386 slave 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 0 1694077039232 2 connected
d48a488e849f587975c6fe7eecb21bd02ecb53cb 192.168.2.242:6385@16385 slave 64ca2b283f305bd9b138c070fc572f4ad82c19de 0 1694077038227 1 connected
c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383@16383 master - 0 1694077038000 3 connected 10923-16383
9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 192.168.2.242:6382@16382 master - 0 1694077037000 2 connected 5461-10922
This output also shows the replication relationships. For example, the entry for port 6384 is flagged slave and is followed by the ID of its master (the one starting with c5), which matches the master entry for 6383 that begins with that same ID.
The current master-slave pairs are:
- 6381 (master), 6385 (slave)
- 6382 (master), 6386 (slave)
- 6383 (master), 6384 (slave)
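If you want a compact view of just these columns, one option (a convenience sketch, assuming awk is available, as it is in the official Debian-based image) is to print only each node's address, flags and master ID from the cluster nodes output:
docker exec redis-node-1 redis-cli -p 6381 cluster nodes | awk '{print $2, $3, $4}'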
# Testing master-slave replication
Next, let's add a key on node 1 and see whether replication works:
$ docker exec -it redis-node-1 bash
$ redis-cli -p 6381
127.0.0.1:6381> keys *
(empty array)
127.0.0.1:6381> set k1 v1
(error) MOVED 12706 192.168.2.242:6383
Why did the write fail? The key hashes to slot 12706, which belongs to the 6383 node rather than the current one, so the write is rejected with a MOVED redirection.
Remember the output from when we created the cluster? The 6383 node, for instance, shows its slot range:
M: c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383
slots:[10923-16383] (5461 slots) master
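You can also ask the cluster directly which slot a key hashes to, using the CLUSTER KEYSLOT command (shown here for the same key k1 from above):
127.0.0.1:6381> CLUSTER KEYSLOT k1
(integer) 12706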
So how do we fix it? The root cause is that we are now running a Redis cluster, so the client should connect to the cluster as a whole rather than to a single node. Add the -c flag when connecting, which enables cluster mode and lets the client follow redirections instead of failing:
redis-cli -c -p 6381
Writing the key again now succeeds: the client is redirected to the node on port 6383, and the prompt switches to that node as well:
127.0.0.1:6381> set k1 v1
-> Redirected to slot [12706] located at 192.168.2.242:6383
OK
192.168.2.242:6383>
# Inspecting the cluster
There is another command for inspecting the cluster:
redis-cli --cluster check 192.168.2.242:6381
The output is very similar to what we saw when creating the cluster:
192.168.2.242:6381 (64ca2b28...) -> 0 keys | 5461 slots | 1 slaves.
192.168.2.242:6383 (c58465a4...) -> 1 keys | 5461 slots | 1 slaves.
192.168.2.242:6382 (9cf28549...) -> 0 keys | 5462 slots | 1 slaves.
[OK] 1 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.2.242:6381)
M: 64ca2b283f305bd9b138c070fc572f4ad82c19de 192.168.2.242:6381
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 93f1a7c9c011e08c051db9b80ed1c058c42187ef 192.168.2.242:6384
slots: (0 slots) slave
replicates c58465a41e9bab6a60e0307389d723116982fccc
S: 03d4a2159587b46964c6fe51a7f2c84931fc01d9 192.168.2.242:6386
slots: (0 slots) slave
replicates 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207
S: d48a488e849f587975c6fe7eecb21bd02ecb53cb 192.168.2.242:6385
slots: (0 slots) slave
replicates 64ca2b283f305bd9b138c070fc572f4ad82c19de
M: c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 192.168.2.242:6382
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
We will use this command frequently later on.
# Fault tolerance and failover
Next, let's take node 1 down and see whether its slave takes over as master.
Stop node 1:
docker stop redis-node-1
Wait a few seconds for failure detection to complete (governed by cluster-node-timeout), then check the cluster again:
docker exec -it redis-node-2 bash
redis-cli -c -p 6382
> cluster nodes
64ca2b283f305bd9b138c070fc572f4ad82c19de 192.168.2.242:6381@16381 master,fail - 1694088052195 1694088048000 1 disconnected
9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 192.168.2.242:6382@16382 myself,master - 0 1694088335000 2 connected 5461-10922
93f1a7c9c011e08c051db9b80ed1c058c42187ef 192.168.2.242:6384@16384 slave c58465a41e9bab6a60e0307389d723116982fccc 0 1694088339357 3 connected
c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383@16383 master - 0 1694088338000 3 connected 10923-16383
d48a488e849f587975c6fe7eecb21bd02ecb53cb 192.168.2.242:6385@16385 master - 0 1694088338348 7 connected 0-5460
03d4a2159587b46964c6fe51a7f2c84931fc01d9 192.168.2.242:6386@16386 slave 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 0 1694088336000 2 connected
The entry for 6381 is now flagged master,fail, meaning it is down, while the entry for 6385 is flagged master, meaning the former slave has been promoted: the failover is complete.
What if 6381 comes back up? It rejoins as a slave:
docker start redis-node-1
docker exec -it redis-node-1 bash
redis-cli -c -p 6381
127.0.0.1:6381> cluster nodes
d48a488e849f587975c6fe7eecb21bd02ecb53cb 192.168.2.242:6385@16385 master - 0 1694090267000 7 connected 0-5460
03d4a2159587b46964c6fe51a7f2c84931fc01d9 192.168.2.242:6386@16386 slave 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 0 1694090265000 2 connected
93f1a7c9c011e08c051db9b80ed1c058c42187ef 192.168.2.242:6384@16384 slave c58465a41e9bab6a60e0307389d723116982fccc 0 1694090268698 3 connected
9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 192.168.2.242:6382@16382 master - 0 1694090266686 2 connected 5461-10922
64ca2b283f305bd9b138c070fc572f4ad82c19de 192.168.2.242:6381@16381 myself,slave d48a488e849f587975c6fe7eecb21bd02ecb53cb 0 1694090266000 7 connected
c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383@16383 master - 0 1694090267695 3 connected 10923-16383
The 6381 entry is now flagged myself,slave, confirming that it came back as a slave.
To restore the original 3-master, 3-slave layout, stop 6385 so that 6381 takes over again, then start 6385 back up:
docker stop redis-node-5
# wait a few seconds; confirm that cluster nodes shows 6385 as fail (i.e. 6381 has taken over) before starting node 5 again
docker start redis-node-5
docker exec -it redis-node-1 bash
redis-cli -c -p 6381
127.0.0.1:6381> cluster nodes
d48a488e849f587975c6fe7eecb21bd02ecb53cb 192.168.2.242:6385@16385 slave 64ca2b283f305bd9b138c070fc572f4ad82c19de 0 1694090597705 8 connected
03d4a2159587b46964c6fe51a7f2c84931fc01d9 192.168.2.242:6386@16386 slave 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 0 1694090596694 2 connected
93f1a7c9c011e08c051db9b80ed1c058c42187ef 192.168.2.242:6384@16384 slave c58465a41e9bab6a60e0307389d723116982fccc 0 1694090595690 3 connected
9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 192.168.2.242:6382@16382 master - 0 1694090595000 2 connected 5461-10922
64ca2b283f305bd9b138c070fc572f4ad82c19de 192.168.2.242:6381@16381 myself,master - 0 1694090592000 8 connected 0-5460
c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383@16383 master - 0 1694090595000 3 connected 10923-16383
You can see that 6381 is a master again and 6385 is back to being its slave.
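As an alternative to stopping 6385, you could promote 6381 back without any downtime by running a manual failover from the replica side (a sketch, not what the walkthrough above does; CLUSTER FAILOVER must be issued on the replica you want promoted, here 6381 after it has rejoined as a slave):
docker exec -it redis-node-1 redis-cli -p 6381 cluster failover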
# Scaling out (adding a master and a slave)
Next we add two more Redis instances to the cluster to scale it out: first 6387 joins as a new master, then 6388 joins as its slave.
Start the two new Redis instances:
docker run -d --name redis-node-7 --net host --privileged=true -v /data/redis/share/redis-node-7:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6387
docker run -d --name redis-node-8 --net host --privileged=true -v /data/redis/share/redis-node-8:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6388
Remember to check with docker ps that both containers started successfully.
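For example, filtering on the container names used above:
docker ps --filter name=redis-node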
To add 6387 to the cluster as a master, first open a shell in its container (the --cluster add-node command below is an ordinary shell command, not something typed at the redis-cli prompt):
docker exec -it redis-node-7 /bin/bash
Join the cluster. Command format: redis-cli --cluster add-node <new-node-ip>:6387 <existing-node-ip>:6381
6387 is the new node that will become a master.
6381 is any existing node in the cluster acting as the introducer: by contacting 6381, 6387 finds the cluster and joins it.
redis-cli --cluster add-node 192.168.2.242:6387 192.168.2.242:6381
Output:
>>> Adding node 192.168.2.242:6387 to cluster 192.168.2.242:6381
>>> Performing Cluster Check (using node 192.168.2.242:6381)
M: 64ca2b283f305bd9b138c070fc572f4ad82c19de 192.168.2.242:6381
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: d48a488e849f587975c6fe7eecb21bd02ecb53cb 192.168.2.242:6385
slots: (0 slots) slave
replicates 64ca2b283f305bd9b138c070fc572f4ad82c19de
S: 03d4a2159587b46964c6fe51a7f2c84931fc01d9 192.168.2.242:6386
slots: (0 slots) slave
replicates 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207
S: 93f1a7c9c011e08c051db9b80ed1c058c42187ef 192.168.2.242:6384
slots: (0 slots) slave
replicates c58465a41e9bab6a60e0307389d723116982fccc
M: 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 192.168.2.242:6382
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.2.242:6387 to make it join the cluster.
[OK] New node added correctly.
The last line confirms the node was added successfully.
Check the cluster state:
redis-cli --cluster check <your-real-ip>:6381
Output:
192.168.2.242:6381 (64ca2b28...) -> 0 keys | 5461 slots | 1 slaves.
192.168.2.242:6387 (8f06b46a...) -> 0 keys | 0 slots | 0 slaves.
192.168.2.242:6382 (9cf28549...) -> 0 keys | 5462 slots | 1 slaves.
192.168.2.242:6383 (c58465a4...) -> 1 keys | 5461 slots | 1 slaves.
[OK] 1 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.2.242:6381)
M: 64ca2b283f305bd9b138c070fc572f4ad82c19de 192.168.2.242:6381
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 8f06b46a0f7d9fbca3467ada5b6271fee17119c4 192.168.2.242:6387
slots: (0 slots) master
S: d48a488e849f587975c6fe7eecb21bd02ecb53cb 192.168.2.242:6385
slots: (0 slots) slave
replicates 64ca2b283f305bd9b138c070fc572f4ad82c19de
S: 03d4a2159587b46964c6fe51a7f2c84931fc01d9 192.168.2.242:6386
slots: (0 slots) slave
replicates 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207
S: 93f1a7c9c011e08c051db9b80ed1c058c42187ef 192.168.2.242:6384
slots: (0 slots) slave
replicates c58465a41e9bab6a60e0307389d723116982fccc
M: 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 192.168.2.242:6382
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
You can see that the new node 6387 currently has no slots (0 slots both in the summary and in its M: entry).
Reshard to give it some slots:
# Command format:
redis-cli --cluster reshard <ip>:<port>
# Example:
redis-cli --cluster reshard 192.168.2.242:6381
The tool first re-checks the cluster and then asks how many slots you want to move (the last line: from 1 to 16384):
>>> Performing Cluster Check (using node 192.168.2.242:6381)
M: 64ca2b283f305bd9b138c070fc572f4ad82c19de 192.168.2.242:6381
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 8f06b46a0f7d9fbca3467ada5b6271fee17119c4 192.168.2.242:6387
slots: (0 slots) master
S: d48a488e849f587975c6fe7eecb21bd02ecb53cb 192.168.2.242:6385
slots: (0 slots) slave
replicates 64ca2b283f305bd9b138c070fc572f4ad82c19de
S: 03d4a2159587b46964c6fe51a7f2c84931fc01d9 192.168.2.242:6386
slots: (0 slots) slave
replicates 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207
S: 93f1a7c9c011e08c051db9b80ed1c058c42187ef 192.168.2.242:6384
slots: (0 slots) slave
replicates c58465a41e9bab6a60e0307389d723116982fccc
M: 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 192.168.2.242:6382
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)?
We want the slots spread evenly across the 4 masters: 16384 ÷ 4 = 4096, so enter 4096.
Next it asks which node should receive them:
What is the receiving node ID?
Enter the ID of 6387, i.e. the string starting with 8f:
M: 8f06b46a0f7d9fbca3467ada5b6271fee17119c4 192.168.2.242:6387
slots: (0 slots) master
Then it asks which source nodes the slots should come from; enter all to take a share from every existing master:
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:
It then shows the migration plan and asks whether to proceed; type yes and wait while the slots are reassigned:
Moving slot 12283 from c58465a41e9bab6a60e0307389d723116982fccc
Moving slot 12284 from c58465a41e9bab6a60e0307389d723116982fccc
Moving slot 12285 from c58465a41e9bab6a60e0307389d723116982fccc
Moving slot 12286 from c58465a41e9bab6a60e0307389d723116982fccc
Moving slot 12287 from c58465a41e9bab6a60e0307389d723116982fccc
Do you want to proceed with the proposed reshard plan (yes/no)?
Check the cluster again:
redis-cli --cluster check <your-real-ip>:6381
192.168.2.242:6381 (64ca2b28...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.242:6387 (8f06b46a...) -> 0 keys | 4096 slots | 0 slaves.
192.168.2.242:6382 (9cf28549...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.242:6383 (c58465a4...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 1 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.2.242:6381)
M: 64ca2b283f305bd9b138c070fc572f4ad82c19de 192.168.2.242:6381
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
M: 8f06b46a0f7d9fbca3467ada5b6271fee17119c4 192.168.2.242:6387
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: d48a488e849f587975c6fe7eecb21bd02ecb53cb 192.168.2.242:6385
slots: (0 slots) slave
replicates 64ca2b283f305bd9b138c070fc572f4ad82c19de
S: 03d4a2159587b46964c6fe51a7f2c84931fc01d9 192.168.2.242:6386
slots: (0 slots) slave
replicates 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207
S: 93f1a7c9c011e08c051db9b80ed1c058c42187ef 192.168.2.242:6384
slots: (0 slots) slave
replicates c58465a41e9bab6a60e0307389d723116982fccc
M: 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 192.168.2.242:6382
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
M: c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Now every master has slots, and 6387's slots were contributed evenly by the other masters, each handing over part of its own range.
Why does 6387 end up with three separate ranges while the old masters stay contiguous? Reassigning everything from scratch would cost too much, so each of the three original masters (6381/6382/6383) simply carves a chunk off its own range (roughly 1365 slots each) and gives it to the new node 6387.
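As an aside (not what this walkthrough uses), redis-cli also has a rebalance subcommand that evens out the slot distribution for you; with --cluster-use-empty-masters it will assign slots to a brand-new, empty master such as 6387 as well. A rough non-interactive alternative to the reshard above:
redis-cli --cluster rebalance 192.168.2.242:6381 --cluster-use-empty-masters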
Next, give 6387 a slave:
# Command format
redis-cli --cluster add-node <ip>:<new-slave-port> <ip>:<new-master-port> --cluster-slave --cluster-master-id <new-master-node-id>
# 8f06... is 6387's node ID; substitute your own value
redis-cli --cluster add-node 192.168.2.242:6388 192.168.2.242:6387 --cluster-slave --cluster-master-id 8f06b46a0f7d9fbca3467ada5b6271fee17119c4
Output (the slave was added successfully):
>>> Adding node 192.168.2.242:6388 to cluster 192.168.2.242:6387
>>> Performing Cluster Check (using node 192.168.2.242:6387)
M: 8f06b46a0f7d9fbca3467ada5b6271fee17119c4 192.168.2.242:6387
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: d48a488e849f587975c6fe7eecb21bd02ecb53cb 192.168.2.242:6385
slots: (0 slots) slave
replicates 64ca2b283f305bd9b138c070fc572f4ad82c19de
S: 93f1a7c9c011e08c051db9b80ed1c058c42187ef 192.168.2.242:6384
slots: (0 slots) slave
replicates c58465a41e9bab6a60e0307389d723116982fccc
M: c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
S: 03d4a2159587b46964c6fe51a7f2c84931fc01d9 192.168.2.242:6386
slots: (0 slots) slave
replicates 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207
M: 64ca2b283f305bd9b138c070fc572f4ad82c19de 192.168.2.242:6381
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
M: 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 192.168.2.242:6382
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.2.242:6388 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 192.168.2.242:6387.
[OK] New node added correctly.
Finally, check the cluster:
redis-cli --cluster check <any-cluster-node-ip>:<port>
Output: 4 masters and 4 slaves.
192.168.2.242:6382 (9cf28549...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.242:6381 (64ca2b28...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.242:6383 (c58465a4...) -> 1 keys | 4096 slots | 1 slaves.
192.168.2.242:6387 (8f06b46a...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 1 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.2.242:6382)
M: 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 192.168.2.242:6382
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
M: 64ca2b283f305bd9b138c070fc572f4ad82c19de 192.168.2.242:6381
slots:[1365-5460] (4096 slots) master
1 additional replica(s)
S: fc72004d1b14054541d688645559765b39f3db58 192.168.2.242:6388
slots: (0 slots) slave
replicates 8f06b46a0f7d9fbca3467ada5b6271fee17119c4
S: 93f1a7c9c011e08c051db9b80ed1c058c42187ef 192.168.2.242:6384
slots: (0 slots) slave
replicates c58465a41e9bab6a60e0307389d723116982fccc
M: c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
S: d48a488e849f587975c6fe7eecb21bd02ecb53cb 192.168.2.242:6385
slots: (0 slots) slave
replicates 64ca2b283f305bd9b138c070fc572f4ad82c19de
M: 8f06b46a0f7d9fbca3467ada5b6271fee17119c4 192.168.2.242:6387
slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
1 additional replica(s)
S: 03d4a2159587b46964c6fe51a7f2c84931fc01d9 192.168.2.242:6386
slots: (0 slots) slave
replicates 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# Scaling in (removing the master and slave)
Goal: take 6387 and 6388 offline.
Steps:
- remove the slave node first
- reshard the slots away (for simplicity, all of 6387's slots go to a single node here)
- then remove the master node
Remove the 4th slave node, 6388, from the cluster:
# Command format:
redis-cli --cluster del-node <ip>:<slave-port> <slave-6388-node-id>
# Example
redis-cli --cluster del-node 192.168.2.242:6388 fc72004d1b14054541d688645559765b39f3db58
The node ID can be taken from the cluster check output. After running the command, check the cluster again to make sure 6388 is gone.
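If you only need 6388's own ID, you can also ask the node directly with CLUSTER MYID:
docker exec -it redis-node-8 redis-cli -p 6388 cluster myid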
Next, empty 6387's slots and reassign them; in this example all of them go to 6381:
redis-cli --cluster reshard 192.168.2.242:6381
It asks how many slots to move; enter 4096 again (the number of slots 6387 currently owns):
How many slots do you want to move (from 1 to 16384)?
It then asks which node should receive them; enter 6381's node ID:
What is the receiving node ID?
And whose slots should be moved away? Enter 6387's node ID as the only source, then type done to finish the list (with a single source node there is nothing more to add):
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: 8f06b46a0f7d9fbca3467ada5b6271fee17119c4
Source node #2:
Check the cluster: 6387 no longer has any slots, and 6381 now owns the slots 6387 used to have.
redis-cli --cluster check 192.168.2.242:6381
192.168.2.242:6381 (64ca2b28...) -> 0 keys | 8192 slots | 1 slaves.
192.168.2.242:6387 (8f06b46a...) -> 0 keys | 0 slots | 0 slaves.
192.168.2.242:6382 (9cf28549...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.242:6383 (c58465a4...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 1 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.2.242:6381)
M: 64ca2b283f305bd9b138c070fc572f4ad82c19de 192.168.2.242:6381
slots:[0-6826],[10923-12287] (8192 slots) master
1 additional replica(s)
M: 8f06b46a0f7d9fbca3467ada5b6271fee17119c4 192.168.2.242:6387
slots: (0 slots) master
S: d48a488e849f587975c6fe7eecb21bd02ecb53cb 192.168.2.242:6385
slots: (0 slots) slave
replicates 64ca2b283f305bd9b138c070fc572f4ad82c19de
S: 03d4a2159587b46964c6fe51a7f2c84931fc01d9 192.168.2.242:6386
slots: (0 slots) slave
replicates 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207
S: 93f1a7c9c011e08c051db9b80ed1c058c42187ef 192.168.2.242:6384
slots: (0 slots) slave
replicates c58465a41e9bab6a60e0307389d723116982fccc
M: 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 192.168.2.242:6382
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
M: c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Finally, remove 6387 itself:
redis-cli --cluster del-node <ip>:<port> <6387-node-id>
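With the values from this walkthrough (8f06... being 6387's node ID), that would be:
redis-cli --cluster del-node 192.168.2.242:6387 8f06b46a0f7d9fbca3467ada5b6271fee17119c4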
Check the cluster one more time: the node is gone and we are back to 3 masters and 3 slaves.
$ redis-cli --cluster check 192.168.2.242:6381
192.168.2.242:6381 (64ca2b28...) -> 0 keys | 8192 slots | 1 slaves.
192.168.2.242:6382 (9cf28549...) -> 0 keys | 4096 slots | 1 slaves.
192.168.2.242:6383 (c58465a4...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 1 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.2.242:6381)
M: 64ca2b283f305bd9b138c070fc572f4ad82c19de 192.168.2.242:6381
slots:[0-6826],[10923-12287] (8192 slots) master
1 additional replica(s)
S: d48a488e849f587975c6fe7eecb21bd02ecb53cb 192.168.2.242:6385
slots: (0 slots) slave
replicates 64ca2b283f305bd9b138c070fc572f4ad82c19de
S: 03d4a2159587b46964c6fe51a7f2c84931fc01d9 192.168.2.242:6386
slots: (0 slots) slave
replicates 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207
S: 93f1a7c9c011e08c051db9b80ed1c058c42187ef 192.168.2.242:6384
slots: (0 slots) slave
replicates c58465a41e9bab6a60e0307389d723116982fccc
M: 9cf2854967048e5ee0532c4ccb4ffe44d8d2b207 192.168.2.242:6382
slots:[6827-10922] (4096 slots) master
1 additional replica(s)
M: c58465a41e9bab6a60e0307389d723116982fccc 192.168.2.242:6383
slots:[12288-16383] (4096 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
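Since 6387 and 6388 are no longer part of the cluster, you can optionally stop and remove their containers as well:
docker stop redis-node-7 redis-node-8
docker rm redis-node-7 redis-node-8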
(End)