Redis Data Migration
- 1. Redis single-instance to Cluster data migration
- 1.1. Target cluster overview
- 1.2. Restoring data from an RDB file
- 1.2.1. 1. Remove all slave nodes
- 1.2.2. 2. Stop all slave services
- 1.2.3. 3. Migrate every master's slots to a single primary master
- 1.2.4. 4. Disable AOF on the primary master so that it restores data from the RDB file
- 1.2.5. 5. Copy the source RDB file into place as the target node's RDB file
- 1.2.6. 6. Restart this master Redis
- 1.2.7. 7. Log in to this Redis and verify data integrity
- 1.2.8. 8. Rebalance all the slots across the master nodes
- 1.2.9. 9. Enable AOF on the primary master by command, generating its AOF file (the other nodes have AOF enabled by default, and their AOF files are already generated)
- 1.2.10. 10. Enable AOF in the primary master's config and restart the primary master
- 1.2.11. 11. Start and add the slave nodes
- 1.2.12. 12. Confirm the cluster and verify master/slave failover
- 1.3. Direct import with `import`
Redis single-instance to Cluster data migration
Option 1: restore data from an RDB file
Option 2: direct import with `import` (recommended)
Target cluster overview
Taking a production cluster of 12 masters and 12 slaves as an example (cluster setup itself is omitted here).
Cluster information:
9e051ce8d91148d5079164bb8cc436f0da478cec 172.16.33.251:7000@17000 master - 0 1620720637417 1 connected 0-1364
The relevant paths are as follows:
- config path: /etc/redis/
- Redis install package: /data/redis-5.0.2/
- redis-cli binary path: /usr/local/bin/
- persistence path: /data/redis-cluster/
Check the status of each Redis service:
systemctl status redis_7000.service
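To check all local instances in one go, a loop over the ports works as well; a minimal sketch, assuming the `redis_7000`/`redis_7001` unit names used above:

```bash
# Minimal sketch: show the status of every local Redis unit
# (assumes the redis_<port>.service naming used in this setup).
for port in 7000 7001; do
    systemctl status "redis_${port}.service" --no-pager | head -n 3
done
```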
Restoring data from an RDB file
The original data source can only export an RDB file, and only as a single file.
The new cluster is configured with AOF persistence.
The migration steps are as follows:
1. Remove all slave nodes
Example (run for every slave):
redis-cli --cluster del-node 172.16.34.3:7003 2a4c1c17c460eb9b15dfde6ea1e5507f11dade12 -a ****
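To avoid typing the command once per slave, the removals can be scripted; a sketch in which the address/node-ID pairs are placeholders to be filled in from the `cluster nodes` output:

```bash
# Sketch: remove every slave in one pass. Each entry is
# "<any-cluster-node-addr> <slave-node-id>"; fill in the real list from
# `cluster nodes` before running. The password is a placeholder.
SLAVES="
172.16.34.3:7003 2a4c1c17c460eb9b15dfde6ea1e5507f11dade12
"
echo "$SLAVES" | while read -r addr id; do
    [ -z "$addr" ] && continue
    redis-cli --cluster del-node "$addr" "$id" -a '****'
done
```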
`cluster nodes` state after removing the slaves:
9e051ce8d91148d5079164bb8cc436f0da478cec 172.16.33.251:7000@17000 myself,master - 0 1620886282000 25 connected 15019-16383
efce1b72390ee8cd862a0b442263c40e90910f60 172.16.33.253:7000@17000 master - 0 1620886281000 29 connected 4097-5462
29e5b98fbe4a56516779c66f49843aae694d868b 172.16.34.2:7001@17001 master - 0 1620886284000 33 connected 9560-10924
cd787484b0dfa4ac2c3832e86cd1fd37306366ed 172.16.34.1:7001@17001 master - 0 1620886283000 36 connected 1365 13655-15018
44c63289731aacec61959961d9061e70885e670a 172.16.34.3:7000@17000 master - 0 1620886281528 28 connected 2731-4096
19fffc48fad2d724a3d8f3a70ef05bcdc453bc31 172.16.33.252:7001@17001 master - 0 1620886285535 32 connected 8195-9559
f2ac6cada2bd5a3cc2aee46da3d85a7127b38313 172.16.33.252:7000@17000 master - 0 1620886282000 27 connected 1366-2730
426ccc5185f423cf0ce8456ab0c168948103d2b8 172.16.34.2:7000@17000 master - 0 1620886280000 31 connected 6829-8194
ab1e21eb94f59cbf5e963da838950b369bc800dd 172.16.33.251:7001@17001 master - 0 1620886282000 26 connected 0-1364
f487b73102c6b5776c95588870261d6ac8c5ae83 172.16.34.1:7000@17000 master - 0 1620886284032 34 connected 10925-12289
d7f46f5bdc5f22a16253994bceadfbf155e8b3a5 172.16.33.253:7001@17001 master - 0 1620886283531 30 connected 5463-6828
00f5deaaf6bca42156ee10f815741285a84c86ef 172.16.34.3:7001@17001 master - 0 1620886284533 35 connected 12290-13654
2. Stop all slave services
Example (run for every slave):
systemctl stop redis_7003
3. Migrate every master's slots to a single primary master
Before the migration, check the cluster:
redis-cli --cluster check 172.16.33.251:7000 -a ****
The output shows each node's allocated slot count and distribution:
172.16.33.251:7000 (9e051ce8...) -> 0 keys | 1365 slots | 0 slaves.
172.16.33.253:7000 (efce1b72...) -> 0 keys | 1366 slots | 0 slaves.
172.16.34.2:7001 (29e5b98f...) -> 0 keys | 1365 slots | 0 slaves.
172.16.34.1:7001 (cd787484...) -> 0 keys | 1365 slots | 0 slaves.
172.16.34.3:7000 (44c63289...) -> 0 keys | 1366 slots | 0 slaves.
172.16.33.252:7001 (19fffc48...) -> 0 keys | 1365 slots | 0 slaves.
172.16.33.252:7000 (f2ac6cad...) -> 0 keys | 1365 slots | 0 slaves.
172.16.34.2:7000 (426ccc51...) -> 0 keys | 1366 slots | 0 slaves.
172.16.33.251:7001 (ab1e21eb...) -> 0 keys | 1365 slots | 0 slaves.
172.16.34.1:7000 (f487b731...) -> 0 keys | 1365 slots | 0 slaves.
172.16.33.253:7001 (d7f46f5b...) -> 0 keys | 1366 slots | 0 slaves.
172.16.34.3:7001 (00f5deaa...) -> 0 keys | 1365 slots | 0 slaves.
[OK] 0 keys in 12 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.16.33.251:7000)
M: 9e051ce8d91148d5079164bb8cc436f0da478cec 172.16.33.251:7000
slots:[15019-16383] (1365 slots) master
M: efce1b72390ee8cd862a0b442263c40e90910f60 172.16.33.253:7000
slots:[4097-5462] (1366 slots) master
M: 29e5b98fbe4a56516779c66f49843aae694d868b 172.16.34.2:7001
slots:[9560-10924] (1365 slots) master
M: cd787484b0dfa4ac2c3832e86cd1fd37306366ed 172.16.34.1:7001
slots:[1365],[13655-15018] (1365 slots) master
M: 44c63289731aacec61959961d9061e70885e670a 172.16.34.3:7000
slots:[2731-4096] (1366 slots) master
M: 19fffc48fad2d724a3d8f3a70ef05bcdc453bc31 172.16.33.252:7001
slots:[8195-9559] (1365 slots) master
M: f2ac6cada2bd5a3cc2aee46da3d85a7127b38313 172.16.33.252:7000
slots:[1366-2730] (1365 slots) master
M: 426ccc5185f423cf0ce8456ab0c168948103d2b8 172.16.34.2:7000
slots:[6829-8194] (1366 slots) master
M: ab1e21eb94f59cbf5e963da838950b369bc800dd 172.16.33.251:7001
slots:[0-1364] (1365 slots) master
M: f487b73102c6b5776c95588870261d6ac8c5ae83 172.16.34.1:7000
slots:[10925-12289] (1365 slots) master
M: d7f46f5bdc5f22a16253994bceadfbf155e8b3a5 172.16.33.253:7001
slots:[5463-6828] (1366 slots) master
M: 00f5deaaf6bca42156ee10f815741285a84c86ef 172.16.34.3:7001
slots:[12290-13654] (1365 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Migration command:
Example (run for every source master):
redis-cli --cluster reshard 172.16.33.251:7000 --cluster-from ab1e21eb94f59cbf5e963da838950b369bc800dd --cluster-to 9e051ce8d91148d5079164bb8cc436f0da478cec --cluster-slots 1365 --cluster-yes --cluster-timeout 5000 --cluster-pipeline 10 --cluster-replace -a *****
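Because the reshard has to be repeated for each remaining source master, it can also be scripted; a sketch in which the `<node-id>:<slot-count>` pairs come from the `check` output above (the list is truncated here) and the password is a placeholder:

```bash
# Sketch: move each remaining master's slots to the target master
# 9e051ce8... (172.16.33.251:7000). The SOURCES list is truncated to two
# entries for brevity; in practice it holds all 11 source masters.
TARGET=9e051ce8d91148d5079164bb8cc436f0da478cec
SOURCES="
ab1e21eb94f59cbf5e963da838950b369bc800dd:1365
efce1b72390ee8cd862a0b442263c40e90910f60:1366
"
echo "$SOURCES" | while IFS=: read -r id slots; do
    [ -z "$id" ] && continue
    redis-cli --cluster reshard 172.16.33.251:7000 \
        --cluster-from "$id" --cluster-to "$TARGET" \
        --cluster-slots "$slots" --cluster-yes --cluster-timeout 5000 \
        --cluster-pipeline 10 --cluster-replace -a '*****'
done
```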
Check after the slot migration:
172.16.33.251:7000 (9e051ce8...) -> 0 keys | 16384 slots | 0 slaves.
172.16.33.253:7000 (efce1b72...) -> 0 keys | 0 slots | 0 slaves.
172.16.34.2:7001 (29e5b98f...) -> 0 keys | 0 slots | 0 slaves.
172.16.34.1:7001 (cd787484...) -> 0 keys | 0 slots | 0 slaves.
172.16.34.3:7000 (44c63289...) -> 0 keys | 0 slots | 0 slaves.
172.16.33.252:7001 (19fffc48...) -> 0 keys | 0 slots | 0 slaves.
172.16.33.252:7000 (f2ac6cad...) -> 0 keys | 0 slots | 0 slaves.
172.16.34.2:7000 (426ccc51...) -> 0 keys | 0 slots | 0 slaves.
172.16.33.251:7001 (ab1e21eb...) -> 0 keys | 0 slots | 0 slaves.
172.16.34.1:7000 (f487b731...) -> 0 keys | 0 slots | 0 slaves.
172.16.33.253:7001 (d7f46f5b...) -> 0 keys | 0 slots | 0 slaves.
172.16.34.3:7001 (00f5deaa...) -> 0 keys | 0 slots | 0 slaves.
[OK] 0 keys in 12 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 172.16.33.251:7000)
M: 9e051ce8d91148d5079164bb8cc436f0da478cec 172.16.33.251:7000
slots:[0-16383] (16384 slots) master
M: efce1b72390ee8cd862a0b442263c40e90910f60 172.16.33.253:7000
slots: (0 slots) master
M: 29e5b98fbe4a56516779c66f49843aae694d868b 172.16.34.2:7001
slots: (0 slots) master
M: cd787484b0dfa4ac2c3832e86cd1fd37306366ed 172.16.34.1:7001
slots: (0 slots) master
M: 44c63289731aacec61959961d9061e70885e670a 172.16.34.3:7000
slots: (0 slots) master
M: 19fffc48fad2d724a3d8f3a70ef05bcdc453bc31 172.16.33.252:7001
slots: (0 slots) master
M: f2ac6cada2bd5a3cc2aee46da3d85a7127b38313 172.16.33.252:7000
slots: (0 slots) master
M: 426ccc5185f423cf0ce8456ab0c168948103d2b8 172.16.34.2:7000
slots: (0 slots) master
M: ab1e21eb94f59cbf5e963da838950b369bc800dd 172.16.33.251:7001
slots: (0 slots) master
M: f487b73102c6b5776c95588870261d6ac8c5ae83 172.16.34.1:7000
slots: (0 slots) master
M: d7f46f5bdc5f22a16253994bceadfbf155e8b3a5 172.16.33.253:7001
slots: (0 slots) master
M: 00f5deaaf6bca42156ee10f815741285a84c86ef 172.16.34.3:7001
slots: (0 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
All 16384 slots have now been moved to node 172.16.33.251:7000.
4. Disable AOF on the primary master so that it restores data from the RDB file
Edit its config file and set:
appendonly no
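If the config files live under /etc/redis/ as listed above, the change can be made with sed; a sketch in which the file name redis_7000.conf is an assumption:

```bash
# Sketch: disable AOF in the primary master's config before the restart.
# The file name /etc/redis/redis_7000.conf is an assumption; adjust it to
# the actual config file name.
sudo sed -i 's/^appendonly yes/appendonly no/' /etc/redis/redis_7000.conf
grep '^appendonly' /etc/redis/redis_7000.conf   # expect: appendonly no
```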
5. Copy the source RDB file into place as the target node's RDB file
cp ../s00208nprediswehcat3-0 dump_7000.rdb
6. Restart this master Redis
sudo systemctl restart redis_7000
7. Log in to this Redis and verify data integrity
redis-cli -p 7000 -a ****
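A few quick checks after logging in, for example comparing the key count against the source and spot-checking keys (DBSIZE, RANDOMKEY and `--scan` are standard redis-cli features; the expected count is whatever the source reported):

```bash
# Sketch: basic integrity checks on the restored master.
redis-cli -p 7000 -a '****' dbsize       # compare with the source key count
redis-cli -p 7000 -a '****' randomkey    # spot-check that keys look sane
redis-cli -p 7000 -a '****' --scan | head -n 10
```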
8. Rebalance all the slots across the master nodes
redis-cli --cluster rebalance 172.16.33.251:7000 --cluster-use-empty-masters -a ******
After rebalancing:
172.16.33.251:7000 (9e051ce8...) -> 55 keys | 1365 slots | 0 slaves.
172.16.34.3:7000 (44c63289...) -> 56 keys | 1366 slots | 0 slaves.
172.16.34.2:7001 (29e5b98f...) -> 56 keys | 1366 slots | 0 slaves.
172.16.33.251:7001 (ab1e21eb...) -> 50 keys | 1366 slots | 0 slaves.
172.16.34.3:7001 (00f5deaa...) -> 61 keys | 1366 slots | 0 slaves.
172.16.33.252:7001 (19fffc48...) -> 58 keys | 1365 slots | 0 slaves.
172.16.34.1:7001 (cd787484...) -> 55 keys | 1365 slots | 0 slaves.
172.16.33.252:7000 (f2ac6cad...) -> 50 keys | 1365 slots | 0 slaves.
172.16.34.2:7000 (426ccc51...) -> 63 keys | 1365 slots | 0 slaves.
172.16.33.253:7001 (d7f46f5b...) -> 57 keys | 1365 slots | 0 slaves.
172.16.33.253:7000 (efce1b72...) -> 64 keys | 1365 slots | 0 slaves.
172.16.34.1:7000 (f487b731...) -> 59 keys | 1365 slots | 0 slaves.
[OK] 684 keys in 12 masters.
0.04 keys per slot on average.
>>> Performing Cluster Check (using node 172.16.33.251:7000)
M: 9e051ce8d91148d5079164bb8cc436f0da478cec 172.16.33.251:7000
slots:[15019-16383] (1365 slots) master
M: 44c63289731aacec61959961d9061e70885e670a 172.16.34.3:7000
slots:[0-1365] (1366 slots) master
M: 29e5b98fbe4a56516779c66f49843aae694d868b 172.16.34.2:7001
slots:[1366-2731] (1366 slots) master
M: ab1e21eb94f59cbf5e963da838950b369bc800dd 172.16.33.251:7001
slots:[2732-4097] (1366 slots) master
M: 00f5deaaf6bca42156ee10f815741285a84c86ef 172.16.34.3:7001
slots:[4098-5463] (1366 slots) master
M: 19fffc48fad2d724a3d8f3a70ef05bcdc453bc31 172.16.33.252:7001
slots:[5464-6828] (1365 slots) master
M: cd787484b0dfa4ac2c3832e86cd1fd37306366ed 172.16.34.1:7001
slots:[6829-8193] (1365 slots) master
M: f2ac6cada2bd5a3cc2aee46da3d85a7127b38313 172.16.33.252:7000
slots:[8194-9558] (1365 slots) master
M: 426ccc5185f423cf0ce8456ab0c168948103d2b8 172.16.34.2:7000
slots:[9559-10923] (1365 slots) master
M: d7f46f5bdc5f22a16253994bceadfbf155e8b3a5 172.16.33.253:7001
slots:[10924-12288] (1365 slots) master
M: efce1b72390ee8cd862a0b442263c40e90910f60 172.16.33.253:7000
slots:[12289-13653] (1365 slots) master
M: f487b73102c6b5776c95588870261d6ac8c5ae83 172.16.34.1:7000
slots:[13654-15018] (1365 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
9. Enable AOF on the primary master by command, generating its AOF file (the other nodes have AOF enabled by default, and their AOF files are already generated)
redis-cli -c -p 7000 -a *****
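The runtime switch itself is done with CONFIG SET (a standard Redis command); a sketch of the commands to run against port 7000:

```bash
# Sketch: enable AOF at runtime so the AOF file is generated from the
# data currently held in memory.
redis-cli -c -p 7000 -a '*****' config set appendonly yes
redis-cli -c -p 7000 -a '*****' config get appendonly   # expect "yes"
```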
Note: if you skip this step, this node's data will be lost when its Redis is restarted later!
10. Enable AOF in the primary master's config and restart the primary master
Edit the config file and set:
appendonly yes
Restart: sudo systemctl restart redis_7000
11. Start and add the slave nodes
Delete each slave node's data files: appendonly_7002.aof, dump_7002.rdb, node_7002.conf
Start all slave nodes, e.g.: sudo systemctl start redis_7002
Add each slave back under its original master, following the previous master/slave mapping.
Example (run for every slave):
redis-cli --cluster add-node 172.16.33.251:7002 172.16.33.251:7000 --cluster-slave --cluster-master-id f2ac6cada2bd5a3cc2aee46da3d85a7127b38313 -a *****
12. Confirm the cluster and verify master/slave failover
Log in to the new cluster and run `cluster nodes` to check that the topology is correct. Manually stop one of the masters and check again: its slave should have been promoted to master, and the whole cluster should still serve reads and writes normally. Start that master again; it automatically rejoins as a slave with no data loss. The cluster is then considered healthy.
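A minimal sketch of that failover check, using the systemd unit names from earlier (the ports and password are placeholders):

```bash
# Sketch: stop one master, confirm its replica was promoted, then bring
# the old master back; it should rejoin the cluster as a slave.
sudo systemctl stop redis_7000
redis-cli -p 7001 -a '****' cluster nodes | grep master
sudo systemctl start redis_7000
redis-cli -p 7001 -a '****' cluster nodes | grep 7000
```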
Direct import with `import`
This method imports data directly from the source Redis into the target cluster, with no intermediate RDB copy-and-restore. Because our data source is in Azure, the two environments cannot be connected directly and only an RDB file can be exported, so we stand up a local standalone Redis as an intermediate data source. This standalone instance has no relationship to the target cluster and only holds the data migrated out of Azure. Restoring from standalone to standalone is straightforward and requires no slot movement, and the instance then serves as the source for the import.
1. Create a standalone Redis service on port 7004
Copy the existing config files and change them so the instance starts on 172.16.33.251 port 7004, with AOF disabled so it can restore data from the RDB file, and with no password.
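The relevant settings in the copied config might look like the following sketch (standard Redis directives; values based on the paths listed earlier):

```
# Sketch of the standalone 7004 instance's config, adjusted from a copied
# cluster config. Values follow the paths used earlier in this document.
port 7004
cluster-enabled no
appendonly no
dbfilename dump_7004.rdb
dir /data/redis-cluster/
# requirepass is left unset: this temporary instance has no password
```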
2. Copy the source RDB file over the 7004 service's RDB file, dump_7004.rdb
3. Start the Redis service on port 7004 and check the data
redis-cli -p 7004
4. Set the target cluster's password to empty
Edit each node's conf file, comment out the password, restart every node's service, and verify that auth is no longer required.
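Concretely, that means commenting out the auth directives in every node's config and restarting the services; a sketch, where the config file names are assumptions:

```bash
# Sketch: disable auth on both local nodes, then restart them.
# File names under /etc/redis/ are assumptions; adjust as needed.
for port in 7000 7001; do
    sudo sed -i 's/^requirepass/# requirepass/; s/^masterauth/# masterauth/' \
        "/etc/redis/redis_${port}.conf"
    sudo systemctl restart "redis_${port}"
done
redis-cli -p 7000 ping   # expect PONG without -a
```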
5. Import into the cluster
redis-cli --cluster import 172.16.33.251:7000 --cluster-from 172.16.33.251:7004 --cluster-replace
Note: in testing, the --cluster-replace option turned out not to work. If the cluster already contains a given key, the import fails rather than overwriting it; the import only succeeds after the cluster's keys have been cleared.
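If the existing keys can be discarded, one way to clear them is to run FLUSHALL against every master before retrying the import; a sketch with an illustrative (incomplete) master list:

```bash
# Sketch: flush every master before re-running the import. Only do this
# if the data already in the cluster can be discarded. The host list here
# is illustrative; in practice it covers all 12 masters.
for node in 172.16.33.251:7000 172.16.33.252:7000 172.16.33.253:7000; do
    redis-cli -h "${node%:*}" -p "${node#*:}" flushall
done
```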
6. Confirm data integrity after the import
Connect to each Redis node in turn and check its key count; the total across all nodes should match the key count of the original 7004 instance.
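A sketch of that comparison, summing DBSIZE over all masters (the master list is taken from `cluster nodes`; auth is omitted because the cluster password was removed earlier):

```bash
# Sketch: total key count across the cluster's masters vs. the 7004 source.
total=0
for hostport in $(redis-cli -h 172.16.33.251 -p 7000 cluster nodes \
                  | awk '/master/ {split($2, a, "@"); print a[1]}'); do
    n=$(redis-cli -h "${hostport%:*}" -p "${hostport#*:}" dbsize)
    echo "$hostport -> $n keys"
    total=$((total + n))
done
echo "cluster total: $total keys"
echo "source 7004:   $(redis-cli -h 172.16.33.251 -p 7004 dbsize) keys"
```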
This method avoids the complicated slot migration and rebalancing, and is comparatively easy.