Redis distributed cache: Redis persistence (RDB - Redis DataBase point-in-time snapshots & AOF - Append Only File operation log); simulating data loss and server shutdown, recovering data with rdb+aof; transaction management

    Tech · 2024-11-11

    1.RDB(Redis DataBase)

    1.1 RDB saves its snapshot to the dump.rdb file

    [cevent@hadoop213 redis-3.0.4]$ vim redis.conf
    120 ################################ SNAPSHOTTING ################################
    121 #
    122 # Save the DB on disk:
    123 #
    124 #   save <seconds> <changes>
    125 #
    126 #   Will save the DB if both the given number of seconds and the given
    127 #   number of write operations against the DB occurred.
    128 #
    129 #   In the example below the behaviour will be to save:
    130 #   after 900 sec (15 min) if at least 1 key changed
    131 #   after 300 sec (5 min) if at least 10 keys changed
    132 #   after 60 sec if at least 10000 keys changed
    133 #
    134 #   Note: you can disable saving completely by commenting out all "save" lines.
    135 #
    136 #   It is also possible to remove all the previously configured save
    137 #   points by adding a save directive with a single empty string argument
    138 #   like in the following example:
    139 #
    140 #   save ""
    141
    142 save 900 1
    143 save 120 10     changed: snapshot if 10 keys change within 120 seconds, generating dump.rdb
    144 save 60 10000
    [cevent@hadoop213 redis-3.0.4]$ rm -rf dump.rdb       delete the existing dump.rdb
    [cevent@hadoop213 redis-3.0.4]$ ps -ef | grep redis   check the redis process status
    cevent 5292 3027 0 13:51 pts/0 00:00:00 grep redis
    [cevent@hadoop213 redis-3.0.4]$ lsof -i :6379         check whether anything is listening on the redis port
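The `save <seconds> <changes>` points above combine as "snapshot when, for any configured point, enough writes happened since the last save and enough time has passed". A minimal Python sketch of that trigger condition (an illustration of the rule, not Redis's actual implementation):

```python
# Save points from the redis.conf above: (seconds, min key changes)
SAVE_POINTS = [(900, 1), (120, 10), (60, 10000)]

def should_snapshot(seconds_since_last_save, changes_since_last_save):
    """True if any save point's time AND change thresholds are both met."""
    return any(
        seconds_since_last_save >= secs and changes_since_last_save >= chgs
        for secs, chgs in SAVE_POINTS
    )

# 12 key changes within ~2 minutes satisfies `save 120 10`:
should_snapshot(121, 12)   # True
# 5 changes after 30 seconds satisfies nothing:
should_snapshot(30, 5)     # False
```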

    1.2 Start redis and modify keys: make more than 10 key changes within the 120-second window

    [cevent@hadoop213 redis-3.0.4]$ redis-server redis.conf
    [cevent@hadoop213 redis-3.0.4]$ redis-cli -p 6379
    127.0.0.1:6379> set k1 v1
    OK
    127.0.0.1:6379> set k2 v2
    OK
    127.0.0.1:6379> set k3 v3
    OK
    127.0.0.1:6379> set k4 v4
    OK
    127.0.0.1:6379> set k5 v5
    OK
    127.0.0.1:6379> set k6 v6
    OK
    127.0.0.1:6379> set k7 v7
    OK
    127.0.0.1:6379> set k8 v8
    OK
    127.0.0.1:6379> set k9 v9
    OK
    127.0.0.1:6379> set k10 v10
    OK
    127.0.0.1:6379> set k11 v11
    OK
    127.0.0.1:6379> set k12 v12
    OK

    1.3 Inspect dump.rdb

    [cevent@hadoop213 redis-3.0.4]$ ll
    total 152
    -rw-rw-r--.  1 cevent cevent 31391 Sep  8  2015 00-RELEASENOTES
    -rw-rw-r--.  1 cevent cevent    53 Sep  8  2015 BUGS
    -rw-rw-r--.  1 cevent cevent  1439 Sep  8  2015 CONTRIBUTING
    -rw-rw-r--.  1 cevent cevent  1487 Sep  8  2015 COPYING
    drwxrwxr-x.  6 cevent cevent  4096 Jul  1 17:51 deps
    -rw-rw-r--.  1 cevent cevent   485 Jul  3 13:51 dump.rdb
    -rw-rw-r--.  1 cevent cevent    11 Sep  8  2015 INSTALL
    -rw-rw-r--.  1 cevent cevent   151 Sep  8  2015 Makefile
    -rw-rw-r--.  1 cevent cevent  4223 Sep  8  2015 MANIFESTO
    -rw-rw-r--.  1 cevent cevent  5201 Sep  8  2015 README
    -rw-rw-r--.  1 cevent cevent 41404 Jul  3 13:47 redis.conf
    -rwxrwxr-x.  1 cevent cevent   271 Sep  8  2015 runtest
    -rwxrwxr-x.  1 cevent cevent   280 Sep  8  2015 runtest-cluster
    -rwxrwxr-x.  1 cevent cevent   281 Sep  8  2015 runtest-sentinel
    -rw-rw-r--.  1 cevent cevent  7109 Sep  8  2015 sentinel.conf
    drwxrwxr-x.  2 cevent cevent  4096 Jul  1 17:52 src
    drwxrwxr-x. 10 cevent cevent  4096 Sep  8  2015 tests
    drwxrwxr-x.  5 cevent cevent  4096 Sep  8  2015 utils
    Running ll again after the 12 writes shows the listing unchanged except that dump.rdb has been regenerated and grown:
    -rw-rw-r--.  1 cevent cevent   525 Jul  3 13:55 dump.rdb

    1.4 Simulate data loss and server shutdown

    127.0.0.1:6379> set k11 v11
    OK
    127.0.0.1:6379> set k12 v12
    OK
    127.0.0.1:6379> FLUSHALL      wipe all data
    OK
    127.0.0.1:6379> shutdown
    not connected> exit

    1.5 The redis.conf settings that govern RDB saves

    120 ################################ SNAPSHOTTING ################################
    121 #
    122 # Save the DB on disk:    redis saves the DB to disk per the SNAPSHOTTING settings
    123 #
    124 # save <seconds> <changes>    the save directive
    125 #
    126 # Will save the DB if both the given number of seconds and the given
    127 # number of write operations against the DB occurred.
    128 #
    129 # In the example below the behaviour will be to save:
    130 # after 900 sec (15 min) if at least 1 key changed       1 changed key within 900s/15min triggers a dump.rdb save
    131 # after 300 sec (5 min) if at least 10 keys changed      10 changed keys within 300s/5min trigger a dump.rdb save
    132 # after 60 sec if at least 10000 keys changed            10000 changed keys within 60s/1min trigger a dump.rdb save
    133 #
    134 # Note: you can disable saving completely by commenting out all "save" lines.    saving can be disabled
    135 #
    136 # It is also possible to remove all the previously configured save
    137 # points by adding a save directive with a single empty string argument
    138 # like in the following example:
    139 #
    140 # save ""    disables snapshotting
    141
    142 save 900 1     defaults
    143 save 300 10
    144 save 60 10000
    145
    172 # RDB files created with checksum disabled have a checksum of zero that will
    173 # tell the loading code to skip the check.
    174 rdbchecksum yes
    175
    176 # The filename where to dump the DB
    177 dbfilename dump.rdb
    178
    179 # The working directory.
    180 #
    181 # The DB will be written inside this directory, with the filename specified
    182 # above using the 'dbfilename' configuration directive.
    183 #
    184 # The Append Only File will also be created inside this directory.
    185 #
    186 # Note that you must specify a directory here, not a file name.
    187 dir ./    default working directory

    1.6 Data recovery

    [cevent@hadoop213 redis-3.0.4]$ redis-server redis.conf    the dump_bk.rdb backup has not been restored yet
    [cevent@hadoop213 redis-3.0.4]$ redis-cli -p 6379
    127.0.0.1:6379> keys *
    (empty list or set)    no data
    127.0.0.1:6379> SHUTDOWN
    not connected> exit

    [Restore the data: delete the new (empty) dump.rdb and copy the dump_bk.rdb backup back as dump.rdb]

    [cevent@hadoop213 redis-3.0.4]$ ll
    total 156
    (the listing is unchanged except for the snapshot files: the backup, plus a near-empty dump.rdb written on shutdown)
    -rw-rw-r--.  1 cevent cevent   525 Jul  3 14:10 dump_bk.rdb
    -rw-rw-r--.  1 cevent cevent    18 Jul  3 14:11 dump.rdb
    [cevent@hadoop213 redis-3.0.4]$ rm -rf dump.rdb           delete the empty snapshot
    [cevent@hadoop213 redis-3.0.4]$ cp dump_bk.rdb dump.rdb   restore from the backup
    [cevent@hadoop213 redis-3.0.4]$ redis-server redis.conf
    [cevent@hadoop213 redis-3.0.4]$ redis-cli -p 6379
    127.0.0.1:6379> keys *
    1) "user"
    2) "set05"
    3) "k6"
    4) "k3"
    5) "k11"
    6) "k2"
    7) "customer"
    8) "list04"
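The rm + cp recovery above is purely a file-level restore: Redis only reads dump.rdb at startup, so replacing the file while the server is stopped is enough. The same flow as a small hypothetical helper (the file names mirror the session above; this is a sketch, not a Redis tool):

```python
import shutil
from pathlib import Path

def restore_rdb(redis_dir, backup="dump_bk.rdb", target="dump.rdb"):
    """Replace a (possibly empty) dump.rdb with the backup copy.
    Redis must be stopped first; it loads dump.rdb only on startup."""
    redis_dir = Path(redis_dir)
    bak, tgt = redis_dir / backup, redis_dir / target
    if not bak.exists():
        raise FileNotFoundError(f"no backup at {bak}")
    tgt.unlink(missing_ok=True)   # same effect as `rm -rf dump.rdb`
    shutil.copyfile(bak, tgt)     # same effect as `cp dump_bk.rdb dump.rdb`
    return tgt
```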

    1.7 Configuration reference (redis.conf SNAPSHOTTING section)

    (lines 120-145, the SNAPSHOTTING header and save points, are shown in 1.5 above)
    146 # By default Redis will stop accepting writes if RDB snapshots are enabled
    147 # (at least one save point) and the latest background save failed.
    148 # This will make the user aware (in a hard way) that data is not persisting
    149 # on disk properly, otherwise chances are that no one will notice and some
    150 # disaster will happen.
    151 #
    152 # If the background saving process will start working again Redis will
    153 # automatically allow writes again.
    154 #
    155 # However if you have setup your proper monitoring of the Redis server
    156 # and persistence, you may want to disable this feature so that Redis will
    157 # continue to work as usual even if there are problems with disk,
    158 # permissions, and so forth.
    159 stop-writes-on-bgsave-error yes    stop accepting writes when the latest background save failed
    160
    161 # Compress string objects using LZF when dump .rdb databases?    whether to compress the dump with LZF
    162 # For default that's set to 'yes' as it's almost always a win.
    163 # If you want to save some CPU in the saving child set it to 'no' but
    164 # the dataset will likely be bigger if you have compressible values or keys.
    165 rdbcompression yes
    166
    167 # Since version 5 of RDB a CRC64 checksum is placed at the end of the file.    CRC64 integrity check
    168 # This makes the format more resistant to corruption but there is a performance
    169 # hit to pay (around 10%) when saving and loading RDB files, so you can disable it
    170 # for maximum performances.
    171 #
    172 # RDB files created with checksum disabled have a checksum of zero that will
    173 # tell the loading code to skip the check.
    174 rdbchecksum yes
    175
    176 # The filename where to dump the DB
    177 dbfilename dump.rdb    the rdb file name
    178
    179 # The working directory.
    180 #
    181 # The DB will be written inside this directory, with the filename specified
    182 # above using the 'dbfilename' configuration directive.
    183 #
    184 # The Append Only File will also be created inside this directory.
    185 #
    186 # Note that you must specify a directory here, not a file name.
    187 dir ./    default working directory

    1.8 How to trigger an RDB snapshot

    127.0.0.1:6379> set 200703 200703cevent
    OK
    127.0.0.1:6379> save    SAVE immediately (and synchronously) regenerates the rdb file
    OK
    File updated: -rw-rw-r--. 1 cevent cevent 544 Jul 3 14:26 dump.rdb

    2.AOF(Append Only File)

    2.1 Configuration reference (redis.conf APPEND ONLY MODE section)

    484 ############################## APPEND ONLY MODE ##############################
    485
    486 # By default Redis asynchronously dumps the dataset on disk. This mode is
    487 # good enough in many applications, but an issue with the Redis process or
    488 # a power outage may result into a few minutes of writes lost (depending on
    489 # the configured save points).
    490 #
    491 # The Append Only File is an alternative persistence mode that provides
    492 # much better durability. For instance using the default data fsync policy
    493 # (see later in the config file) Redis can lose just one second of writes in a
    494 # dramatic event like a server power outage, or a single write if something
    495 # wrong with the Redis process itself happens, but the operating system is
    496 # still running correctly.
    497 #
    498 # AOF and RDB persistence can be enabled at the same time without problems.    aof and rdb can coexist
    499 # If the AOF is enabled on startup Redis will load the AOF, that is the file
    500 # with the better durability guarantees.
    501 #
    502 # Please check http://redis.io/topics/persistence for more information.
    503
    504 appendonly no    change to yes to enable aof
    505
    506 # The name of the append only file (default: "appendonly.aof")
    507
    508 appendfilename "appendonly.aof"    default file name
    509
    510 # The fsync() call tells the Operating System to actually write data on disk
    511 # instead of waiting for more data in the output buffer. Some OS will really flush
    512 # data on disk, some other OS will just try to do it ASAP.
    513 #
    514 # Redis supports three different modes:
    515 #
    516 # no: don't fsync, just let the OS flush the data when it wants. Faster.
    517 # always: fsync after every write to the append only log. Slow, Safest.
    518 # everysec: fsync only one time every second. Compromise.
    519 #
    520 # The default is "everysec", as that's usually the right compromise between    default: log asynchronously, fsync once per second
    521 # speed and data safety. It's up to you to understand if you can relax this to
    522 # "no" that will let the operating system flush the output buffer when
    523 # it wants, for better performances (but if you can live with the idea of
    524 # some data loss consider the default persistence mode that's snapshotting),
    525 # or on the contrary, use "always" that's very slow but a bit safer than
    526 # everysec.
    527 #
    528 # More details please check the following article:
    529 # http://antirez.com/post/redis-persistence-demystified.html
    530 #
    531 # If unsure, use "everysec".
    532
    533 # appendfsync always
    534 appendfsync everysec
    535 # appendfsync no
    536
    537 # When the AOF fsync policy is set to always or everysec, and a background
    538 # saving process (a background save or AOF log background rewriting) is
    539 # performing a lot of I/O against the disk, in some Linux configurations
    540 # Redis may block too long on the fsync() call. Note that there is no fix for
    541 # this currently, as even performing fsync in a different thread will block
    542 # our synchronous write(2) call.
    543 #
    544 # In order to mitigate this problem it's possible to use the following option
    545 # that will prevent fsync() from being called in the main process while a
    546 # BGSAVE or BGREWRITEAOF is in progress.
    547 #
    548 # This means that while another child is saving, the durability of Redis is
    549 # the same as "appendfsync none". In practical terms, this means that it is
    550 # possible to lose up to 30 seconds of log in the worst scenario (with the
    551 # default Linux settings).
    552 #
    553 # If you have latency problems turn this to "yes". Otherwise leave it as
    554 # "no" that is the safest pick from the point of view of durability.
    555
    556 no-appendfsync-on-rewrite no
    557
    558 # Automatic rewrite of the append only file.
    559 # Redis is able to automatically rewrite the log file implicitly calling
    560 # BGREWRITEAOF when the AOF log size grows by the specified percentage.
    561 #
    562 # This is how it works: Redis remembers the size of the AOF file after the
    563 # latest rewrite (if no rewrite has happened since the restart, the size of
    564 # the AOF at startup is used).
    565 #
    566 # This base size is compared to the current size. If the current size is
    567 # bigger than the specified percentage, the rewrite is triggered. Also
    568 # you need to specify a minimal size for the AOF file to be rewritten, this
    569 # is useful to avoid rewriting the AOF file even if the percentage increase
    570 # is reached but it is still pretty small.
    571 #
    572 # Specify a percentage of zero in order to disable the automatic AOF
    573 # rewrite feature.
    574
    575 auto-aof-rewrite-percentage 100    100% means rewrite once the file has doubled
    576 auto-aof-rewrite-min-size 64mb     minimum aof size before a rewrite is considered: 64MB
    577
    578 # An AOF file may be found to be truncated at the end during the Redis
    579 # startup process, when the AOF data gets loaded back into memory.
    580 # This may happen when the system where Redis is running
    581 # crashes, especially when an ext4 filesystem is mounted without the
    582 # data=ordered option (however this can't happen when Redis itself
    583 # crashes or aborts but the operating system still works correctly).
    584 #
    585 # Redis can either exit with an error when this happens, or load as much
    586 # data as possible (the default now) and start if the AOF file is found
    587 # to be truncated at the end. The following option controls this behavior.
    588 #
    589 # If aof-load-truncated is set to yes, a truncated AOF file is loaded and
    590 # the Redis server starts emitting a log to inform the user of the event.
    591 # Otherwise if the option is set to no, the server aborts with an error
    592 # and refuses to start. When the option is set to no, the user requires
    593 # to fix the AOF file using the "redis-check-aof" utility before to restart
    594 # the server.
    595 #
    596 # Note that if the AOF file will be found to be corrupted in the middle
    597 # the server will still exit with an error. This option only applies when
    598 # Redis will try to read more data from the AOF file but not enough bytes
    599 # will be found.
    600 aof-load-truncated yes
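The two auto-aof-rewrite settings above combine into one condition: BGREWRITEAOF fires only when the AOF is past the minimum size AND has grown past the base size by the configured percentage. A simplified Python sketch of that rule (an illustration, not the actual C code):

```python
def should_rewrite_aof(current_size, base_size,
                       percentage=100, min_size=64 * 1024 * 1024):
    """True if an automatic BGREWRITEAOF would trigger.
    base_size is the AOF size after the last rewrite (or at startup)."""
    if percentage == 0:           # percentage 0 disables auto rewrite
        return False
    if current_size < min_size:   # ignore files still below 64mb
        return False
    growth = (current_size - base_size) * 100 // base_size
    return growth >= percentage   # 100% = file has at least doubled
```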

    2.2 AOF start / repair / recovery — 2.2.1 preparation: delete all rdb files

    [cevent@hadoop213 redis-3.0.4]$ rm -rf dump_bk.rdb
    [cevent@hadoop213 redis-3.0.4]$ rm -rf dump.rdb
    [cevent@hadoop213 redis-3.0.4]$ ll
    total 148
    -rw-rw-r--.  1 cevent cevent 31391 Sep  8  2015 00-RELEASENOTES
    -rw-rw-r--.  1 cevent cevent    53 Sep  8  2015 BUGS
    -rw-rw-r--.  1 cevent cevent  1439 Sep  8  2015 CONTRIBUTING
    -rw-rw-r--.  1 cevent cevent  1487 Sep  8  2015 COPYING
    drwxrwxr-x.  6 cevent cevent  4096 Jul  1 17:51 deps
    -rw-rw-r--.  1 cevent cevent    11 Sep  8  2015 INSTALL
    -rw-rw-r--.  1 cevent cevent   151 Sep  8  2015 Makefile
    -rw-rw-r--.  1 cevent cevent  4223 Sep  8  2015 MANIFESTO
    -rw-rw-r--.  1 cevent cevent  5201 Sep  8  2015 README
    -rw-rw-r--.  1 cevent cevent 41405 Jul  3 15:03 redis.conf
    -rwxrwxr-x.  1 cevent cevent   271 Sep  8  2015 runtest
    -rwxrwxr-x.  1 cevent cevent   280 Sep  8  2015 runtest-cluster
    -rwxrwxr-x.  1 cevent cevent   281 Sep  8  2015 runtest-sentinel
    -rw-rw-r--.  1 cevent cevent  7109 Sep  8  2015 sentinel.conf
    drwxrwxr-x.  2 cevent cevent  4096 Jul  1 17:52 src
    drwxrwxr-x. 10 cevent cevent  4096 Sep  8  2015 tests
    drwxrwxr-x.  5 cevent cevent  4096 Sep  8  2015 utils

    2.3 Start redis, create data, then simulate a sudden machine failure

    [cevent@hadoop213 redis-3.0.4]$ redis-server redis.conf
    [cevent@hadoop213 redis-3.0.4]$ redis-cli -p 6379
    127.0.0.1:6379> keys *
    (empty list or set)
    127.0.0.1:6379> set k1 v1
    OK
    127.0.0.1:6379> set k2 v2
    OK
    127.0.0.1:6379> set k4 v4
    OK
    127.0.0.1:6379> set k5 v5
    OK
    127.0.0.1:6379> set k6 v6
    OK
    127.0.0.1:6379> keys *
    1) "k5"
    2) "k1"
    3) "k4"
    4) "k2"
    5) "k6"
    127.0.0.1:6379> flushall      wipe the data
    OK
    127.0.0.1:6379> keys *
    (empty list or set)
    127.0.0.1:6379> shutdown      stop the server
    not connected> exit

    2.4 Inspect the aof file

    [cevent@hadoop213 ~]$ cd /opt/module/redis-3.0.4/
    [cevent@hadoop213 redis-3.0.4]$ ll
    total 148
    -rw-rw-r--.  1 cevent cevent 31391 Sep  8  2015 00-RELEASENOTES
    -rw-r--r--.  1 cevent cevent     0 Jul  3 20:37 appendonly.aof
    -rw-rw-r--.  1 cevent cevent    53 Sep  8  2015 BUGS
    -rw-rw-r--.  1 cevent cevent  1439 Sep  8  2015 CONTRIBUTING
    -rw-rw-r--.  1 cevent cevent  1487 Sep  8  2015 COPYING
    drwxrwxr-x.  6 cevent cevent  4096 Jul  1 17:51 deps
    -rw-rw-r--.  1 cevent cevent    11 Sep  8  2015 INSTALL
    -rw-rw-r--.  1 cevent cevent   151 Sep  8  2015 Makefile
    -rw-rw-r--.  1 cevent cevent  4223 Sep  8  2015 MANIFESTO
    -rw-rw-r--.  1 cevent cevent  5201 Sep  8  2015 README
    -rw-rw-r--.  1 cevent cevent 41405 Jul  3 15:03 redis.conf
    -rwxrwxr-x.  1 cevent cevent   271 Sep  8  2015 runtest
    -rwxrwxr-x.  1 cevent cevent   280 Sep  8  2015 runtest-cluster
    -rwxrwxr-x.  1 cevent cevent   281 Sep  8  2015 runtest-sentinel
    -rw-rw-r--.  1 cevent cevent  7109 Sep  8  2015 sentinel.conf
    drwxrwxr-x.  2 cevent cevent  4096 Jul  1 17:52 src
    drwxrwxr-x. 10 cevent cevent  4096 Sep  8  2015 tests
    drwxrwxr-x.  5 cevent cevent  4096 Sep  8  2015 utils

    2.5 Delete the regenerated rdb so only the aof remains

    [cevent@hadoop213 redis-3.0.4]$ rm -rf dump.rdb
    [cevent@hadoop213 redis-3.0.4]$ ll
    total 152
    (the listing is unchanged except that appendonly.aof has grown:)
    -rw-r--r--.  1 cevent cevent   186 Jul  3 20:43 appendonly.aof
    [cevent@hadoop213 redis-3.0.4]$ cat appendonly.aof    view the aof log
    *2
    $6
    SELECT
    $1
    0
    *3
    $3
    set
    $2
    k1
    ……k6
    $2
    v6
    *1
    $8
    Flushall
    [cevent@hadoop213 redis-3.0.4]$ vim appendonly.aof    delete the flushall command
    *2
    $6
    SELECT
    $1
    0
    *3
    $3
    set
    $2
    k1
    $2
    v1
    *3
    $3
    set
    $2
    k2
    $2
    v2
    *3
    $3
    set
    $2
    k4
    $2
    v4
    *3
    $3
    set
    $2
    k5
    $2
    v5
    *3
    $3
    set
    $2
    k6
    $2
    v6
    *1
    $8
    (If the file contains garbage bytes, just delete them. Note: with aof enabled, a corrupted aof prevents redis from starting, because redis reads the aof first on startup.)
    [cevent@hadoop213 redis-3.0.4]$ redis-check-aof --fix appendonly.aof    auto-repair the aof
    AOF analyzed: size=168, ok_up_to=168, diff=0
    AOF is valid
    (redis-check-dump can likewise repair an rdb file)
    [cevent@hadoop213 redis-3.0.4]$ redis-server redis.conf    start redis
    [cevent@hadoop213 redis-3.0.4]$ redis-cli -p 6379          start the client
    127.0.0.1:6379> keys *
    1) "k4"
    2) "k6"
    3) "k5"
    4) "k1"
    5) "k2"
    (data recovered)
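The `*N` / `$len` lines shown by cat are the RESP protocol Redis also speaks on the wire: `*N` opens an N-argument command, and each argument is a `$len` header followed by the payload. The manual vim edit that removed FLUSHALL can be expressed as a tiny parser plus a filter (a hypothetical sketch that handles only the plain layout above, not a full RESP implementation):

```python
def parse_aof(text):
    """Parse a RESP command stream (as stored in an AOF) into arg lists."""
    lines = text.split("\r\n")
    i, commands = 0, []
    while i < len(lines) and lines[i]:
        argc = int(lines[i][1:])   # "*3" -> 3 arguments follow
        i += 1
        args = []
        for _ in range(argc):
            i += 1                 # skip the "$<len>" bulk-string header
            args.append(lines[i])  # take the payload line
            i += 1
        commands.append(args)
    return commands

def drop_flushall(commands):
    """The programmatic equivalent of deleting FLUSHALL in vim."""
    return [c for c in commands if c[0].upper() != "FLUSHALL"]
```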

    3.Redis transactions

    3.1 Case 1: normal execution

    127.0.0.1:6379> multi         open a transaction
    OK
    127.0.0.1:6379> set k1 v1     commands are queued, not executed
    QUEUED
    127.0.0.1:6379> set k2 v2
    QUEUED
    127.0.0.1:6379> set k3 v3
    QUEUED
    127.0.0.1:6379> get k2
    QUEUED
    127.0.0.1:6379> exec          run the queued commands
    1) OK
    2) OK
    3) OK
    4) "v2"
    127.0.0.1:6379> keys *
    1) "k1"
    2) "k3"
    3) "k2"

    3.2 Case 2: abandoning a transaction (discard)

    127.0.0.1:6379> FLUSHALL      wipe the data
    OK
    127.0.0.1:6379> multi         open a transaction
    OK
    127.0.0.1:6379> set k11 v11
    QUEUED
    127.0.0.1:6379> set k22 v22
    QUEUED
    127.0.0.1:6379> set k33 k33
    QUEUED
    127.0.0.1:6379> discard       abandon the transaction
    OK
    127.0.0.1:6379> get k2        the queue was never executed
    (nil)

    3.3 Case 3: all-or-nothing (one bad command at queue time, and nothing executes)

    127.0.0.1:6379> multi         open a transaction
    OK
    127.0.0.1:6379> set k1 v1
    QUEUED
    127.0.0.1:6379> set k2        syntax error
    (error) ERR wrong number of arguments for 'set' command
    127.0.0.1:6379> set k3 v3
    QUEUED
    127.0.0.1:6379> exec          cannot execute
    (error) EXECABORT Transaction discarded because of previous errors.

    3.4 Case 4: each command answers for itself (a runtime error fails only that command)

    127.0.0.1:6379> multi
    OK
    127.0.0.1:6379> incr k1
    QUEUED
    127.0.0.1:6379> set k2 22
    QUEUED
    127.0.0.1:6379> set k3 33
    QUEUED
    127.0.0.1:6379> get k3
    QUEUED
    127.0.0.1:6379> exec    (only the failing command is skipped: incr fails at run time, so the rest of the queue still executes; a malformed set fails before run time and aborts the whole queue, as in Case 3)
    1) (error) ERR value is not an integer or out of range
    2) OK
    3) OK
    4) "33"
    127.0.0.1:6379> get k3
    "33"
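Cases 3 and 4 differ only in when the error is detected: a command that is malformed at QUEUED time poisons the whole queue (EXECABORT), while a command that fails only at run time errors alone and its neighbours still execute. A toy Python model of that rule (an illustration of the semantics, not the redis-py API):

```python
class MiniMulti:
    """Toy model of Redis MULTI/EXEC error semantics."""
    def __init__(self):
        self.queue, self.aborted = [], False

    def enqueue(self, fn, syntactically_ok=True):
        if not syntactically_ok:       # Case 3: error detected at queue time
            self.aborted = True        # the whole transaction is poisoned
            return "(error) ERR"
        self.queue.append(fn)
        return "QUEUED"

    def exec(self):
        if self.aborted:               # EXECABORT: nothing runs at all
            return "(error) EXECABORT"
        results = []
        for fn in self.queue:          # Case 4: a runtime error fails only
            try:                       # the offending command
                results.append(fn())
            except Exception as e:
                results.append(f"(error) {e}")
        return results
```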

    3.5 Case 5: watch monitoring

    127.0.0.1:6379> set balance 100    create a balance
    OK
    127.0.0.1:6379> set debt 0         and a debt
    OK
    127.0.0.1:6379> keys *
    1) "debt"
    2) "balance"
    127.0.0.1:6379> watch balance      monitor the balance
    OK
    127.0.0.1:6379> multi
    OK
    127.0.0.1:6379> decrby balance 20  decrease the balance
    QUEUED
    127.0.0.1:6379> incrby debt 20     increase the debt
    QUEUED
    127.0.0.1:6379> exec               execute
    1) (integer) 80
    2) (integer) 20
    127.0.0.1:6379> get balance        verify
    "80"

    3.6 watch with an interleaved modification: if a watched key is changed before exec (by another client, or another command in this session), exec aborts, returns (nil), and discards the queued commands

    3.7 unwatch: if a watched value has been changed, first cancel monitoring with unwatch so the multi queue can run; then multi followed by exec succeeds

    127.0.0.1:6379> watch balance          start watching
    OK
    127.0.0.1:6379> set balance 12000      the watched value is changed
    OK
    127.0.0.1:6379> unwatch                cancel the watch
    OK
    127.0.0.1:6379> multi                  open the queue
    OK
    127.0.0.1:6379> decrby balance 15000   decrease the balance
    QUEUED
    127.0.0.1:6379> incrby debt 15000      increase the debt
    QUEUED
    127.0.0.1:6379> exec                   execute
    1) (integer) -3000
    2) (integer) 15020
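watch is optimistic locking (check-and-set): exec succeeds only if none of the watched keys changed since the watch. A toy model of that check (an illustration only; real clients typically retry the whole watch/multi/exec loop when exec returns nil):

```python
class MiniWatch:
    """Toy model of Redis WATCH + EXEC optimistic locking."""
    def __init__(self, store):
        self.store = store        # plays the role of the redis keyspace
        self.snapshot = {}

    def watch(self, key):
        # remember the value seen at WATCH time
        self.snapshot[key] = self.store.get(key)

    def exec(self, updates):
        # EXEC returns None (nil) if any watched key was modified meanwhile
        for key, seen in self.snapshot.items():
            if self.store.get(key) != seen:
                return None
        self.store.update(updates)
        return list(updates.values())
```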